Using Web-Based Peer Benchmarking to Manage the Client-Based Project
ERIC Educational Resources Information Center
Raska, David; Keller, Eileen Weisenbach; Shaw, Doris
2013-01-01
The complexities of integrating client-based projects into marketing courses provide challenges for the instructor but produce richness of context and active learning for the student. This paper explains the integration of Web-based peer benchmarking as a means of improving student performance on client-based projects within a single semester in…
Transaction Processing Performance Council (TPC): State of the Council 2010
NASA Astrophysics Data System (ADS)
Nambiar, Raghunath; Wakou, Nicholas; Carman, Forrest; Majdalany, Michael
The Transaction Processing Performance Council (TPC) is a non-profit corporation founded to define transaction processing and database benchmarks and to disseminate objective, verifiable performance data to the industry. Established in August 1988, the TPC has been integral in shaping the landscape of modern transaction processing and database benchmarks over the past twenty-two years. This paper provides an overview of the TPC's existing benchmark standards and specifications, introduces two new TPC benchmarks under development, and examines the TPC's active involvement in the early creation of additional future benchmarks.
Methodology and issues of integral experiments selection for nuclear data validation
NASA Astrophysics Data System (ADS)
Tatiana, Ivanova; Ivanov, Evgeny; Hill, Ian
2017-09-01
Nuclear data validation involves a large suite of Integral Experiments (IEs) for criticality, reactor physics and dosimetry applications. [1] Often benchmarks are taken from international handbooks. [2, 3] Depending on the application, IEs have different degrees of usefulness in validation, and the use of a single benchmark is usually not advised; indeed, it may lead to erroneous interpretations and results. [1] This work aims at quantifying the importance of benchmarks used in application-dependent cross section validation. The approach is based on the well-known Generalized Linear Least-Squares Method (GLLSM), extended to establish biases and uncertainties for given cross sections within a given energy interval. The statistical treatment results in a vector of weighting factors for the integral benchmarks. These factors characterize the value added by a benchmark to nuclear data validation for the given application. The methodology is illustrated by one example: selecting benchmarks for 239Pu cross section validation. The studies were performed in the framework of Subgroup 39 (Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files) established at the Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD).
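As a rough sketch of how a GLLS adjustment can yield per-benchmark weighting factors, consider the toy calculation below. All inputs (sensitivities S, prior cross-section covariance M, experimental covariance V, C-E discrepancies d) are invented, and the leverage-based weight is only one plausible importance measure, not necessarily the one adopted by Subgroup 39.

```python
import numpy as np

# Hypothetical data: 3 integral benchmarks, 4 cross-section parameters.
S = np.array([[0.8, 0.1, 0.0, 0.3],   # sensitivity of each benchmark's
              [0.2, 0.7, 0.1, 0.0],   # k_eff to each parameter
              [0.1, 0.2, 0.9, 0.4]])
M = np.diag([0.04, 0.02, 0.05, 0.01])  # prior nuclear-data covariance
V = np.diag([1e-3, 2e-3, 1e-3])        # experimental covariance
d = np.array([0.002, -0.004, 0.001])   # C-E discrepancies

G = M @ S.T @ np.linalg.inv(S @ M @ S.T + V)  # GLLS gain matrix
delta_sigma = G @ d                  # cross-section adjustments
M_post = M - G @ S @ M               # reduced posterior covariance
# One simple importance measure: how strongly each benchmark
# pulls on the adjustment (column norms of the gain matrix).
weights = np.linalg.norm(G, axis=0)
print(weights / weights.sum())
```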
NASA Astrophysics Data System (ADS)
Shokrollahpour, Elsa; Hosseinzadeh Lotfi, Farhad; Zandieh, Mostafa
2016-06-01
Efficiency and quality of services are crucial to today's banking industry. Competition in this sector has become increasingly intense as a result of rapid improvements in technology, so performance analysis of the banking sector is attracting more attention. Although data envelopment analysis (DEA) is a pioneering approach in the literature for measuring efficiency and finding benchmarks, it cannot identify possible future benchmarks; the benchmarks it provides may still be less efficient than more advanced future ones. To address this weakness, this paper integrates an artificial neural network with DEA to calculate the relative efficiency, and more reliable benchmarks, of the branches of an Iranian commercial bank. Each branch can then adopt a strategy to improve its efficiency and eliminate the causes of inefficiency based on a five-year forecast.
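For readers unfamiliar with the DEA side of such a hybrid, the sketch below solves the classic input-oriented CCR efficiency program with scipy. The branch data are invented, and this is plain DEA only, without the paper's neural-network forecasting stage.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (m inputs, n units), Y: (s outputs, n units)."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1..lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Composite unit must use no more than theta * inputs of unit o...
    A_in = np.c_[-X[:, o], X]
    b_in = np.zeros(m)
    # ...while producing at least unit o's outputs.
    A_out = np.c_[np.zeros(s), -Y]
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun  # theta in (0, 1]; 1 means efficient

# Toy data: 2 inputs, 1 output, 4 bank branches (illustrative only).
X = np.array([[20., 30., 40., 20.], [300., 200., 100., 200.]])
Y = np.array([[100., 80., 90., 60.]])
print([round(dea_ccr_efficiency(X, Y, j), 3) for j in range(4)])
```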
Implementation and validation of a conceptual benchmarking framework for patient blood management.
Kastner, Peter; Breznik, Nada; Gombotz, Hans; Hofmann, Axel; Schreier, Günter
2015-01-01
Public health authorities and healthcare professionals are obliged to ensure high-quality health services. Because of the high variability in the utilisation of blood and blood components, benchmarking is indicated in transfusion medicine. We describe the implementation and validation of a benchmarking framework for Patient Blood Management (PBM) based on the report from the second Austrian benchmark trial. Core modules for automatic report generation have been implemented with KNIME (Konstanz Information Miner) and validated by comparing their output with the results of the second Austrian benchmark trial. Delta analysis shows a deviation of <0.1% for 95% of the results (maximum 1.4%). The framework provides a reliable tool for PBM benchmarking. The next step is technical integration with hospital information systems.
Ellis, Judith
2006-07-01
The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles, to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement, and to promote its acceptance as an integral and effective part of benchmarking activity in health services. The Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being used effectively by some frontline staff. However, use is inconsistent: the value of the tool kit, and the support that clinical practice benchmarking requires to be effective, are not always recognized or provided by National Health Service managers, who are absorbed with quantitative benchmarking approaches and the measurability of comparative performance data. The benchmarking literature reviewed here was obtained through an ever-narrowing search strategy, commencing with benchmarking within the quality improvement literature and moving to benchmarking activity in health services, considering not only published examples of benchmarking approaches and models but also web-based benchmarking data. This supported identification of how benchmarking approaches have developed and been used while remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative, specifically performance, benchmarking activity in industry abound (Camp 1998), with far fewer examples of the more qualitative and process benchmarking approaches in use in the public services and applied to the health service (Bullivant 1998). The literature is also mainly descriptive in its support of the effectiveness of benchmarking activity; although this does not seem to have restricted the popularity of quantitative activity, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks mean that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach that needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.
An Integrated Development Environment for Adiabatic Quantum Programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; McCaskey, Alex; Bennink, Ryan S
2014-01-01
Adiabatic quantum computing is a promising route to the computational power afforded by quantum information processing. The recent availability of adiabatic hardware raises the question of how well quantum programs perform. Benchmarking behavior is challenging since the multiple steps to synthesize an adiabatic quantum program are highly tunable. We present an adiabatic quantum programming environment called JADE that provides control over all the steps taken during program development. JADE captures the workflow needed to rigorously benchmark performance while also allowing a variety of problem types, programming techniques, and processor configurations. We have also integrated JADE with a quantum simulation engine that enables program profiling using numerical calculation. The computational engine supports plug-ins for simulation methodologies tailored to various metrics and computing resources. We present the design, integration, and deployment of JADE and discuss its use for benchmarking adiabatic quantum programs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horowitz, Kelsey A; Ding, Fei; Mather, Barry A
This presentation was given at the 2017 NREL Workshop 'Benchmarking Distribution Grid Integration Costs Under High Distributed PV Penetrations.' It provides a brief overview of recent and ongoing NREL work on distribution system grid integration costs, as well as challenges and needs from the community.
Hospital benchmarking: are U.S. eye hospitals ready?
de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S
2012-01-01
Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added "public" value of benchmarking in health care is questionable.
NASA Technical Reports Server (NTRS)
Bell, Michael A.
1999-01-01
Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts, a foundation and infrastructure have been built which allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munro, J.F.; Kristal, J.; Thompson, G.
The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors in developing the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff -- the ones closest to the work -- must take ownership of the studies. This avoids the "check the box" mentality associated with some third-party studies. This workshop will provide participants with a basic understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.
NASA Software Engineering Benchmarking Effort
NASA Technical Reports Server (NTRS)
Godfrey, Sally; Rarick, Heather
2012-01-01
Benchmarking was very interesting and provided a wealth of information: (1) we saw potential solutions to some of our "top 10" issues, and (2) we gained an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes, and (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, and instructors. We also received feedback from some of our contractors/partners: (1) desires to participate in our training and provide feedback on procedures, and (2) a welcomed opportunity to provide feedback on working with NASA.
A Privacy-Preserving Platform for User-Centric Quantitative Benchmarking
NASA Astrophysics Data System (ADS)
Herrmann, Dominik; Scheuer, Florian; Feustel, Philipp; Nowey, Thomas; Federrath, Hannes
We propose a centralised platform for quantitative benchmarking of key performance indicators (KPI) among mutually distrustful organisations. Our platform offers users the opportunity to request an ad-hoc benchmarking for a specific KPI within a peer group of their choice. Architecture and protocol are designed to provide anonymity to its users and to hide the sensitive KPI values from other clients and the central server. To this end, we integrate user-centric peer group formation, exchangeable secure multi-party computation protocols, short-lived ephemeral key pairs as pseudonyms, and attribute certificates. We show by empirical evaluation of a prototype that the performance is acceptable for reasonably sized peer groups.
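As a minimal illustration of the kind of secure aggregation such a platform can build on, the sketch below uses additive secret sharing so that a peer group learns only the KPI total (and hence the mean), never individual values. The modulus and message flow are assumptions for this sketch; the paper's platform composes exchangeable multi-party protocols with pseudonyms and attribute certificates on top.

```python
import secrets

P = 2**61 - 1  # public prime modulus (assumption for this sketch)

def share(value, n):
    """Split an integer KPI value into n additive shares mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Three mutually distrustful organisations with secret KPI values.
kpis = [120, 340, 275]
n = len(kpis)
# Each org sends one share to every peer; no single party sees a raw value.
all_shares = [share(v, n) for v in kpis]
# Each org locally sums the shares it received...
partial_sums = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]
# ...and only the combined total is revealed.
total = sum(partial_sums) % P
print(total / n)  # peer-group mean KPI: 245.0
```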
Phase field benchmark problems for dendritic growth and linear elasticity
Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...
2018-03-26
We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members of the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.
Encoding color information for visual tracking: Algorithms and benchmark.
Liang, Pengpeng; Blasch, Erik; Ling, Haibin
2015-12-01
While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.
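Tracking benchmarks of this kind typically score trackers with an overlap success curve: the fraction of frames whose predicted box overlaps ground truth above a threshold, swept over thresholds. A minimal sketch follows (generic metric code, not the authors' evaluation toolkit):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_curve(pred_boxes, gt_boxes, thresholds=np.linspace(0, 1, 21)):
    """Fraction of frames exceeding each overlap threshold; the area
    under this curve is the usual scalar score for ranking trackers."""
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return [(overlaps > t).mean() for t in thresholds]
```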
Professional Development for Sessional Staff in Higher Education: A Review of Current Evidence
ERIC Educational Resources Information Center
Hitch, Danielle; Mahoney, Paige; Macfarlane, Susie
2018-01-01
The aim of this study was to provide an integrated review of evidence published in the past decade around professional development for sessional staff in higher education. Using the Integrating Theory, Evidence and Action method, the review analysed recent evidence using the three principles of the Benchmarking Leadership and Advancement of…
New Reactor Physics Benchmark Data in the March 2012 Edition of the IRPhEP Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess; J. Blair Briggs; Jim Gulliford
2012-11-01
The International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate or special effects data, for nuclear energy and technology applications. Numerous experiments that have been performed worldwide represent a large investment of infrastructure, expertise, and cost, and are valuable resources of data for present and future research. These valuable assets provide the basis for recording, development, and validation of methods. If the experimental data are lost, the high cost to repeat many of these measurements may be prohibitive. The purpose of the IRPhEP is to provide an extensively peer-reviewed set of reactor physics-related integral data that can be used by reactor designers and safety analysts to validate the analytical tools used to design next-generation reactors and establish the safety basis for operation of these reactors. Contributors from around the world collaborate in the evaluation and review of selected benchmark experiments for inclusion in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [1]. Several new evaluations have been prepared for inclusion in the March 2012 edition of the IRPhEP Handbook.
INTEGRAL BENCHMARK DATA FOR NUCLEAR DATA TESTING THROUGH THE ICSBEP AND THE NEWLY ORGANIZED IRPHEP
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Blair Briggs; Lori Scott; Yolanda Rugama
The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) was last reported in a nuclear data conference at the International Conference on Nuclear Data for Science and Technology, ND-2004, in Santa Fe, New Mexico. Since that time the number and type of integral benchmarks have increased significantly. Included in the ICSBEP Handbook are criticality-alarm/shielding and fundamental physics benchmarks in addition to the traditional critical/subcritical benchmark data. Since ND-2004, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. The IRPhEP is patterned after the ICSBEP, but focuses on other integral measurements, such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions, and other miscellaneous-type measurements in addition to the critical configuration. The status of these two projects is discussed and selected benchmarks highlighted in this paper.
A web-based system architecture for ontology-based data integration in the domain of IT benchmarking
NASA Astrophysics Data System (ADS)
Pfaff, Matthias; Krcmar, Helmut
2018-03-01
In the domain of IT benchmarking (ITBM), a variety of data and information are collected. Although these data serve as the basis for business analyses, no unified semantic representation of such data yet exists. Consequently, data analysis across different distributed data sets and different benchmarks is almost impossible. This paper presents a system architecture and prototypical implementation for an integrated data management of distributed databases based on a domain-specific ontology. To preserve the semantic meaning of the data, the ITBM ontology is linked to data sources and functions as the central concept for database access. Thus, additional databases can be integrated by linking them to this domain-specific ontology and are directly available for further business analyses. Moreover, the web-based system supports the process of mapping ontology concepts to external databases by introducing a semi-automatic mapping recommender and by visualizing possible mapping candidates. The system also provides a natural language interface to easily query linked databases. The expected result of this ontology-based approach of knowledge representation and data access is an increase in knowledge and data sharing in this domain, which will enhance existing business analysis methods.
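The semi-automatic mapping recommender can be approximated in a few lines: for each ontology concept, suggest the database columns with the highest string similarity and let the user confirm. The concept and column names below are invented, and the paper's actual recommender may use richer features than plain string similarity.

```python
import difflib

# Hypothetical ontology concepts and external database columns.
ontology_concepts = ["ServerCost", "StorageCapacity", "IncidentCount"]
db_columns = ["srv_cost_eur", "storage_capacity_tb", "n_incidents", "site_id"]

def recommend_mappings(concepts, columns, cutoff=0.3):
    """Suggest likely ontology-to-column mappings by string similarity."""
    recs = {}
    for c in concepts:
        # Candidates are shown to the user for confirmation, not auto-applied.
        recs[c] = difflib.get_close_matches(c.lower(), columns, n=3, cutoff=cutoff)
    return recs

print(recommend_mappings(ontology_concepts, db_columns))
```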
Benchmark Dose Software (BMDS) Development and ...
This report is intended to provide an overview of beta version 1.0 of the implementation of a model of repeated measures data referred to as the Toxicodiffusion model. The implementation described here represents the first steps towards integration of the Toxicodiffusion model into the EPA benchmark dose software (BMDS). This version runs from within BMDS 2.0 using an option screen for making model selection, as is done for other models in the BMDS 2.0 suite.
International Space Station Alpha (ISSA) Integrated Traffic Model
NASA Technical Reports Server (NTRS)
Gates, R. E.
1995-01-01
The paper discusses the development process of the International Space Station Alpha (ISSA) Integrated Traffic Model, which is a subsystem analysis tool utilized in the ISSA design analysis cycles. Fast-track prototyping of the detailed relationships between daily crew and station consumables, propellant needs, maintenance requirements, and crew rotation via spreadsheets provides adequate benchmarks to assess cargo vehicle design and performance characteristics.
DE-NE0008277_PROTEUS final technical report 2018
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enqvist, Andreas
This project re-evaluates experiments on gas-cooled fast reactor (GCFR) core designs performed in the 1970s at the PROTEUS reactor and creates a series of International Reactor Physics Experiment Evaluation Project (IRPhEP) benchmarks. Currently there are no gas-cooled fast reactor (GCFR) experiments available in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). These experiments are excellent candidates for reanalysis and development of multiple benchmarks because they provide high-quality integral nuclear data relevant to the validation and refinement of thorium, neptunium, uranium, plutonium, iron, and graphite cross sections. It would be cost prohibitive to reproduce such a comprehensive suite of experimental data to support any future GCFR endeavors.
New features and improved uncertainty analysis in the NEA nuclear data sensitivity tool (NDaST)
NASA Astrophysics Data System (ADS)
Dyrda, J.; Soppera, N.; Hill, I.; Bossant, M.; Gulliford, J.
2017-09-01
Following the release and initial testing period of the NEA's Nuclear Data Sensitivity Tool [1], new features have been designed and implemented in order to expand its uncertainty analysis capabilities. The aim is to provide a free online tool for integral benchmark testing that is both efficient and comprehensive, meeting the needs of the nuclear data and benchmark testing communities. New features include access to P1 sensitivities for neutron scattering angular distributions [2] and constrained Chi sensitivities for prompt fission neutron energy sampling. Both of these are compatible with covariance data accessed via the JANIS nuclear data software, enabling propagation of the resultant uncertainties in keff to a large series of integral experiment benchmarks. These capabilities are available using a number of different covariance libraries, e.g., ENDF/B, JEFF, JENDL and TENDL, allowing comparison of the broad range of results it is possible to obtain. The IRPhE database of reactor physics measurements is now also accessible within the tool, in addition to the criticality benchmarks from ICSBEP. Other improvements include the ability to determine and visualise the energy dependence of a given calculated result in order to better identify specific regions of importance or high uncertainty contribution. Sorting and statistical analysis of the selected benchmark suite are now also provided. Examples of the plots generated by the software are included to illustrate such capabilities. Finally, a number of analytical expressions, for example Maxwellian and Watt fission spectra, will be included. This will allow the analyst to determine the impact of varying such distributions within the data evaluation, either through adjustment of parameters within the expressions, or by comparison to a more general probability distribution fitted to measured data. The impact of such changes is verified through calculations which are compared to a 'direct' measurement found by adjustment of the original ENDF format file.
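The uncertainty propagation underlying tools of this kind is the standard "sandwich rule": combine a sensitivity vector with a covariance matrix. A toy version with invented numbers (three energy groups for a single reaction):

```python
import numpy as np

# Sandwich rule: var(k_eff) = S^T C S, where S holds energy-group
# sensitivities of k_eff to a cross section and C is its covariance.
S = np.array([0.12, 0.30, 0.05])        # hypothetical group sensitivities
rel_std = np.array([0.02, 0.05, 0.10])  # hypothetical relative std. deviations
corr = np.array([[1.0, 0.6, 0.1],
                 [0.6, 1.0, 0.4],
                 [0.1, 0.4, 1.0]])      # hypothetical correlation matrix
C = np.outer(rel_std, rel_std) * corr   # relative covariance matrix
var_k = S @ C @ S
print(f"relative uncertainty in k_eff: {np.sqrt(var_k):.4%}")
```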
Diffusion-based recommendation with trust relations on tripartite graphs
NASA Astrophysics Data System (ADS)
Wang, Ximeng; Liu, Yun; Zhang, Guangquan; Xiong, Fei; Lu, Jie
2017-08-01
The diffusion-based recommendation approach is a vital branch in recommender systems, which successfully applies physical dynamics to make recommendations for users on bipartite or tripartite graphs. Trust links indicate users’ social relations and can provide the benefit of reducing data sparsity. However, traditional diffusion-based algorithms only consider rating links when making recommendations. In this paper, the complementarity of users’ implicit and explicit trust is exploited, and a novel resource-allocation strategy is proposed, which integrates these two kinds of trust relations on tripartite graphs. Through empirical studies on three benchmark datasets, our proposed method obtains better performance than most of the benchmark algorithms in terms of accuracy, diversity and novelty. According to the experimental results, our method is an effective and reasonable way to integrate additional features into the diffusion-based recommendation approach.
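For orientation, the classic mass-diffusion (ProbS) kernel on a plain user-item bipartite graph looks as follows; the paper's contribution adds implicit and explicit trust links on a tripartite graph to this resource-allocation step, which the toy sketch below does not include.

```python
import numpy as np

# Toy user-item adjacency: A[u, i] = 1 if user u collected item i.
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
k_user = A.sum(axis=1)   # user degrees
k_item = A.sum(axis=0)   # item degrees

# Two-step mass diffusion (ProbS): each collected item spreads its
# resource evenly to its users, who spread it evenly to their items.
W = (A.T @ (A / k_user[:, None])) / k_item[None, :]

u = 0                    # target user
f = W @ A[u]             # final resource on every item
f[A[u] > 0] = -np.inf    # mask items already collected
print("recommend item", int(np.argmax(f)))
```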
Improving Federal Education Programs through an Integrated Performance and Benchmarking System.
ERIC Educational Resources Information Center
Department of Education, Washington, DC. Office of the Under Secretary.
This document highlights the problems with current federal education program data collection activities and lists several factors that make movement toward a possible solution, then discusses the vision for the Integrated Performance and Benchmarking System (IPBS), a vision of an Internet-based system for harvesting information from states about…
Quality Enhancement on E-Learning
ERIC Educational Resources Information Center
Ossiannilsson, E. S. I.
2012-01-01
Purpose: Benchmarking, a method for quality assurance has not been very commonly used in higher education with regard to e-learning. Today, e-learning is an integral part of higher education, and so should also be an integral part of quality assurance systems. However, quality indicators, benchmarks and critical success factors on e-learning have…
Integral Full Core Multi-Physics PWR Benchmark with Measured Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forget, Benoit; Smith, Kord; Kumar, Shikhar
In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio, with concrete examples in nuclear engineering in the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. Like all analysis tools, verification and validation are essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g., critical experiments, flow loops, etc.), and there is a lack of relevant multiphysics benchmark measurements necessary to validate the high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.
How well does your model capture the terrestrial ecosystem dynamics of the Arctic-Boreal Region?
NASA Astrophysics Data System (ADS)
Stofferahn, E.; Fisher, J. B.; Hayes, D. J.; Huntzinger, D. N.; Schwalm, C.
2016-12-01
The Arctic-Boreal Region (ABR) is a major source of uncertainties for terrestrial biosphere model (TBM) simulations. These uncertainties are precipitated by a lack of observational data from the region, affecting the parameterizations of cold environment processes in the models. Addressing these uncertainties requires a coordinated effort of data collection and integration of the following key indicators of the ABR ecosystem: disturbance, flora/fauna and related ecosystem function, carbon pools and biogeochemistry, permafrost, and hydrology. We are developing a model-data integration framework for NASA's Arctic Boreal Vulnerability Experiment (ABoVE), wherein data collection for the key ABoVE indicators is driven by matching observations and model outputs to those indicators. The data are used as reference datasets for a benchmarking system which evaluates TBM performance with respect to ABR processes. The benchmarking system utilizes performance metrics to identify intra-model and inter-model strengths and weaknesses, which in turn provides guidance to model development teams for reducing uncertainties in TBM simulations of the ABR. The system is directly connected to the International Land Model Benchmarking (ILAMB) system, as an ABR-focused application.
Integrating CFD, CAA, and Experiments Towards Benchmark Datasets for Airframe Noise Problems
NASA Technical Reports Server (NTRS)
Choudhari, Meelan M.; Yamamoto, Kazuomi
2012-01-01
Airframe noise corresponds to the acoustic radiation due to turbulent flow in the vicinity of airframe components such as high-lift devices and landing gears. The combination of geometric complexity, high Reynolds number turbulence, multiple regions of separation, and a strong coupling with adjacent physical components makes the problem of airframe noise highly challenging. Since 2010, the American Institute of Aeronautics and Astronautics has organized an ongoing series of workshops devoted to Benchmark Problems for Airframe Noise Computations (BANC). The BANC workshops are aimed at enabling systematic progress in the understanding and high-fidelity prediction of airframe noise via collaborative investigations that integrate state-of-the-art computational fluid dynamics, computational aeroacoustics, and in-depth, holistic, and multifacility measurements targeting a selected set of canonical yet realistic configurations. This paper provides a brief summary of the BANC effort, including its technical objectives, strategy, and selective outcomes thus far.
Liebe, J D; Hübner, U
2013-01-01
Continuous improvement of IT performance in healthcare organisations requires actionable performance indicators; regularly conducted, independent measurements; and meaningful, scalable reference groups. Existing IT-benchmarking initiatives have focussed on the development of reliable and valid indicators, but less on the question of how to implement an environment for conducting easily repeatable and scalable IT-benchmarks. This study aims at developing and trialling a procedure that meets the afore-mentioned requirements. We chose a well established, regularly conducted (inter-)national IT-survey of healthcare organisations (IT-Report Healthcare) as the environment and offered the participants of the 2011 survey (CIOs of hospitals) the opportunity to enter a benchmark. The 61 structural and functional performance indicators covered, among others, the implementation status and integration of IT-systems and functions, global user satisfaction, and the resources of the IT-department. Healthcare organisations were grouped by size and ownership. The benchmark results were made available electronically, and feedback on the use of these results was requested after several months. Fifty-nine hospitals participated in the benchmarking. Reference groups consisted of up to 141 members depending on the number of beds (size) and the ownership (public vs. private). A total of 122 charts showing single-indicator frequency views were sent to each participant. The evaluation showed that 94.1% of the CIOs who participated in the evaluation considered this benchmarking beneficial and reported that they would enter again. Based on the feedback of the participants, we developed two additional views that provide a more consolidated picture. The results demonstrate that establishing an independent, easily repeatable and scalable IT-benchmarking procedure is possible and was deemed desirable. Based on these encouraging results, a new benchmarking round which includes process indicators is currently being conducted.
High School Marine Science and Scientific Literacy: The Promise of an Integrated Science Course
ERIC Educational Resources Information Center
Lambert, Julie
2006-01-01
This descriptive study provides a comparison of existing high school marine science curricula and instructional practices used by nine teachers across seven schools districts in Florida and their students' level of scientific literacy, as defined by the national science standards and benchmarks. To measure understandings of science concepts and…
ERIC Educational Resources Information Center
Nicholls, Jeananne; Hair, Joseph F., Jr.; Ragland, Charles B.; Schimmel, Kurt E.
2013-01-01
AACSB International advocates integration of ethics, corporate social responsibility, and sustainability in all business school disciplines. This study provides an overview of the implementation of these three topics in teaching initiatives and assessment in business schools accredited by AACSB International. Since no comprehensive studies have…
ERIC Educational Resources Information Center
Atchison, Eric S.; Hosch, Braden J.
2015-01-01
This chapter synthesizes the national discussion on other solutions to the Integrated Postsecondary Education Data System (IPEDS), such as a national student record system, and complications. The authors will briefly examine the pros and cons of IPEDS while primarily focusing on national alternatives, as well providing specific examples for…
"Winsight"™ Assessment System: Preliminary Theory of Action. Research Report. ETS RR-17-26
ERIC Educational Resources Information Center
Wylie, E. Caroline
2017-01-01
The "Winsight"™Assessment System integrates summative, interim (including both benchmark assessments and testlets), and formative assessment components initially focused on mathematics and English language arts (ELA) in Grades 3-8 and high school.This report provides a preliminary theory of action for the Winsight Assessment System. A…
Contributions to Integral Nuclear Data in ICSBEP and IRPhEP since ND 2013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Briggs, J. Blair; Gulliford, Jim
2016-09-01
The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the international nuclear data community at ND2013. Since ND2013, the integral benchmark data available for nuclear data testing have continued to increase. The status of the international benchmark efforts and the latest contributions to integral nuclear data for testing are discussed. Select benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2013 are highlighted. The 2015 edition of the ICSBEP Handbook now contains 567 evaluations with benchmark specifications for 4,874 critical, near-critical, or subcritical configurations; 31 criticality alarm placement/shielding configurations with multiple dose points apiece; and 207 configurations that have been categorized as fundamental physics measurements relevant to criticality safety applications. The 2015 edition of the IRPhEP Handbook contains data from 143 different experimental series that were performed at 50 different nuclear facilities. Currently 139 of the 143 evaluations are published as approved benchmarks, with the remaining four evaluations published in draft format only. Measurements found in the IRPhEP Handbook include criticality, buckling and extrapolation length, spectral characteristics, reactivity effects, reactivity coefficients, kinetics, reaction-rate distributions, power distributions, isotopic compositions, and/or other miscellaneous types of measurements for various types of reactor systems. Annual technical review meetings for both projects were held in April 2016; additional approved benchmark evaluations will be included in the 2016 editions of these handbooks.
NASA Astrophysics Data System (ADS)
Fensin, Michael Lorne
Monte Carlo-linked depletion methods have gained recent interest due to the ability to more accurately model complex 3-dimensional geometries and better track the evolution of temporal nuclide inventory by simulating the actual physical process utilizing continuous energy coefficients. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a high-fidelity, completely self-contained Monte-Carlo-linked depletion capability in a well-established, widely accepted Monte Carlo radiation transport code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross section data permit in order to achieve a highly accurate temporal nuclide inventory solution. This work chronicles relevant nuclear history, surveys current methodologies of depletion theory, details the methodology applied in MCNPX, and provides benchmark results for three independent OECD/NEA benchmarks. Relevant nuclear history, from the Oklo reactor two billion years ago to the current major United States nuclear fuel cycle development programs, is addressed in order to supply the motivation for the development of this technology. A survey of current reaction rate and temporal nuclide inventory techniques is then provided to offer justification for the depletion strategy applied within MCNPX. The MCNPX depletion strategy is then dissected and each code feature is detailed, chronicling the methodology development from the original linking of MONTEBURNS and MCNP to the most recent public release of the integrated capability (MCNPX 2.6.F). Calculation results for the OECD/NEA Phase IB benchmark, the H. B. Robinson benchmark, and the OECD/NEA Phase IVB benchmark are then provided. The acceptable results of these calculations offer sufficient confidence in the predictive capability of the MCNPX depletion method. This capability sets up a significant foundation, in a well-established and supported radiation transport code, for further development of a Monte Carlo-linked depletion methodology, which is essential to the future development of advanced reactor technologies that exceed the limitations of current deterministic-based methods.
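At its core, a depletion step advances the coupled decay/transmutation equations dn/dt = A n over a time step, with reaction rates supplied by the transport solution. A stripped-down illustration with an invented two-nuclide chain follows; this is generic Bateman-equation code, not CINDER90 or MCNPX internals.

```python
import numpy as np
from scipy.linalg import expm

# Minimal two-nuclide chain: parent captures a neutron to form a
# daughter, which then beta-decays. Solved as n(t) = expm(A t) n(0).
phi = 1e14                  # neutron flux (n/cm^2/s), hypothetical
sig_c = 2.7e-24             # parent capture cross section (cm^2), hypothetical
lam = np.log(2) / (2.35 * 24 * 3600)  # daughter decay constant (1/s)

A = np.array([[-sig_c * phi, 0.0],    # parent loss by capture
              [ sig_c * phi, -lam]])  # daughter gain, then decay
n0 = np.array([1.0e24, 0.0])          # initial atom densities (atoms)
t = 30 * 24 * 3600                    # 30-day depletion step
print(expm(A * t) @ n0)               # nuclide inventory after the step
```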
Benchmarking the Integration of WAVEWATCH III Results into HAZUS-MH: Preliminary Results
NASA Technical Reports Server (NTRS)
Berglund, Judith; Holland, Donald; McKellip, Rodney; Sciaudone, Jeff; Vickery, Peter; Wang, Zhanxian; Ying, Ken
2005-01-01
The report summarizes the results from the preliminary benchmarking activities associated with the use of WAVEWATCH III (WW3) results in the HAZUS-MH MR1 flood module. Project partner Applied Research Associates (ARA) is integrating the WW3 model into HAZUS. The current version of HAZUS-MH predicts loss estimates from hurricane-related coastal flooding by using values of surge only. Using WW3, wave setup can be included with surge. Loss estimates resulting from the use of surge-only and surge-plus-wave-setup were compared. This benchmarking study is preliminary because the HAZUS-MH MR1 flood module was under development at the time of the study. In addition, WW3 is not scheduled to be fully integrated with HAZUS-MH and available for public release until 2008.
Integrated materials design of organic semiconductors for field-effect transistors.
Mei, Jianguo; Diao, Ying; Appleton, Anthony L; Fang, Lei; Bao, Zhenan
2013-05-08
The past couple of years have witnessed a remarkable burst in the development of organic field-effect transistors (OFETs), with a number of organic semiconductors surpassing the benchmark mobility of 10 cm²/(V s). In this perspective, we highlight some of the major milestones along the way to provide a historical view of OFET development, introduce the integrated molecular design concepts and process engineering approaches that led to the current success, and identify the challenges ahead to make OFETs applicable in real applications.
Benchmark Evaluation of the HTR-PROTEUS Absorber Rod Worths (Core 4)
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess; Leland M. Montierth
2014-06-01
PROTEUS was a zero-power research reactor at the Paul Scherrer Institute (PSI) in Switzerland. The critical assembly was constructed from a large graphite annulus surrounding a central cylindrical cavity. Various experimental programs were investigated in PROTEUS; during the years 1992 through 1996, it was configured as a pebble-bed reactor and designated HTR-PROTEUS. Various critical configurations were assembled, each accompanied by an assortment of reactor physics experiments including differential and integral absorber rod measurements, kinetics, reaction rate distributions, water ingress effects, and small sample reactivity effects [1]. Four benchmark reports were previously prepared and included in the March 2013 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [2], evaluating eleven critical configurations. A summary of that effort was previously provided [3], and an analysis of absorber rod worth measurements for Cores 9 and 10 was performed prior to this analysis and included in PROTEUS-GCR-EXP-004 [4]. In the current benchmark effort, absorber rod worths measured for Core Configuration 4, which was the only core with a randomly-packed pebble loading, have been evaluated for inclusion as a revision to the HTR-PROTEUS benchmark report PROTEUS-GCR-EXP-002.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Sterbentz, James W.; Snoj, Luka
PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.
An automated benchmarking platform for MHC class II binding prediction methods.
Andreatta, Massimo; Trolle, Thomas; Yan, Zhen; Greenbaum, Jason A; Peters, Bjoern; Nielsen, Morten
2018-05-01
Computational methods for the prediction of peptide-MHC binding have become an integral and essential component of candidate selection in experimental T cell epitope discovery studies. The sheer number of published prediction methods-and often discordant reports on their performance-poses a considerable quandary to the experimentalist who needs to choose the best tool for their research. With the goal of providing an unbiased, transparent evaluation of the state of the art in the field, we created an automated platform to benchmark peptide-MHC class II binding prediction tools. The platform evaluates the absolute and relative predictive performance of all participating tools on data newly entered into the Immune Epitope Database (IEDB) before they are made public, thereby providing a frequent, unbiased assessment of available prediction tools. The benchmark runs on a weekly basis, is fully automated, and displays up-to-date results on a publicly accessible website. The initial benchmark described here included six commonly used prediction servers, but other tools are encouraged to join with a simple sign-up procedure. Performance evaluation on 59 data sets composed of over 10 000 binding affinity measurements suggested that NetMHCIIpan is currently the most accurate tool, followed by NN-align and the IEDB consensus method. Weekly reports on the participating methods can be found online at: http://tools.iedb.org/auto_bench/mhcii/weekly/.
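The per-dataset evaluation in benchmarks like this typically reduces to rank-based metrics comparing predicted and measured binding. A self-contained sketch with simulated data (not the IEDB platform's actual pipeline):

```python
import numpy as np
from scipy.stats import spearmanr

def auc_binary(scores, labels):
    """Rank-based AUC: probability a binder outranks a non-binder."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
affinity = rng.uniform(0, 5, 200)        # simulated measured log-affinities
pred = affinity + rng.normal(0, 1, 200)  # simulated tool predictions
binder = (affinity > 2.5).astype(int)    # binary binder labels

print("AUC: ", round(auc_binary(pred, binder), 3))
print("SRCC:", round(spearmanr(pred, affinity).correlation, 3))
```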
Resonance Parameter Adjustment Based on Integral Experiments
Sobes, Vladimir; Leal, Luiz; Arbanas, Goran; ...
2016-06-02
Our project seeks to allow coupling of differential and integral data evaluation in a continuous-energy framework and to use the generalized linear least-squares (GLLS) methodology in the TSURFER module of the SCALE code package to update the parameters of a resolved resonance region evaluation. Recognizing that the GLLS methodology in TSURFER is identical to the mathematical description of a Bayesian update in SAMMY, the SAMINT code was created to use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Traditionally, SAMMY used differential experimental data to adjust nuclear data parameters. Integral experimental data, such as in the International Criticality Safety Benchmark Experiments Project, remain a tool for validation of completed nuclear data evaluations. SAMINT extracts information from integral benchmarks to aid the nuclear data evaluation process. Later, integral data can be used to resolve any remaining ambiguity between differential data sets, highlight troublesome energy regions, determine key nuclear data parameters for integral benchmark calculations, and improve the nuclear data covariance matrix evaluation. Moreover, SAMINT is not intended to bias nuclear data toward specific integral experiments but should be used to supplement the evaluation of differential experimental data. Using GLLS ensures proper weight is given to the differential data.
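In generic notation (assumed here; not the exact SAMMY/TSURFER symbols), the GLLS/Bayesian update the two codes share takes the form:

```latex
% p: resonance or cross-section parameters, M: prior parameter covariance,
% G: sensitivities of calculated integral responses C(p) to p,
% V: integral-experiment covariance, d = E - C(p): measured minus calculated.
\begin{align}
  p' &= p + M G^{\mathsf{T}} \left( G M G^{\mathsf{T}} + V \right)^{-1} d, \\
  M' &= M - M G^{\mathsf{T}} \left( G M G^{\mathsf{T}} + V \right)^{-1} G M.
\end{align}
```

The posterior covariance M' shrinks only in directions the benchmarks actually constrain, which is consistent with the authors' point that integral data supplement, rather than replace, differential evaluations.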
Bertzbach, F; Franz, T; Möller, K
2012-01-01
This paper presents the performance improvements achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A large number of changes in operational practice, and the annual savings achieved, can be shown, induced in particular by benchmarking at the process level. Investigation of this question produces some general findings on including performance improvement in a benchmarking project and on communicating its results. Thus, we elaborate on the concept of benchmarking at both the utility and process levels, which is still a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking, it should be made quite clear that this outcome depends, on one hand, on a well-conducted benchmarking programme and, on the other, on the individual situation within each participating utility.
Achieving excellence in veterans healthcare--a balanced scorecard approach.
Biro, Lawrence A; Moreland, Michael E; Cowgill, David E
2003-01-01
This article provides healthcare administrators and managers with a framework and model for developing a balanced scorecard and demonstrates the remarkable success of this process, which brings focus to leadership decisions about the allocation of resources. This scorecard was developed as a top management tool designed to structure multiple priorities of a large, complex, integrated healthcare system and to establish benchmarks to measure success in achieving targets for performance in identified areas. Significant benefits and positive results were derived from the implementation of the balanced scorecard, based upon benchmarks considered to be critical success factors. The network's chief executive officer and top leadership team set and articulated the network's primary operating principles: quality and efficiency in the provision of comprehensive healthcare and support services. Under the weighted benchmarks of the balanced scorecard, the facilities in the network were mandated to adhere to one non-negotiable tenet: providing care that is second to none. The balanced scorecard approach to leadership continuously ensures that this is the primary goal and focal point for all activity within the network. To that end, systems are always in place to ensure that the network is fully successful on all performance measures relating to quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thrower, A.W.; Patric, J.; Keister, M.
2008-07-01
The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in safely and efficiently shipping spent nuclear fuel and other radioactive materials. Additional business processes may be examined in this phase. The findings of these benchmarking efforts will help determine the organizational structure and requirements of the national transportation system. (authors)
241-AZ Tank Farm Construction Extent of Condition Review for Tank Integrity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, Travis J.; Boomer, Kayle D.; Gunter, Jason R.
2013-07-30
This report provides the results of an extent of condition construction history review for tanks 241-AZ-101 and 241-AZ-102. The construction history of the 241-AZ tank farm has been reviewed to identify issues similar to those experienced during tank AY-102 construction. Those issues and others impacting integrity are discussed based on information found in available construction records, using tank AY-102 as the comparison benchmark. In the 241-AZ tank farm, the second double-shell tank (DST) farm constructed, both refractory quality and tank and liner fabrication were improved.
Accurate ω-ψ Spectral Solution of the Singular Driven Cavity Problem
NASA Astrophysics Data System (ADS)
Auteri, F.; Quartapelle, L.; Vigevano, L.
2002-08-01
This article provides accurate spectral solutions of the driven cavity problem, calculated in the vorticity-stream function representation without smoothing the corner singularities—a prima facie impossible task. As in a recent benchmark spectral calculation by primitive variables of Botella and Peyret, closed-form contributions of the singular solution for both zero and finite Reynolds numbers are subtracted from the unknown of the problem tackled here numerically in biharmonic form. The method employed is based on a split approach to the vorticity and stream function equations, a Galerkin-Legendre approximation of the problem for the perturbation, and an evaluation of the nonlinear terms by Gauss-Legendre numerical integration. Results computed for Re=0, 100, and 1000 compare well with the benchmark steady solutions provided by the aforementioned collocation-Chebyshev projection method. The validity of the proposed singularity subtraction scheme for computing time-dependent solutions is also established.
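Schematically, the subtraction strategy described above writes each unknown as a known singular part plus a smooth remainder; the symbols here are illustrative rather than the paper's own notation:

```latex
\psi = \psi_{\mathrm{sing}} + \bar{\psi},
\qquad
\omega = \omega_{\mathrm{sing}} + \bar{\omega},
```

so that the remainder $(\bar{\psi}, \bar{\omega})$ is regular at the corners and can be approximated spectrally without loss of accuracy.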
241-AW Tank Farm Construction Extent of Condition Review for Tank Integrity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, Travis J.; Gunter, Jason R.; Reeploeg, Gretchen E.
2013-11-19
This report provides the results of an extent of condition construction history review for the 241-AW tank farm. The construction history of the 241-AW tank farm has been reviewed to identify issues similar to those experienced during tank AY-102 construction. Those issues and others impacting integrity are discussed based on information found in available construction records, using tank AY-102 as the comparison benchmark. In the 241-AW tank farm, the fourth double-shell tank farm constructed, issues similar to those of tank 241-AY-102 construction occurred. The overall extent of similarity and the effect on 241-AW tank farm integrity are described herein.
241-AY-101 Tank Construction Extent of Condition Review for Tank Integrity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, Travis J.; Gunter, Jason R.
2013-08-26
This report provides the results of an extent of condition construction history review for tank 241-AY-101. The construction history of tank 241-AY-101 has been reviewed to identify issues similar to those experienced during tank AY-102 construction. Those issues and others impacting integrity are discussed based on information found in available construction records, using tank AY-102 as the comparison benchmark. In tank 241-AY-101, the second double-shell tank constructed, issues similar to those of tank 241-AY-102 construction recurred. The overall extent of similarity and the effect on tank 241-AY-101 integrity are described herein.
Quellmalz, Edys S; Pellegrino, James W
2009-01-02
Large-scale testing of educational outcomes already benefits from technological applications that address logistics such as development, administration, and scoring of tests, as well as reporting of results. Innovative applications of technology also provide rich, authentic tasks that challenge the sorts of integrated knowledge, critical thinking, and problem solving seldom well addressed in paper-based tests. Such tasks can be used in both large-scale and classroom-based assessments. Balanced assessment systems can be developed that integrate curriculum-embedded, benchmark, and summative assessments across classroom, district, state, national, and international levels. We discuss here the potential of technology to launch a new era of integrated, learning-centered assessment systems.
Jacobs, Stephen P; Parsons, Matthew; Rouse, Paul; Parsons, John; Gunderson-Reid, Michelle
2018-04-01
Service providers and funders need ways to work together to improve services. Identifying critical performance variables provides a mechanism by which funders can understand what they are purchasing without getting caught up in prescriptive service specifications that restrict the ability of service providers to meet the needs of their clients. An implementation pathway and benchmarking programme called IN TOUCH provided contracted providers of home support and funders with a consistent methodology to follow when developing and implementing new restorative approaches to service delivery. Data from performance measurement were used to triangulate the personal and social worlds of the stakeholders, enabling them to develop a shared understanding of what is working and what is not. The initial implementation of IN TOUCH involved five District Health Boards. The recursive dialogue encouraged by the IN TOUCH programme supports better and more sustainable service development because performance management is anchored to agreed data that have meaning to all stakeholders. Copyright © 2017 Elsevier Ltd. All rights reserved.
The Army Pollution Prevention Program: Improving Performance Through Benchmarking.
1995-06-01
This report investigates the feasibility of using benchmarking as a method for... could use to determine to what degree it should integrate benchmarking with other quality management tools to support the pollution prevention program
Can data-driven benchmarks be used to set the goals of healthy people 2010?
Allison, J; Kiefe, C I; Weissman, N W
1999-01-01
OBJECTIVES: Expert panels determined the public health goals of Healthy People 2000 subjectively. The present study examined whether data-driven benchmarks provide a better alternative. METHODS: We developed the "pared-mean" method to define from data the best achievable health care practices. We calculated the pared-mean benchmark for screening mammography from the 1994 National Health Interview Survey, using the metropolitan statistical area as the "provider" unit. Beginning with the best-performing provider and adding providers in descending sequence, we established the minimum provider subset that included at least 10% of all women surveyed on this question. The pared-mean benchmark is then the proportion of women in this subset who received mammography. RESULTS: The pared-mean benchmark for screening mammography was 71%, compared with the Healthy People 2000 goal of 60%. CONCLUSIONS: For Healthy People 2010, benchmarks derived from data reflecting the best available care provide viable alternatives to consensus-derived targets. We are currently pursuing additional refinements to the data-driven pared-mean benchmark approach. PMID:9987466
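The pared-mean construction is simple enough to sketch in code. The following Python sketch follows the description above (rank provider units from best to worst, accumulate until at least 10% of the surveyed population is covered, then take the subset's screening proportion); the data are hypothetical:

```python
def pared_mean_benchmark(providers, coverage_fraction=0.10):
    """providers: list of (n_surveyed, n_screened) tuples, one per provider unit."""
    total_surveyed = sum(n for n, _ in providers)
    # Rank provider units from best to worst screening rate.
    ranked = sorted(providers, key=lambda p: p[1] / p[0], reverse=True)
    covered = screened = 0
    for n_surveyed, n_screened in ranked:
        covered += n_surveyed
        screened += n_screened
        # Stop once the subset includes at least the target share of all women surveyed.
        if covered >= coverage_fraction * total_surveyed:
            break
    return screened / covered

# Four hypothetical metropolitan-area provider units: (women surveyed, women screened)
print(pared_mean_benchmark([(500, 400), (800, 520), (1200, 700), (900, 450)]))  # 0.8
```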
Benchmark solution of the dynamic response of a spherical shell at finite strain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Versino, Daniele; Brock, Jerry S.
2016-09-28
Our paper describes the development of high-fidelity solutions for the study of homogeneous (elastic and inelastic) spherical shells subject to dynamic loading and undergoing finite deformations. The goal of the activity is to provide high-accuracy results that can be used as benchmark solutions for the verification of computational physics codes. The equilibrium equations for the geometrically nonlinear problem are solved through mode expansion of the displacement field, and the boundary conditions are enforced in a strong form. Time integration is performed through high-order implicit Runge-Kutta schemes. Finally, we evaluate accuracy and convergence of the proposed method by means of numerical examples with finite deformations and material nonlinearities and inelasticity.
Benchmarking in pathology: development of an activity-based costing model.
Burnett, Leslie; Wilson, Roger; Pfeffer, Sally; Lowry, John
2012-12-01
Benchmarking in Pathology (BiP) allows pathology laboratories to determine the unit cost of all laboratory tests and procedures, and also provides organisational productivity indices allowing comparisons of performance with other BiP participants. We describe 14 years of progressive enhancement to a BiP program, including the implementation of 'avoidable costs' as the accounting basis for allocation of costs rather than previous approaches using 'total costs'. A hierarchical tree-structured activity-based costing model distributes 'avoidable costs' attributable to the pathology activities component of a pathology laboratory operation. The hierarchical tree model permits costs to be allocated across multiple laboratory sites and organisational structures. This has enabled benchmarking on a number of levels, including test profiles and non-testing related workload activities. The development of methods for dealing with variable cost inputs, allocation of indirect costs using imputation techniques, panels of tests, and blood-bank record keeping, have been successfully integrated into the costing model. A variety of laboratory management reports are produced, including the 'cost per test' of each pathology 'test' output. Benchmarking comparisons may be undertaken at any and all of the 'cost per test' and 'cost per Benchmarking Complexity Unit' level, 'discipline/department' (sub-specialty) level, or overall laboratory/site and organisational levels. We have completed development of a national BiP program. An activity-based costing methodology based on avoidable costs overcomes many problems of previous benchmarking studies based on total costs. The use of benchmarking complexity adjustment permits correction for varying test-mix and diagnostic complexity between laboratories. Use of iterative communication strategies with program participants can overcome many obstacles and lead to innovations.
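As a rough illustration of a hierarchical tree-structured allocation of avoidable costs, here is a minimal Python sketch; the data structure and field names are hypothetical, not the BiP schema:

```python
def allocate(node, cost):
    """Recursively push avoidable cost down the activity tree by weight."""
    children = node.get("children")
    if not children:
        # Leaf activity: convert the allocated cost into a unit cost, e.g. cost per test.
        node["unit_cost"] = cost / node["volume"]
        return
    total_weight = sum(child["weight"] for child in children)
    for child in children:
        allocate(child, cost * child["weight"] / total_weight)

# Hypothetical laboratory with two activity branches and a 400,000 avoidable-cost pool.
lab = {"children": [
    {"weight": 3, "volume": 12000},  # e.g. routine chemistry tests
    {"weight": 1, "volume": 2000},   # e.g. blood-bank record-keeping activities
]}
allocate(lab, cost=400000.0)
print([child["unit_cost"] for child in lab["children"]])  # [25.0, 50.0]
```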
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.
2014-03-01
PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.
Risk-based criteria to support validation of detection methods for drinking water and air.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacDonell, M.; Bhattacharyya, M.; Finster, M.
2009-02-18
This report was prepared to support the validation of analytical methods for threat contaminants under the U.S. Environmental Protection Agency (EPA) National Homeland Security Research Center (NHSRC) program. It is designed to serve as a resource for certain applications of benchmark and fate information for homeland security threat contaminants. The report identifies risk-based criteria from existing health benchmarks for drinking water and air for potential use as validation targets. The focus is on benchmarks for chronic public exposures. The priority sources are standard EPA concentration limits for drinking water and air, along with oral and inhalation toxicity values. Many contaminants identified as homeland security threats to drinking water or air would convert to other chemicals within minutes to hours of being released. For this reason, a fate analysis has been performed to identify potential transformation products and removal half-lives in air and water so appropriate forms can be targeted for detection over time. The risk-based criteria presented in this report to frame method validation are expected to be lower than actual operational targets based on realistic exposures following a release. Note that many target criteria provided in this report are taken from available benchmarks without assessing the underlying toxicological details. That is, although the relevance of the chemical form and analogues are evaluated, the toxicological interpretations and extrapolations conducted by the authoring organizations are not. It is also important to emphasize that such targets in the current analysis are not health-based advisory levels to guide homeland security responses. This integrated evaluation of chronic public benchmarks and contaminant fate has identified more than 200 risk-based criteria as method validation targets across numerous contaminants and fate products in drinking water and air combined. The gap in directly applicable values is considerable across the full set of threat contaminants, so preliminary indicators were developed from other well-documented benchmarks to serve as a starting point for validation efforts. By this approach, at least preliminary context is available for water or air, and sometimes both, for all chemicals on the NHSRC list that was provided for this evaluation. This means that a number of concentrations presented in this report represent indirect measures derived from related benchmarks or surrogate chemicals, as described within the many results tables provided in this report.
Integrated control/structure optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Zeiler, Thomas A.; Gilbert, Michael G.
1990-01-01
A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.
Student Interactives--A new Tool for Exploring Science.
NASA Astrophysics Data System (ADS)
Turner, C.
2005-05-01
Science NetLinks (SNL), a national program that provides online teacher resources created by the American Association for the Advancement of Science (AAAS), has proven to be a leader among educational resource providers in bringing free, high-quality, grade-appropriate materials to the national teaching community in a format that facilitates classroom integration. Now in its ninth year on the Web, Science NetLinks is part of the MarcoPolo Consortium of Web sites and associated state-based training initiatives that help teachers integrate Internet content into the classroom. SNL is a national presence in the K-12 science education community, serving over 700,000 teachers each year who visit the site at least three times a month. SNL features high-quality, innovative, original lesson plans aligned to Project 2061 Benchmarks for Science Literacy; original Internet-based interactives and learning challenges; reviewed Web resources and demonstrations; and award-winning, 60-second audio news features (Science Updates). Science NetLinks has an expansive and growing library of this educational material, aligned and sortable by grade band or benchmark. The program currently offers over 500 lessons, covering 72% of the Benchmarks for Science Literacy content areas in grades K-12. Over the past several years, there has been a strong movement to create online resources that support earth and space science education. Funding for various online educational materials has been available from many sources and has produced a variety of useful products for the education community. Through the Internet, teachers potentially have access to thousands of activities, lessons, and multimedia interactive applications for use in the classroom. But with so many resources available, it is increasingly difficult for educators to locate quality resources that are aligned to standards and learning goals. To ensure that the education community utilizes the resources, the material must conform to a format that allows easy understanding, evaluation, and integration. Science NetLinks' material has been proven to satisfy these criteria and serves thousands of teachers every year. All online interactive materials created by AAAS are aligned to AAAS Project 2061 Benchmarks, which mirror National Science Standards, and are developed based on a rigorous set of criteria. For the purpose of this forum, we will provide an overview that explains the need for more of these materials in earth and space education, a review of the criteria for creating these materials, and examples of online materials created by AAAS that support earth and space science.
2014-11-01
Fragments recovered from this report's front matter indicate that it addresses collaborative brain-computer interfaces (BCIs) for improving overall performance, asking where BCIs provide the biggest improvement in performance and whether clear advantages can be demonstrated, and includes simulator development and ROC curves for each subject after the combination of two trials.
Benchmarking expert system tools
NASA Technical Reports Server (NTRS)
Riley, Gary
1988-01-01
As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.
Integration of oncology and palliative care: setting a benchmark.
Vayne-Bossert, P; Richard, E; Good, P; Sullivan, K; Hardy, J R
2017-10-01
Integration of oncology and palliative care (PC) should be the standard model of care for patients with advanced cancer. An expert panel developed criteria that constitute integration. This study determined whether the PC service within this Health Service, which is considered to be fully "integrated", could be benchmarked against these criteria. A survey was undertaken to determine the perceived level of integration of oncology and palliative care by all health care professionals (HCPs) within our cancer centre. An objective determination of integration was obtained from chart reviews of deceased patients. Integration was defined as >70% of all respondents answering "agree" or "strongly agree" to each indicator and >70% of patient charts supporting each criterion. Thirty-four HCPs participated in the survey (response rate 69%). Over 90% were aware of the outpatient PC clinic, interdisciplinary and consultation team, PC senior leadership, and the acceptance of concurrent anticancer therapy. None of the other criteria met the 70% agreement mark, but many respondents lacked the necessary knowledge to respond. The chart review included 67 patients, 92% of whom were seen by the PC team prior to death. The median time from referral to death was 103 days (range 0-1347). The level of agreement across all criteria was below our predefined definition of integration. The integration criteria relating to service delivery are medically focused and do not lend themselves to interdisciplinary review. The objective criteria can be audited and serve both as a benchmark and as a basis for improvement activities.
A benchmark testing ground for integrating homology modeling and protein docking.
Bohnuud, Tanggis; Luo, Lingqi; Wodak, Shoshana J; Bonvin, Alexandre M J J; Weng, Zhiping; Vajda, Sandor; Schueler-Furman, Ora; Kozakov, Dima
2017-01-01
Protein docking procedures carry out the task of predicting the structure of a protein-protein complex starting from the known structures of the individual protein components. More often than not, however, the structure of one or both components is not known, but can be derived by homology modeling on the basis of known structures of related proteins deposited in the Protein Data Bank (PDB). Thus, the problem is to develop methods that optimally integrate homology modeling and docking with the goal of predicting the structure of a complex directly from the amino acid sequences of its component proteins. One possibility is to use the best available homology modeling and docking methods. However, the models built for the individual subunits often differ to a significant degree from the bound conformation in the complex, often much more so than the differences observed between free and bound structures of the same protein, and therefore additional conformational adjustments, both at the backbone and side chain levels, need to be modeled to achieve an accurate docking prediction. In particular, even homology models of overall good accuracy frequently include localized errors that unfavorably impact docking results. The predicted reliability of the different regions in the model can also serve as a useful input for the docking calculations. Here we present a benchmark dataset that should help to explore and solve combined modeling and docking problems. This dataset comprises a subset of the experimentally solved 'target' complexes from the widely used Docking Benchmark from the Weng Lab (excluding antibody-antigen complexes). This subset is extended to include the structures from the PDB related to those of the individual components of each complex, and hence represents potential templates for investigating and benchmarking integrated homology modeling and docking approaches. Template sets can be dynamically customized by specifying ranges in sequence similarity and in PDB release dates, or using other filtering options, such as excluding sets of specific structures from the template list. Multiple sequence alignments, as well as structural alignments of the templates to their corresponding subunits in the target, are also provided. The resource is accessible online or can be downloaded at http://cluspro.org/benchmark, and is updated on a weekly basis in synchrony with new PDB releases. Proteins 2016; 85:10-16. © 2016 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cutler, Dylan; Frank, Stephen; Slovensky, Michelle
Rich, well-organized building performance and energy consumption data enable a host of analytic capabilities for building owners and operators, from basic energy benchmarking to detailed fault detection and system optimization. Unfortunately, data integration for building control systems is challenging and costly in any setting. Large portfolios of buildings--campuses, cities, and corporate portfolios--experience these integration challenges most acutely. These large portfolios often have a wide array of control systems, including multiple vendors and nonstandard communication protocols. They typically have complex information technology (IT) networks and cybersecurity requirements and may integrate distributed energy resources into their infrastructure. Although the challenges are significant, the integration of control system data has the potential to provide proportionally greater value for these organizations through portfolio-scale analytics, comprehensive demand management, and asset performance visibility. As a large research campus, the National Renewable Energy Laboratory (NREL) experiences significant data integration challenges. To meet them, NREL has developed an architecture for effective data collection, integration, and analysis, providing a comprehensive view of data integration based on functional layers. The architecture is being evaluated on the NREL campus through deployment of three pilot implementations.
Benchmarking Gas Path Diagnostic Methods: A Public Approach
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene
2008-01-01
Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.
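As a toy illustration of snap-shot fault detection of the kind the benchmark problem exercises (not the TTCP problem's actual interface; the parameter names, values, and threshold are assumptions):

```python
import numpy as np

def detect_fault(snapshot, baseline, sigma, k=3.0):
    """Flag measurements whose noise-normalized residual exceeds k standard deviations."""
    residual = (snapshot - baseline) / sigma
    return np.abs(residual) > k

baseline = np.array([1520.0, 14.7, 0.92])  # e.g. nominal EGT, pressure ratio, efficiency
sigma = np.array([8.0, 0.05, 0.004])       # assumed sensor noise levels
snapshot = np.array([1551.0, 14.68, 0.905])
print(detect_fault(snapshot, baseline, sigma))  # [ True False  True]
```

A real diagnostic algorithm must go further and isolate which sensor, actuator, or component fault best explains the pattern of flagged residuals, which is the challenge the benchmark poses.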
241-AP Tank Farm Construction Extent of Condition Review for Tank Integrity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, Travis J.; Gunter, Jason R.; Reeploeg, Gretchen E.
2014-04-04
This report provides the results of an extent of condition construction history review for the 241-AP tank farm. The construction history of the 241-AP tank farm has been reviewed to identify issues similar to those experienced during tank AY-102 construction. Those issues and others impacting integrity are discussed based on information found in available construction records, using tank AY-102 as the comparison benchmark. In the 241-AP tank farm, the sixth double-shell tank farm constructed, tank bottom flatness, refractory material quality, post-weld stress relieving, and primary tank bottom weld rejection were improved.
Brandenburg, Marcus; Hahn, Gerd J
2018-06-01
Process industries typically involve complex manufacturing operations and thus require adequate decision support for aggregate production planning (APP). The need for powerful and efficient approaches to solve complex APP problems persists. Problem-specific solution approaches are advantageous compared with standardized approaches, which are designed to provide basic decision support for a broad range of planning problems but are inadequate for optimizing under specific settings. This in turn calls for methods to compare different approaches regarding their computational performance and solution quality. In this paper, we present a benchmarking problem for APP in the chemical process industry. The presented problem focuses on (i) sustainable operations planning involving multiple alternative production modes/routings with specific production-related carbon emissions and the social dimension of varying operating rates, and (ii) integrated campaign planning with production mix/volume on the operational level. The mutual trade-offs between economic, environmental and social factors can be considered as externalized factors (production-related carbon emissions and overtime working hours) as well as internalized ones (resulting costs). We provide data for all problem parameters in addition to a detailed verbal problem statement. We refer to Hahn and Brandenburg [1] for a first numerical analysis based on this benchmarking problem and for future research perspectives arising from it.
The KMAT: Benchmarking Knowledge Management.
ERIC Educational Resources Information Center
de Jager, Martha
Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…
NASA Astrophysics Data System (ADS)
Moriarty, Patrick; Sanz Rodrigo, Javier; Gancarski, Pawel; Chuchfield, Matthew; Naughton, Jonathan W.; Hansen, Kurt S.; Machefaux, Ewan; Maguire, Eoghan; Castellani, Francesco; Terzi, Ludovico; Breton, Simon-Philippe; Ueda, Yuko
2014-06-01
Researchers within the International Energy Agency (IEA) Task 31: Wakebench have created a framework for the evaluation of wind farm flow models operating at the microscale level. The framework consists of a model evaluation protocol integrated with a web-based portal for model benchmarking (www.windbench.net). This paper provides an overview of the building-block validation approach applied to wind farm wake models, including best practices for the benchmarking and data processing procedures for validation datasets from wind farm SCADA and meteorological databases. A hierarchy of test cases has been proposed for wake model evaluation, from similarity theory of the axisymmetric wake and idealized infinite wind farm, to single-wake wind tunnel (UMN-EPFL) and field experiments (Sexbierum), to wind farm arrays in offshore (Horns Rev, Lillgrund) and complex terrain conditions (San Gregorio). A summary of results from the axisymmetric wake, Sexbierum, Horns Rev and Lillgrund benchmarks are used to discuss the state-of-the-art of wake model validation and highlight the most relevant issues for future development.
Baquero, Oswaldo Santos; Santana, Lidia Maria Reis; Chiaravalloti-Neto, Francisco
2018-01-01
Globally, the number of dengue cases has been on the increase since 1990 and this trend has also been found in Brazil and its most populated city-São Paulo. Surveillance systems based on predictions allow for timely decision making processes, and in turn, timely and efficient interventions to reduce the burden of the disease. We conducted a comparative study of dengue predictions in São Paulo city to test the performance of trained seasonal autoregressive integrated moving average models, generalized additive models and artificial neural networks. We also used a naïve model as a benchmark. A generalized additive model with lags of the number of cases and meteorological variables had the best performance, predicted epidemics of unprecedented magnitude and its performance was 3.16 times higher than the benchmark and 1.47 higher that the next best performing model. The predictive models captured the seasonal patterns but differed in their capacity to anticipate large epidemics and all outperformed the benchmark. In addition to be able to predict epidemics of unprecedented magnitude, the best model had computational advantages, since its training and tuning was straightforward and required seconds or at most few minutes. These are desired characteristics to provide timely results for decision makers. However, it should be noted that predictions are made just one month ahead and this is a limitation that future studies could try to reduce.
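A sketch of how a relative-performance figure against a naïve benchmark can be computed; the abstract does not state the exact metric, so the mean absolute error and last-value naïve forecast used here are assumptions:

```python
import numpy as np

def relative_performance(y_true, y_model):
    """Ratio of naive-model error to candidate-model error; >1 means better than naive."""
    naive_pred = y_true[:-1]  # naive forecast: next month's cases = this month's cases
    mae_naive = np.mean(np.abs(y_true[1:] - naive_pred))
    mae_model = np.mean(np.abs(y_true[1:] - y_model[1:]))
    return mae_naive / mae_model

monthly_cases = np.array([120.0, 180.0, 400.0, 900.0, 600.0, 250.0])
model_preds = np.array([110.0, 170.0, 390.0, 850.0, 640.0, 260.0])
print(relative_performance(monthly_cases, model_preds))
```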
ERIC Educational Resources Information Center
Coyle, H. Elizabeth
2008-01-01
A substantial body of research indicates that positive school culture benchmarks are integrally tied to the success of school reform and change in general. Additionally, an emerging body of research suggests a similar role for school culture in effective implementation of school violence prevention and intervention efforts. However, little…
QCDLoop: A comprehensive framework for one-loop scalar integrals
NASA Astrophysics Data System (ADS)
Carrazza, Stefano; Ellis, R. Keith; Zanderighi, Giulia
2016-12-01
We present a new release of the QCDLoop library based on a modern object-oriented framework. We discuss the available new features such as the extension to the complex masses, the possibility to perform computations in double and quadruple precision simultaneously, and useful caching mechanisms to improve the computational speed. We benchmark the performance of the new library, and provide practical examples of phenomenological implementations by interfacing this new library to Monte Carlo programs.
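For context, the scalar one-loop N-point integrals that such a library evaluates are conventionally defined as follows, in dimensional regularization with d = 4 - 2ε; this is one common normalization, not necessarily the library's exact convention:

```latex
I_N\bigl(\{p_i\};\{m_i\}\bigr)
= \frac{\mu^{4-d}}{i\pi^{d/2}\, r_\Gamma}
\int \mathrm{d}^d\ell
\prod_{j=1}^{N} \frac{1}{(\ell+q_j)^2 - m_j^2 + i\varepsilon},
\qquad
q_j = \sum_{k=1}^{j} p_k,
\qquad
r_\Gamma = \frac{\Gamma^2(1-\epsilon)\,\Gamma(1+\epsilon)}{\Gamma(1-2\epsilon)} .
```

The complex-mass extension mentioned above amounts to allowing the $m_j^2$ to carry finite negative imaginary parts, as needed for unstable internal particles.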
MaNGA: Mapping Nearby Galaxies at Apache Point Observatory
NASA Astrophysics Data System (ADS)
Weijmans, A.-M.; MaNGA Team
2016-10-01
MaNGA (Mapping Nearby Galaxies at APO) is a galaxy integral-field spectroscopic survey within the fourth generation Sloan Digital Sky Survey (SDSS-IV). It will be mapping the composition and kinematics of gas and stars in 10,000 nearby galaxies, using 17 differently sized fiber bundles. MaNGA's goal is to provide new insights in galaxy formation and evolution, and to deliver a local benchmark for current and future high-redshift studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lell, R. M.; Schaefer, R. W.; McKnight, R. D.
Over a period of 30 years more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited to form the basis for criticality safety benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was 235U or 239Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. The term 'benchmark' in a ZPR program connotes a particularly simple loading aimed at gaining basic reactor physics insight, as opposed to studying a reactor design. In fact, the ZPR-6/7 Benchmark Assembly (Reference 1) had a very simple core unit cell assembled from plates of depleted uranium, sodium, iron oxide, U3O8, and plutonium. The ZPR-6/7 core cell-average composition is typical of the interior region of liquid-metal fast breeder reactors (LMFBRs) of the era. It was one part of the Demonstration Reactor Benchmark Program, which provided integral experiments characterizing the important features of demonstration-size LMFBRs. As a benchmark, ZPR-6/7 was devoid of many 'real' reactor features, such as simulated control rods and multiple enrichment zones, in its reference form. Those kinds of features were investigated experimentally in variants of the reference ZPR-6/7 or in other critical assemblies in the Demonstration Reactor Benchmark Program.
Benchmarking reference services: step by step.
Buchanan, H S; Marshall, J G
1996-01-01
This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Briggs, J. Blair; Ivanova, Tatiana
2017-02-01
In the past several decades, numerous experiments have been performed worldwide to support reactor operations, measurements, design, and nuclear safety. Those experiments represent an extensive international investment in infrastructure, expertise, and cost, and constitute highly valuable data resources supporting past, current, and future research activities. They form the basis for recording, developing, and validating our nuclear methods and integral nuclear data [1]. The loss of these experimental data, which has occurred all too often in recent years, is tragic; the cost to repeat many of these measurements can be prohibitive, if not impossible, to surmount. Two international projects were developed under the direction of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA) to address the challenges of not just data preservation but evaluation of the data to determine its merit for modern and future use. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was established to identify and verify comprehensive critical benchmark data sets; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data [2]. Similarly, the International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate- or special-effects data for nuclear energy and technology applications [3]. Annually, contributors from around the world collaborate in the evaluation and review of selected benchmark experiments for preservation and dissemination. The extensively peer-reviewed integral benchmark data can then be used by nuclear design and safety analysts to validate the analytical tools, methods, and data needed for next-generation reactor design, safety analysis requirements, and all other front- and back-end activities contributing to the overall nuclear fuel cycle where quality neutronics calculations are paramount.
Kamel Boulos, M N; Roudsari, A V; Gordon, C; Muir Gray, J A
2001-01-01
In 1998, the U.K. National Health Service Information for Health Strategy proposed the implementation of a National electronic Library for Health to provide clinicians, healthcare managers and planners, patients and the public with easy, round the clock access to high quality, up-to-date electronic information on health and healthcare. The Virtual Branch Libraries are among the most important components of the National electronic Library for Health. They aim at creating online knowledge based communities, each concerned with some specific clinical and other health-related topics. This study is about the envisaged Dermatology Virtual Branch Libraries of the National electronic Library for Health. It aims at selecting suitable dermatology Web resources for inclusion in the forthcoming Virtual Branch Libraries after establishing preliminary quality benchmarking rules for this task. Psoriasis, being a common dermatological condition, has been chosen as a starting point. Because quality is a principal concern of the National electronic Library for Health, the study includes a review of the major quality benchmarking systems available today for assessing health-related Web sites. The methodology of developing a quality benchmarking system has been also reviewed. Aided by metasearch Web tools, candidate resources were hand-selected in light of the reviewed benchmarking systems and specific criteria set by the authors. Over 90 professional and patient-oriented Web resources on psoriasis and dermatology in general are suggested for inclusion in the forthcoming Dermatology Virtual Branch Libraries. The idea of an all-in knowledge-hallmarking instrument for the National electronic Library for Health is also proposed based on the reviewed quality benchmarking systems. Skilled, methodical, organized human reviewing, selection and filtering based on well-defined quality appraisal criteria seems likely to be the key ingredient in the envisaged National electronic Library for Health service. Furthermore, by promoting the application of agreed quality guidelines and codes of ethics by all health information providers and not just within the National electronic Library for Health, the overall quality of the Web will improve with time and the Web will ultimately become a reliable and integral part of the care space.
Mamo, Dereje; Hazel, Elizabeth; Lemma, Israel; Guenther, Tanya; Bekele, Abeba; Demeke, Berhanu
2014-10-01
Program managers require feasible, timely, reliable, and valid measures of iCCM implementation to identify problems and assess progress. The global iCCM Task Force developed benchmark indicators to guide implementers to develop or improve monitoring and evaluation (M&E) systems. To assess Ethiopia's iCCM M&E system, we determined the availability and feasibility of the iCCM benchmark indicators. We conducted a desk review of iCCM policy documents, monitoring tools, survey reports, and other relevant documents, and key informant interviews with government and implementing partners involved in iCCM scale-up and M&E. Currently, Ethiopia collects data to inform most (70% [33/47]) iCCM benchmark indicators, and modest extra effort could boost this to 83% (39/47). Eight (17%) are not available given the current system. Most benchmark indicators that track coordination and policy, human resources, service delivery and referral, supervision, and quality assurance are available through the routine monitoring systems or periodic surveys. Indicators for supply chain management are less available due to limited consumption data and a weak link with treatment data. Little information is available on iCCM costs. Benchmark indicators can detail the status of iCCM implementation; however, some indicators may not fit country priorities, and others may be difficult to collect. The government of Ethiopia and partners should review and prioritize the benchmark indicators to determine which should be included in the routine M&E system, especially since iCCM data are being reviewed for addition to the HMIS. Moreover, the Health Extension Workers' reporting burden can be minimized by an integrated reporting approach.
The philosophy of benchmark testing a standards-based picture archiving and communications system.
Richardson, N E; Thomas, J A; Lyche, D K; Romlein, J; Norton, G S; Dolecek, Q E
1999-05-01
The Department of Defense issued its requirements for a Digital Imaging Network-Picture Archiving and Communications System (DIN-PACS) in a Request for Proposals (RFP) to industry in January 1997, with subsequent contracts being awarded in November 1997 to the Agfa Division of Bayer and IBM Global Government Industry. The Government's technical evaluation process consisted of evaluating a written technical proposal as well as conducting a benchmark test of each proposed system at the vendor's test facility. The purpose of benchmark testing was to evaluate the performance of the fully integrated system in a simulated operational environment. The benchmark test procedures and test equipment were developed through a joint effort between the Government, academic institutions, and private consultants. Herein the authors discuss the resources required and the methods used to benchmark test a standards-based PACS.
Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data
NASA Astrophysics Data System (ADS)
Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki
2017-09-01
Many benchmark experiments have been carried out with DT neutrons, especially for fusion reactor development. These integral experiments have seemed, if only vaguely, to validate the nuclear data below 14 MeV, but no precise studies of this question exist. The authors' group therefore began to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase and to generalize the discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have been conducted to discuss how to estimate the performance of benchmark experiments in general. The thought experiments with a point detector show that the sensitivity to a discrepancy appearing in the benchmark analysis is due "equally" not only to the contribution conveyed directly to the detector, but also to the indirect contribution of the neutrons (call them (A)) that produce the neutrons conveying that contribution, the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. From this concept, a sensitivity analysis performed in advance would make clear how well, and at which energies, nuclear data could be benchmarked with a given benchmark experiment.
Current Issues for Higher Education Information Resources Management.
ERIC Educational Resources Information Center
CAUSE/EFFECT, 1996
1996-01-01
Issues identified as important to the future of information resources management and use in higher education include information policy in a networked environment, distributed computing, integrating information resources and college planning, benchmarking information technology, integrated digital libraries, technology integration in teaching,…
ERIC Educational Resources Information Center
Hawaii Univ., Honolulu. Institutional Research Office.
This report presents information comparing the University of Hawaii Community Colleges (UHCC) to benchmark and peer-group institutions on selected financial measures. The primary data sources for this report were the Integrated Postsecondary Education Data System (IPEDS) Finance Survey for the 1995-1996 fiscal year and the IPEDS Fall Enrollment…
ERIC Educational Resources Information Center
Hawaii Univ., Honolulu.
The University of Hawaii's (UH) three university and seven community college campuses are compared with benchmark and peer group institutions with regard to selected financial measures. The primary data sources for this report were the Integrated Postsecondary Education Data System (IPEDS) Finance Survey, Fiscal Year 1994-95. Tables show data on…
BEST Winery Guidebook: Benchmarking and Energy and Water SavingsTool for the Wine Industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galitsky, Christina; Worrell, Ernst; Radspieler, Anthony
2005-10-15
Not all industrial facilities have the staff or the opportunity to perform a detailed audit of their operations. This lack of knowledge of energy efficiency opportunities is an important barrier to improving efficiency. Benchmarking has been demonstrated to help energy users understand energy use and the potential for energy efficiency improvement, reducing the information barrier. In California, the wine making industry is not only one of the economic pillars of the economy; it is also a large energy consumer, with considerable potential for energy-efficiency improvement. Lawrence Berkeley National Laboratory and Fetzer Vineyards developed an integrated benchmarking and self-assessment tool for the California wine industry called "BEST" (Benchmarking and Energy and water Savings Tool) Winery. BEST Winery enables a winery to compare its energy efficiency to a best practice winery, accounting for differences in product mix and other characteristics of the winery. The tool enables the user to evaluate the impact of implementing energy and water efficiency measures, and it facilitates strategic planning of such measures based on their estimated impact, costs, and savings. BEST Winery is available as a software tool in an Excel environment. This report serves as background material, documenting assumptions and information on the included energy and water efficiency measures. It also serves as a user guide for the software package.
The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool
The tool provides quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model from which benchmark dose and benchmark dose lower bound estimates are derived.
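As a rough illustration of the model-averaging idea (not the MADr-BMD implementation; the fitted log-likelihoods, parameter counts, and per-model BMD values below are invented), quantal models can be combined with Akaike weights and their BMD estimates averaged:

```python
import numpy as np

# Suppose two quantal dose-response models (e.g., logistic and probit)
# have already been fit to the same dichotomous data. The maximized
# log-likelihoods, parameter counts, and per-model BMD estimates are
# invented for illustration.
fits = {
    "logistic": {"loglik": -42.1, "k": 2, "bmd": 3.8},
    "probit":   {"loglik": -41.7, "k": 2, "bmd": 3.3},
}

# Akaike weights: w_m proportional to exp(-0.5 * (AIC_m - AIC_min)).
aic = {m: 2 * f["k"] - 2 * f["loglik"] for m, f in fits.items()}
delta = {m: a - min(aic.values()) for m, a in aic.items()}
raw = {m: np.exp(-0.5 * d) for m, d in delta.items()}
weights = {m: r / sum(raw.values()) for m, r in raw.items()}

# Model-averaged benchmark dose estimate.
bmd_avg = sum(weights[m] * fits[m]["bmd"] for m in fits)
print(weights, bmd_avg)
```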
Benchmark matrix and guide: Part III.
1992-01-01
The final article in the "Benchmark Matrix and Guide" series developed by Headquarters Air Force Logistics Command completes the discussion of the last three categories that are essential ingredients of a successful total quality management (TQM) program. Detailed behavioral objectives are listed in the areas of recognition, process improvement, and customer focus. These vertical categories are meant to be applied to the levels of the matrix that define the progressive stages of TQM: business as usual, initiation, implementation, expansion, and integration. By charting the horizontal progress level and the vertical TQM category, the quality management professional can evaluate the current state of TQM in any given organization. As each category is completed, new goals can be defined in order to advance to a higher level. The benchmarking process is integral to quality improvement efforts because it focuses on the highest possible standards to evaluate quality programs.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-29
... to provide more efficient, cost-effective, and timely benchmarking and other market information about.... This market analysis (commonly referred to as "benchmarking") would allow users of this service to... determine to be most useful. The benchmarking portion of the service would provide information on an...
42 CFR 440.330 - Benchmark health benefits coverage.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...
Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja
2015-01-01
The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.
Integrative care for the management of low back pain: use of a clinical care pathway.
Maiers, Michele J; Westrom, Kristine K; Legendre, Claire G; Bronfort, Gert
2010-10-29
For the treatment of chronic back pain, it has been theorized that integrative care plans can lead to better outcomes than those achieved by monodisciplinary care alone, especially when using a collaborative, interdisciplinary, and non-hierarchical team approach. This paper describes the use of a care pathway designed to guide treatment by an integrative group of providers within a randomized controlled trial. A clinical care pathway was used by a multidisciplinary group of providers, which included acupuncturists, chiropractors, cognitive behavioral therapists, exercise therapists, massage therapists and primary care physicians. Treatment recommendations were based on an evidence-informed practice model, and reached by group consensus. Research study participants were empowered to select one of the treatment recommendations proposed by the integrative group. Common principles and benchmarks were established to guide treatment management throughout the study. Thirteen providers representing 5 healthcare professions collaborated to provide integrative care to study participants. On average, 3 to 4 treatment plans, each consisting of 2 to 3 modalities, were recommended to study participants. Exercise, massage, and acupuncture were the modalities most commonly recommended by the team and most frequently selected by study participants. Changes to care commonly incorporated cognitive behavioral therapy into treatment plans. This clinical care pathway was a useful tool for the consistent application of evidence-based care for low back pain in the context of an integrative setting. ClinicalTrials.gov NCT00567333.
NASA Technical Reports Server (NTRS)
Stohlgren, Tom; Schnase, John; Morisette, Jeffrey; Most, Neal; Sheffner, Ed; Hutchinson, Charles; Drake, Sam; Van Leeuwen, Willem; Kaupp, Verne
2005-01-01
The National Institute of Invasive Species Science (NIISS), through collaboration with NASA's Goddard Space Flight Center (GSFC), recently began incorporating NASA observations and predictive modeling tools to fulfill its mission. These enhancements, labeled collectively as the Invasive Species Forecasting System (ISFS), are now in place in the NIISS in their initial state (V1.0). The ISFS is the primary decision support tool of the NIISS for the management and control of invasive species on Department of Interior and adjacent lands. The ISFS is the backbone for a unique information services line-of-business for the NIISS, and it provides the means for delivering advanced decision support capabilities to a wide range of management applications. This report describes the operational characteristics of the ISFS, a decision support tool of the United States Geological Survey (USGS). Recent enhancements to the performance of the ISFS, attained through the integration of observations, models, and systems engineering from NASA, are benchmarked; i.e., described quantitatively and evaluated in relation to the performance of the USGS system before incorporation of the NASA enhancements. This report benchmarks Version 1.0 of the ISFS.
Farzandipour, Mehrdad; Meidani, Zahra
2014-06-01
Websites, as one of the initial steps towards e-government adoption, facilitate the delivery of online and customer-oriented services. In this study we investigated the role of the websites of medical universities in providing educational and research services following the e-government maturity model in the Iranian universities. This descriptive and cross-sectional study was conducted through content analysis and benchmarking of the websites in 2012. The research population included all 37 medical university websites. Delivery of educational and research services through these university websites, including information, interaction, transaction, and integration, was investigated using a checklist. The data were then analyzed by means of descriptive statistics using SPSS software. The level of educational and research services provided by the websites of medical universities of types I and II was evaluated as medium, with scores of 1.99 and 1.89, respectively. All the universities gained a mean score of 1 out of 3 in terms of integration of educational and research services. Results of the study indicated that Iranian universities have passed the information and interaction stages, but they have not made much progress in the transaction and integration stages. Failure to adopt e-government in Iranian medical universities, in which limiting factors such as users' e-literacy, access to the internet and ICT infrastructure are not as crucial as in other organizations, suggests that e-government realization goes beyond technical challenges.
Oduro-Appiah, Kwaku; Scheinberg, Anne; Mensah, Anthony; Afful, Abraham; Boadu, Henry Kofi; de Vries, Nanne
2017-11-01
This article assesses the performance of the city of Accra, Ghana, in municipal solid waste management as defined by the integrated sustainable waste management framework. The article reports on a participatory process to socialise the Wasteaware benchmark indicators and apply them to an upgraded set of data and information. The process has engaged 24 key stakeholders for 9 months, to diagram the flow of materials and benchmark three physical components and three governance aspects of the city's municipal solid waste management system. The results indicate that Accra is well below some other lower middle-income cities regarding sustainable modernisation of solid waste services. Collection coverage and capture of 75% and 53%, respectively, are a disappointing result, despite (or perhaps because of) 20 years of formal private sector involvement in service delivery. A total of 62% of municipal solid waste continues to be disposed of in controlled landfills and the reported recycling rate of 5% indicates both a lack of good measurement and a lack of interest in diverting waste from disposal. Drains, illegal dumps and beaches are choked with discarded bottles and plastic packaging. The quality of collection, disposal and recycling score between low and medium on the Wasteaware indicators, and the scores for user inclusivity, financial sustainability and local institutional coherence are low. The analysis suggests that waste and recycling would improve through greater provider inclusivity, especially the recognition and integration of the informal sector, and interventions that respond to user needs for more inclusive decision-making.
Towards the quantitative evaluation of visual attention models.
Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K
2015-11-01
Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. Copyright © 2015 Elsevier Ltd. All rights reserved.
A review of genomic data warehousing systems.
Triplet, Thomas; Butler, Gregory
2014-07-01
To facilitate the integration and querying of genomics data, a number of generic data warehousing frameworks have been developed. They differ in their design and capabilities, as well as their intended audience. We provide a comprehensive and quantitative review of those genomic data warehousing frameworks in the context of large-scale systems biology. We reviewed in detail four genomic data warehouses (BioMart, BioXRT, InterMine and PathwayTools) freely available to the academic community. We quantified 20 aspects of the warehouses, covering the accuracy of their responses, their computational requirements and development efforts. Performance of the warehouses was evaluated under various hardware configurations to help laboratories optimize hardware expenses. Each aspect of the benchmark may be dynamically weighted by scientists using our online tool BenchDW (http://warehousebenchmark.fungalgenomics.ca/benchmark/) to build custom warehouse profiles and tailor our results to their specific needs.
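The dynamic weighting idea can be sketched in a few lines. This is a hypothetical illustration, not BenchDW's actual code; the warehouse names are from the paper, but the aspect scores and weights are invented:

```python
# Illustrative sketch of dynamically weighted benchmark profiles in the
# spirit of the paper's online tool (invented scores and weights).
scores = {  # normalized 0-1 benchmark scores per warehouse and aspect
    "BioMart":      {"accuracy": 0.9, "query_speed": 0.6, "dev_effort": 0.5},
    "InterMine":    {"accuracy": 0.8, "query_speed": 0.8, "dev_effort": 0.6},
    "PathwayTools": {"accuracy": 0.7, "query_speed": 0.7, "dev_effort": 0.8},
}

def profile(weights):
    """Rank warehouses by a user-supplied weighting of the aspects."""
    total = sum(weights.values())
    return sorted(
        ((sum(w * s[a] for a, w in weights.items()) / total, name)
         for name, s in scores.items()),
        reverse=True,
    )

# A lab that cares mostly about query speed weights that aspect heavily.
print(profile({"accuracy": 1.0, "query_speed": 3.0, "dev_effort": 1.0}))
```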
NASA Technical Reports Server (NTRS)
1996-01-01
This NASA Science Institute Plan has been produced in response to direction from the NASA Administrator for the benefit of NASA Senior Management, science enterprise leaders, and Center Directors. It is intended to provide a conceptual framework for organizing and planning the conduct of science in support of NASA's mission through the creation of a limited number of science Institutes. This plan is the product of the NASA Science Institute Planning Integration Team (see Figure A). The team worked intensively over a three-month period to review proposed Institutes and produce findings for NASA senior management. The team's activities included visits to current NASA Institutes and associated Centers, as well as approximately a dozen non-NASA research Institutes. In addition to producing this plan, the team published a "Benchmarks" report. The Benchmarks report provides a basis for comparing NASA's proposed activities with those sponsored by other national science agencies, and identifies best practices to be considered in the establishment of NASA Science Institutes. Throughout the team's activities, a Board of Advisors comprised of senior NASA officials (augmented as necessary with other government employees) provided overall advice and counsel.
Evaluation of Graph Pattern Matching Workloads in Graph Analysis Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Seokyong; Lee, Sangkeun; Lim, Seung-Hwan
2016-01-01
Graph analysis has emerged as a powerful method for data scientists to represent, integrate, query, and explore heterogeneous data sources. As a result, graph data management and mining became a popular area of research, and led to the development of a plethora of systems in recent years. Unfortunately, the number of emerging graph analysis systems and the wide range of applications, coupled with a lack of apples-to-apples comparisons, make it difficult to understand the trade-offs between different systems and the graph operations for which they are designed. A fair comparison of these systems is a challenging task for the following reasons: multiple data models, non-standardized serialization formats, various query interfaces to users, and the diverse environments they operate in. To address these key challenges, in this paper we present a new benchmark suite by extending the Lehigh University Benchmark (LUBM) to cover the most common capabilities of various graph analysis systems. We provide the design process of the benchmark, which generalizes the workflow for data scientists to conduct the desired graph analysis on different graph analysis systems. Equipped with this extended benchmark suite, we present a performance comparison for nine subgraph pattern retrieval operations over six graph analysis systems, namely NetworkX, Neo4j, Jena, Titan, GraphX, and uRiKA. Through the proposed benchmark suite, this study reveals both quantitative and qualitative findings in (1) implications of loading data into each system; (2) challenges in describing graph patterns for each query interface; and (3) the different sensitivity of each system to query selectivity. We envision that this study will pave the road for: (i) data scientists to select suitable graph analysis systems, and (ii) data management system designers to advance graph analysis systems.
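As a small illustration of the kind of subgraph pattern retrieval such a suite exercises, here is a minimal query against NetworkX, one of the six systems compared. The GraphMatcher call is standard NetworkX; the toy data graph and triangle pattern are invented, and the benchmark's actual queries extend LUBM:

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Invented toy data graph and a triangle query pattern.
data = nx.Graph([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")])
pattern = nx.Graph([(1, 2), (2, 3), (3, 1)])  # triangle

matcher = isomorphism.GraphMatcher(data, pattern)
for mapping in matcher.subgraph_isomorphisms_iter():
    print(mapping)  # each mapping: data-graph nodes -> pattern nodes
```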
Elastic K-means using posterior probability.
Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris
2017-01-01
The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) using posterior probability, with a soft capability whereby each data point can belong to multiple clusters fractionally, and we show the benefit of the proposed Elastic K-means. Furthermore, in many applications, besides vector attribute information, pairwise relations (graph information) are also available. We therefore integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities that are useful for matrix formulations of learning models. Based on these results, we prove the correctness and the convergence of the EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model.
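The flavor of fractional, posterior-probability-based assignment can be sketched with a generic soft K-means. This is a hypothetical illustration on invented data, not the authors' exact EKM formulation or its Normalized Cut integration:

```python
import numpy as np

def soft_kmeans(X, k, beta=2.0, iters=50, seed=0):
    """Generic soft K-means: each point belongs to every cluster with a
    posterior-like weight (illustrative; not the paper's exact EKM)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        r = np.exp(-beta * d2)                 # unnormalized posteriors
        r /= r.sum(axis=1, keepdims=True)      # fractional memberships
        centers = (r.T @ X) / r.sum(axis=0)[:, None]
    return centers, r

# Two invented 2-D blobs around (0, 0) and (3, 3).
X = np.vstack([np.random.default_rng(1).normal(m, 0.3, (30, 2))
               for m in (0.0, 3.0)])
centers, memberships = soft_kmeans(X, k=2)
print(centers)            # the two recovered cluster centers
print(memberships[:3])    # fractional memberships of the first points
```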
Lean Big Data integration in systems biology and systems pharmacology.
Ma'ayan, Avi; Rouillard, Andrew D; Clark, Neil R; Wang, Zichen; Duan, Qiaonan; Kou, Yan
2014-09-01
Data sets from recent large-scale projects can be integrated into one unified puzzle that can provide new insights into how drugs and genetic perturbations applied to human cells are linked to whole-organism phenotypes. Data that report how drugs affect the phenotype of human cell lines and how drugs induce changes in gene and protein expression in human cell lines can be combined with knowledge about human disease, side effects induced by drugs, and mouse phenotypes. Such data integration efforts can be achieved through the conversion of data from the various resources into single-node-type networks, gene-set libraries, or multipartite graphs. This approach can lead us to the identification of more relationships between genes, drugs, and phenotypes, and can serve to benchmark computational and experimental methods. Overall, this lean 'Big Data' integration strategy will bring us closer toward the goal of realizing personalized medicine. Copyright © 2014 Elsevier Ltd. All rights reserved.
HyspIRI Low Latency Concept and Benchmarks
NASA Technical Reports Server (NTRS)
Mandl, Dan
2010-01-01
Topics include HyspIRI low latency data ops concept, HyspIRI data flow, ongoing efforts, experiment with Web Coverage Processing Service (WCPS) approach to injecting new algorithms into SensorWeb, low fidelity HyspIRI IPM testbed, compute cloud testbed, open cloud testbed environment, Global Lambda Integrated Facility (GLIF) and OCC collaboration with Starlight, delay tolerant network (DTN) protocol benchmarking, and EO-1 configuration for preliminary DTN prototype.
Integrated Sensing Processor, Phase 2
2005-12-01
performance analysis for several baseline classifiers including neural nets, linear classifiers, and kNN classifiers. Use of CCDR as a preprocessing step... below the level of the benchmark non-linear classifier for this problem (kNN). Furthermore, the CCDR-preconditioned kNN achieved a 10% improvement over... the benchmark kNN without CCDR. Finally, we found an important connection between intrinsic dimension estimation via entropic graphs and the optimal
McCance, Tanya; Wilson, Val; Kornman, Kelly
2016-07-01
The aim of the Paediatric International Nursing Study was to explore the utility of key performance indicators in developing person-centred practice across a range of services provided to sick children. The objective addressed in this paper was evaluating the use of these indicators to benchmark services internationally. This study builds on primary research, which produced indicators that were considered novel both in terms of their positive orientation and their use in generating data that privileges the patient voice. This study extends that research through wider testing on an international platform within paediatrics. The overall methodological approach was a realistic evaluation, used to evaluate the implementation of the key performance indicators, which combined an integrated development and evaluation methodology. The study involved children's wards/hospitals in Australia (six sites across three states) and Europe (seven sites across four countries). Qualitative and quantitative methods were used during the implementation process; however, this paper reports the quantitative data only, which were gathered through surveys, observations and documentary review. The findings demonstrate the quality of care being delivered to children and their families across different international sites. The benchmarking does, however, highlight some differences between paediatric and general hospitals, and between the different key performance indicators across all the sites. The findings support the use of the key performance indicators as a novel method to benchmark services internationally. Whilst the data collected across 20 paediatric sites suggest services are more similar than different, benchmarking illuminates variations that encourage a critical dialogue about what works and why. The transferability of the key performance indicators and measurement framework across different settings has significant implications for practice. The findings offer an approach to benchmarking and celebrating the successes within practice, while learning from partners across the globe in further developing person-centred cultures. © 2016 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmiotti, Giuseppe; Salvatores, Massimo
2014-04-01
The Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD) established a Subgroup (called "Subgroup 33") in 2009 on "Methods and issues for the combined use of integral experiments and covariance data." The first stage was devoted to producing the description of different adjustment methodologies and assessing their merits. A detailed document related to this first stage has been issued. Nine leading organizations (often with a long and recognized expertise in the field) have contributed: ANL, CEA, INL, IPPE, JAEA, JSI, NRG, IRSN and ORNL. In the second stage a practical benchmark exercise was defined in order to test the reliability of the nuclear data adjustment methodology. A comparison of the results obtained by the participants and major lessons learned in the exercise are discussed in the present paper, which summarizes individual contributions that often include several original developments not reported separately. The paper provides the analysis of the most important results of the adjustment of the main nuclear data of 11 major isotopes in a 33-group energy structure. This benchmark exercise was based on a set of 20 well defined integral parameters from 7 fast assembly experiments. The exercise showed that, using a common shared set of integral experiments but different starting evaluated libraries and/or different covariance matrices, there is a good convergence of trends for adjustments. Moreover, a significant reduction of the original uncertainties is often observed. Using the a posteriori covariance data, there is a strong reduction of the uncertainties of integral parameters for reference reactor designs, mainly due to the new correlations in the a posteriori covariance matrix. Furthermore, criteria have been proposed and applied to verify the consistency of differential and integral data used in the adjustment. Finally, recommendations are given for an appropriate use of sensitivity analysis methods and indications for future work are provided.
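The adjustment step at the core of such an exercise is the generalized linear least-squares update. The sketch below is a deliberately tiny stand-in (4 invented parameters, 3 invented experiments, invented sensitivities and covariances), not the Subgroup 33 benchmark's 33-group, 11-isotope problem:

```python
import numpy as np

# Invented toy problem: 4 nuclear-data parameters, 3 integral experiments.
S = np.array([[0.8, 0.1, 0.0, 0.2],   # sensitivities dE/dp (relative)
              [0.1, 0.7, 0.2, 0.0],
              [0.0, 0.2, 0.9, 0.1]])
M = np.diag([0.05, 0.04, 0.06, 0.05]) ** 2   # prior nuclear-data covariance
V = np.diag([0.01, 0.02, 0.01]) ** 2         # experimental covariance
d = np.array([0.03, -0.02, 0.01])            # C/E - 1 discrepancies

# GLLS update: posterior parameter changes and covariance.
G = S @ M @ S.T + V
K = M @ S.T @ np.linalg.inv(G)
dp = K @ d                        # adjustments to the parameters
M_post = M - K @ S @ M            # reduced a posteriori covariance

print(dp)
print(np.sqrt(np.diag(M)), np.sqrt(np.diag(M_post)))  # uncertainty reduction
```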
Benchmarking initiatives in the water industry.
Parena, R; Smeets, E
2001-01-01
Customer satisfaction and service care are pushing professionals in the water industry, every day, to seek to improve their performance, lowering costs and increasing the service level provided. Process Benchmarking is generally recognised as a systematic mechanism of comparing one's own utility with other utilities or businesses, with the intent of self-improvement by adopting structures or methods used elsewhere. The IWA Task Force on Benchmarking, operating inside the Statistics and Economics Committee, has been committed to developing a generally accepted concept of Process Benchmarking to support water decision-makers in addressing issues of efficiency. As a first step, the Task Force disseminated among the Committee members a questionnaire gathering suggestions about the kind, degree of evolution and main concepts of benchmarking adopted in the countries represented. A comparison of the guidelines adopted in The Netherlands and Scandinavia has recently challenged the Task Force to draft a methodology for worldwide process benchmarking in the water industry. The paper provides a framework of the most interesting benchmarking experiences in the water sector and describes in detail both the final results of the survey and the methodology focused on identification of possible improvement areas.
Benchmarking the Multidimensional Stellar Implicit Code MUSIC
NASA Astrophysics Data System (ADS)
Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.
2017-04-01
We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton Krylov method. A physics based preconditioning technique which can be adjusted to target varying physics is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to both reproduce behaviour from established and widely-used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
Clinical benchmarking enabled by the digital health record.
Ricciardi, T N; Masarie, F E; Middleton, B
2001-01-01
Office-based physicians are often ill equipped to report aggregate information about their patients and practice of medicine, since their practices have relied upon paper records for the management of clinical information. Physicians who do not have access to large-scale information technology support can now benefit from low-cost clinical documentation and reporting tools. We developed a hosted clinical data mart for users of a web-enabled charting tool, targeting the solo or small group practice. The system uses secure Java Server Pages with a dashboard-like menu to provide point-and-click access to simple reports such as case mix, medications, utilization, productivity, and patient demographics in its first release. The system automatically normalizes user-entered clinical terms to enhance the quality of structured data. Individual providers benefit from rapid patient identification for disease management, quality of care self-assessments, drug recalls, and compliance with clinical guidelines. The system provides knowledge integration by linking to trusted sources of online medical information in context. Information derived from the clinical record is clinically more accurate than billing data. Provider self-assessment and benchmarking empowers physicians, who may resent "being profiled" by external entities. In contrast to large-scale data warehouse projects, the current system delivers immediate value to individual physicians who choose an electronic clinical documentation tool.
ERIC Educational Resources Information Center
Reinhard, Karin; Pogrzeba, Anna
2016-01-01
The role of industry in the higher education system is becoming more prevalent, as universities integrate a practical element into their curricula. However, the level of development of cooperative education and work-integrated learning varies from country to country. In Germany, cooperative education and work-integrated learning has a long…
Roudsari, AV; Gordon, C; Gray, JA Muir
2001-01-01
Background: In 1998, the U.K. National Health Service Information for Health Strategy proposed the implementation of a National electronic Library for Health to provide clinicians, healthcare managers and planners, patients and the public with easy, round the clock access to high quality, up-to-date electronic information on health and healthcare. The Virtual Branch Libraries are among the most important components of the National electronic Library for Health. They aim at creating online knowledge based communities, each concerned with some specific clinical and other health-related topics. Objectives: This study is about the envisaged Dermatology Virtual Branch Libraries of the National electronic Library for Health. It aims at selecting suitable dermatology Web resources for inclusion in the forthcoming Virtual Branch Libraries after establishing preliminary quality benchmarking rules for this task. Psoriasis, being a common dermatological condition, has been chosen as a starting point. Methods: Because quality is a principal concern of the National electronic Library for Health, the study includes a review of the major quality benchmarking systems available today for assessing health-related Web sites. The methodology of developing a quality benchmarking system has also been reviewed. Aided by metasearch Web tools, candidate resources were hand-selected in light of the reviewed benchmarking systems and specific criteria set by the authors. Results: Over 90 professional and patient-oriented Web resources on psoriasis and dermatology in general are suggested for inclusion in the forthcoming Dermatology Virtual Branch Libraries. The idea of an all-in knowledge-hallmarking instrument for the National electronic Library for Health is also proposed, based on the reviewed quality benchmarking systems. Conclusions: Skilled, methodical, organized human reviewing, selection and filtering based on well-defined quality appraisal criteria seems likely to be the key ingredient in the envisaged National electronic Library for Health service. Furthermore, by promoting the application of agreed quality guidelines and codes of ethics by all health information providers and not just within the National electronic Library for Health, the overall quality of the Web will improve with time and the Web will ultimately become a reliable and integral part of the care space. PMID:11720947
CLEAR: Cross-Layer Exploration for Architecting Resilience
2017-03-01
benchmark analysis, also provides cost-effective solutions (~1% additional energy cost for the same 50× improvement). This paper addresses the... core (OoO-core) [Wang 04], across 18 benchmarks. Such extensive exploration enables us to conclusively answer the above cross-layer resilience... analysis of the effects of soft errors on application benchmarks, provides a highly effective soft error resilience approach. 3. The above
Benchmarking to improve the quality of cystic fibrosis care.
Schechter, Michael S
2012-11-01
Benchmarking involves the ascertainment of the healthcare programs with the most favorable outcomes as a means to identify and spread effective strategies for delivery of care. The recent interest in the development of patient registries for patients with cystic fibrosis (CF) has been fueled in part by an interest in using them to facilitate benchmarking. This review summarizes reports of how benchmarking has been operationalized in attempts to improve CF care. Although certain goals of benchmarking can be accomplished with an exclusive focus on registry data analysis, benchmarking programs in Germany and the United States have supplemented these data analyses with exploratory interactions and discussions to better understand successful approaches to care and encourage their spread throughout the care network. Benchmarking allows the discovery and facilitates the spread of effective approaches to care. It provides a pragmatic alternative to traditional research methods such as randomized controlled trials, providing insights into methods that optimize delivery of care and allowing judgments about the relative effectiveness of different therapeutic approaches.
Developing integrated benchmarks for DOE performance measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.
1992-09-30
The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome measures in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.
Benchmark problems for numerical implementations of phase field models
Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...
2016-10-01
Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
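To give a feel for the physics behind the spinodal decomposition problem, here is a minimal semi-implicit spectral Cahn-Hilliard solver on a periodic grid. It is a sketch with invented parameters, not the CHiMaD/NIST benchmark specification or its required numerical diagnostics:

```python
import numpy as np

# Minimal semi-implicit spectral Cahn-Hilliard solver (invented
# parameters; illustrative of spinodal decomposition physics only).
N, dx, dt, kappa, M = 128, 1.0, 0.1, 1.0, 1.0
rng = np.random.default_rng(0)
c = 0.5 + 0.05 * (rng.random((N, N)) - 0.5)    # composition inside the spinodal

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
k2 = k[:, None] ** 2 + k[None, :] ** 2         # |k|^2 on the grid

for step in range(1000):
    mu = c**3 - c                              # bulk chemical potential f'(c)
    mu_hat = np.fft.fft2(mu)
    c_hat = np.fft.fft2(c)
    # Semi-implicit update: treat the stiff interfacial term implicitly.
    c_hat = (c_hat - dt * M * k2 * mu_hat) / (1 + dt * M * kappa * k2**2)
    c = np.fft.ifft2(c_hat).real

print(c.min(), c.max())   # the field separates toward the two wells near -1 and +1
```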
Benchmarking nitrogen removal suspended-carrier biofilm systems using dynamic simulation.
Vanhooren, H; Yuan, Z; Vanrolleghem, P A
2002-01-01
We are witnessing an enormous growth in biological nitrogen removal from wastewater. It presents specific challenges beyond traditional COD (carbon) removal. A possibility for optimised process design is the use of biomass-supporting media. In this paper, attached growth processes (AGP) are evaluated using dynamic simulations. The advantages of these systems, which were qualitatively described elsewhere, are validated quantitatively based on a simulation benchmark for activated sludge treatment systems. This simulation benchmark is extended with a biofilm model that allows for fast and accurate simulation of the conversion of different substrates in a biofilm. The economic feasibility of this system is evaluated using the data generated with the benchmark simulations. Capital savings due to volume reduction and reduced sludge production are weighed against increased aeration costs. In this evaluation, effluent quality is integrated as well.
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.
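A minimal skeleton of that data-flow-graph structure might look as follows. The node names echo NPB task names, but the tasks here are trivial string stand-ins, and NGB itself deliberately leaves the language and grid machinery to the implementor:

```python
import time

# Illustrative skeleton of an NGB-style data flow graph: each node runs
# its task once all predecessor outputs arrive, then forwards its result.
graph = {           # node -> list of predecessor nodes
    "BT": [], "SP": [], "LU": ["BT", "SP"], "FT": ["LU"],
}
tasks = {name: (lambda inputs, n=name: f"{n}({','.join(inputs)})")
         for name in graph}

def run(graph, tasks):
    start, results = time.perf_counter(), {}
    remaining = dict(graph)
    while remaining:                       # simple topological execution
        ready = [n for n, deps in remaining.items()
                 if all(d in results for d in deps)]
        for n in ready:
            results[n] = tasks[n]([results[d] for d in remaining.pop(n)])
    return results, time.perf_counter() - start   # report turnaround time

results, turnaround = run(graph, tasks)
print(results["FT"], f"{turnaround:.6f}s")
```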
This presentation will document, benchmark and evaluate state-of-the-science research and implementation on BMP performance, monitoring, and integration for green infrastructure applications, to manage wet weather flow, storm-water runoff stressor relief and remedial sustainable w...
How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction
NASA Astrophysics Data System (ADS)
Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.
2015-03-01
The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment, but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can best be used to evaluate their probabilistic forecasts. In this study, it is identified that the forecast skill calculated can vary depending on the benchmark selected, and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy, the benchmark found to have the most utility for EFAS, avoiding the most naïve skill across different hydrological situations, is meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long-term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system, and their use produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. Recommendations for EFAS are to move to routine use of meteorological persistency, an advanced meteorological benchmark and a simple meteorological benchmark in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used in the evaluation of skill in probabilistic hydrological forecasts and on which benchmarks are most useful for skill discrimination and avoidance of naïve skill in a large-scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to evaluate which benchmarks to employ, so that forecasters can have trust in their skill evaluation and confidence that their forecasts are indeed better.
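The skill comparison at the heart of this kind of study can be written compactly. The sketch below uses the standard ensemble estimator of the CRPS and invented forecast, benchmark, and observation values; it is not the EFAS evaluation code:

```python
import numpy as np

def crps(ensemble, obs):
    """Ensemble estimator of the continuous ranked probability score:
    E|X - y| - 0.5 * E|X - X'| (lower is better)."""
    x = np.asarray(ensemble, dtype=float)
    return (np.mean(np.abs(x - obs))
            - 0.5 * np.mean(np.abs(x[:, None] - x[None, :])))

def skill(forecast, benchmark, obs):
    """CRPS skill score relative to a benchmark: 1 = perfect,
    0 = no better than the benchmark, < 0 = worse than the benchmark."""
    return 1.0 - crps(forecast, obs) / crps(benchmark, obs)

rng = np.random.default_rng(0)
obs = 12.0                                   # observed discharge (invented)
heps = rng.normal(11.5, 1.0, 51)             # HEPS ensemble (invented)
climatology = rng.normal(8.0, 4.0, 51)       # easy naive benchmark (invented)
persistence = rng.normal(11.0, 2.0, 51)      # tougher benchmark (invented)

# Skill depends strongly on the benchmark chosen, as the paper stresses.
print(skill(heps, climatology, obs), skill(heps, persistence, obs))
```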
Rhodes, Catherine A; Bechtle, Mavis; McNett, Molly
2015-01-01
Advanced practice registered nurses (APRNs) are integral to the provision of quality, cost-effective health care throughout the continuum of care. To promote job satisfaction and ultimately decrease turnover, an APRN incentive plan based on productivity and quality was formulated. Clinical productivity in the incentive plan was measured by national benchmarks for work relative value units for nonphysician providers. After the first year of implementation, APRNs were paid more for additional productivity and quality and the institution had an increase in patient visits and charges. The incentive plan is a win-win for hospitals that employ APRNs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Signe K.; Purohit, Sumit; Boyd, Lauren W.
The Geothermal Technologies Office Code Comparison Study (GTO-CCS) aims to support the DOE Geothermal Technologies Office in organizing and executing a model comparison activity. This project is directed at testing, diagnosing differences, and demonstrating modeling capabilities of a worldwide collection of numerical simulators for evaluating geothermal technologies. Teams of researchers are collaborating in this code comparison effort, and it is important to be able to share results in a forum where technical discussions can easily take place without requiring teams to travel to a common location. Pacific Northwest National Laboratory has developed an open-source, flexible framework called Velo that provides a knowledge management infrastructure and tools to support modeling and simulation for a variety of types of projects in a number of scientific domains. GTO-Velo is a customized version of the Velo Framework that is being used as the collaborative tool in support of the GTO-CCS project. Velo is designed around a novel integration of a collaborative Web-based environment and a scalable enterprise Content Management System (CMS). The underlying framework provides a flexible and unstructured data storage system that allows for easy upload of files that can be in any format. Data files are organized in hierarchical folders and each folder and each file has a corresponding wiki page for metadata. The user interacts with Velo through a web browser based wiki technology, providing the benefit of familiarity and ease of use. High-level folders have been defined in GTO-Velo for the benchmark problem descriptions, descriptions of simulator/code capabilities, a project notebook, and folders for participating teams. Each team has a subfolder with write access limited only to the team members, where they can upload their simulation results. The GTO-CCS participants are charged with defining the benchmark problems for the study, and as each GTO-CCS Benchmark problem is defined, the problem creator can provide a description using a template on the metadata page corresponding to the benchmark problem folder. Project documents, references and videos of the weekly online meetings are shared via GTO-Velo. A results comparison tool allows users to plot their uploaded simulation results on the fly, along with those of other teams, to facilitate weekly discussions of the benchmark problem results being generated by the teams. GTO-Velo is an invaluable tool providing the project coordinators and team members with a framework for collaboration among geographically dispersed organizations.
75 FR 68606 - Notice of Submission for OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-08
... provides data for internationally benchmarking U.S. performance in mathematics and science at the fourth... years in more than 50 countries and provides assessment data for internationally benchmarking U.S...
Benchmark measurements and calculations of a 3-dimensional neutron streaming experiment
NASA Astrophysics Data System (ADS)
Barnett, D. A., Jr.
1991-02-01
An experimental assembly known as the Dog-Legged Void assembly was constructed to measure the effect of neutron streaming in iron and void regions. The primary purpose of the measurements was to provide benchmark data against which various neutron transport calculation tools could be compared. The measurements included neutron flux spectra at four places and integral measurements at two places in the iron streaming path, as well as integral measurements along several axial traverses. These data have been used in the verification of Oak Ridge National Laboratory's three-dimensional discrete ordinates code, TORT. For a base case calculation using one-half inch mesh spacing, finite difference spatial differencing, an S(sub 16) quadrature and P(sub 1) cross sections in the MUFT multigroup structure, the calculated solution agreed with the spectral measurements to within 18 percent and with the integral measurements to within 24 percent. Variations on the base case using a fewgroup energy structure and P(sub 1) and P(sub 3) cross sections showed similar agreement. Calculations using a linear nodal spatial differencing scheme and fewgroup cross sections also showed similar agreement. For the same mesh size, the nodal method was seen to require 2.2 times as much CPU time as the finite difference method. A nodal calculation using a typical mesh spacing of 2 inches, which had approximately 32 times fewer mesh cells than the base case, agreed with the measurements to within 34 percent and yet required only 8 percent of the CPU time.
NASA Technical Reports Server (NTRS)
Allen, Phillip A.; Wells, Douglas N.
2013-01-01
No closed form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1, depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/ys ≤ 1,000, and hardening: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
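The interpolation scheme itself can be sketched with a regular-grid interpolator over the four variables. The tabulated values below are synthetic placeholders standing in for the authors' 600-model database, so only the mechanics are illustrated:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Axes of the solution space from the paper: crack shape a/c, depth a/B,
# modulus-to-yield ratio E/ys, and hardening exponent n.
a_c  = np.array([0.2, 0.6, 1.0])
a_B  = np.array([0.2, 0.5, 0.8])
E_ys = np.array([100.0, 500.0, 1000.0])
n    = np.array([3.0, 10.0, 20.0])

# Synthetic stand-in for the tabulated J-integral solutions (the real
# table would come from the 600 finite element models).
grid = np.meshgrid(a_c, a_B, E_ys, n, indexing="ij")
J_table = 1e-3 * grid[0] * grid[1] * np.log(grid[2]) * grid[3]

interp = RegularGridInterpolator((a_c, a_B, E_ys, n), J_table)

# Evaluate a surface-crack case that falls between tabulated models.
print(interp([[0.4, 0.35, 300.0, 7.0]]))
```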
PDS: A Performance Database Server
Berry, Michael W.; Dongarra, Jack J.; Larose, Brian H.; ...
1994-01-01
The process of gathering, archiving, and distributing computer benchmark data is a cumbersome task usually performed by computer users and vendors with little coordination. Most important, there is no publicly available central depository of performance data for all ranges of machines from personal computers to supercomputers. We present an Internet-accessible performance database server (PDS) that can be used to extract current benchmark data and literature. As an extension to the X-Windows-based user interface (Xnetlib) to the Netlib archival system, PDS provides an on-line catalog of public domain computer benchmarks such as the LINPACK benchmark, Perfect benchmarks, and the NAS parallel benchmarks. PDS does not reformat or present the benchmark data in any way that conflicts with the original methodology of any particular benchmark; it is thereby devoid of any subjective interpretations of machine performance. We believe that all branches (research laboratories, academia, and industry) of the general computing community can use this facility to archive performance metrics and make them readily available to the public. PDS can provide a more manageable approach to the development and support of a large dynamic database of published performance metrics.
Dev, Dipti A; Carraway-Stage, Virginia; Schober, Daniel J; McBride, Brent A; Kok, Car Mun; Ramsay, Samantha
2017-12-01
National childhood obesity prevention policies recommend that child-care providers educate young children about nutrition to improve their nutrition knowledge and eating habits. Yet, the provision of nutrition education (NE) to children in child-care settings is limited. Using the 2011 Academy of Nutrition and Dietetics benchmarks for NE in child care as a guiding framework, researchers assessed child-care providers' perspectives regarding delivery of NE through books, posters, mealtime conversations, hands-on learning, and sensory exploration of foods to young children (aged 2 to 5 years). Using a qualitative design (realist method), individual, semistructured interviews were conducted until saturation was reached. The study was conducted during 2012-2013 and used purposive sampling to select providers. The final sample included 18 providers employed full-time in Head Start or state-licensed center-based child-care programs in Central Illinois. Child-care providers' perspectives regarding implementation of NE. Thematic analysis to derive themes using NVivo software. Three overarching themes emerged, comprising providers' motivators, barriers, and facilitators for delivering NE to children. Motivators for delivering NE included that NE encourages children to try new foods, that it improves children's knowledge of healthy and unhealthy foods, and that it is consistent with children's tendency for exploration. Barriers to delivering NE included limited funding and resources for hands-on experiences, and restrictive policies. Facilitators for delivering NE included providers obtaining access to feasible, low-cost resources and community partners, providers working around restrictive policies to accommodate NE, and mealtime conversations serving as a feasible avenue to deliver NE. Providers integrated mealtime conversations with NE concepts such as food-based sensory exploration and the health benefits of foods. The present study findings offer insights regarding providers' perspectives on implementing NE in child care. Drawing from these perspectives, registered dietitian nutritionists can train providers about the importance of NE for encouraging healthy eating in children, integrating NE with mealtime conversations, and practicing low-cost, hands-on NE activities that meet the food safety standards for state licensing. Such strategies may improve providers' ability to deliver NE in child-care settings. Copyright © 2017 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
The application of a Web-geographic information system for improving urban water cycle modelling.
Mair, M; Mikovits, C; Sengthaler, M; Schöpf, M; Kinzel, H; Urich, C; Kleidorfer, M; Sitzenfrei, R; Rauch, W
2014-01-01
Research in urban water management has experienced a transition from traditional model applications to modelling water cycles as an integrated part of urban areas. This includes the interlinking of models from many research areas (e.g. urban development, socio-economy, urban water management). The integration and simulation is realized in newly developed frameworks (e.g. DynaMind and OpenMI) and often assumes extensive programming knowledge. This work presents a Web-based urban water management modelling platform which simplifies the setup and usage of complex integrated models. The platform is demonstrated with a small application example on a case study within the Alpine region. The model used is a DynaMind model benchmarking the impact of newly connected catchments on the flooding behaviour of an existing combined sewer system. The user's workflow within a Web browser is demonstrated and benchmark results are shown. The presented platform hides implementation-specific aspects behind Web-services-based technologies so that users can focus on their main aim, which is urban water management modelling and benchmarking. Moreover, this platform offers centralized data management, automatic software updates and access to high-performance computers from desktop computers and mobile devices.
From concepts to clinical reality: an essay on the benchmarking of biomedical terminologies.
Smith, Barry
2006-06-01
It is only by fixing on agreed meanings of terms in biomedical terminologies that we will be in a position to achieve that accumulation and integration of knowledge that is indispensable to progress at the frontiers of biomedicine. Standardly, the goal of fixing meanings is seen as being realized through the alignment of terms on what are called 'concepts.' Part I addresses three versions of the concept-based approach--by Cimino, by Wüster, and by Campbell and associates--and surveys some of the problems to which they give rise, all of which have to do with a failure to anchor the terms in terminologies to corresponding referents in reality. Part II outlines a new, realist solution to this anchorage problem, which sees terminology construction as being motivated by the goal of alignment not on concepts but on the universals (kinds, types) in reality and thereby also on the corresponding instances (individuals, tokens). We outline the realist approach and show how on its basis we can provide a benchmark of correctness for terminologies which will at the same time allow a new type of integration of terminologies and electronic health records. We conclude by outlining ways in which the framework thus defined might be exploited for purposes of diagnostic decision-support.
Integrative Analysis of High-throughput Cancer Studies with Contrasted Penalization
Shi, Xingjie; Liu, Jin; Huang, Jian; Zhou, Yong; Shia, BenChang; Ma, Shuangge
2015-01-01
In cancer studies with high-throughput genetic and genomic measurements, integrative analysis provides a way to effectively pool and analyze heterogeneous raw data from multiple independent studies and outperforms “classic” meta-analysis and single-dataset analysis. When marker selection is of interest, the genetic basis of multiple datasets can be described using the homogeneity model or the heterogeneity model. In this study, we consider marker selection under the heterogeneity model, which includes the homogeneity model as a special case and can be more flexible. Penalization methods have been developed in the literature for marker selection. This study advances from the published ones by introducing the contrast penalties, which can accommodate the within- and across-dataset structures of covariates/regression coefficients and, by doing so, further improve marker selection performance. Specifically, we develop a penalization method that accommodates the across-dataset structures by smoothing over regression coefficients. An effective iterative algorithm, which calls an inner coordinate descent iteration, is developed. Simulation shows that the proposed method outperforms the benchmark with more accurate marker identification. The analysis of breast cancer and lung cancer prognosis studies with gene expression measurements shows that the proposed method identifies genes different from those using the benchmark and has better prediction performance. PMID:24395534
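As a toy illustration of the contrast-penalty idea, the sketch below jointly fits two synthetic regression datasets, combining a lasso-style sparsity term for marker selection with a quadratic contrast term that smooths coefficients across datasets. The data, penalty weights, and the use of a generic optimizer are assumptions for illustration only, not the paper's coordinate-descent algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, p = 60, 10
X1, X2 = rng.standard_normal((n, p)), rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:3] = [1.0, -1.0, 0.5]
y1 = X1 @ beta_true + 0.3 * rng.standard_normal(n)
y2 = X2 @ (beta_true + 0.1) + 0.3 * rng.standard_normal(n)  # similar, not identical

def objective(b, lam_sparse=0.5, lam_contrast=2.0):
    b1, b2 = b[:p], b[p:]
    fit = ((y1 - X1 @ b1) ** 2).sum() + ((y2 - X2 @ b2) ** 2).sum()
    sparse = lam_sparse * (np.abs(b1).sum() + np.abs(b2).sum())  # marker selection
    contrast = lam_contrast * ((b1 - b2) ** 2).sum()             # across-dataset smoothing
    return fit + sparse + contrast

res = minimize(objective, np.zeros(2 * p), method="Powell")
print(np.round(res.x[:p], 2))   # dataset-1 coefficients
print(np.round(res.x[p:], 2))   # dataset-2 coefficients, pulled toward dataset 1
```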
Elastic K-means using posterior probability
Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris
2017-01-01
The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) using posterior probability with soft assignment capability, where each data point can belong to multiple clusters fractionally, and show the benefit of the proposed Elastic K-means. Furthermore, in many applications, besides vector attribute information, pairwise relations (graph information) are also available. Thus we integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities that are useful for matrix formulations of learning models. Based on these results, we prove the correctness and the convergence of EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model. PMID:29240756
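To make the soft-assignment idea concrete, here is a minimal posterior-style soft K-means in Python: responsibilities come from a softmax over negative squared distances, and centroids are updated as responsibility-weighted means. This is an illustrative stand-in with assumed parameters (the stiffness beta, the random initialization), not the paper's exact EKM objective or its Normalized Cut integration.

```python
import numpy as np

def soft_kmeans(X, k, beta=2.0, n_iter=100, seed=0):
    """Soft k-means: each point belongs to every cluster fractionally, with
    weights from a posterior-style softmax over squared distances. An
    illustrative stand-in for EKM, not the paper's exact objective."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # (n, k)
        logits = -beta * d2
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        r = np.exp(logits)
        r /= r.sum(axis=1, keepdims=True)             # fractional memberships
        centers = (r.T @ X) / r.sum(axis=0)[:, None]  # weighted centroid update
    return centers, r

data_rng = np.random.default_rng(1)
X = np.vstack([data_rng.normal(0, 1, (50, 2)) + [3, 0],
               data_rng.normal(0, 1, (50, 2)) - [3, 0]])
centers, resp = soft_kmeans(X, k=2)
print(centers)     # two centers near (+3, 0) and (-3, 0)
print(resp[0])     # fractional memberships of the first point
```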
The integral line-beam method for gamma skyshine analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J.K.; Faw, R.E.; Bassett, M.S.
1991-03-01
This paper presents a refinement of a simplified method, based on line-beam response functions, for performing skyshine calculations for shielded and collimated gamma-ray sources. New coefficients for an empirical fit to the line-beam response function are provided and a prescription for making the response function continuous in energy and emission direction is introduced. For a shielded source, exponential attenuation and a buildup factor correction for scattered photons in the shield are used. Results of the new integral line-beam method of calculation are compared to a variety of benchmark experimental data and calculations and are found to give generally excellent agreement at a small fraction of the computational expense required by other skyshine methods.
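For intuition only, here is a heavily simplified sketch of the shielded-source treatment: exponential attenuation through the shield with a linear buildup-factor correction, followed by a made-up line-beam-style response for the air path. The constants are hypothetical placeholders; the paper's actual method fits empirical response-function coefficients in energy and emission angle.

```python
import numpy as np

def shielded_source_term(S, mu, t, a=1.0):
    """Photon current emerging from a slab shield of thickness t (cm):
    exponential attenuation times a simple linear buildup factor
    B = 1 + a*mu*t. A generic textbook form, not the paper's fit."""
    mt = mu * t
    return S * (1.0 + a * mt) * np.exp(-mt)

def skyshine_dose(S, mu, t, R, k=2e-2, alpha=0.03):
    """Toy integral line-beam estimate: attenuate the source through the
    shield, then apply a hypothetical line-beam response ~ k*exp(-alpha*R)/R^2
    at source-detector distance R (m). k and alpha are invented for
    illustration; the real method tabulates fitted response functions over
    energy and emission direction."""
    S_eff = shielded_source_term(S, mu, t)
    return S_eff * k * np.exp(-alpha * R) / R**2

# e.g. 1 MeV photons, mu ~ 0.06/cm in a 30 cm concrete wall, detector at 100 m
print(skyshine_dose(S=1e10, mu=0.06, t=30.0, R=100.0))
```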
FY2012 summary of tasks completed on PROTEUS-thermal work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C.H.; Smith, M.A.
2012-06-06
PROTEUS is a suite of the neutronics codes, both old and new, that can be used within the SHARP codes being developed under the NEAMS program. Discussion here is focused on updates and verification and validation activities of the SHARP neutronics code, DeCART, for application to thermal reactor analysis. As part of the development of SHARP tools, the different versions of the DeCART code created for PWR, BWR, and VHTR analysis were integrated. Verification and validation tests for the integrated version were started, and the generation of cross section libraries based on the subgroup method was revisited for the targeted reactor types. The DeCART code has been reorganized in preparation for an efficient integration of the different versions for PWR, BWR, and VHTR analysis. In DeCART, the old-fashioned common blocks and header files have been replaced by advanced memory structures. However, the changing of variable names was minimized in order to limit problems with the code integration. Since the remaining stability problems of DeCART were mostly caused by the CMFD methodology and modules, significant work was performed to determine whether they could be replaced by more stable methods and routines. The cross section library is a key element to obtain accurate solutions. Thus, the procedure for generating cross section libraries was revisited to provide libraries tailored for the targeted reactor types. To improve accuracy in the cross section library, an attempt was made to replace the CENTRM code by the MCNP Monte Carlo code as a tool for obtaining reference resonance integrals. The use of the Monte Carlo code allows us to minimize problems or approximations that CENTRM introduces since the accuracy of the subgroup data is limited by that of the reference solutions. The use of MCNP requires an additional set of libraries without resonance cross sections so that reference calculations can be performed for a unit cell in which only one isotope of interest includes resonance cross sections, among the isotopes in the composition. The OECD MHTGR-350 benchmark core was simulated using DeCART as the initial focus of the verification/validation efforts. Among the benchmark problems, Exercise 1 of Phase 1 is a steady-state benchmark case for the neutronics calculation for which block-wise cross sections were provided in 26 energy groups. This type of problem was designed for a homogenized geometry solver like DIF3D rather than the high-fidelity code DeCART. Instead of the homogenized block cross sections given in the benchmark, the VHTR-specific 238-group ENDF/B-VII.0 library of DeCART was directly used for preliminary calculations. Initial results showed that the multiplication factors of a fuel pin and a fuel block with or without a control rod hole were off by 6, -362, and -183 pcm Δk from comparable MCNP solutions, respectively. The 2-D and 3-D one-third core calculations were also conducted for the all-rods-out (ARO) and all-rods-in (ARI) configurations, producing reasonable results. Figure 1 illustrates the intermediate (1.5 eV - 17 keV) and thermal (below 1.5 eV) group flux distributions. As seen from VHTR cores with annular fuels, the intermediate group fluxes are relatively high in the fuel region, but the thermal group fluxes are higher in the inner and outer graphite reflector regions than in the fuel region.
To support the current project, a new three-year I-NERI collaboration involving ANL and KAERI was started in November 2011, focused on performing in-depth verification and validation of high-fidelity multi-physics simulation codes for LWR and VHTR. The work scope includes generating improved cross section libraries for the targeted reactor types, developing benchmark models for verification and validation of the neutronics code with or without thermo-fluid feedback, and performing detailed comparisons of predicted reactor parameters against both Monte Carlo solutions and experimental measurements. The following list summarizes the work conducted so far for PROTEUS-Thermal Tasks: (1) Unification of different versions of DeCART was initiated, and at the same time code modernization was conducted to make code unification efficient; (2) Regeneration of cross section libraries was attempted for the targeted reactor types, and the procedure for generating cross section libraries was updated by replacing CENTRM with MCNP for reference resonance integrals; (3) The MHTGR-350 benchmark core was simulated using DeCART with VHTR-specific 238-group ENDF/B-VII.0 library, and MCNP calculations were performed for comparison; and (4) Benchmark problems for PWR and BWR analysis were prepared for the DeCART verification/validation effort. In the coming months, the work listed above will be completed. Cross section libraries will be generated with optimized group structures for specific reactor types.
Use of integral experiments in support to the validation of JEFF-3.2 nuclear data evaluation
NASA Astrophysics Data System (ADS)
Leclaire, Nicolas; Cochet, Bertrand; Jinaphanh, Alexis; Haeck, Wim
2017-09-01
For many years now, IRSN has developed its own Monte Carlo continuous-energy capability, which allows testing various nuclear data libraries. To that end, a validation database of 1136 experiments was built from cases used for the validation of the APOLLO2-MORET 5 multigroup route of the CRISTAL V2.0 package. In this paper, the keff values obtained for more than 200 benchmarks using the JEFF-3.1.1 and JEFF-3.2 libraries are compared to benchmark keff values and the main discrepancies are analyzed with respect to the neutron spectrum. Special attention is paid to benchmarks for which the results changed substantially between the two JEFF-3 versions.
Benchmarking Discount Rate in Natural Resource Damage Assessment with Risk Aversion.
Wu, Desheng; Chen, Shuzhen
2017-08-01
Benchmarking a credible discount rate is of crucial importance in natural resource damage assessment (NRDA) and restoration evaluation. This article integrates a holistic framework of NRDA with prevailing low discount rate theory, and proposes a discount rate benchmarking decision support system based on service-specific risk aversion. The proposed approach has the flexibility of choosing appropriate discount rates for gauging long-term services, as opposed to decisions based simply on duration. It improves injury identification in NRDA since potential damages and side-effects to ecosystem services are revealed within the service-specific framework. A real embankment case study demonstrates valid implementation of the method. © 2017 Society for Risk Analysis.
Results of the GABLS3 diurnal-cycle benchmark for wind energy applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodrigo, J. Sanz; Allaerts, D.; Avila, M.
2017-06-13
We present results of the GABLS3 model intercomparison benchmark revisited for wind energy applications. The case consists of a diurnal cycle, measured at the 200-m tall Cabauw tower in the Netherlands, including a nocturnal low-level jet. The benchmark includes a sensitivity analysis of WRF simulations using two input meteorological databases and five planetary boundary-layer schemes. A reference set of mesoscale tendencies is used to drive microscale simulations using RANS k-ϵ and LES turbulence models. The validation is based on rotor-based quantities of interest. Cycle-integrated mean absolute errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Altogether, all the microscale simulations produce a consistent coupling with mesoscale forcings.
Multisynchronization of Chaotic Oscillators via Nonlinear Observer Approach
Aguilar-López, Ricardo; Martínez-Guerra, Rafael; Mata-Machuca, Juan L.
2014-01-01
The goal of this work is to synchronize a class of chaotic oscillators in a master-slave scheme, under different initial conditions, considering several slave systems. The Chen oscillator is employed as a benchmark model and a nonlinear observer is proposed to reach synchronicity between the master and the slave oscillators. The proposed observer contains a proportional and integral form of a bounded function of the synchronization error in order to provide asymptotic synchronization with a satisfactory performance. Numerical experiments were carried out to show the operation of the considered methodology. PMID:24578671
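A minimal numerical sketch of the scheme: the slave copies the Chen dynamics plus a proportional-integral correction passed through a bounded function (tanh) of the synchronization error. The gains, the full-state error injection, and the Euler integration are illustrative assumptions rather than the authors' observer design.

```python
import numpy as np

a, b, c = 35.0, 3.0, 28.0              # standard Chen oscillator parameters

def chen(s):
    x, y, z = s
    return np.array([a * (y - x), (c - a) * x - x * z + c * y, x * y - b * z])

def simulate(T=10.0, dt=1e-4, kp=800.0, ki=50.0):
    """Master-slave scheme: the slave copies the Chen dynamics plus a
    proportional-integral correction through a bounded function (tanh) of the
    synchronization error, echoing the paper's observer structure. The gains
    and coupling choice are illustrative, not the authors' tuned values."""
    master = np.array([1.0, 1.0, 1.0])
    slave = np.array([-5.0, 3.0, 10.0])  # different initial conditions
    integral = np.zeros(3)
    for _ in range(int(T / dt)):
        e = master - slave               # synchronization error
        integral += np.tanh(e) * dt
        u = kp * np.tanh(e) + ki * integral
        master += chen(master) * dt      # forward Euler for both systems
        slave += (chen(slave) + u) * dt
    return np.linalg.norm(master - slave)

print(simulate())   # error norm should be near zero after the transient
```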
Developing a dashboard for benchmarking the productivity of a medication therapy management program.
Umbreit, Audrey; Holm, Emily; Gander, Kelsey; Davis, Kelsie; Dittrich, Kristina; Jandl, Vanda; Odell, Laura; Sweeten, Perry
To describe a method for internal benchmarking of medication therapy management (MTM) pharmacist activities. Multisite MTM pharmacist practices within an integrated health care system. MTM pharmacists are located within primary care clinics and provide medication management through collaborative practice. MTM pharmacist activity is grouped into 3 categories: direct patient care, nonvisit patient care, and professional activities. MTM pharmacist activities were tracked with the use of the computer-based application Pharmacist Ambulatory Resource Management System (PhARMS) over a 12-month period to measure growth during a time of expansion. A total of 81% of MTM pharmacist time was recorded. A total of 1655.1 hours (41%) was nonvisit patient care, 1185.2 hours (29%) was direct patient care, and 1190.4 hours (30%) was professional activities. The number of patient visits per month increased during the study period. There were 1496 direct patient care encounters documented. Of those, 1051 (70.2%) were face-to-face visits, 257 (17.2%) were by telephone, and 188 (12.6%) were chart reviews. Nonvisit patient care and professional activities also increased during the period. PhARMS reported MTM pharmacist activities and captured nonvisit patient care work not tracked elsewhere. Internal benchmarking data proved to be useful for justifying increases in MTM pharmacist personnel resources. Reviewing data helped to identify best practices from high-performing sites. Limitations include potential for self-reporting bias and lack of patient outcomes data. Implementing PhARMS facilitated internal benchmarking of patient care and nonpatient care activities in a regional MTM program. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Mark; Brown, Jed; Shalf, John
2014-05-05
This document provides an overview of the benchmark – HPGMG – for ranking large scale general purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric HPL, some background of the Top500 list and the challenges of developing such a metric; we discuss our design philosophy and methodology, and an overview of the specification of the benchmark. The primary documentation with maintained details on the specification can be found at hpgmg.org and the Wiki and benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.
A large-scale benchmark of gene prioritization methods.
Guala, Dimitri; Sonnhammer, Erik L L
2017-04-21
In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in the construction of robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion, NetRank and two implementations of Random Walk with Restart, and MaxLink that utilizes network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
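The retrospective benchmarking idea can be illustrated with a toy leave-one-out experiment: hide one member of a known gene set, score candidates by a MaxLink-style count of direct links into the remaining seeds, and record the hidden gene's rank. The synthetic network below is entirely made up; it demonstrates only the evaluation loop, not FunCoup or the published performance measures.

```python
import itertools
import random

random.seed(0)

# Toy gene network: 200 genes with sparse random links, plus a "disease
# module" that is more densely connected internally.
genes = [f"g{i}" for i in range(200)]
module = set(genes[:20])
edges = set()
for u, v in itertools.combinations(genes, 2):
    p = 0.30 if (u in module and v in module) else 0.02
    if random.random() < p:
        edges.add((u, v))

def neighbors(g):
    return {v for u, v in edges if u == g} | {u for u, v in edges if v == g}

def maxlink_score(candidate, seeds):
    # MaxLink-style score: number of direct links into the seed set
    return len(neighbors(candidate) & seeds)

# Leave-one-out over the module: rank each held-out gene among all non-seeds
ranks = []
for held_out in module:
    seeds = module - {held_out}
    candidates = [g for g in genes if g not in seeds]
    scored = sorted(candidates, key=lambda g: -maxlink_score(g, seeds))
    ranks.append(scored.index(held_out) + 1)

print(sum(ranks) / len(ranks))   # mean rank out of 181; smaller is better
```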
Elastic and inelastic scattering of neutrons on 238U nucleus
NASA Astrophysics Data System (ADS)
Capote, R.; Trkov, A.; Sin, M.; Herman, M. W.; Soukhovitskiĩ, E. Sh.
2014-04-01
Advanced modelling of neutron induced reactions on the 238U nucleus is aimed at improving our knowledge of neutron scattering. Capture and fission channels are well constrained by available experimental data and neutron standard evaluation. The focus of this contribution is on elastic and inelastic scattering cross sections. The employed nuclear reaction model includes: a new rotational-vibrational dispersive optical model potential coupling the low-lying collective bands of vibrational character observed in even-even actinides; the Engelbrecht-Weidenmüller transformation, allowing for inclusion of compound-direct interference effects; and a multi-humped fission barrier with absorption in the secondary well, described within the optical model for fission. The impact of the advanced modelling on elastic and inelastic scattering cross sections, including angular distributions and emission spectra, is assessed both by comparison with selected microscopic experimental data and by integral criticality benchmarks including measured reaction rates (e.g. JEMIMA, FLAPTOP and BIG TEN). Benchmark calculations provided feedback to improve the reaction modelling. Improvement of existing libraries will be discussed.
Benchmarking: a method for continuous quality improvement in health.
Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe
2012-05-01
Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.
Using ontology databases for scalable query answering, inconsistency detection, and data integration
Dou, Dejing
2011-01-01
An ontology database is a basic relational database management system that models an ontology plus its instances. To reason over the transitive closure of instances in the subsumption hierarchy, for example, an ontology database can either unfold views at query time or propagate assertions using triggers at load time. In this paper, we use existing benchmarks to evaluate our method—using triggers—and we demonstrate that by forward computing inferences, we not only improve query time, but the improvement appears to cost only more space (not time). However, we go on to show that the true penalties were simply opaque to the benchmark, i.e., the benchmark inadequately captures load-time costs. We have applied our methods to two case studies in biomedicine, using ontologies and data from genetics and neuroscience to illustrate two important applications: first, ontology databases answer ontology-based queries effectively; second, using triggers, ontology databases detect instance-based inconsistencies—something not possible using views. Finally, we demonstrate how to extend our methods to perform data integration across multiple, distributed ontology databases. PMID:22163378
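The trigger-versus-view trade-off the paper measures can be sketched in a few lines. The toy Python below forward-chains instance assertions into every superclass extension at load time and contrasts that with on-demand traversal at query time; the class hierarchy is invented, and a real ontology database would express both strategies in SQL triggers and views.

```python
# Toy contrast between the two strategies described above: materialize
# subsumption inferences at load time (the trigger approach) versus walking
# the hierarchy at query time (the view approach).
subclass_of = {"dog": "mammal", "mammal": "animal", "cat": "mammal"}

def superclasses(cls):
    while cls in subclass_of:
        cls = subclass_of[cls]
        yield cls

# Load-time propagation: each assertion is forward-chained into every
# ancestor class's extension, so queries become simple lookups.
extensions = {}
def assert_instance(ind, cls):
    for c in [cls, *superclasses(cls)]:
        extensions.setdefault(c, set()).add(ind)

assert_instance("rex", "dog")
assert_instance("tom", "cat")
print(extensions["mammal"])          # {'rex', 'tom'} -- no work at query time

# Query-time unfolding: store only the asserted class, traverse on demand.
asserted = {"rex": "dog", "tom": "cat"}
def query(cls):
    return {i for i, c in asserted.items() if c == cls or cls in superclasses(c)}

print(query("animal"))               # same answer, more work per query
```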
NASA Astrophysics Data System (ADS)
Naafs, B. D. A.; Martínez-García, A.; Grützner, J.; Higgins, S.
2014-11-01
Integrated Ocean Drilling Program (IODP) Site U1313 is regarded as a benchmark site for Plio/Pleistocene North Atlantic palaeoceanography. In volume 93 of Quaternary Science Reviews, Lang et al. (2014) provide a record of terrigenous input across the Plio/Pleistocene estimated from variations in sedimentary lightness (L*). The paper provides an elegant addition to the growing number of high-resolution records from Site U1313. Although we support the majority of their findings, we disagree with the conclusion that "glacial grinding and transport of fine grained sediments to mid latitude outwash plains is not the fundamental mechanism controlling the magnitude of the flux of higher plant leaf waxes from North America to Site U1313 during iNHG", which is predominantly based on their observation that the relationship between L*-based terrigenous input and dust-derived biomarkers, which is linear at other sites (Martínez-Garcia et al., 2011), is non-linear at Site U1313.
Neutron Reference Benchmark Field Specification: ACRR Free-Field Environment (ACRR-FF-CC-32-CL).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.
2015-07-01
This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity free-field reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.
Benchmarking reference services: an introduction.
Marshall, J G; Buchanan, H S
1995-01-01
Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.
Computational Chemistry Comparison and Benchmark Database
National Institute of Standards and Technology Data Gateway
SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access) The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.
Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W
2017-08-28
The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is a widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that implants used are safe and effective. However, it is not currently clear how, or how many, implants should be statistically compared with a benchmark to assess whether or not that implant is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking, using a simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess if a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining if an implant meets the required performance defined by an external benchmark. Current contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. It is clear that, when benchmarking implant performance, net failure estimated using 1-KM is preferable to crude failure estimated by competing risk models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
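As a rough illustration of the sample-size point, the following simulation estimates the power of a one-sample non-inferiority z-test on crude failure proportions. The benchmark rate, margin, and test construction are assumptions chosen for the sketch, and it ignores the censoring that motivates the paper's preferred 1-Kaplan-Meier estimator.

```python
import numpy as np
from scipy.stats import norm

def ni_power(n, p_true, p_bench, margin, alpha=0.05, sims=20000, seed=0):
    """Simulated power of a one-sample non-inferiority z-test: H0 says the
    implant's failure probability exceeds benchmark + margin. Crude-proportion
    version only; the paper's preferred 1-Kaplan-Meier handles censoring,
    which this sketch deliberately ignores. All rates are illustrative."""
    rng = np.random.default_rng(seed)
    p0 = p_bench + margin                        # non-inferiority boundary
    p_hat = rng.binomial(n, p_true, size=sims) / n
    se = np.sqrt(p_hat * (1 - p_hat) / n)
    se[se == 0] = 1.0 / n                        # guard against zero failures
    z = (p_hat - p0) / se
    return np.mean(z < -norm.ppf(1 - alpha))     # reject H0 => non-inferior

# hypothetical 5% benchmark failure rate, 0.5% margin, implant truly on par:
# power stays modest even at thousands of procedures
for n in (800, 3200, 12800):
    print(n, ni_power(n, p_true=0.05, p_bench=0.05, margin=0.005))
```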
ERIC Educational Resources Information Center
Herman, Joan L.; Baker, Eva L.
2005-01-01
Many schools are moving to develop benchmark tests to monitor their students' progress toward state standards throughout the academic year. Benchmark tests can provide the ongoing information that schools need to guide instructional programs and to address student learning problems. The authors discuss six criteria that educators can use to…
Simulation of networks of spiking neurons: A review of tools and strategies
Brette, Romain; Rudolph, Michelle; Carnevale, Ted; Hines, Michael; Beeman, David; Bower, James M.; Diesmann, Markus; Morrison, Abigail; Goodman, Philip H.; Harris, Frederick C.; Zirpe, Milind; Natschläger, Thomas; Pecevski, Dejan; Ermentrout, Bard; Djurfeldt, Mikael; Lansner, Anders; Rochel, Olivier; Vieville, Thierry; Muller, Eilif; Davison, Andrew P.; El Boustani, Sami
2009-01-01
We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We overview different simulators and simulation environments presently available (restricted to those freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with an aim to allow the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin–Huxley type, integrate-and-fire models, interacting with current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool to use for a given modeling problem related to spiking neural networks. PMID:17629781
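The benchmark models reviewed here lend themselves to a compact demonstration. Below is a minimal clock-driven simulation of a current-based leaky integrate-and-fire network in Python; all parameters (time constant, thresholds, weight scale, drive) are generic textbook values assumed for illustration, not the benchmark's published specification.

```python
import numpy as np

def lif_network(n=100, t_max=0.5, dt=1e-4, seed=0):
    """Clock-driven simulation of a small current-based leaky
    integrate-and-fire network, in the spirit of the review's benchmark
    models. Parameters are generic assumed values."""
    rng = np.random.default_rng(seed)
    tau, v_rest, v_thresh, v_reset = 0.020, -65e-3, -50e-3, -65e-3
    w = rng.normal(0.0, 0.4e-3, (n, n))       # synaptic weights (volts/spike)
    v = v_rest + rng.uniform(0, 15e-3, n)     # random initial potentials
    i_ext = 16e-3                             # constant drive (volts, i.e. R*I)
    spikes = []
    for step in range(int(t_max / dt)):
        fired = v >= v_thresh
        spikes.extend((step * dt, i) for i in np.where(fired)[0])
        v[fired] = v_reset                    # reset spiking neurons
        # forward Euler step of dv/dt = (v_rest - v + i_ext)/tau
        v += dt * (v_rest - v + i_ext) / tau
        v += w @ fired                        # instantaneous current-based PSPs
    return spikes

spikes = lif_network()
print(len(spikes), "spikes;", spikes[:3])
```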
Technical Report: Installed Cost Benchmarks and Deployment Barriers for Residential Solar Photovoltaics with Energy Storage (Q1 2016)
Researchers from NREL published a report that provides detailed component and system-level installed cost breakdowns for residential solar photovoltaic systems with energy storage.
ERIC Educational Resources Information Center
Canadian Health Libraries Association.
Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…
Beyond Benchmarking: Value-Adding Metrics
ERIC Educational Resources Information Center
Fitz-enz, Jac
2007-01-01
HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…
42 CFR 440.320 - State plan requirements: Optional enrollment for exempt individuals.
Code of Federal Regulations, 2010 CFR
2010-10-01
... benchmark or benchmark-equivalent benefit package, the State must effectively inform the individual prior to...-equivalent benefit package and the costs under such a package and provide a comparison of how they differ... benchmark-equivalent benefit package. (4) For individuals who the State determines have become exempt...
42 CFR 440.320 - State plan requirements: Optional enrollment for exempt individuals.
Code of Federal Regulations, 2011 CFR
2011-10-01
... benchmark or benchmark-equivalent benefit package, the State must effectively inform the individual prior to...-equivalent benefit package and the costs under such a package and provide a comparison of how they differ... benchmark-equivalent benefit package. (4) For individuals who the State determines have become exempt...
42 CFR 440.320 - State plan requirements: Optional enrollment for exempt individuals.
Code of Federal Regulations, 2012 CFR
2012-10-01
... benchmark or benchmark-equivalent benefit package, the State must effectively inform the individual prior to...-equivalent benefit package and the costs under such a package and provide a comparison of how they differ... benchmark-equivalent benefit package. (4) For individuals who the State determines have become exempt...
42 CFR 440.320 - State plan requirements: Optional enrollment for exempt individuals.
Code of Federal Regulations, 2013 CFR
2013-10-01
... benchmark or benchmark-equivalent benefit package, the State must effectively inform the individual prior to...-equivalent benefit package and the costs under such a package and provide a comparison of how they differ... benchmark-equivalent benefit package. (4) For individuals who the State determines have become exempt...
42 CFR 440.320 - State plan requirements: Optional enrollment for exempt individuals.
Code of Federal Regulations, 2014 CFR
2014-10-01
... benchmark or benchmark-equivalent benefit package, the State must effectively inform the individual prior to...-equivalent benefit package and the costs under such a package and provide a comparison of how they differ... benchmark-equivalent benefit package. (4) For individuals who the State determines have become exempt...
A benchmark for fault tolerant flight control evaluation
NASA Astrophysics Data System (ADS)
Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.
2013-12-01
A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.
NASA Astrophysics Data System (ADS)
Jonas, S.; Murtagh, W. J.; Clarke, S. W.
2017-12-01
The National Space Weather Action Plan identifies approximately 100 distinct activities across six strategic goals. Many of these activities depend on the identification of a series of benchmarks that describe the physical characteristics of space weather events on or near Earth. My talk will provide an overview of Goal 1 (Establish Benchmarks for Space-Weather Events) of the National Space Weather Action Plan, serving as an introduction to the panel presentations and discussions.
Validation of Shielding Analysis Capability of SuperMC with SINBAD
NASA Astrophysics Data System (ADS)
Chen, Chaobin; Yang, Qi; Wu, Bin; Han, Yuncheng; Song, Jing
2017-09-01
The shielding analysis capability of SuperMC was validated with the Shielding Integral Benchmark Archive Database (SINBAD). SINBAD, compiled by RSICC and NEA, includes numerous benchmark experiments performed with the D-T fusion neutron source facilities of OKTAVIAN, FNS, IPPE, etc. The results from SuperMC simulations were compared with experimental data and MCNP results. Very good agreement, with deviations below 1%, was achieved, suggesting that SuperMC is reliable in shielding calculations.
Manktelow, Bradley N; Seaton, Sarah E; Evans, T Alun
2016-12-01
There is an increasing use of statistical methods, such as funnel plots, to identify poorly performing healthcare providers. Funnel plots comprise the construction of control limits around a benchmark, and providers with outcomes falling outside the limits are investigated as potential outliers. The benchmark is usually estimated from observed data but uncertainty in this estimate is usually ignored when constructing control limits. In this paper, the use of funnel plots in the presence of uncertainty in the value of the benchmark is reviewed for outcomes from a Binomial distribution. Two methods to derive the control limits are shown: (i) prediction intervals; (ii) tolerance intervals. Tolerance intervals formally include the uncertainty in the value of the benchmark while prediction intervals do not. The probability properties of 95% control limits derived using each method were investigated through hypothesised scenarios. Neither prediction intervals nor tolerance intervals produce funnel plot control limits that satisfy the nominal probability characteristics when there is uncertainty in the value of the benchmark. This is not necessarily to say that funnel plots have no role to play in healthcare, but that without the development of intervals satisfying the nominal probability characteristics they must be interpreted with care. © The Author(s) 2014.
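The distinction between the two interval types can be sketched numerically. The Python fragment below computes exact binomial prediction limits around a fixed benchmark, then Monte Carlo limits that also propagate benchmark uncertainty through an assumed Beta posterior; the Beta model and the quantile construction are illustrative choices, not the paper's exact tolerance-interval formulas.

```python
import numpy as np
from scipy.stats import beta, binom

def prediction_limits(p0, ns, alpha=0.05):
    """Exact binomial prediction limits around a fixed benchmark p0 --
    the usual funnel, which treats p0 as known."""
    lo = binom.ppf(alpha / 2, ns, p0) / ns
    hi = binom.ppf(1 - alpha / 2, ns, p0) / ns
    return lo, hi

def tolerance_limits(x0, n0, ns, alpha=0.05, draws=20000, seed=0):
    """Monte Carlo tolerance-style limits that also propagate uncertainty in
    the benchmark, here via a Beta posterior for p0 estimated from x0 events
    in n0 trials. A sketch of the idea, not the paper's exact derivation."""
    rng = np.random.default_rng(seed)
    p_draws = beta.rvs(x0 + 0.5, n0 - x0 + 0.5, size=draws, random_state=rng)
    lo, hi = [], []
    for n in ns:
        y = rng.binomial(n, p_draws) / n        # mix over benchmark uncertainty
        lo.append(np.quantile(y, alpha / 2))
        hi.append(np.quantile(y, 1 - alpha / 2))
    return np.array(lo), np.array(hi)

ns = np.array([50, 200, 800])
print(prediction_limits(0.10, ns))
print(tolerance_limits(x0=100, n0=1000, ns=ns))  # wider: benchmark uncertain
```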
Electrochemical Characterization Laboratory | Energy Systems Integration
Capabilities include the determination and benchmarking of novel electrocatalyst activity for proton exchange membrane fuel cells.
Integral measurements of neutron and gamma-ray leakage fluxes from the Little Boy replica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muckenthaler, F.J.
This report presents integral measurements of neutron and gamma-ray leakage fluxes from a critical mockup of the Hiroshima bomb Little Boy at Los Alamos National Laboratory with detector systems developed by Oak Ridge National Laboratory. Bonner ball detectors were used to map the neutron fluxes in the horizontal midplane at various distances from the mockup and for selected polar angles, keeping the source-detector separation constant. Gamma-ray energy deposition measurements were made with thermoluminescent detectors at several locations on the iron shell of the source mockup. The measurements were performed as part of a larger program to provide benchmark data for testing the methods used to calculate the radiation released from the Little Boy bomb over Hiroshima. 3 references, 10 figures.
42 CFR 440.345 - EPSDT services requirement.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the additional benefits will be provided, how access to additional benefits will be coordinated and... (CONTINUED) MEDICAL ASSISTANCE PROGRAMS SERVICES: GENERAL PROVISIONS Benchmark Benefit and Benchmark... plan benefits or as additional benefits provided by the State for any child under 21 years of age...
Method and system for benchmarking computers
Gustafson, John L.
1993-09-14
A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
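The patent's fixed-time idea inverts the usual benchmark: rather than timing a fixed workload, it fixes the time budget and measures progress through a scalable task set. A minimal sketch follows, with a Leibniz-series "task" standing in for the patented workload; the interval length and the choice of task are assumptions for illustration.

```python
import time

def scalable_benchmark(interval_s=1.0):
    """Fixed-time benchmarking in the spirit of the patent: every machine gets
    the same time budget and is rated by how far it progresses through a
    scalable task set (here, terms of the Leibniz series for pi, so more
    progress means a higher-resolution answer). Illustrative only."""
    deadline = time.perf_counter() + interval_s
    k, total = 0, 0.0
    while time.perf_counter() < deadline:
        total += (-1.0) ** k / (2 * k + 1)   # next term at finer resolution
        k += 1
    return k, 4.0 * total                    # rating = degree of progress

terms, pi_est = scalable_benchmark()
print(f"{terms} terms in the interval; pi ~= {pi_est:.6f}")
```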
Comparing the performance of two CBIRS indexing schemes
NASA Astrophysics Data System (ADS)
Mueller, Wolfgang; Robbert, Guenter; Henrich, Andreas
2003-01-01
Content based image retrieval (CBIR) as it is known today has to deal with a number of challenges. Quickly summarized, the main challenges are, first, to bridge the semantic gap between high-level concepts and low-level features using feedback and, second, to provide performance under adverse conditions. High-dimensional spaces, as well as a demanding machine learning task, make the right way of indexing an important issue. When indexing multimedia data, most groups opt for extraction of high-dimensional feature vectors from the data, followed by dimensionality reduction like PCA (Principal Components Analysis) or LSI (Latent Semantic Indexing). The resulting vectors are indexed using spatial indexing structures such as kd-trees or R-trees, for example. Other projects, such as MARS and Viper, propose the adaptation of text indexing techniques, notably the inverted file. Here, the Viper system is the most direct adaptation of text retrieval techniques to quantized vectors. However, while the Viper query engine provides decent performance together with impressive user-feedback behavior, as well as the possibility for easy integration of long-term learning algorithms, and support for potentially infinite feature vectors, there has been no comparison of vector-based methods and inverted-file-based methods under similar conditions. In this publication, we compare a CBIR query engine that uses inverted files (Bothrops, a rewrite of the Viper query engine based on a relational database) and a CBIR query engine based on LSD (Local Split Decision) trees for spatial indexing, using the same feature sets. The Benchathlon initiative works on providing a set of images and ground truth for simulating image queries by example and corresponding user feedback. When performing the Benchathlon benchmark on a CBIR system (the System Under Test, SUT), a benchmarking harness connects over the internet to the SUT, performing a number of queries using an agreed-upon protocol, the multimedia retrieval markup language (MRML). Using this benchmark one can measure the quality of retrieval, as well as the overall (speed) performance of the benchmarked system. Our benchmarks will draw on the Benchathlon's work for documenting the retrieval performance of both inverted-file-based and LSD-tree-based techniques. However, in addition to these results, we will present statistics that can be obtained only inside the system under test. These statistics will include the number of complex mathematical operations, as well as the amount of data that has to be read from disk during operation of a query.
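The inverted-file approach being compared can be sketched compactly: quantize each feature vector into discrete terms, index images by term, and score queries by accumulating weights over matching posting lists only. Everything below (the quantizer, the weights, the data) is a toy stand-in, not the Bothrops or Viper implementation.

```python
from collections import defaultdict

index = defaultdict(list)                  # term -> [(image_id, weight)]

def quantize(vector, step=0.25):
    # each (dimension, bin) pair becomes a discrete "visual term";
    # non-positive components are simply dropped in this toy quantizer
    return {(i, int(v / step)) for i, v in enumerate(vector) if v > 0}

def add_image(image_id, vector):
    for term in quantize(vector):
        index[term].append((image_id, 1.0))

def query(vector):
    scores = defaultdict(float)
    for term in quantize(vector):          # touch only the posting lists needed
        for image_id, w in index[term]:
            scores[image_id] += w
    return sorted(scores.items(), key=lambda kv: -kv[1])

add_image("a", [0.9, 0.1, 0.0, 0.5])
add_image("b", [0.0, 0.8, 0.7, 0.0])
print(query([0.9, 0.0, 0.0, 0.4]))         # image "a" should rank first
```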
Benchmarking specialty hospitals, a scoping review on theory and practice.
Wind, A; van Harten, W H
2017-04-04
Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category; or those dealing with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or if quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design, and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed including a follow up to check whether the benchmark study has led to improvements.
Key performance indicators to benchmark hospital information systems - a delphi study.
Hübner-Bloder, G; Ammenwerth, E
2009-01-01
To identify the key performance indicators for hospital information systems (HIS) that can be used for HIS benchmarking. A Delphi survey with one qualitative and two quantitative rounds. Forty-four HIS experts from health care IT practice and academia participated in all three rounds. Seventy-seven performance indicators were identified and organized into eight categories: technical quality, software quality, architecture and interface quality, IT vendor quality, IT support and IT department quality, workflow support quality, IT outcome quality, and IT costs. The highest ranked indicators are related to clinical workflow support and user satisfaction. Isolated technical indicators or cost indicators were not seen as useful. The experts favored an interdisciplinary group of all the stakeholders, led by hospital management, to conduct the HIS benchmarking. They proposed benchmarking activities both at regular (annual) intervals and at defined events (for example after IT introduction). Most of the experts stated that in their institutions no HIS benchmarking activities are being performed at the moment. In the context of IT governance, IT benchmarking is gaining importance in the healthcare area. The indicators found reflect the view of health care IT professionals and researchers. Research is needed to further validate and operationalize key performance indicators, to provide an IT benchmarking framework, and to provide open repositories for a comparison of the HIS benchmarks of different hospitals.
TRUST. I. A 3D externally illuminated slab benchmark for dust radiative transfer
NASA Astrophysics Data System (ADS)
Gordon, K. D.; Baes, M.; Bianchi, S.; Camps, P.; Juvela, M.; Kuiper, R.; Lunttila, T.; Misselt, K. A.; Natale, G.; Robitaille, T.; Steinacker, J.
2017-07-01
Context. The radiative transport of photons through arbitrary three-dimensional (3D) structures of dust is a challenging problem due to the anisotropic scattering of dust grains and strong coupling between different spatial regions. The radiative transfer problem in 3D is solved using Monte Carlo or Ray Tracing techniques as no full analytic solution exists for the true 3D structures. Aims: We provide the first 3D dust radiative transfer benchmark composed of a slab of dust with uniform density externally illuminated by a star. This simple 3D benchmark is explicitly formulated to provide tests of the different components of the radiative transfer problem including dust absorption, scattering, and emission. Methods: The details of the external star, the slab itself, and the dust properties are provided. This benchmark includes models with a range of dust optical depths fully probing cases that are optically thin at all wavelengths to optically thick at most wavelengths. The dust properties adopted are characteristic of the diffuse Milky Way interstellar medium. This benchmark includes solutions for the full dust emission including single photon (stochastic) heating as well as two simplifying approximations: One where all grains are considered in equilibrium with the radiation field and one where the emission is from a single effective grain with size-distribution-averaged properties. A total of six Monte Carlo codes and one Ray Tracing code provide solutions to this benchmark. Results: The solution to this benchmark is given as global spectral energy distributions (SEDs) and images at select diagnostic wavelengths from the ultraviolet through the infrared. Comparison of the results revealed that the global SEDs are consistent on average to a few percent for all but the scattered stellar flux at very high optical depths. The image results are consistent within 10%, again except for the stellar scattered flux at very high optical depths. The lack of agreement between different codes of the scattered flux at high optical depths is quantified for the first time. Convergence tests using one of the Monte Carlo codes illustrate the sensitivity of the solutions to various model parameters. Conclusions: We provide the first 3D dust radiative transfer benchmark and validate the accuracy of this benchmark through comparisons between multiple independent codes and detailed convergence tests.
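The Monte Carlo machinery being benchmarked can be illustrated with a grey, isotropic toy version: sample exponential optical paths through a uniform slab, scatter or absorb according to a single-scattering albedo, and tally where photons end up. The real benchmark adds anisotropic dust scattering, wavelength dependence, and dust emission, none of which this sketch attempts.

```python
import numpy as np

def slab_monte_carlo(tau_total, albedo, n_photons=50_000, seed=0):
    """Minimal Monte Carlo transfer through a uniform slab: sample optical
    path lengths, scatter isotropically with single-scattering albedo, and
    tally transmitted/reflected/absorbed fractions. Illustrative parameters;
    not the benchmark's dust model."""
    rng = np.random.default_rng(seed)
    trans = refl = absorbed = 0
    for _ in range(n_photons):
        tau_pos, mu = 0.0, 1.0                 # enter the slab heading inward
        while True:
            tau_pos += mu * rng.exponential()  # optical depth to next event
            if tau_pos >= tau_total:
                trans += 1; break              # escaped through the far face
            if tau_pos <= 0.0:
                refl += 1; break               # escaped back out the near face
            if rng.random() > albedo:
                absorbed += 1; break           # absorbed at the interaction
            mu = rng.uniform(-1.0, 1.0)        # isotropic scattering direction
    n = n_photons
    return trans / n, refl / n, absorbed / n

print(slab_monte_carlo(tau_total=1.0, albedo=0.6))
```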
ERIC Educational Resources Information Center
Rubin, Allen; Washburn, Micki; Schieszler, Christine
2017-01-01
Purpose: This article provides benchmark data on within-group effect sizes from published randomized clinical trials (RCTs) supporting the efficacy of trauma-focused cognitive behavioral therapy (TF-CBT) for traumatized children. Methods: Within-group effect-size benchmarks for symptoms of trauma, anxiety, and depression were calculated via the…
ERIC Educational Resources Information Center
Ellis, Robert A.; Moore, Roger R.
2006-01-01
This study discusses benchmarking the use of information and communication technologies (ICT) in teaching and learning between two universities with different missions: one an Australian campus-based metropolitan university and the other a British distance-education provider. It argues that the differences notwithstanding, it is possible to…
Outcome Benchmarks for Adaptations of Research-Supported Treatments for Adult Traumatic Stress
ERIC Educational Resources Information Center
Rubin, Allen; Parrish, Danielle E.; Washburn, Micki
2016-01-01
This article provides benchmark data on within-group effect sizes from published randomized controlled trials (RCTs) that evaluated the efficacy of research-supported treatments (RSTs) for adult traumatic stress. Agencies can compare these benchmarks to their treatment group effect size to inform their decisions as to whether the way they are…
Bini, Barbara; Ruggieri, Tommaso Grillo; Piaggesi, Alberto; Ricci, Lucia
2016-01-01
Introduction and Background: As diabetic foot (DF) care benefits from integration, monitoring geographic variations in lower limb Major Amputation rate enables to highlight potential lack of Integrated Care. In Tuscany (Italy), these DF outcomes were good on average but they varied within the region. In order to stimulate an improvement process towards integration, the project aimed to shift health professionals’ focus on the geographic variation issue, promote the Population Medicine approach, and engage professionals in a community of practice. Method: Three strategies were thus carried out: the use of a transparent performance evaluation system based on benchmarking; the use of patient stories and benchmarking analyses on outcomes, service utilization and costs that cross-checked delivery- and population-based perspectives; the establishment of a stable community of professionals to discuss data and practices. Results: The project enabled professionals to shift their focus on geographic variation and to a joint accountability on outcomes and costs for the entire patient pathways. Organizational best practices and gaps in integration were identified and improvement actions towards Integrated Care were implemented. Conclusion and Discussion: For the specific category of care pathways whose geographic variation is related to a lack of Integrated Care, a comprehensive strategy to improve outcomes and reduce equity gaps by diffusing integration should be carried out. PMID:29042842
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.
2015-07-01
This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the 44 inch Lead-Boron (LB44) bucket, reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.
The national hydrologic bench-mark network
Cobb, Ernest D.; Biesecker, J.E.
1971-01-01
The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.
Shift Verification and Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G
2016-09-07
This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and other simulated Monte Carlo radiation transport code results, and found very good agreement in a variety of comparison measures. These include prediction of critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation of Shift, we are confident in Shift to provide reference results for CASL benchmarking.
Benchmark matrix and guide: Part II.
1991-01-01
In the last issue of the Journal of Quality Assurance (September/October 1991, Volume 13, Number 5, pp. 14-19), the benchmark matrix developed by Headquarters Air Force Logistics Command was published. Five horizontal levels on the matrix delineate progress in TQM: business as usual, initiation, implementation, expansion, and integration. The six vertical categories that are critical to the success of TQM are leadership, structure, training, recognition, process improvement, and customer focus. In this issue, "Benchmark Matrix and Guide: Part II" will show specifically how to apply the categories of leadership, structure, and training to the benchmark matrix progress levels. At the intersection of each category and level, specific behavior objectives are listed with supporting behaviors and guidelines. Some categories will have objectives that are relatively easy to accomplish, allowing quick progress from one level to the next. Other categories will take considerable time and effort to complete. In the next issue, Part III of this series will focus on recognition, process improvement, and customer focus.
Particle swarm optimization with recombination and dynamic linkage discovery.
Chen, Ying-Ping; Peng, Wen-Chih; Jian, Ming-Chung
2007-12-01
In this paper, we try to improve the performance of the particle swarm optimizer by incorporating the linkage concept, which is an essential mechanism in genetic algorithms, and design a new linkage identification technique called dynamic linkage discovery to address the linkage problem in real-parameter optimization problems. Dynamic linkage discovery is a costless and effective linkage recognition technique that adapts the linkage configuration by employing only the selection operator without extra judging criteria irrelevant to the objective function. Moreover, a recombination operator that utilizes the discovered linkage configuration to promote the cooperation of particle swarm optimizer and dynamic linkage discovery is accordingly developed. By integrating the particle swarm optimizer, dynamic linkage discovery, and recombination operator, we propose a new hybridization of optimization methodologies called particle swarm optimization with recombination and dynamic linkage discovery (PSO-RDL). In order to study the capability of PSO-RDL, numerical experiments were conducted on a set of benchmark functions as well as on an important real-world application. The benchmark functions used in this paper were proposed in the 2005 Institute of Electrical and Electronics Engineers Congress on Evolutionary Computation. The experimental results on the benchmark functions indicate that PSO-RDL can provide a level of performance comparable to that given by other advanced optimization techniques. In addition to the benchmark, PSO-RDL was also used to solve the economic dispatch (ED) problem for power systems, which is a real-world problem and highly constrained. The results indicate that PSO-RDL can successfully solve the ED problem for the three-unit power system and obtain the currently known best solution for the 40-unit system.
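As a rough illustration of the baseline algorithm that PSO-RDL builds on, the sketch below implements a plain global-best particle swarm optimizer in Python. It omits dynamic linkage discovery and the recombination operator described in the abstract, and all parameter values (inertia w, acceleration constants c1 and c2, swarm size) are illustrative defaults, not those of the paper.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimize f over [lo, hi]^dim with a basic global-best PSO."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()        # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, f(g)

# Example: the sphere function, a common benchmark objective.
best_x, best_val = pso(lambda z: float(np.sum(z * z)), dim=10)
print(best_val)
```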
Daniels, Norman; Flores, Walter; Pannarunothai, Supasit; Ndumbe, Peter N.; Bryant, John H.; Ngulube, T. J.; Wang, Yuankun
2005-01-01
The Benchmarks of Fairness instrument is an evidence-based policy tool developed in generic form in 2000 for evaluating the effects of health-system reforms on equity, efficiency and accountability. By integrating measures of these effects on the central goal of fairness, the approach fills a gap that has hampered reform efforts for more than two decades. Over the past three years, projects in developing countries on three continents have adapted the generic version of these benchmarks for use at both national and subnational levels. Interdisciplinary teams of managers, providers, academics and advocates agree on the relevant criteria for assessing components of fairness and, depending on which aspects of reform they wish to evaluate, select appropriate indicators that rely on accessible information; they also agree on scoring rules for evaluating the diverse changes in the indicators. In contrast to a comprehensive index that aggregates all measured changes into a single evaluation or rank, the pattern of changes revealed by the benchmarks is used to inform policy deliberation about which aspects of the reforms have been successfully implemented, and it also allows for improvements to be made in the reforms. This approach permits useful evidence about reform to be gathered in settings where existing information is underused and where there is a weak information infrastructure. Brief descriptions of early results from Cameroon, Ecuador, Guatemala, Thailand and Zambia demonstrate that the method can produce results that are useful for policy and reveal the variety of purposes to which the approach can be put. Collaboration across sites can yield a catalogue of indicators that will facilitate further work. PMID:16175828
Ontology for Semantic Data Integration in the Domain of IT Benchmarking.
Pfaff, Matthias; Neubig, Stefan; Krcmar, Helmut
2018-01-01
A domain-specific ontology for IT benchmarking has been developed to bridge the gap between a systematic characterization of IT services and their data-based valuation. Since information is generally collected during a benchmark exercise using questionnaires on a broad range of topics, such as employee costs, software licensing costs, and quantities of hardware, it is commonly stored as natural language text; thus, this information is stored in an intrinsically unstructured form. Although these data form the basis for identifying potentials for IT cost reductions, neither a uniform description of any measured parameters nor the relationship between such parameters exists. Hence, this work proposes an ontology for the domain of IT benchmarking, available at https://w3id.org/bmontology. The design of this ontology is based on requirements mainly elicited from a domain analysis, which considers analyzing documents and interviews with representatives from Small- and Medium-Sized Enterprises and Information and Communications Technology companies over the last eight years. The development of the ontology and its main concepts is described in detail (i.e., the conceptualization of benchmarking events, questionnaires, IT services, indicators and their values) together with its alignment with the DOLCE-UltraLite foundational ontology.
BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...
The purpose of this document is to provide guidance for the Agency on the application of the benchmark dose approach in determining the point of departure (POD) for health effects data, whether a linear or nonlinear low dose extrapolation is used. The guidance includes discussion on computation of benchmark doses and benchmark concentrations (BMDs and BMCs) and their lower confidence limits, data requirements, dose-response analysis, and reporting requirements. This guidance is based on today's knowledge and understanding, and on experience gained in using this approach.
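For readers unfamiliar with the central quantity being guided here: for dichotomous data, the benchmark dose is commonly defined through extra risk, sketched below. The example BMR value is an illustration, not a prescription from the guidance document.

```latex
\frac{P(\mathrm{BMD}) - P(0)}{1 - P(0)} = \mathrm{BMR},
```

where P(d) is the modeled probability of response at dose d and BMR is a preselected benchmark response (e.g., 0.10). The BMDL is the lower one-sided confidence limit on the BMD and typically serves as the point of departure.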
Benchmarking for Excellence and the Nursing Process
NASA Technical Reports Server (NTRS)
Sleboda, Claire
1999-01-01
Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-22
...-AA73 International Services Surveys: BE-180, Benchmark Survey of Financial Services Transactions Between U.S. Financial Services Providers and Foreign Persons. AGENCY: Bureau of Economic Analysis...
Within-Group Effect-Size Benchmarks for Problem-Solving Therapy for Depression in Adults
ERIC Educational Resources Information Center
Rubin, Allen; Yu, Miao
2017-01-01
This article provides benchmark data on within-group effect sizes from published randomized clinical trials that supported the efficacy of problem-solving therapy (PST) for depression among adults. Benchmarks are broken down by type of depression (major or minor), type of outcome measure (interview or self-report scale), whether PST was provided…
ERIC Educational Resources Information Center
Furbish, Dale S.; Bailey, Robyn; Trought, David
2016-01-01
Benchmarks for career development services at tertiary institutions have been developed by Careers New Zealand. The benchmarks are intended to provide standards derived from international best practices to guide career development services. A new career development service was initiated at a large New Zealand university just after the benchmarks…
ERIC Educational Resources Information Center
Clark, Hope
2013-01-01
In this report, ACT presents a definition of "work readiness" along with empirically driven ACT Work Readiness Standards and Benchmarks. The introduction of standards and benchmarks for workplace success provides a more complete picture of the factors that are important in establishing readiness for success throughout a lifetime. While…
Implementing Cognitive Strategy Instruction across the School: The Benchmark Manual for Teachers.
ERIC Educational Resources Information Center
Gaskins, Irene; Elliot, Thorne
Improving reading instruction has been the primary focus at the Benchmark School in Media, Pennsylvania. This book describes the various phases of Benchmark's development of a program to create strategic learners, thinkers, and problem solvers across the curriculum. The goal is to provide teachers and administrators with a handbook that can be…
A Field-Based Aquatic Life Benchmark for Conductivity in ...
EPA announced the availability of the final report, A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams. This report describes a method to characterize the relationship between the extirpation (the effective extinction) of invertebrate genera and salinity (measured as conductivity) and from that relationship derives a freshwater aquatic life benchmark. This benchmark of 300 µS/cm may be applied to waters in Appalachian streams that are dominated by calcium and magnesium salts of sulfate and bicarbonate at circum-neutral to mildly alkaline pH. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.
The Isprs Benchmark on Indoor Modelling
NASA Astrophysics Data System (ADS)
Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.
2017-09-01
Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.
Benchmarking in Academic Pharmacy Departments
Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O.; Ross, Leigh Ann
2010-01-01
Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation. PMID:21179251
ERIC Educational Resources Information Center
Lewin, Heather S.; Passonneau, Sarah M.
2012-01-01
This research provides the first review of publicly available assessment information found on Association of Research Libraries (ARL) members' websites. After providing an overarching review of benchmarking assessment data, and of professionally recommended assessment models, this paper examines if libraries contextualized their assessment…
Security and confidentiality of health information systems: implications for physicians.
Dorodny, V S
1998-01-01
Adopting and developing the new generation of information systems will be essential to remain competitive in a quality conscious health care environment. These systems enable physicians to document patient encounters and aggregate the information from the population they treat, while capturing detailed data on chronic medical conditions, medications, treatment plans, risk factors, severity of conditions, and health care resource utilization and management. Today, the knowledge-based information systems should offer instant, around-the-clock access for the provider, support simple order entry, facilitate data capture and retrieval, and provide eligibility verification, electronic authentication, prescription writing, security, and reporting that benchmarks outcomes management based upon clinical/financial decisions and treatment plans. It is an integral part of any information system to incorporate and integrate transactional (financial/administrative) information, as well as analytical (clinical/medical) data in a user-friendly, readily accessible, and secure form. This article explores the technical, financial, logistical, and behavioral obstacles on the way to the Promised Land.
Hot Cell Installation and Demonstration of the Severe Accident Test Station
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linton, Kory D.; Burns, Zachary M.; Terrani, Kurt A.
A Severe Accident Test Station (SATS) capable of examining the oxidation kinetics and accident response of irradiated fuel and cladding materials for design basis accident (DBA) and beyond design basis accident (BDBA) scenarios has been successfully installed and demonstrated in the Irradiated Fuels Examination Laboratory (IFEL), a hot cell facility at Oak Ridge National Laboratory. The two test station modules provide various temperature profiles, steam, and the thermal shock conditions necessary for integral loss of coolant accident (LOCA) testing, defueled oxidation quench testing and high temperature BDBA testing. The installation of the SATS system restores the domestic capability to examine postulated and extended LOCA conditions on spent fuel and cladding and provides a platform for evaluation of advanced fuel and accident tolerant fuel (ATF) cladding concepts. This document reports on the successful in-cell demonstration testing of unirradiated Zircaloy-4. It also contains descriptions of the integral test facility capabilities, installation activities, and out-of-cell benchmark testing to calibrate and optimize the system.
Neutron skyshine calculations with the integral line-beam method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gui, A.A.; Shultis, J.K.; Faw, R.E.
1997-10-01
Recently developed line- and conical-beam response functions are used to calculate neutron skyshine doses for four idealized source geometries. These calculations, which can serve as benchmarks, are compared with MCNP calculations, and the excellent agreement indicates that the integral conical- and line-beam method is an effective alternative to more computationally expensive transport calculations.
A Web-Based System for Bayesian Benchmark Dose Estimation.
Shao, Kan; Shapiro, Andrew J
2018-01-11
Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations and estimates. The BBMD system is a useful alternative tool for estimating BMD with additional functionalities for BMD analysis based on most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend for probabilistic risk assessment. https://doi.org/10.1289/EHP1289.
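The BBMD system itself is a web application, but the Bayesian mechanics it describes can be sketched compactly. The following minimal Python example, with hypothetical dose-response data and a quantal-linear model chosen because its BMD has a closed form, runs a random-walk Metropolis sampler and reports a posterior BMD distribution. It is an illustration of the general approach, not the BBMD implementation.

```python
import numpy as np

# Hypothetical dichotomous data: dose, number tested, number responding.
dose = np.array([0.0, 10.0, 50.0, 150.0, 400.0])
n    = np.array([50,  49,   45,   47,    50])
y    = np.array([2,   3,    7,    20,    28])
BMR  = 0.10  # benchmark response (extra risk)

def log_post(theta):
    """Log-posterior for a quantal-linear model P(d) = g + (1-g)(1-exp(-b d)),
    with flat priors on logit(g) and log(b)."""
    g = 1.0 / (1.0 + np.exp(-theta[0]))
    b = np.exp(theta[1])
    p = np.clip(g + (1.0 - g) * (1.0 - np.exp(-b * dose)), 1e-12, 1 - 1e-12)
    return float(np.sum(y * np.log(p) + (n - y) * np.log(1.0 - p)))

rng = np.random.default_rng(1)
theta = np.array([-3.0, np.log(1e-3)])    # start: g ~ 0.05, b = 1e-3
lp = log_post(theta)
draws = []
for i in range(20000):                    # random-walk Metropolis
    prop = theta + rng.normal(scale=0.15, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i >= 5000:                         # discard burn-in
        draws.append(theta[1])

b_draws = np.exp(np.array(draws))
bmd = -np.log(1.0 - BMR) / b_draws        # closed-form BMD for this model
print("posterior median BMD:", np.median(bmd))
print("BMDL (5th percentile):", np.percentile(bmd, 5))
```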
EBR-II Reactor Physics Benchmark Evaluation Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, Chad L.; Lum, Edward S; Stewart, Ryan
This report provides a reactor physics benchmark evaluation with associated uncertainty quantification for the critical configuration of the April 1986 Experimental Breeder Reactor II Run 138B core configuration.
NASA Technical Reports Server (NTRS)
Padovan, J.; Adams, M.; Fertis, J.; Zeid, I.; Lam, P.
1982-01-01
Finite element codes are used in modelling the rotor-bearing-stator structure common to the turbine industry. Engine dynamic simulation is enabled by developing strategies that make use of available finite element codes. The elements developed are benchmarked by incorporation into a general-purpose code (ADINA); the numerical characteristics of finite element type rotor-bearing-stator simulations are evaluated through the use of various types of explicit/implicit numerical integration operators; and the overall numerical efficiency of the procedure is improved.
ERIC Educational Resources Information Center
Coughlin, David C.; Bielen, Rhonda P.
This paper was prepared to assist the United States Department of Labor in exploring new approaches to evaluating and measuring the performance of employment and training activities for youth. As one of several tools for evaluating the success of local youth training programs, "benchmarking" provides a system for measuring the development…
NASA Technical Reports Server (NTRS)
Padovan, J.; Adams, M.; Lam, P.; Fertis, D.; Zeid, I.
1982-01-01
Second-year efforts within a three-year study to develop and extend finite element (FE) methodology to efficiently handle the transient/steady state response of rotor-bearing-stator structure associated with gas turbine engines are outlined. The two main areas aim at (1) implanting the squeeze film damper element into a general purpose FE code for testing and evaluation; and (2) determining the numerical characteristics of the FE-generated rotor-bearing-stator simulation scheme. The governing FE field equations are set out and the solution methodology is presented. The choice of ADINA as the general-purpose FE code is explained, and the numerical operational characteristics of the direct integration approach of FE-generated rotor-bearing-stator simulations is determined, including benchmarking, comparison of explicit vs. implicit methodologies of direct integration, and demonstration problems.
Tiao, J; Moore, L; Porgo, T V; Belcaid, A
2016-06-01
To assess whether the definition of an isolated hip fracture (IHF) used as an exclusion criterion influences the results of trauma center benchmarking. We conducted a multicenter retrospective cohort study with data from an integrated Canadian trauma system. The study population included all patients admitted between 1999 and 2010 to any of the 57 adult trauma centers. Seven definitions of IHF based on diagnostic codes, age, mechanism of injury, and secondary injuries, identified in a systematic review, were used. Trauma centers were benchmarked using risk-adjusted mortality estimates generated using the Trauma Risk Adjustment Model. The agreement between benchmarking results generated under different IHF definitions was evaluated with correlation coefficients on adjusted mortality estimates. Correlation coefficients >0.95 were considered to convey acceptable agreement. The study population consisted of 172,872 patients before exclusion of IHF and between 128,094 and 139,588 patients after exclusion. Correlation coefficients between risk-adjusted mortality estimates generated in populations including and excluding IHF varied between 0.86 and 0.90. Correlation coefficients of estimates generated under different definitions of IHF varied between 0.97 and 0.99, even when analyses were restricted to patients aged ≥65 years. Although the exclusion of patients with IHF has an influence on the results of trauma center benchmarking based on mortality, the definition of IHF in terms of diagnostic codes, age, mechanism of injury and secondary injury has no significant impact on benchmarking results. Results suggest that there is no need to obtain formal consensus on the definition of IHF for benchmarking activities.
NASA Astrophysics Data System (ADS)
Salmaso, Veronica; Sturlese, Mattia; Cuzzolin, Alberto; Moro, Stefano
2018-01-01
Molecular docking is a powerful tool in the field of computer-aided molecular design. In particular, it is the technique of choice for the prediction of a ligand pose within its target binding site. A multitude of docking methods is available nowadays, whose performance may vary depending on the data set. Therefore, some non-trivial choices should be made before starting a docking simulation. In the same framework, the selection of the target structure to use could be challenging, since the number of available experimental structures is increasing. Both issues have been explored within this work. The pose prediction of a pool of 36 compounds provided by D3R Grand Challenge 2 organizers was preceded by a pipeline to choose the best protein/docking-method couple for each blind ligand. An integrated benchmark approach including ligand shape comparison and cross-docking evaluations was implemented inside our DockBench software. The results are encouraging and show that bringing attention to the choice of the docking simulation fundamental components improves the results of the binding mode predictions.
Gold emissivities for hydrocode applications
NASA Astrophysics Data System (ADS)
Bowen, C.; Wagon, F.; Galmiche, D.; Loiseau, P.; Dattolo, E.; Babonneau, D.
2004-10-01
The Radiom model [M. Busquet, Phys Fluids B 5, 4191 (1993)] is designed to provide a radiative-hydrodynamic code with non-local thermodynamic equilibrium (non-LTE) data efficiently by using LTE tables. Comparison with benchmark data [M. Klapisch and A. Bar-Shalom, J. Quant. Spectrosc. Radiat. Transf. 58, 687 (1997)] has shown Radiom to be inaccurate far from LTE and for heavy ions. In particular, the emissivity was found to be strongly underestimated. A recent algorithm, Gondor [C. Bowen and P. Kaiser, J. Quant. Spectrosc. Radiat. Transf. 81, 85 (2003)], was introduced to improve the gold non-LTE ionization and corresponding opacity. It relies on fitting the collisional ionization rate to reproduce benchmark data given by the Averroès superconfiguration code [O. Peyrusse, J. Phys. B 33, 4303 (2000)]. Gondor is extended here to gold emissivity calculations, with two simple modifications of the two-level atom line source function used by Radiom: (a) a larger collisional excitation rate and (b) the addition of a Planckian source term, fitted to spectrally integrated Averroès emissivity data. This approach improves the agreement between experiments and hydrodynamic simulations.
Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis
Ollenschläger, Malte; Roth, Nils; Klucken, Jochen
2017-01-01
Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis. PMID:28832511
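A minimal sketch of the double-integration step evaluated in the paper is given below. It assumes orientation estimation has already rotated the accelerometer signal into the earth frame and removed gravity, and it applies the common zero-velocity-update dedrifting between stride boundaries; the sampling rate and signal are hypothetical.

```python
import numpy as np

def stride_trajectory(acc_world, fs):
    """Double-integrate earth-frame acceleration (m/s^2) over one stride.

    Assumes acc_world is gravity-free and that the foot is at rest at both
    stride boundaries (zero-velocity updates), so linear velocity drift
    can be removed before the second integration.
    """
    dt = 1.0 / fs
    vel = np.cumsum(acc_world, axis=0) * dt          # first integration
    # Dedrift: remove the terminal velocity linearly over the stride.
    t = np.linspace(0.0, 1.0, len(vel))[:, None]
    vel -= t * vel[-1]
    pos = np.cumsum(vel, axis=0) * dt                # second integration
    return pos

# Hypothetical 1 s stride sampled at 200 Hz with noisy forward acceleration.
fs = 200
acc = np.zeros((fs, 3))
acc[:, 0] = np.sin(np.linspace(0, 2 * np.pi, fs)) + 0.05 * np.random.randn(fs)
print(stride_trajectory(acc, fs)[-1])                # end-of-stride position
```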
NASA Astrophysics Data System (ADS)
Barr, David; Basden, Alastair; Dipper, Nigel; Schwartz, Noah; Vick, Andy; Schnetler, Hermine
2014-08-01
We present wavefront reconstruction acceleration of high-order AO systems using an Intel Xeon Phi processor. The Xeon Phi is a coprocessor providing many integrated cores and designed for accelerating compute intensive, numerical codes. Unlike other accelerator technologies, it allows virtually unchanged C/C++ to be recompiled to run on the Xeon Phi, giving the potential of making development, upgrade and maintenance faster and less complex. We benchmark the Xeon Phi in the context of AO real-time control by running a matrix vector multiply (MVM) algorithm. We investigate variability in execution time and demonstrate a substantial speed-up in loop frequency. We examine the integration of a Xeon Phi into an existing RTC system and show that performance improvements can be achieved with limited development effort.
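The benchmark kernel here is just a matrix-vector multiply, so a timing harness is easy to reproduce. The Python sketch below measures MVM latency and jitter in the way such real-time-control benchmarks are typically reported; the matrix dimensions are illustrative stand-ins for a high-order AO system, and NumPy on a CPU is of course a stand-in for the paper's native Xeon Phi code.

```python
import time
import numpy as np

n_slopes, n_actuators = 9232, 5316   # illustrative high-order AO dimensions
M = np.random.rand(n_actuators, n_slopes).astype(np.float32)  # control matrix
s = np.random.rand(n_slopes).astype(np.float32)               # slope vector

lat = []
for _ in range(1000):
    t0 = time.perf_counter()
    a = M @ s                        # the wavefront-reconstruction MVM
    lat.append(time.perf_counter() - t0)

lat = np.array(lat[100:]) * 1e6      # microseconds, skipping warm-up
print(f"median {np.median(lat):.0f} us, p99 {np.percentile(lat, 99):.0f} us, "
      f"max loop rate {1e6 / np.median(lat):.0f} Hz")
```

Reporting the 99th percentile alongside the median captures the execution-time variability that matters for a hard real-time AO loop, which is the quantity the paper investigates.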
The MCNP6 Analytic Criticality Benchmark Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
2016-06-16
Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
NASA Astrophysics Data System (ADS)
Favata, Antonino; Micheletti, Andrea; Ryu, Seunghwa; Pugno, Nicola M.
2016-10-01
An analytical benchmark and a simple consistent Mathematica program are proposed for graphene and carbon nanotubes, which may serve to test any molecular dynamics code implemented with REBO potentials. By exploiting the benchmark, we checked results produced by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) when adopting the second-generation Brenner potential. We show that this code, in its current implementation, produces results that are offset from those of the benchmark by a significant amount, and we provide evidence of the reason.
Putting people on the map through an approach that integrates social data in conservation planning.
Stephanson, Sheri L; Mascia, Michael B
2014-10-01
Conservation planning is integral to strategic and effective operations of conservation organizations. Drawing upon biological sciences, conservation planning has historically made limited use of social data. We offer an approach for integrating data on social well-being into conservation planning that captures and places into context the spatial patterns and trends in human needs and capacities. This hierarchical approach provides a nested framework for characterizing and mapping data on social well-being in 5 domains: economic well-being, health, political empowerment, education, and culture. These 5 domains each have multiple attributes; each attribute may be characterized by one or more indicators. Through existing or novel data that display spatial and temporal heterogeneity in social well-being, conservation scientists, planners, and decision makers may measure, benchmark, map, and integrate these data within conservation planning processes. Selecting indicators and integrating these data into conservation planning is an iterative, participatory process tailored to the local context and planning goals. Social well-being data complement biophysical and threat-oriented social data within conservation planning processes to inform decisions regarding where and how to conserve biodiversity, provide a structure for exploring socioecological relationships, and to foster adaptive management. Building upon existing conservation planning methods and insights from multiple disciplines, this approach to putting people on the map can readily merge with current planning practices to facilitate more rigorous decision making. © 2014 Society for Conservation Biology.
Benchmarking of venous thromboembolism prophylaxis practice with ENT.UK guidelines.
Al-Qahtani, Ali S
2017-05-01
The aim of this study was to benchmark our guidelines for the prevention of venous thromboembolism (VTE) in the ENT surgical population against ENT.UK guidelines, and also to encourage healthcare providers to utilize benchmarking as an effective method of improving performance. The study design is a prospective descriptive analysis. The setting of this study is a tertiary referral centre (Assir Central Hospital, Abha, Saudi Arabia). In this study, we benchmark our practice guidelines for the prevention of VTE in the ENT surgical population against those of ENT.UK to mitigate any gaps. ENT guidelines 2010 were downloaded from the ENT.UK website. Our guidelines were compared against the possibilities that our performance either meets or falls short of ENT.UK guidelines. Immediate corrective actions take place if there is a quality chasm between the two guidelines. ENT.UK guidelines are evidence-based and updated, and may serve as a role model for adoption and benchmarking. Our guidelines were accordingly amended to contain all factors required to provide a quality service to ENT surgical patients. While not given appropriate attention, benchmarking is a useful tool for improving the quality of health care. It allows learning from others' practices and experiences, and works towards closing any quality gaps. In addition, benchmarking clinical outcomes is critical for quality improvement and for informing decisions concerning service provision. It is recommended for inclusion on the list of quality improvement methods of healthcare services.
Integration of heterogeneous features for remote sensing scene classification
NASA Astrophysics Data System (ADS)
Wang, Xin; Xiong, Xingnan; Ning, Chen; Shi, Aiye; Lv, Guofang
2018-01-01
Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) can provide various properties for RS images, and we therefore propose a heterogeneous feature framework to extract and integrate heterogeneous features of different types for RS scene classification. The proposed method is composed of three modules: (1) heterogeneous feature extraction, where three heterogeneous feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated; (2) heterogeneous feature fusion, where multiple kernel learning (MKL) is utilized to integrate the heterogeneous features; and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that the proposed method leads to good classification performance. It produces good informative features to describe the RS image scenes. Moreover, the integration of heterogeneous features outperforms some state-of-the-art features on RS scene classification tasks.
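In its simplest form, the fusion step amounts to combining one base kernel per feature channel. The sketch below uses fixed kernel weights and scikit-learn's precomputed-kernel SVM for brevity; genuine MKL, as used in the paper, learns the weights jointly with the classifier, and the feature blocks and class counts here are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def fused_kernel(feature_blocks, weights, gamma=0.5):
    """Weighted sum of per-feature-type RBF kernels (fixed weights here;
    true MKL would learn the weights jointly with the classifier)."""
    return sum(w * rbf_kernel(X, gamma=gamma)
               for w, X in zip(weights, feature_blocks))

# Hypothetical heterogeneous features for 100 scenes, 3 feature channels.
rng = np.random.default_rng(0)
blocks = [rng.random((100, d)) for d in (128, 64, 256)]  # e.g. shape/spectral/texture
y = rng.integers(0, 6, 100)                              # 6 scene classes

K = fused_kernel(blocks, weights=[0.5, 0.2, 0.3])        # train-train kernel
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))   # training accuracy on this toy data
```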
Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set
NASA Astrophysics Data System (ADS)
Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.
2017-05-01
A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.
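For reference, the governing equations named in the abstract can be written, in one common non-conservative form (friction and source terms omitted; TUNA-RP's exact formulation may differ in these details), as:

```latex
\frac{\partial \eta}{\partial t}
  + \frac{\partial (h u)}{\partial x}
  + \frac{\partial (h v)}{\partial y} = 0,
\qquad
\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x}
  + v\frac{\partial u}{\partial y} + g\frac{\partial \eta}{\partial x} = 0,
\qquad
\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x}
  + v\frac{\partial v}{\partial y} + g\frac{\partial \eta}{\partial y} = 0,
```

where η is the free-surface elevation, h = η + d is the total water depth over still-water depth d, (u, v) are the depth-averaged velocities, and g is gravitational acceleration. The wet-dry moving boundary algorithm handles cells where h approaches zero during inundation.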
Quantifying complexity in translational research: an integrated approach.
Munoz, David A; Nembhard, Harriet Black; Kraschnewski, Jennifer L
2014-01-01
The purpose of this paper is to quantify complexity in translational research. The impact of major operational steps and technical requirements is calculated with respect to their ability to accelerate moving new discoveries into clinical practice. A three-phase integrated quality function deployment (QFD) and analytic hierarchy process (AHP) method was used to quantify complexity in translational research. A case study in obesity was used to demonstrate usability. Generally, the evidence generated was valuable for understanding various components in translational research. In particular, the authors found that collaboration networks, multidisciplinary team capacity and community engagement are crucial for translating new discoveries into practice. As the method is mainly based on subjective opinion, some argue that the results may be biased. However, a consistency ratio is calculated and used as a guide to subjectivity. Alternatively, a larger sample may be incorporated to reduce bias. The integrated QFD-AHP framework provides evidence that could be helpful to generate agreement, develop guidelines, allocate resources wisely, identify benchmarks and enhance collaboration among similar projects. Current conceptual models in translational research provide little or no clue to assess complexity. The proposed method aimed to fill this gap. Additionally, the literature review includes various features that have not been explored in translational research.
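The consistency ratio mentioned as a guard against subjectivity is the standard AHP quantity:

```latex
\mathrm{CI} = \frac{\lambda_{\max} - n}{n - 1},
\qquad
\mathrm{CR} = \frac{\mathrm{CI}}{\mathrm{RI}},
```

where λ_max is the principal eigenvalue of the n × n pairwise-comparison matrix and RI is the tabulated random index for matrices of size n. Judgments are conventionally treated as acceptably consistent when CR ≤ 0.1; larger values suggest the pairwise comparisons should be revisited.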
An Exact Integration Scheme for Radiative Cooling in Hydrodynamical Simulations
NASA Astrophysics Data System (ADS)
Townsend, R. H. D.
2009-04-01
A new scheme for incorporating radiative cooling in hydrodynamical codes is presented, centered around exact integration of the governing semidiscrete cooling equation. Using benchmark calculations based on the cooling downstream of a radiative shock, I demonstrate that the new scheme outperforms traditional explicit and implicit approaches in terms of accuracy, while remaining competitive in terms of execution speed.
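In schematic form, the idea of the scheme is as follows (constants and density factors are absorbed into a single coefficient c, held fixed over the step; this is a paraphrase of the approach, not the paper's exact notation):

```latex
\frac{dT}{dt} = -c\,\Lambda(T)
\quad\Longrightarrow\quad
\int_{T^{n+1}}^{T^{n}} \frac{dT'}{\Lambda(T')} = c\,\Delta t .
```

When the cooling function Λ(T) is represented as a piecewise power law, the integral on the left is analytic and invertible, so T^{n+1} can be recovered exactly for any time step rather than approximated by explicit sub-cycling or implicit iteration.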
Fracture Capabilities in Grizzly with the extended Finite Element Method (X-FEM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolbow, John; Zhang, Ziyu; Spencer, Benjamin
Efforts are underway to develop fracture mechanics capabilities in the Grizzly code to enable it to be used to perform deterministic fracture assessments of degraded reactor pressure vessels (RPVs). A capability was previously developed to calculate three-dimensional interaction integrals to extract mixed-mode stress-intensity factors. This capability requires the use of a finite element mesh that conforms to the crack geometry. The eXtended Finite Element Method (X-FEM) provides a means to represent a crack geometry without explicitly fitting the finite element mesh to it. This is effected by enhancing the element kinematics to represent jump discontinuities at arbitrary locations inside of the element, as well as the incorporation of asymptotic near-tip fields to better capture crack singularities. In this work, use of only the discontinuous enrichment functions was examined to see how accurate stress intensity factors could still be calculated. This report documents the following work to enhance Grizzly's engineering fracture capabilities by introducing arbitrary jump discontinuities for prescribed crack geometries. X-FEM mesh cutting in 3D: to enhance the kinematics of elements that are intersected by arbitrary crack geometries, a mesh cutting algorithm was implemented in Grizzly; the algorithm introduces new virtual nodes and creates partial elements, and then creates a new mesh connectivity. Interaction integral modifications: the existing code for evaluating the interaction integral in Grizzly was based on the assumption of a mesh that was fitted to the crack geometry; modifications were made to allow for the possibility of a crack front that passes arbitrarily through the mesh. Benchmarking for 3D fracture: the new capabilities were benchmarked against mixed-mode three-dimensional fracture problems with known analytical solutions.
NASA Technical Reports Server (NTRS)
Krause, David L.; Brewer, Ethan J.; Pawlik, Ralph
2013-01-01
This report provides test methodology details and qualitative results for the first structural benchmark creep test of an Advanced Stirling Convertor (ASC) heater head of ASC-E2 design heritage. The test article was recovered from a flight-like Microcast MarM-247 heater head specimen previously used in helium permeability testing. The test article was utilized for benchmark creep test rig preparation, wall thickness and diametral laser scan hardware metrological developments, and induction heater custom coil experiments. In addition, a benchmark creep test was performed, terminated after one week when through-thickness cracks propagated at thermocouple weld locations. Following this, it was used to develop a unique temperature measurement methodology using contact thermocouples, thereby enabling future benchmark testing to be performed without the use of conventional welded thermocouples, proven problematic for the alloy. This report includes an overview of heater head structural benchmark creep testing, the origin of this particular test article, test configuration developments accomplished using the test article, creep predictions for its benchmark creep test, qualitative structural benchmark creep test results, and a short summary.
Benchmarking U.S. Small Wind Costs with the Distributed Wind Taxonomy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orrell, Alice C.; Poehlman, Eric A.
The objective of this report is to benchmark costs for small wind projects installed in the United States using a distributed wind taxonomy. Consequently, this report is a starting point to help expand the U.S. distributed wind market by informing potential areas for small wind cost-reduction opportunities and providing a benchmark to track future small wind cost-reduction progress.
TRIPOLI-4® - MCNP5 ITER A-lite neutronic model benchmarking
NASA Astrophysics Data System (ADS)
Jaboulay, J.-C.; Cayla, P.-Y.; Fausser, C.; Lee, Y.-K.; Trama, J.-C.; Li-Puma, A.
2014-06-01
The aim of this paper is to present the capability of TRIPOLI-4®, the CEA Monte Carlo code, to model a large-scale fusion reactor with a complex neutron source and geometry. In the past, numerous benchmarks were conducted for TRIPOLI-4® assessment on fusion applications. Experiment analyses (KANT, OKTAVIAN, FNG) and numerical benchmarks (between TRIPOLI-4® and MCNP5) on the HCLL DEMO2007 and ITER models were carried out successively. In this previous ITER benchmark, nevertheless, only the neutron wall loading was analyzed; its main purpose was to present the MCAM (the FDS Team CAD import tool) extension for TRIPOLI-4®. Starting from this work, a more extensive benchmark has been performed on the estimation of neutron flux, nuclear heating in the shielding blankets, and tritium production rate in the European TBMs (HCLL and HCPB); it is presented in this paper. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model (version 4.1). Simplified TBMs (from KIT) have been integrated in the equatorial port. Comparisons of neutron wall loading, flux, nuclear heating and tritium production rate show a good agreement between the two codes. Discrepancies are mainly within the Monte Carlo codes' statistical errors.
Samman, Samir; McCarthur, Jennifer O; Peat, Mary
2006-01-01
Benchmarking has been adopted by educational institutions as a potentially sensitive tool for improving learning and teaching. To date there has been limited application of benchmarking methodology in the Discipline of Nutritional Science. The aim of this survey was to define core elements and outstanding practice in Nutritional Science through collaborative benchmarking. Questionnaires that aimed to establish proposed core elements for Nutritional Science, and inquired about definitions of "good" and "outstanding" practice, were posted to named representatives at eight Australian universities. Seven respondents identified core elements that included knowledge of nutrient metabolism and requirement, food production and processing, modern biomedical techniques that could be applied to understanding nutrition, and social and environmental issues as related to Nutritional Science. Four of the eight institutions who agreed to participate in the present survey identified the integration of teaching with research as an indicator of outstanding practice. Nutritional Science is a rapidly evolving discipline. Further and more comprehensive surveys are required to consolidate and update the definition of the discipline, and to identify the optimal way of teaching it. Global ideas and specific regional requirements also need to be considered.
Alternative industrial carbon emissions benchmark based on input-output analysis
NASA Astrophysics Data System (ADS)
Han, Mengyao; Ji, Xi
2016-12-01
Some problems exist in current carbon emissions benchmark setting systems. The primary considerations for industrial carbon emissions standards relate mainly to direct carbon emissions (power-related emissions), and only a portion of indirect emissions is considered in current carbon emissions accounting processes. This practice is insufficient and may cause double counting to some extent due to mixed emission sources. To better integrate and quantify direct and indirect carbon emissions, an embodied industrial carbon emissions benchmark setting method is proposed to guide the establishment of carbon emissions benchmarks based on input-output analysis. This method attempts to link direct carbon emissions with inter-industrial economic exchanges and systematically quantifies carbon emissions embodied in total product delivery chains. The purpose of this study is to design a practical new set of embodied intensity-based benchmarks for both direct and indirect carbon emissions. Beijing, at the first level of carbon emissions trading pilot schemes in China, plays a significant role in the establishment of these schemes and is chosen as an example in this study. The newly proposed method tends to relate emissions directly to each responsibility in a practical way through the measurement of complex production and supply chains, and to reduce carbon emissions at their original sources. This method is expected to be developed under uncertain internal and external contexts and is further expected to be generalized to guide the establishment of industrial benchmarks for carbon emissions trading schemes in China and other countries.
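The core input-output calculation behind such embodied benchmarks is the Leontief inverse. Below is a minimal numerical sketch with a hypothetical three-sector economy; all coefficients are invented for illustration and do not come from the paper.

```python
import numpy as np

# Hypothetical 3-sector economy.
A = np.array([[0.10, 0.20, 0.05],     # technical coefficients: inter-industry
              [0.15, 0.05, 0.10],     # inputs per unit of sectoral output
              [0.05, 0.10, 0.15]])
f = np.array([2.0, 1.5, 3.0])         # direct emission intensities (t CO2 / unit output)
y = np.array([100.0, 80.0, 120.0])    # final demand per sector

L = np.linalg.inv(np.eye(3) - A)      # Leontief inverse (I - A)^-1
eps = f @ L                           # embodied (direct + indirect) intensities
print("embodied intensities:", eps)
print("embodied emissions by final demand:", eps * y)
```

Because the Leontief inverse propagates inputs through the full supply chain, the embodied intensities capture indirect emissions without the double counting that mixing direct and partial indirect accounting can introduce.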
Development of an Unmanned Aerial System (UAS) for Scaling Terrestrial Ecosystem Traits
NASA Astrophysics Data System (ADS)
Meng, R.; McMahon, A. M.; Serbin, S.; Rogers, A.
2015-12-01
The next generation of Ecosystem and Earth System Models (EESMs) will require detailed information on ecosystem structure and function, including properties of vegetation related to carbon (C), water, and energy cycling, in order to project the future state of ecosystems. High spatial-temporal resolution measurements of terrestrial ecosystems are also important for EESMs, because they can provide critical inputs and benchmark datasets for evaluation of EESM simulations across scales. The recent development of high-quality, low-altitude remote sensing platforms or small UAS (< 25 kg) enables measurements of terrestrial ecosystems at unprecedented temporal and spatial scales. Specifically, these new platforms can provide detailed information on patterns and processes of terrestrial ecosystems at a critical intermediate scale between point measurements and suborbital and satellite platforms. Given their potential for sub-decimeter spatial resolution, improved mission safety, high revisit frequency, and reduced operation cost, these platforms are of particular interest in the development of ecological scaling algorithms to parameterize and benchmark EESMs, particularly over complex and remote terrain. Our group is developing a small UAS platform and integrated sensor package focused on measurement needs for scaling and informing ecosystem modeling activities, as well as scaling and mapping plant functional traits. To do this we are developing an integrated software workflow and hardware package using off-the-shelf instrumentation, including a high-resolution digital camera for Structure from Motion, a spectroradiometer, and a thermal infrared camera. Our workflow includes platform design, measurement, image processing, data management, and information extraction. The fusion of 3D structure information, thermal-infrared imagery, and spectroscopic measurements will provide a foundation for the development of ecological scaling and mapping algorithms. Our initial focus is on temperate forests, but near-term research will expand into the high Arctic and eventually tropical systems. The results of this prototype study show that off-the-shelf technology can be used to develop a low-cost alternative for mapping plant traits and three-dimensional structure for ecological research.
The Ongoing Impact of the U.S. Fast Reactor Integral Experiments Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess; Michael A. Pope; Harold F. McFarlane
2012-11-01
The creation of a large database of integral fast reactor physics experiments advanced nuclear science and technology in ways that were unachievable by less capital-intensive and operationally challenging approaches. They enabled the compilation of integral physics benchmark data, validated (or not) analytical methods, and provided assurance for future reactor designs. The integral experiments performed at Argonne National Laboratory (ANL) represent decades of research performed to support fast reactor design and our understanding of neutronics behavior and reactor physics measurements. Experiments began in 1955 with the Zero Power Reactor No. 3 (ZPR-3) and terminated with the Zero Power Physics Reactor (ZPPR, originally the Zero Power Plutonium Reactor) in 1990 at the former ANL-West site in Idaho, which is now part of the Idaho National Laboratory (INL). Two additional critical assemblies, ZPR-6 and ZPR-9, operated at the ANL-East site in Illinois. A total of 128 fast reactor assemblies were constructed with these facilities [1]. The infrastructure and measurement capabilities are too expensive to be replicated in the modern era, making the integral database invaluable as the world pushes ahead with development of liquid-metal-cooled reactors.
NAS Grid Benchmarks: A Tool for Grid Space Exploration
NASA Technical Reports Server (NTRS)
Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)
2001-01-01
We present an approach for benchmarking services provided by computational Grids. It is based on the NAS Parallel Benchmarks (NPB) and is called NAS Grid Benchmark (NGB) in this paper. We present NGB as a data flow graph encapsulating an instance of an NPB code in each graph node, which communicates with other nodes by sending/receiving initialization data. These nodes may be mapped to the same or different Grid machines. Like NPB, NGB will specify several different classes (problem sizes). NGB also specifies the generic Grid services sufficient for running the benchmark. The implementor has the freedom to choose any specific Grid environment. However, we describe a reference implementation in Java, and present some scenarios for using NGB.
The essential value of long-term experimental data for hydrology and water management
NASA Astrophysics Data System (ADS)
Tetzlaff, Doerthe; Carey, Sean K.; McNamara, James P.; Laudon, Hjalmar; Soulsby, Chris
2017-04-01
Observations and data from long-term experimental watersheds are the foundation of hydrology as a geoscience. They allow us to benchmark process understanding, observe trends and natural cycles, and they are prerequisites for testing predictive models. Long-term experimental watersheds are also places where new measurement technologies are developed. These studies offer a crucial evidence base for understanding and managing the provision of clean water supplies, predicting and mitigating the effects of floods, and protecting ecosystem services provided by rivers and wetlands. They also show how to manage land and water in an integrated, sustainable way that reduces environmental and economic costs.
Polarized Light Reflected and Transmitted by Thick Rayleigh Scattering Atmospheres
NASA Astrophysics Data System (ADS)
Natraj, Vijay; Hovenier, J. W.
2012-03-01
Accurate values for the intensity and polarization of light reflected and transmitted by optically thick Rayleigh scattering atmospheres with a Lambert surface underneath are presented. A recently reported method for solving integral equations describing Chandrasekhar's X- and Y-functions is used. The results have been validated using various tests and techniques, including the doubling-adding method, and are accurate to within one unit in the eighth decimal place. Tables are stored electronically and are expected to be useful as benchmark results for the (exo)planetary science and astrophysics communities. Asymptotic expressions to obtain Stokes parameters for a thick layer from those of a semi-infinite atmosphere are also provided.
NASA Astrophysics Data System (ADS)
Stupakov, Gennady; Zhou, Demin
2016-04-01
We develop a general model of coherent synchrotron radiation (CSR) impedance with shielding provided by two parallel conducting plates. This model allows us to easily reproduce all previously known analytical CSR wakes and to expand the analysis to situations not explored before. It reduces calculations of the impedance to taking integrals along the trajectory of the beam. New analytical results are derived for the radiation impedance with shielding for the following orbits: a kink, a bending magnet, a wiggler of finite length, and an infinitely long wiggler. All our formulas are benchmarked against numerical simulations with the CSRZ computer code.
Big Data and Predictive Analytics: Applications in the Care of Children.
Suresh, Srinivasan
2016-04-01
Emerging changes in the United States' healthcare delivery model have led to renewed interest in data-driven methods for managing quality of care. Analytics (data plus information) plays a key role in predictive risk assessment, clinical decision support, and various patient throughput measures. This article reviews the application of a pediatric risk score, which is integrated into our hospital's electronic medical record and provides an early warning sign for clinical deterioration. Dashboards that are part of disease management systems are a vital tool in peer benchmarking and can help in reducing unnecessary variations in care.
Green related practices for construction procurement
NASA Astrophysics Data System (ADS)
Bidin, Z. A.; Bohari, A. A. M.; Rais, S. L. A.; Saferi, M. M.
2018-04-01
Environmental degradation is a worldwide problem, and various efforts have been made to minimise it. The construction sector is identified as one of the major contributors through its construction activities. Green-oriented procurement (GP) is considered an environmental strategy that integrates environmental practices into construction delivery. This paper therefore reviews the concept of GP and the practices related to it. The findings provide a basis for understanding the concept of GP and its further development. The GP practices identified in this paper can serve as guidelines for industry practitioners to design, implement, and benchmark green practices in their procurement delivery.
Rethinking the reference collection: exploring benchmarks and e-book availability.
Husted, Jeffrey T; Czechowski, Leslie J
2012-01-01
Librarians in the Health Sciences Library System at the University of Pittsburgh explored the possibility of developing an electronic reference collection that would replace the print reference collection, thus providing access to these valuable materials to a widely dispersed user population. The librarians evaluated the print reference collection and standard collection development lists as potential benchmarks for the electronic collection, and they determined which books were available in electronic format. They decided that the low availability of electronic versions of titles in each benchmark group rendered the creation of an electronic reference collection using either benchmark impractical.
International land Model Benchmarking (ILAMB) Package v002.00
Collier, Nathaniel [Oak Ridge National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory; Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory
2016-05-09
As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.
International land Model Benchmarking (ILAMB) Package v001.00
Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory
2016-05-02
As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karl Anderson, Steve Plimpton
2015-01-27
The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator, which produces and outputs datums at a high rate in a specific format. The second is an analytic, which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
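The generator/analytic split might be sketched as below; the datum format, anomaly model, and detection threshold are invented for illustration and are not the FireHose specification.

```python
import random

def generator(n=1000, anomaly_rate=0.01, seed=42):
    """Emit datums in a fixed format (key, value); a few are anomalous."""
    rng = random.Random(seed)
    for i in range(n):
        value = rng.gauss(0.0, 1.0)
        if rng.random() < anomaly_rate:
            value += 10.0  # injected anomaly for the analytic to find
        yield (i, value)

def analytic(stream, threshold=5.0):
    """Flag datums whose value deviates far from the expected distribution."""
    return [(k, v) for k, v in stream if abs(v) > threshold]

print(analytic(generator()))
```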
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
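One of the named techniques, replacing a linear search with a binary version, can be illustrated in Python (the FORTRAN originals are not shown in the abstract; the lookup table here is a hypothetical stand-in for something like an energy grid):

```python
import bisect

table = sorted([0.5, 1.2, 3.4, 7.8, 15.0, 31.0])  # e.g., grid interval boundaries

def linear_lookup(x):
    # Original-style O(n) scan for the containing interval.
    for i, edge in enumerate(table):
        if x < edge:
            return i
    return len(table)

def binary_lookup(x):
    # Replacement O(log n) search; returns the same interval index.
    return bisect.bisect_right(table, x)

# The replacement must produce identical results to the original.
assert all(linear_lookup(x) == binary_lookup(x) for x in [0.1, 2.0, 8.0, 40.0])
```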
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassanein, Ahmed
2015-03-31
This report describes the implementation of comprehensive and integrated models to evaluate plasma-material interactions during normal and abnormal plasma operations. The models, in full 3D simulations, represent state-of-the-art worldwide development, with numerous benchmarks against various tokamak devices and plasma simulators. In addition, a significant amount of experimental work has been performed at our Center for Materials Under Extreme Environment (CMUXE) at Purdue to benchmark the effect of intense particle and heat fluxes on plasma-facing components. This represents one year's worth of work and has resulted in more than 23 journal publications and numerous conference presentations. The funding has helped several students to obtain their M.Sc. and Ph.D. degrees, and many of them are now faculty members in the US and around the world, teaching and conducting fusion research. Our work has also been recognized through many awards.
Once-through integral system (OTIS): Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gloudemans, J R
1986-09-01
A scaled experimental facility, designated the once-through integral system (OTIS), was used to acquire post-small-break loss-of-coolant accident (SBLOCA) data for benchmarking system codes. OTIS was also used to investigate the application of the Abnormal Transient Operating Guidelines (ATOG) used in the Babcock and Wilcox (B and W) designed nuclear steam supply system (NSSS) during the course of an SBLOCA. OTIS was a single-loop facility with a plant-to-model power scale factor of 1686. OTIS maintained the key elevations, approximate component volumes, and loop flow resistances, and simulated the major component phenomena of a B and W raised-loop nuclear plant. A test matrix consisting of 15 tests divided into four categories was performed. The largest group contained 10 tests and was defined to parametrically obtain an extensive set of plant-typical experimental data for code benchmarking. Parameters such as leak size, leak location, and high-pressure injection (HPI) shut-off head were individually varied. The remaining categories were specified to study the impact of the ATOGs (2 tests), to note the effect of guard heater operation on observed phenomena (2 tests), and to provide a data set for comparison with previous test experience (1 test). A summary of the test results and a detailed discussion of Test 220100 are presented. Test 220100 was the nominal or reference test for the parametric studies. This test was performed with a scaled 10-cm² leak located in the cold leg suction piping.
Benchmarking on the evaluation of major accident-related risk assessment.
Fabbri, Luciano; Contini, Sergio
2009-03-15
This paper summarises the main results of a European project BEQUAR (Benchmarking Exercise in Quantitative Area Risk Assessment in Central and Eastern European Countries). This project is among the first attempts to explore how independent evaluations of the same risk study associated with a certain chemical establishment could differ from each other and the consequent effects on the resulting area risk estimate. The exercise specifically aimed at exploring the manner and degree to which independent experts may disagree on the interpretation of quantitative risk assessments for the same entity. The project first compared the results of a number of independent expert evaluations of a quantitative risk assessment study for the same reference chemical establishment. This effort was then followed by a study of the impact of the different interpretations on the estimate of the overall risk on the area concerned. In order to improve the inter-comparability of the results, this exercise was conducted using a single tool for area risk assessment based on the ARIPAR methodology. The results of this study are expected to contribute to an improved understanding of the inspection criteria and practices used by the different national authorities responsible for the implementation of the Seveso II Directive in their countries. The activity was funded under the Enlargement and Integration Action of the Joint Research Centre (JRC), that aims at providing scientific and technological support for promoting integration of the New Member States and assisting the Candidate Countries on their way towards accession to the European Union.
Bogaert, Petronille; Van Oyen, Herman
2017-01-01
Although sound data and health information form the basis of evidence-based policy-making and research, no single, integrated, and sustainable EU-wide public health monitoring system or health information system yet exists. BRIDGE Health is working towards an EU health information and data generation network covering major EU health policy areas. A stakeholder consultation with national public health institutes was organised to identify the needs to strengthen the current EU health information system and to identify its possible benefits. Five key issues for improvement were identified: (1) coherence, coordination and sustainability; (2) data harmonization, collection, processing and reporting; (3) comparison and benchmarking; (4) knowledge sharing and capacity building; and (5) transferability of health information into evidence-based policy making. The vision of an improved EU health information system was formulated, along with the possible benefits in relation to six target groups. Through this consultation, BRIDGE Health has identified the continuous need to strengthen the EU health information system. A better system is about sustainability, better coordination, governance, and collaboration among national health information systems and stakeholders to jointly improve, harmonise, standardise, and analyse health information. More and better sharing of comparable health data allows for more and better comparative health research, international benchmarking, and national and EU-wide public health monitoring. This should be developed with a view to providing the tools to fight both the common and individual challenges faced by the Member States and their politicians.
NASA Technical Reports Server (NTRS)
McGalliard, James
2008-01-01
This viewgraph presentation details the science and systems environments that NASA's High End Computing program serves. Included is a discussion of the workload involved in the processing for global climate modeling. The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models integrated using the Earth System Modeling Framework (ESMF). The GEOS-5 system was used for the benchmark tests, and the results of the tests are shown and discussed. Tests were also run for the Cubed Sphere system; results for these tests are also shown.
NASA Technical Reports Server (NTRS)
de Wit, A.; Cohn, N.
1999-01-01
The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.
Engine Benchmarking - Final CRADA Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallner, Thomas
Detailed benchmarking of the powertrains of three light-duty vehicles was performed. Results were presented and provided to CRADA partners. The vehicles included a MY2011 Audi A4, a MY2012 Mini Cooper and a MY2014 Nissan Versa.
Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)
2013-01-01
Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.
Mitchell, L
1996-01-01
The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.
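The reported improvement and remaining gap to the national benchmark can be checked with a few lines of arithmetic (figures taken directly from the abstract):

```python
baseline, improved = 19.9, 16.3  # minutes between surgical cases
benchmark = 13.5                 # 1992 national benchmark

pct_improvement = (baseline - improved) / baseline * 100
remaining_gap = improved - benchmark
print(f"improvement: {pct_improvement:.0f}%")        # ~18%, as reported
print(f"gap to benchmark: {remaining_gap:.1f} min")  # 2.8 minutes remaining
```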
ERIC Educational Resources Information Center
Ramsay, Jennifer M.; Hanna, Lauren L. Hulsman; Ringwall, Kris A.
2016-01-01
One goal of Extension is to provide practical information that makes a difference to producers. Cow Herd Appraisal Performance Software (CHAPS) has provided beef producers with production benchmarks for 30 years, creating a large historical data set. Many such large data sets contain useful information but are underutilized. Our goal was to create…
BIOREL: the benchmark resource to estimate the relevance of the gene networks.
Antonov, Alexey V; Mewes, Hans W
2006-02-06
The progress of high-throughput methodologies in functional genomics has led to the development of statistical procedures to infer gene networks from various types of high-throughput data. However, due to the lack of common standards, the biological significance of the results of the different studies is hard to compare. To overcome this problem, we propose a benchmark procedure and have developed a web resource (BIOREL), which is useful for estimating the biological relevance of any genetic network by integrating different sources of biological information. The associations of each gene from the network are classified as biologically relevant or not. The proportion of genes in the network classified as "relevant" is used as the overall network relevance score. Employing synthetic data, we demonstrated that such a score ranks networks fairly with respect to their relevance level. Using BIOREL as the benchmark resource, we compared the quality of experimental and theoretically predicted protein interaction data.
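A toy version of the scoring rule, assuming hypothetical per-gene relevance calls (BIOREL's actual classification integrates multiple evidence sources, which this sketch does not reproduce):

```python
# Hypothetical per-gene relevance flags produced by integrating evidence sources.
gene_relevant = {"G1": True, "G2": False, "G3": True, "G4": True, "G5": False}

# Overall network relevance score: proportion of genes classified as relevant.
score = sum(gene_relevant.values()) / len(gene_relevant)
print(f"network relevance score: {score:.2f}")  # 0.60
```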
Path-integral Monte Carlo method for Rényi entanglement entropies.
Herdman, C M; Inglis, Stephen; Roy, P-N; Melko, R G; Del Maestro, A
2014-07-01
We introduce a quantum Monte Carlo algorithm to measure the Rényi entanglement entropies in systems of interacting bosons in the continuum. This approach is based on a path-integral ground state method that can be applied to interacting itinerant bosons in any spatial dimension with direct relevance to experimental systems of quantum fluids. We demonstrate how it may be used to compute spatial mode entanglement, particle partitioned entanglement, and the entanglement of particles, providing insights into quantum correlations generated by fluctuations, indistinguishability, and interactions. We present proof-of-principle calculations and benchmark against an exactly soluble model of interacting bosons in one spatial dimension. As this algorithm retains the fundamental polynomial scaling of quantum Monte Carlo when applied to sign-problem-free models, future applications should allow for the study of entanglement entropy in large-scale many-body systems of interacting bosons.
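As a hedged illustration of the quantity being measured (not the quantum Monte Carlo estimator itself), the Rényi-2 entanglement entropy can be computed exactly for a toy two-qubit state:

```python
import numpy as np

# Toy two-site pure state; trace out site B to get the reduced density matrix.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # Bell state on sites A, B
rho = np.outer(psi, psi).reshape(2, 2, 2, 2)       # indices (a, b, a', b')
rho_A = np.trace(rho, axis1=1, axis2=3)            # partial trace over B

# Renyi-2 entanglement entropy: S2 = -ln Tr(rho_A^2).
S2 = -np.log(np.trace(rho_A @ rho_A).real)
print(S2, np.log(2))  # maximally entangled qubit pair: S2 = ln 2
```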
TRAC-PF1 code verification with data from the OTIS test facility. [Once-Through Integral System]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childerson, M.T.; Fujita, R.K.
1985-01-01
A computer code (TRAC-PF1/MOD1) developed for predicting the transient thermal and hydraulic response of an integral nuclear steam supply system (NSSS) was benchmarked. Post-small-break loss-of-coolant accident (LOCA) data from a scaled experimental facility, designated the Once-Through Integral System (OTIS), were obtained for the Babcock and Wilcox NSSS and compared to TRAC predictions. The OTIS tests provided a challenging small-break LOCA data set for TRAC verification. The major phases of a small-break LOCA observed in the OTIS tests included pressurizer draining and loop saturation, intermittent reactor coolant system circulation, boiler-condenser mode, and the initial stages of refill. The TRAC code was successful in predicting OTIS loop conditions (system pressures and temperatures) after modification of the steam generator model. In particular, the code predicted both pool and auxiliary-feedwater-initiated boiler-condenser mode heat transfer.
Streaming and particle motion in acoustically-actuated leaky systems
NASA Astrophysics Data System (ADS)
Nama, Nitesh; Barnkob, Rune; Jun Huang, Tony; Kahler, Christian; Costanzo, Francesco
2017-11-01
The integration of acoustics with microfluidics has shown great promise for applications within biology, chemistry, and medicine. A commonly employed system to achieve this integration consists of a fluid-filled, polymer-walled microchannel that is acoustically actuated via standing surface acoustic waves. However, despite significant experimental advancements, a precise physical understanding of such systems remains a work in progress. In this work, we investigate the nature of the acoustic fields that are set up inside the microchannel as well as the fundamental driving mechanism governing the fluid and particle motion in these systems. We provide an experimental benchmark using state-of-the-art 3D measurements of fluid and particle motion and present a Lagrangian-velocity-based temporal multiscale numerical framework to explain the experimental observations. Following verification and validation, we employ our numerical model to reveal the presence of a pseudo-standing acoustic wave that drives the acoustic streaming and particle motion in these systems.
Zuckerman, Stephen; Skopec, Laura; Guterman, Stuart
2017-12-01
Medicare Advantage (MA), the program that allows people to receive their Medicare benefits through private health plans, uses a benchmark-and-bidding system to induce plans to provide benefits at lower costs. However, prior research suggests medical costs, profits, and other plan costs are not as low under this system as they might otherwise be. We examine how well the current system encourages MA plans to bid their lowest cost by analyzing the relationship between costs and bonuses (rebates) and the benchmarks Medicare uses in determining plan payments, using regression analysis on 2015 data for HMO and local PPO plans. Costs and rebates are higher for MA plans in areas with higher benchmarks, and plan costs vary less than benchmarks do. A one-dollar increase in benchmarks is associated with 32-cent-higher plan costs and a 52-cent-higher rebate, even when controlling for market and plan factors that can affect costs. This suggests the current benchmark-and-bidding system allows plans to bid higher than local input prices and other market conditions would seem to warrant. To incentivize MA plans to maximize efficiency and minimize costs, Medicare could change the way benchmarks are set or used.
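The headline cost-benchmark relationship is a regression slope; here is a sketch with invented plan-level numbers chosen so the fitted slope lands near the reported 32 cents (the study's actual model controls for market and plan factors, which this omits):

```python
import numpy as np

# Hypothetical plan-level data: county benchmark vs. plan cost (dollars PMPM).
benchmark = np.array([750.0, 800.0, 850.0, 900.0, 950.0])
plan_cost = np.array([700.0, 715.0, 730.0, 748.0, 764.0])

# Slope of cost on benchmark: dollars of added cost per benchmark dollar.
slope, intercept = np.polyfit(benchmark, plan_cost, 1)
print(f"{slope:.2f}")  # ~0.32 here by construction; illustrative only
```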
Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects
Lovaglio, Pietro Giorgio
2012-01-01
Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. In particular, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed. PMID:22666140
Issues to consider in the derivation of water quality benchmarks for the protection of aquatic life.
Schneider, Uwe
2014-01-01
While water quality benchmarks for the protection of aquatic life have been in use in some jurisdictions for several decades (USA, Canada, several European countries), more and more countries are now setting up their own national water quality benchmark development programs. In doing so, they either adopt an existing method from another jurisdiction, update on an existing approach, or develop their own new derivation method. Each approach has its own advantages and disadvantages, and many issues have to be addressed when setting up a water quality benchmark development program or when deriving a water quality benchmark. Each of these tasks requires a special expertise. They may seem simple, but are complex in their details. The intention of this paper was to provide some guidance for this process of water quality benchmark development on the program level, for the derivation methodology development, and in the actual benchmark derivation step, as well as to point out some issues (notably the inclusion of adapted populations and cryptic species and points to consider in the use of the species sensitivity distribution approach) and future opportunities (an international data repository and international collaboration in water quality benchmark development).
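One point the paper raises, the species sensitivity distribution (SSD) approach, can be sketched as a log-normal fit with the 5th percentile (HC5) taken as a candidate protective benchmark; the toxicity endpoints below are hypothetical and the log-normal form is one common choice among several.

```python
import numpy as np
from scipy import stats

# Hypothetical species toxicity endpoints (e.g., EC50s in ug/L).
toxicity = np.array([12.0, 25.0, 40.0, 55.0, 90.0, 150.0, 300.0, 480.0])

# Species sensitivity distribution: fit a log-normal and take the 5th
# percentile (HC5) as a candidate benchmark concentration.
mu, sigma = stats.norm.fit(np.log(toxicity))
hc5 = np.exp(stats.norm.ppf(0.05, loc=mu, scale=sigma))
print(f"HC5 = {hc5:.1f} ug/L")
```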
Fund allocation within Australian dental care: an innovative approach to output based funding.
Tennant, M; Carrello, C; Kruger, E
2005-12-01
Over the last 15 years in Australia the process of funding government health care has changed significantly. The development of dental funding models that transparently meet both the service delivery needs for data at the treatment level and policy makers' need for health condition data is critical to the continued integration of dentistry into the wider health system. This paper presents a model of fund allocation that provides a communication construct that addresses the needs of both policy makers and service providers. In this model, dental treatments (dental item numbers) have been grouped into eight broad dental health conditions. Within each dental health condition, a weighted average price is determined using the Department of Veterans Affairs' (DVA) fee schedule as the benchmark, adjusted for the mix of care. The model also adjusts for the efficiency differences between sectors providing government funded dental care. In summary, the price to be applied to a dental health condition category is determined by the weighted average DVA price adjusted by the sector efficiency. This model allows governments and dental service providers to develop funding agreements that both quantify and justify the treatment to be provided. Such a process facilitates the continued integration of dental care into the wider health system.
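A sketch of the pricing rule as described, with hypothetical item prices, care mix, and efficiency factor; whether the efficiency adjustment divides or multiplies the weighted price is an assumption here, not stated in the abstract.

```python
# Hypothetical item prices and care-mix shares within one dental health condition.
dva_prices = {"item_A": 120.0, "item_B": 80.0, "item_C": 200.0}
care_mix   = {"item_A": 0.5,   "item_B": 0.3,  "item_C": 0.2}

# Weighted average price using the DVA fee schedule as the benchmark.
weighted_price = sum(dva_prices[i] * care_mix[i] for i in dva_prices)

# Adjust for the efficiency of the sector delivering the care (assumed form).
sector_efficiency = 0.9  # e.g., a sector operating below the DVA benchmark
condition_price = weighted_price / sector_efficiency
print(round(condition_price, 2))
```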
Contracting out : benchmarking study : phase 1 : part 2 : external data collection
DOT National Transportation Integrated Search
2002-04-17
The planning provisions of the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) and the transportation provisions of the Clean Air Act Amendments of 1990 (CAAA) define the framework for the effective integration of transportation and ...
Benchmarking and validation activities within JEFF project
NASA Astrophysics Data System (ADS)
Cabellos, O.; Alvarez-Velarde, F.; Angelone, M.; Diez, C. J.; Dyrda, J.; Fiorito, L.; Fischer, U.; Fleming, M.; Haeck, W.; Hill, I.; Ichou, R.; Kim, D. H.; Klix, A.; Kodeli, I.; Leconte, P.; Michel-Sendis, F.; Nunnenmann, E.; Pecchia, M.; Peneliau, Y.; Plompen, A.; Rochman, D.; Romojaro, P.; Stankovskiy, A.; Sublet, J. Ch.; Tamagno, P.; Marck, S. van der
2017-09-01
The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.
Using a health promotion model to promote benchmarking.
Welby, Jane
2006-07-01
The North East (England) Neonatal Benchmarking Group has been established for almost a decade and has researched and developed a substantial number of evidence-based benchmarks. With no firm evidence that these were being used or that there was any standardisation of neonatal care throughout the region, the group embarked on a programme to review the benchmarks and determine what evidence-based guidelines were needed to support standardisation. A health promotion planning model was used by one subgroup to structure the programme; it enabled all members of the subgroup to engage in the review process and provided the motivation and supporting documentation for implementation of changes in practice. The need for a regional guideline development group to complement the activity of the benchmarking group is being addressed.
Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess; J. Blair Briggs; David W. Nigg
2009-11-01
One of the challenges that today's new workforce of nuclear criticality safety engineers face is the opportunity to provide assessment of nuclear systems and establish safety guidelines without having received significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals the opportunity to gain experience and enhance critical engineering skills.
Dynamic vehicle routing with time windows in theory and practice.
Yang, Zhiwei; van Osta, Jan-Paul; van Veen, Barry; van Krevelen, Rick; van Klaveren, Richard; Stam, Andries; Kok, Joost; Bäck, Thomas; Emmerich, Michael
2017-01-01
The vehicle routing problem is a classical combinatorial optimization problem. This work is about a variant of the vehicle routing problem with dynamically changing orders and time windows. In real-world applications the demands often change during operation time: new orders occur and others are canceled, so new schedules need to be generated on the fly. Online optimization algorithms for dynamic vehicle routing address this problem, but so far they do not consider time windows. Moreover, to match the scenarios found in real-world problems, adaptations of benchmarks are required. In this paper, a practical problem is modeled based on the procedure of daily routing of a delivery company. New orders by customers are introduced dynamically during the working day and need to be integrated into the schedule. A multiple ant colony algorithm combined with powerful local search procedures is proposed to solve the dynamic vehicle routing problem with time windows. The performance is tested on a new benchmark based on simulations of a working day. The problems are taken from Solomon's benchmarks, but a certain percentage of the orders are only revealed to the algorithm during operation time. Different versions of the MACS algorithm are tested and a high-performing variant is identified. Finally, the algorithm is tested in situ: in a field study, the algorithm schedules a fleet of cars for a surveillance company. We compare the performance of the algorithm to that of the procedure used by the company and summarize insights gained from the implementation of the real-world study. The results show that the multiple ant colony algorithm obtains much better solutions on the academic benchmark problems and can also be integrated into a real-world environment.
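The core dynamic step, integrating a newly revealed order into an existing schedule, is often handled with a cheapest-insertion move; below is a distance-only sketch that ignores time windows (which the MACS algorithm additionally enforces) and uses invented coordinates.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cheapest_insertion(route, new_stop):
    """Insert a dynamically arriving order where it adds the least distance."""
    best_pos, best_cost = None, float("inf")
    for i in range(1, len(route)):
        added = (dist(route[i - 1], new_stop) + dist(new_stop, route[i])
                 - dist(route[i - 1], route[i]))
        if added < best_cost:
            best_pos, best_cost = i, added
    return route[:best_pos] + [new_stop] + route[best_pos:]

depot = (0.0, 0.0)
route = [depot, (2.0, 1.0), (5.0, 0.0), depot]   # current schedule
print(cheapest_insertion(route, (3.0, 0.5)))     # new order arriving mid-day
```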
Collected notes from the Benchmarks and Metrics Workshop
NASA Technical Reports Server (NTRS)
Drummond, Mark E.; Kaelbling, Leslie P.; Rosenschein, Stanley J.
1991-01-01
In recent years there has been a proliferation of proposals in the artificial intelligence (AI) literature for integrated agent architectures. Each architecture offers an approach to the general problem of constructing an integrated agent. Unfortunately, the ways in which one architecture might be considered better than another are not always clear. There has been a growing realization that many of the positive and negative aspects of an architecture become apparent only when experimental evaluation is performed and that to progress as a discipline, we must develop rigorous experimental methods. In addition to the intrinsic intellectual interest of experimentation, rigorous performance evaluation of systems is also a crucial practical concern to our research sponsors. DARPA, NASA, and AFOSR (among others) are actively searching for better ways of experimentally evaluating alternative approaches to building intelligent agents. One tool for experimental evaluation involves testing systems on benchmark tasks in order to assess their relative performance. As part of a joint DARPA and NASA funded project, NASA-Ames and Teleos Research are carrying out a research effort to establish a set of benchmark tasks and evaluation metrics by which the performance of agent architectures may be determined. As part of this project, we held a workshop on Benchmarks and Metrics at the NASA Ames Research Center on June 25, 1990. The objective of the workshop was to foster early discussion on this important topic. We did not achieve a consensus, nor did we expect to. Collected here is some of the information that was exchanged at the workshop. Given here is an outline of the workshop, a list of the participants, notes taken on the white-board during open discussions, position papers/notes from some participants, and copies of slides used in the presentations.
Edwards, Roger A; Dee, Deborah; Umer, Amna; Perrine, Cria G; Shealy, Katherine R; Grummer-Strawn, Laurence M
2014-02-01
A substantial proportion of US maternity care facilities engage in practices that are not evidence-based and that interfere with breastfeeding. The CDC Survey of Maternity Practices in Infant Nutrition and Care (mPINC) showed significant variation in maternity practices among US states. The purpose of this article is to use benchmarking techniques to identify states within relevant peer groups that were top performers on mPINC survey indicators related to breastfeeding support. We used 11 indicators of breastfeeding-related maternity care from the 2011 mPINC survey and benchmarking techniques to organize and compare hospital-based maternity practices across the 50 states and Washington, DC. We created peer categories for benchmarking first by region (grouping states by West, Midwest, South, and Northeast) and then by size (grouping states by the number of maternity facilities and dividing each region into approximately equal halves based on the number of facilities). Thirty-four states had scores high enough to serve as benchmarks, and 32 states had scores low enough to reflect the lowest score gap from the benchmark on at least 1 indicator. No state served as the benchmark on more than 5 indicators and no state was furthest from the benchmark on more than 7 indicators. The small peer group benchmarks in the South, West, and Midwest were better than the large peer group benchmarks on 91%, 82%, and 36% of the indicators, respectively. In the West large, the Midwest large, the Midwest small, and the South large peer groups, 4-6 benchmarks showed that less than 50% of hospitals have ideal practice in all states. The evaluation presents benchmarks for peer group state comparisons that provide potential and feasible targets for improvement.
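The benchmarking logic, top performer per peer group plus the largest gap from the benchmark, reduces to a small computation; the peer groups and state scores below are invented for illustration.

```python
# Hypothetical state scores on one mPINC indicator, grouped by peer category.
peer_groups = {
    "South-large": {"TX": 71, "FL": 78, "GA": 69},
    "South-small": {"SC": 83, "AR": 74},
}

for group, scores in peer_groups.items():
    benchmark_state = max(scores, key=scores.get)  # peer-group benchmark
    gap_state = min(scores, key=scores.get)        # furthest from benchmark
    gap = scores[benchmark_state] - scores[gap_state]
    print(f"{group}: benchmark {benchmark_state} ({scores[benchmark_state]}), "
          f"largest gap {gap} points ({gap_state})")
```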
GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collin, Blaise Paul
This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, thereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read this document thoroughly to make sure all the data needed for their calculations is provided in the document. Missing data will be added to a revision of the document if necessary. 09/2016: Tables 6 and 8 updated. AGR-2 input data added.
U.S. EPA'S ACUTE REFERENCE EXPOSURE METHODOLOGY FOR ACUTE INHALATION EXPOSURES
The US EPA National Center for Environmental Assessment has developed a methodology to derive acute inhalation toxicity benchmarks, called acute reference exposures (AREs), for noncancer effects. The methodology provides guidance for the derivation of chemical-specific benchmark...
New data tool provides wealth of clinical, financial benchmarks by census region.
1998-08-01
Data Library: Compare your departmental expenses, administrative expense ratio, length of stay, and other clinical-financial data to benchmarks for your census region. A new CD-ROM product that provides access to four years of Medicare Cost Report data for every reporting hospital in the nation allows users to slice and dice the data by more than 200 different performance measures.
Value of 18F-FDG PET and PET/CT for evaluation of pediatric malignancies.
Uslu, Lebriz; Donig, Jessica; Link, Michael; Rosenberg, Jarrett; Quon, Andrew; Daldrup-Link, Heike E
2015-02-01
Successful management of solid tumors in children requires imaging tests for accurate disease detection, characterization, and treatment monitoring. Technologic developments aim toward the creation of integrated imaging approaches that provide a comprehensive diagnosis with a single visit. These integrated diagnostic tests not only are convenient for young patients but also save direct and indirect health-care costs by streamlining procedures, minimizing hospitalizations, and minimizing lost school or work time for children and their parents. (18)F-FDG PET/CT is a highly sensitive and specific imaging modality for whole-body evaluation of pediatric malignancies. However, recent concerns about ionizing radiation exposure have led to a search for alternative imaging methods, such as whole-body MR imaging and PET/MR. As we develop new approaches for tumor staging, it is important to understand current benchmarks. This review article will synthesize the current literature on (18)F-FDG PET/CT for tumor staging in children, summarizing questions that have been solved and providing an outlook on unsolved avenues.
Barty, Rebecca L; Gagliardi, Kathleen; Owens, Wendy; Lauzon, Deborah; Scheuermann, Sheena; Liu, Yang; Wang, Grace; Pai, Menaka; Heddle, Nancy M
2015-07-01
Benchmarking is a quality improvement tool that compares an organization's performance to that of its peers for selected indicators, to improve practice. Processes to develop evidence-based benchmarks for red blood cell (RBC) outdating in Ontario hospitals, based on RBC hospital disposition data from Canadian Blood Services, have been previously reported. These benchmarks were implemented in 160 hospitals provincewide with a multifaceted approach, which included hospital education, inventory management tools and resources, summaries of best practice recommendations, recognition of high-performing sites, and audit tools on the Transfusion Ontario website (http://transfusionontario.org). In this study we describe the implementation process and the impact of the benchmarking program on RBC outdating. A conceptual framework for continuous quality improvement of a benchmarking program was also developed. The RBC outdating rate for all hospitals trended downward continuously from April 2006 to February 2012, irrespective of hospitals' transfusion rates or their distance from the blood supplier. The highest annual outdating rate was 2.82%, at the beginning of the observation period. Each year brought further reductions, with a nadir outdating rate of 1.02% achieved in 2011. The key elements of the successful benchmarking strategy included dynamic targets, a comprehensive and evidence-based implementation strategy, ongoing information sharing, and a robust data system to track information. The Ontario benchmarking program for RBC outdating resulted in continuous and sustained quality improvement. Our conceptual iterative framework for benchmarking provides a guide for institutions implementing a benchmarking program.
Benchmark problems and solutions
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.
1995-01-01
The scientific committee, after careful consideration, adopted six categories of benchmark problems for the workshop. These problems do not cover all the important computational issues relevant to Computational Aeroacoustics (CAA). The deciding factor in limiting the number of categories to six was the amount of effort needed to solve these problems. For reference purposes, the benchmark problems are provided here. They are followed by the exact or approximate analytical solutions. At present, an exact solution for the Category 6 problem is not available.
Performance Benchmark for a Prismatic Flow Solver
2007-03-26
The Lower-Upper Symmetric Gauss-Seidel (LU-SGS) implicit method [1] is used for time integration to reduce the computational time, and a one-equation turbulence model by Goldberg is used in the numerical flux computations. [1] Sharov, D. and Nakahashi, K., "Reordering of Hybrid Unstructured Grids for Lower-Upper Symmetric Gauss-Seidel Computations," AIAA Journal, Vol. 36.
Benchmarking health IT among OECD countries: better data for better policy
Adler-Milstein, Julia; Ronchi, Elettra; Cohen, Genna R; Winn, Laura A Pannella; Jha, Ashish K
2014-01-01
Objective: To develop benchmark measures of health information and communication technology (ICT) use to facilitate cross-country comparisons and learning. Materials and methods: The effort is led by the Organisation for Economic Co-operation and Development (OECD). Approaches to definition and measurement within four ICT domains were compared across seven OECD countries in order to identify functionalities in each domain. These informed a set of functionality-based benchmark measures, which were refined in collaboration with representatives from more than 20 OECD and non-OECD countries. We report on progress to date and remaining work to enable countries to begin to collect benchmark data. Results: The four benchmarking domains include provider-centric electronic record, patient-centric electronic record, health information exchange, and tele-health. There was broad agreement on functionalities in the provider-centric electronic record domain (eg, entry of core patient data, decision support), and less agreement in the other three domains, in which country representatives worked to select benchmark functionalities. Discussion: Many countries are working to implement ICTs to improve healthcare system performance. Although many countries are looking to others as potential models, the lack of consistent terminology and approach has made cross-national comparisons and learning difficult. Conclusions: As countries develop and implement strategies to increase the use of ICTs to promote health goals, there is a historic opportunity to enable cross-country learning. To facilitate this learning and reduce the chances that individual countries flounder, a common understanding of health ICT adoption and use is needed. The OECD-led benchmarking process is a crucial step towards achieving this. PMID:23721983
Medicare Part D Roulette: Potential Implications of Random Assignment and Plan Restrictions
Patel, Rajul A.; Walberg, Mark P.; Woelfel, Joseph A.; Amaral, Michelle M.; Varu, Paresh
2013-01-01
Background Dual-eligible (Medicare/Medicaid) beneficiaries are randomly assigned to a benchmark plan, which provides prescription drug coverage under the Part D benefit without consideration of their prescription drug profile. To date, the potential for beneficiary assignment to a plan with poor formulary coverage has been minimally studied, and the resultant financial impact on beneficiaries is unknown. Objective We sought to determine cost variability and drug use restrictions under each available 2010 California benchmark plan. Methods Dual-eligible beneficiaries were provided Part D plan assistance during the 2010 annual election period. The Medicare Web site was used to determine benchmark plan costs and prescription utilization restrictions for each of the six California benchmark plans available for random assignment in 2010. A standardized survey was used to record all de-identified beneficiary demographic and plan-specific data. For each low-income subsidy recipient (n = 113), cost, rank, number of non-formulary medications, and prescription utilization restrictions were recorded for each available 2010 California benchmark plan. Formulary matching rates (percent of a beneficiary's medications on the plan formulary) were calculated for each benchmark plan. Results Auto-assigned beneficiaries had only a 34% chance of being assigned to the lowest cost plan; the remainder faced potentially significant avoidable out-of-pocket costs. Wide variations between benchmark plans were observed for plan cost, formulary coverage, formulary matching rates, and prescription utilization restrictions. Conclusions Beneficiaries had a 66% chance of being assigned to a sub-optimal plan and thereby faced significant avoidable out-of-pocket costs. Alternative methods of beneficiary assignment could decrease beneficiary and Medicare costs while also reducing medication non-compliance. PMID:24753963
Modelling of a Solar Thermal Power Plant for Benchmarking Blackbox Optimization Solvers
NASA Astrophysics Data System (ADS)
Lemyre Garneau, Mathieu
A new family of problems is provided to serve as a benchmark for blackbox optimization solvers. The problems are single- or bi-objective and vary in complexity in terms of the number of variables used (from 5 to 29), the type of variables (integer, real, categorical), the number of constraints (from 5 to 17), and their types (binary or continuous). In order to provide problems exhibiting dynamics that reflect real engineering challenges, they are extracted from an original numerical model of a concentrated solar power (CSP) plant with molten salt thermal storage. The model simulates the performance of the power plant through high-level modeling of each of its main components, namely a heliostat field, a central cavity receiver, molten salt heat storage, a steam generator, and an idealized power block. The heliostat field layout is determined through a simple automatic strategy that finds the best individual positions on the field by considering their respective cosine efficiency, atmospheric scattering, and spillage losses as a function of the design parameters. A Monte Carlo integration method is used to evaluate the heliostat field's optical performance throughout the day so that shadowing effects between heliostats are considered, and the results of this evaluation provide the inputs for simulating the levels and temperatures of the thermal storage. The molten salt storage inventory is used to transfer thermal energy to the power block, which simulates a simple Rankine cycle with a single steam turbine. Auxiliary models are used to provide additional optimization constraints on investment cost, parasitic losses, or component failures. The results of preliminary optimizations performed with the NOMAD software using default settings are provided to show the validity of the problems.
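As a rough illustration of what a solver-facing interface to such a benchmark might look like, here is a minimal Python sketch with invented bounds, objective, and constraints (the actual CSP model is far more detailed); a blackbox solver such as NOMAD sees only inputs and outputs, never the simulation internals:

```python
import random

N_VARS = 5    # e.g., tower height, receiver aperture, field radius, ...
LOWER = [10.0, 1.0, 50.0, 0.1, 5.0]      # invented bounds
UPPER = [200.0, 15.0, 500.0, 0.9, 50.0]

def csp_blackbox(x):
    """Evaluate one design point: returns (objective, constraint values).

    Stand-in for the plant simulation; constraints <= 0 mean feasible.
    """
    assert len(x) == N_VARS
    energy = sum(xi / ui for xi, ui in zip(x, UPPER))    # toy "output"
    cost_violation = x[0] * x[1] / 1000.0 - 1.0          # toy budget cap
    storage_violation = 0.5 - x[3]                       # toy storage floor
    return -energy, [cost_violation, storage_violation]  # minimize -energy

# One probe of the design space, as a solver might issue it.
x0 = [random.uniform(lo, hi) for lo, hi in zip(LOWER, UPPER)]
f, g = csp_blackbox(x0)
print(f, g)
```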
Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks
NASA Technical Reports Server (NTRS)
Jin, Haoqiang; VanderWijngaart, Rob F.
2003-01-01
We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.
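The multi-zone update strategy is easy to picture in a few lines. Below is a toy Python sketch (the reference implementations are Fortran-based with MPI/OpenMP, and the zone solver here is a trivial stand-in for LU/BT/SP): zones advance independently within a time step, which is the exploitable coarse-grain parallelism, and then exchange boundary values.

```python
zones = [[1.0] * 8 for _ in range(4)]    # four 1-D "meshes"

def advance(zone):
    # Independent per-zone work: a hybrid code would distribute this
    # loop across MPI ranks, with OpenMP threads inside each zone.
    return [0.5 * v + 0.25 for v in zone]

for step in range(10):
    zones = [advance(z) for z in zones]      # parallelizable across zones
    for i in range(len(zones)):              # boundary-value exchange
        left = zones[i - 1]                  # wraps around (periodic)
        zones[i][0] = 0.5 * (zones[i][0] + left[-1])
```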
Reducing accounts receivable through benchmarking and best practices identification.
Berkey, T
1998-01-01
As HIM professionals look for ways to become more competitive and achieve the best results, the importance of discovering best practices becomes more apparent. Here's how one team used a benchmarking project to provide specific best practices that reduced accounts receivable days.
Watkinson, William; Raison, Nicholas; Abe, Takashige; Harrison, Patrick; Khan, Shamim; Van der Poel, Henk; Dasgupta, Prokar; Ahmed, Kamran
2018-05-01
To establish objective benchmarks at the level of a competent robotic surgeon across different exercises and metrics for the RobotiX Mentor virtual reality (VR) simulator, suitable for use within a robotic surgical training curriculum. This retrospective observational study analysed results from multiple data sources, all of which used the RobotiX Mentor VR simulator. 123 participants with experience varying from novice to expert completed the exercises. Competency was established as the 25th centile of the mean advanced-intermediate score. Three basic skill exercises and two advanced skill exercises were used. King's College London. 84 novices, 26 beginner intermediates, 9 advanced intermediates and 4 experts took part in this retrospective observational study. Objective benchmarks derived from the 25th centile of the mean scores of the advanced intermediates provided suitably challenging yet achievable targets for training surgeons. The disparity in scores was greatest for the advanced exercises. Novice surgeons were able to achieve the benchmarks in the majority of metrics across all exercises. We have successfully created this proof-of-concept study, which requires validation in a larger cohort. Objective benchmarks obtained from the 25th centile of the mean scores of advanced intermediates provide clinically relevant standards, at the level of a competent robotic surgeon, that are challenging yet attainable. They can be used within a VR training curriculum, allowing participants to track and monitor their progress through five exercises in a structured, progressive manner, and they provide clearly defined targets that help ensure a universal training standard across training surgeons. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
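The benchmark derivation itself is simple to reproduce. A sketch with invented scores (not data from the study): the benchmark for each metric is the 25th centile of the advanced-intermediate scores, against which a trainee's score is compared.

```python
advanced_intermediate_scores = [62, 71, 68, 75, 80, 66, 73, 69, 77]

def percentile(data, p):
    """Percentile (0-100) with linear interpolation."""
    s = sorted(data)
    k = (len(s) - 1) * p / 100.0
    f = int(k)
    c = min(f + 1, len(s) - 1)
    return s[f] + (s[c] - s[f]) * (k - f)

benchmark = percentile(advanced_intermediate_scores, 25)  # 68 here
trainee_scores = [55, 70, 81]
print(benchmark, [score >= benchmark for score in trainee_scores])
```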
Benchmarking and the laboratory
Galloway, M; Nadin, L
2001-01-01
This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess; J. Blair Briggs; Jim Gulliford
2014-10-01
The International Reactor Physics Experiment Evaluation Project (IRPhEP) is a widely recognized, world-class program. The work of the IRPhEP is documented in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Integral data from the IRPhEP Handbook are used by reactor safety and design, nuclear data, criticality safety, and analytical methods development specialists worldwide to perform necessary validations of their calculational techniques. The IRPhEP Handbook is among the most frequently quoted references in the nuclear industry and is expected to be a valuable resource for future decades.
Quantifying complexity in translational research: an integrated approach
Munoz, David A.; Nembhard, Harriet Black; Kraschnewski, Jennifer L.
2014-01-01
Purpose This article quantifies complexity in translational research. The impact of major operational steps and technical requirements (TR) is calculated with respect to their ability to accelerate moving new discoveries into clinical practice. Design/Methodology/Approach A three-phase integrated Quality Function Deployment (QFD) and Analytic Hierarchy Process (AHP) method was used to quantify complexity in translational research. A case study in obesity was used to demonstrate usability. Findings Generally, the evidence generated was valuable for understanding various components in translational research. Particularly, we found that collaboration networks, multidisciplinary team capacity and community engagement are crucial for translating new discoveries into practice. Research limitations/implications As the method is mainly based on subjective opinion, some argue that the results may be biased. However, a consistency ratio is calculated and used as a guide to subjectivity. Alternatively, a larger sample may be incorporated to reduce bias. Practical implications The integrated QFD-AHP framework provides evidence that could be helpful to generate agreement, develop guidelines, allocate resources wisely, identify benchmarks and enhance collaboration among similar projects. Originality/value Current conceptual models in translational research provide little or no clue to assess complexity. The proposed method aimed to fill this gap. Additionally, the literature review includes various features that have not been explored in translational research. PMID:25417380
NASA Astrophysics Data System (ADS)
Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.
2015-12-01
Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.
Space Operations Training Concepts Benchmark Study (Training in a Continuous Operations Environment)
NASA Technical Reports Server (NTRS)
Johnston, Alan E.; Gilchrist, Michael; Underwood, Debrah (Technical Monitor)
2002-01-01
The NASA/USAF Benchmark Space Operations Training Concepts Study will perform a comparative analysis of the space operations training programs utilized by the United States Air Force Space Command with those utilized by the National Aeronautics and Space Administration. The concentration of the study will be focused on Ground Controller/Flight Controller Training for the International Space Station Payload Program. The duration of the study is expected to be five months with report completion by 30 June 2002. The U.S. Air Force Space Command was chosen as the most likely candidate for this benchmark study because their experience in payload operations controller training and user interfaces compares favorably with the Payload Operations Integration Center's training and user interfaces. These similarities can be seen in the dynamics of missions/payloads, controller on-console requirements, and currency/proficiency challenges to name a few. It is expected that the report will look at the respective programs and investigate goals of each training program, unique training challenges posed by space operations ground controller environments, processes of setting up controller training programs, phases of controller training, methods of controller training, techniques to evaluate adequacy of controller knowledge and the training received, and approaches to training administration. The report will provide recommendations to the respective agencies based on the findings. Attached is a preliminary outline of the study. Following selection of participants and an approval to proceed, initial contact will be made with U.S. Air Force Space Command Directorate of Training to discuss steps to accomplish the study.
Integrating Six Sigma with total quality management: a case example for measuring medication errors.
Revere, Lee; Black, Ken
2003-01-01
Six Sigma is a new management philosophy that seeks a nonexistent error rate. It is ripe for healthcare because many healthcare processes require a near-zero tolerance for mistakes. For most organizations, establishing a Six Sigma program requires significant resources and produces considerable stress. However, in healthcare, management can piggyback Six Sigma onto current total quality management (TQM) efforts so that minimal disruption occurs in the organization. Six Sigma is an extension of the Failure Mode and Effects Analysis that is required by JCAHO; it can easily be integrated into existing quality management efforts. Integrating Six Sigma into the existing TQM program facilitates process improvement through detailed data analysis. A drilled-down approach to root-cause analysis greatly enhances the existing TQM approach. Using the Six Sigma metrics, internal project comparisons facilitate resource allocation while external project comparisons allow for benchmarking. Thus, the application of Six Sigma makes TQM efforts more successful. This article presents a framework for including Six Sigma in an organization's TQM plan while providing a concrete example using medication errors. Using the process defined in this article, healthcare executives can integrate Six Sigma into all of their TQM projects.
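As a worked illustration of the Six Sigma metric mentioned above, with invented figures rather than data from the article, an observed medication error count can be converted into defects per million opportunities (DPMO) and an approximate sigma level:

```python
from statistics import NormalDist

errors, opportunities = 42, 180_000            # invented figures
dpmo = errors / opportunities * 1_000_000      # defects per million
# Conventional short-term sigma includes the 1.5-sigma shift.
sigma = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5
print(f"DPMO = {dpmo:.0f}, sigma level ~ {sigma:.2f}")   # ~233, ~5.0
```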
A Seafloor Benchmark for 3-dimensional Geodesy
NASA Astrophysics Data System (ADS)
Chadwell, C. D.; Webb, S. C.; Nooner, S. L.
2014-12-01
We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone. Using a ROV to place and remove sensors on the benchmarks will significantly reduce the number of sensors required by the community to monitor offshore strain in subduction zones.
A Field-Based Aquatic Life Benchmark for Conductivity in ...
This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for dissolved salts as measured by conductivity in Central Appalachian streams using data from West Virginia and Kentucky. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.
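A rough sketch of the field-based derivation described above, with invented extirpation values rather than the report's data: estimate the conductivity at which each genus is lost from streams, then take the level at which 5% of genera are lost as the benchmark.

```python
xc_by_genus = {   # genus: extirpation conductivity, in uS/cm (invented)
    "Ephemerella": 230, "Drunella": 280, "Cinygmula": 310,
    "Epeorus": 350, "Baetis": 900, "Hydropsyche": 1400,
    "Cheumatopsyche": 2000, "Chironomus": 3500,
}
xcs = sorted(xc_by_genus.values())
# Crude empirical 5th percentile: with few genera it falls at the
# smallest observed value; a real derivation would interpolate.
k = max(0, int(round(0.05 * len(xcs))) - 1)
hc05 = xcs[k]
print(f"benchmark (HC05) ~ {hc05} uS/cm")
```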
Benchmarking: measuring the outcomes of evidence-based practice.
DeLise, D C; Leasure, A R
2001-01-01
Measurement of the outcomes associated with implementation of evidence-based practice changes is becoming increasingly emphasized by multiple health care disciplines. A final step to the process of implementing and sustaining evidence-supported practice changes is that of outcomes evaluation and monitoring. The comparison of outcomes to internal and external measures is known as benchmarking. This article discusses evidence-based practice, provides an overview of outcomes evaluation, and describes the process of benchmarking to improve practice. A case study is used to illustrate this concept.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W., II
1993-01-01
One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
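The two-tier screening logic the report describes reduces to a small decision rule. A minimal sketch with placeholder benchmark values (not actual NAWQC/SAV numbers):

```python
benchmarks = {   # chemical: (lower, upper) screening benchmark, ug/L
    "copper": (2.0, 13.0),     # placeholder values
    "zinc": (30.0, 120.0),
}

def screen(chemical, ambient_conc):
    lower, upper = benchmarks[chemical]
    if ambient_conc > upper:
        return "of concern; remedial action likely needed"
    if ambient_conc > lower:
        return "of concern unless data or comparison judged inadequate"
    return "not of concern, if ambient data are adequate"

print(screen("copper", 20.0))   # exceeds upper benchmark
print(screen("zinc", 45.0))     # between lower and upper
```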
Perlin, Jonathan B; Kolodner, Robert M; Roswell, Robert H
2004-11-01
The Veterans Health Administration is the United States' largest integrated health system. Once disparaged as a bureaucracy providing mediocre care, the Department of Veterans Affairs (VA) reinvented itself during the past decade through a policy shift mandating structural and organizational change, rationalization of resource allocation, explicit measurement and accountability for quality and value, and development of an information infrastructure supporting the needs of patients, clinicians, and administrators. Today, the VA is recognized for leadership in clinical informatics and performance improvement, cares for more patients with proportionally fewer resources, and sets national benchmarks in patient satisfaction and for 18 indicators of quality in disease prevention and treatment.
Quantum-enhanced multiparameter estimation in multiarm interferometers
Ciampini, Mario A.; Spagnolo, Nicolò; Vitelli, Chiara; Pezzè, Luca; Smerzi, Augusto; Sciarrino, Fabio
2016-01-01
Quantum metrology is the state-of-the-art measurement technology. It uses quantum resources to enhance the sensitivity of phase estimation over that achievable by classical physics. While single-parameter estimation theory has been widely investigated, much less is known about the simultaneous estimation of multiple phases, which finds key applications in imaging and sensing. In this manuscript we provide conditions of useful particle (qudit) entanglement for multiphase estimation and adapt them to multiarm Mach-Zehnder interferometry. We theoretically discuss benchmark multimode Fock states containing useful qudit entanglement and overcoming the sensitivity of separable qudit states in three- and four-arm Mach-Zehnder-like interferometers - currently within the reach of integrated photonics technology. PMID:27381743
A Method of Synchrophasor Technology for Detecting and Analyzing Cyber-Attacks
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCann, Roy; Al-Sarray, Muthanna
Studying cybersecurity events and analyzing their impacts encourage planners and operators to develop innovative approaches for preventing attacks in order to avoid outages and other disruptions. This work considers two parts of security studies: detecting an integrity attack and examining its effects on power system generators. The detection was conducted by employing synchrophasor technology to provide authentication of AGC commands based on observed system operating characteristics. The examination of an attack is completed via a detailed simulation of a modified IEEE 68-bus benchmark model to show the associated power system dynamic response. The results of the simulation are discussed for assessing the impacts of cyber threats.
NASA Astrophysics Data System (ADS)
Liu, Fenglai; Kong, Jing
2018-07-01
Unique technical challenges and their solutions for implementing semi-numerical Hartree-Fock exchange on the Phi processor are discussed, especially concerning the single-instruction-multiple-data type of processing and small cache size. Benchmark calculations on a series of buckyball molecules with various Gaussian basis sets on a Phi processor and a six-core CPU show that the Phi processor provides as much as 12 times speedup with large basis sets compared with the conventional four-center electron repulsion integration approach performed on the CPU. The accuracy of the semi-numerical scheme is also evaluated and found to be comparable to that of the resolution-of-identity approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stupakov, Gennady; Zhou, Demin
2016-04-21
We develop a general model of coherent synchrotron radiation (CSR) impedance with shielding provided by two parallel conducting plates. This model allows us to easily reproduce all previously known analytical CSR wakes and to expand the analysis to situations not explored before. It reduces calculations of the impedance to taking integrals along the trajectory of the beam. New analytical results are derived for the radiation impedance with shielding for the following orbits: a kink, a bending magnet, a wiggler of finite length, and an infinitely long wiggler. Furthermore, all our formulas are benchmarked against numerical simulations with the CSRZ computer code.
Categorical Regression and Benchmark Dose Software 3.0
The objective of this full-day course is to provide participants with interactive training on the use of the U.S. Environmental Protection Agency’s (EPA) Benchmark Dose software (BMDS, version 3.0, released fall 2018) and Categorical Regression software (CatReg, version 3.1...
Navigation Ground Data System Engineering for the Cassini/Huygens Mission
NASA Technical Reports Server (NTRS)
Beswick, R. M.; Antreasian, P. G.; Gillam, S. D.; Hahn, Y.; Roth, D. C.; Jones, J. B.
2008-01-01
The launch of the Cassini/Huygens mission on October 15, 1997, began a seven-year journey across the solar system that culminated in the entry of the spacecraft into Saturnian orbit on June 30, 2004. Cassini/Huygens Spacecraft Navigation is the result of a complex interplay between several teams within the Cassini Project, performed on the Ground Data System. The work of Spacecraft Navigation involves rigorous requirements for accuracy and completeness, carried out often under uncompromising critical time pressures. To support the Navigation function, a fault-tolerant, high-reliability/high-availability computational environment was necessary to support data processing. Configuration Management (CM) was integrated with fault-tolerant design and security engineering, according to the cornerstone principles of Confidentiality, Integrity, and Availability. Integrated with this approach are security benchmarks and validation to meet strict confidence levels. In addition, similar approaches to CM were applied in consideration of the staffing and training of the system administration team supporting this effort. As a result, the current configuration of this computational environment incorporates a secure, modular system that provides for almost no downtime during tour operations.
Multiloop integral system test (MIST): Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gloudemans, J.R.
1991-04-01
The Multiloop Integral System Test (MIST) is part of a multiphase program started in 1983 to address small-break loss-of-coolant accidents (SBLOCAs) specific to Babcock and Wilcox designed plants. MIST is sponsored by the US Nuclear Regulatory Commission, the Babcock & Wilcox Owners Group, the Electric Power Research Institute, and Babcock and Wilcox. The unique features of the Babcock and Wilcox design, specifically the hot leg U-bends and steam generators, prevented the use of existing integral system data or existing integral facilities to address the thermal-hydraulic SBLOCA questions. MIST was specifically designed and constructed for this program, and an existing facility -- the Once Through Integral System (OTIS) -- was also used. Data from MIST and OTIS are used to benchmark the adequacy of system codes, such as RELAP5 and TRAC, for predicting abnormal plant transients. The MIST program is reported in 11 volumes. Volumes 2 through 8 pertain to groups of Phase 3 tests by type; Volume 9 presents inter-group comparisons; Volume 10 provides comparisons between the RELAP5/MOD2 calculations and MIST observations, and Volume 11 (with addendum) presents the later Phase 4 tests. This is Volume 1 of the MIST final report, a summary of the entire MIST program. Major topics include Test Advisory Group (TAG) issues, facility scaling and design, test matrix, observations, comparison of RELAP5 calculations to MIST observations, and MIST versus the TAG issues. MIST generated consistent integral-system data covering a wide range of transient interactions. MIST provided insight into integral system behavior and assisted the code effort. The MIST observations addressed each of the TAG issues. 11 refs., 29 figs., 9 tabs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balkey, K.; Witt, F.J.; Bishop, B.A.
1995-06-01
Significant attention has been focused on the issue of reactor vessel pressurized thermal shock (PTS) for many years. Pressurized thermal shock transient events are characterized by a rapid cooldown at potentially high pressure levels that could lead to a reactor vessel integrity concern for some pressurized water reactors. As a result of regulatory and industry efforts in the early 1980s, a probabilistic risk assessment methodology has been established to address this concern. Probabilistic fracture mechanics analyses are performed as part of this methodology to determine the conditional probability of significant flaw extension for given pressurized thermal shock events. While recent industry efforts are underway to benchmark probabilistic fracture mechanics computer codes that are currently used by the nuclear industry, Part I of this report describes the comparison of two independent computer codes used at the time of the development of the original U.S. Nuclear Regulatory Commission (NRC) pressurized thermal shock rule. The work that was originally performed in 1982 and 1983 to compare the U.S. NRC - VISA and Westinghouse (W) - PFM computer codes has been documented and is provided in Part I of this report. Part II of this report describes the results of more recent industry efforts to benchmark PFM computer codes used by the nuclear industry. This study was conducted as part of the USNRC-EPRI Coordinated Research Program for reviewing the technical basis for pressurized thermal shock (PTS) analyses of the reactor pressure vessel. The work focused on the probabilistic fracture mechanics (PFM) analysis codes and methods used to perform the PTS calculations. An in-depth review of the methodologies was performed to verify the accuracy and adequacy of the various different codes. The review was structured around a series of benchmark sample problems to provide a specific context for discussion and examination of the fracture mechanics methodology.
Machine characterization and benchmark performance prediction
NASA Technical Reports Server (NTRS)
Saavedra-Barrera, Rafael H.
1988-01-01
From runs of standard benchmarks or benchmark suites, it is not possible to characterize the machine nor to predict the run time of other benchmarks which have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray-X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone and Sieve of Eratosthenes.
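The prediction step combines the two characterizations linearly: the predicted run time is the sum over source-language operations of the machine's time per operation multiplied by the program's execution count for that operation. A sketch with invented numbers:

```python
machine_ns_per_op = {    # from the "machine analyzer" (invented)
    "fadd": 6.0, "fmul": 8.0, "fdiv": 40.0, "load": 4.0, "branch": 2.0,
}
program_op_counts = {    # from the "program analyzer" (invented)
    "fadd": 5_000_000, "fmul": 3_000_000, "fdiv": 200_000,
    "load": 9_000_000, "branch": 1_500_000,
}
predicted_ns = sum(machine_ns_per_op[op] * n
                   for op, n in program_op_counts.items())
print(f"predicted run time: {predicted_ns / 1e9:.3f} s")
```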
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffin, Brian M.; Larson, Vincent E.
Microphysical processes, such as the formation, growth, and evaporation of precipitation, interact with variability and covariances (e.g., fluxes) in moisture and heat content. For instance, evaporation of rain may produce cold pools, which in turn may trigger fresh convection and precipitation. These effects are usually omitted or else crudely parameterized at subgrid scales in weather and climate models. A more formal approach is pursued here, based on predictive, horizontally averaged equations for the variances, covariances, and fluxes of moisture and heat content. These higher-order moment equations contain microphysical source terms. The microphysics terms can be integrated analytically, given a suitably simple warm-rain microphysics scheme and an approximate assumption about the multivariate distribution of cloud-related and precipitation-related variables. Performing the integrations provides exact expressions within an idealized context. A large-eddy simulation (LES) of a shallow precipitating cumulus case is performed here, and it indicates that the microphysical effects on (co)variances and fluxes can be large. In some budgets and altitude ranges, they are dominant terms. The analytic expressions for the integrals are implemented in a single-column, higher-order closure model. Interactive single-column simulations agree qualitatively with the LES. The analytic integrations form a parameterization of microphysical effects in their own right, and they also serve as benchmark solutions that can be compared to non-analytic integration methods.
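As a generic illustration of the kind of closed-form integration this enables (a sketch, not the paper's actual microphysics scheme): if a warm-rain process rate follows a power law R(q) = c q^a and the subgrid distribution of q is lognormal, the grid-mean rate follows from the standard lognormal moment formula:

```latex
\ln q \sim \mathcal{N}(\mu, \sigma^{2})
\quad\Rightarrow\quad
\overline{R} = c\,\mathbb{E}\!\left[q^{a}\right]
             = c\,\exp\!\left(a\mu + \tfrac{1}{2}a^{2}\sigma^{2}\right)
```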
An integrity measure to benchmark quantum error correcting memories
NASA Astrophysics Data System (ADS)
Xu, Xiaosi; de Beaudrap, Niel; O'Gorman, Joe; Benjamin, Simon C.
2018-02-01
Rapidly developing experiments across multiple platforms now aim to realise small quantum codes, and so demonstrate a memory within which a logical qubit can be protected from noise. There is a need to benchmark the achievements in these diverse systems, and to compare the inherent power of the codes they rely upon. We describe a recently introduced performance measure called integrity, which relates to the probability that an ideal agent will successfully ‘guess’ the state of a logical qubit after a period of storage in the memory. Integrity is straightforward to evaluate experimentally without state tomography and it can be related to various established metrics such as the logical fidelity and the pseudo-threshold. We offer a set of experimental milestones that are steps towards demonstrating unconditionally superior encoded memories. Using intensive numerical simulations we compare memories based on the five-qubit code, the seven-qubit Steane code, and a nine-qubit code which is the smallest instance of a surface code; we assess both the simple and fault-tolerant implementations of each. While the ‘best’ code upon which to base a memory does vary according to the nature and severity of the noise, nevertheless certain trends emerge.
Designing and benchmarking the MULTICOM protein structure prediction system
2013-01-01
Background Predicting protein structure from sequence is one of the most significant and challenging problems in bioinformatics. Numerous bioinformatics techniques and tools have been developed to tackle almost every aspect of protein structure prediction ranging from structural feature prediction, template identification and query-template alignment to structure sampling, model quality assessment, and model refinement. How to synergistically select, integrate and improve the strengths of the complementary techniques at each prediction stage and build a high-performance system is becoming a critical issue for constructing a successful, competitive protein structure predictor. Results Over the past several years, we have constructed a standalone protein structure prediction system MULTICOM that combines multiple sources of information and complementary methods at all five stages of the protein structure prediction process including template identification, template combination, model generation, model assessment, and model refinement. The system was blindly tested during the ninth Critical Assessment of Techniques for Protein Structure Prediction (CASP9) in 2010 and yielded very good performance. In addition to studying the overall performance on the CASP9 benchmark, we thoroughly investigated the performance and contributions of each component at each stage of prediction. Conclusions Our comprehensive and comparative study not only provides useful and practical insights about how to select, improve, and integrate complementary methods to build a cutting-edge protein structure prediction system but also identifies a few new sources of information that may help improve the design of a protein structure prediction system. Several components used in the MULTICOM system are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/. PMID:23442819
Integrative Analysis of Omics Big Data.
Yu, Xiang-Tian; Zeng, Tao
2018-01-01
The diversity and sheer volume of omics data have taken biology and biomedicine research and application into a big data era, much as happened across human society a decade ago. This opens a new challenge that extends from horizontal data ensembles (e.g., similar types of data collected from different labs or companies) to vertical data ensembles (e.g., different types of data collected for a group of people with matched information), which requires integrative analysis in biology and biomedicine and calls for the urgent development of data integration methods to address the shift from population-guided to individual-guided investigations. Data integration is an effective concept for solving complex problems and understanding complicated systems. Several benchmark studies have revealed the heterogeneity and trade-offs that exist in the analysis of omics data. Integrative analysis can combine and investigate many datasets in a cost-effective, reproducible way. Current integration approaches for biological data take two modes: "bottom-up integration" with follow-up manual integration, and "top-down integration" with follow-up in silico integration. This paper first summarizes combinatory analysis approaches to propose candidate protocols for biological experiment design in effective integrative studies of genomics, and then surveys data fusion approaches to offer guidance on developing computational models for detecting biological significance; together these have provided new data resources and analysis tools supporting precision medicine, which depends on big biomedical data. Finally, the problems and future directions for the integrative analysis of omics big data are highlighted.
Systematic Benchmarking of Diagnostic Technologies for an Electrical Power System
NASA Technical Reports Server (NTRS)
Kurtoglu, Tolga; Jensen, David; Poll, Scott
2009-01-01
Automated health management is a critical functionality for complex aerospace systems. A wide variety of diagnostic algorithms have been developed to address this technical challenge. Unfortunately, the lack of support to perform large-scale V&V (verification and validation) of diagnostic technologies continues to create barriers to effective development and deployment of such algorithms for aerospace vehicles. In this paper, we describe a formal framework developed for benchmarking of diagnostic technologies. The diagnosed system is the Advanced Diagnostics and Prognostics Testbed (ADAPT), a real-world electrical power system (EPS), developed and maintained at the NASA Ames Research Center. The benchmarking approach provides a systematic, empirical basis to the testing of diagnostic software and is used to provide performance assessment for different diagnostic algorithms.
Structural Benchmark Creep Testing for the Advanced Stirling Convertor Heater Head
NASA Technical Reports Server (NTRS)
Krause, David L.; Kalluri, Sreeramesh; Bowman, Randy R.; Shah, Ashwin R.
2008-01-01
The National Aeronautics and Space Administration (NASA) has identified the high efficiency Advanced Stirling Radioisotope Generator (ASRG) as a candidate power source for use on long duration Science missions such as lunar applications, Mars rovers, and deep space missions. For the inherent long lifetimes required, a structurally significant design limit for the heater head component of the ASRG Advanced Stirling Convertor (ASC) is creep deformation induced at low stress levels and high temperatures. Demonstrating proof of adequate margins on creep deformation and rupture for the operating conditions and the MarM-247 material of construction is a challenge that the NASA Glenn Research Center is addressing. The combined analytical and experimental program ensures integrity and high reliability of the heater head for its 17-year design life. The life assessment approach starts with an extensive series of uniaxial creep tests on thin MarM-247 specimens that comprise the same chemistry, microstructure, and heat treatment processing as the heater head itself. This effort addresses a scarcity of openly available creep properties for the material as well as the virtual absence of understanding of the effect on creep properties due to very thin walls, fine grains, low stress levels, and high-temperature fabrication steps. The approach continues with a considerable analytical effort, both deterministically to evaluate the median creep life using nonlinear finite element analysis, and probabilistically to calculate the heater head's reliability to a higher degree. Finally, the approach includes a substantial structural benchmark creep testing activity to calibrate and validate the analytical work. This last element provides high fidelity testing of prototypical heater head test articles; the testing includes the relevant material issues and the essential multiaxial stress state, and applies prototypical and accelerated temperature profiles for timely results in a highly controlled laboratory environment. This paper focuses on the last element and presents a preliminary methodology for creep rate prediction, the experimental methods, test challenges, and results from benchmark testing of a trial MarM-247 heater head test article. The results compare favorably with the analytical strain predictions. A description of other test findings is provided, and recommendations for future test procedures are suggested. The manuscript concludes by describing the potential impact of the heater head creep life assessment and benchmark testing effort on the ASC program.
78 FR 8964 - Environmental Impact and Related Procedures
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-07
... designed so that no significant impact will occur. FTA is deleting, however, some items in the list of... supporting documentation, which includes, but is not limited to, comparative benchmarking and expert opinion... fall within the ten broad categories. Comparative benchmarking provides support for the new CEs by...
Benchmark Problems for Spacecraft Formation Flying Missions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Leitner, Jesse A.; Burns, Richard D.; Folta, David C.
2003-01-01
To provide high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions.
This article provides an overview of the development, theoretical basis, regulatory status, and application of the U.S. Environmental Protection Agency's (USEPA's) Equilibrium Partitioning Sediment Benchmarks (ESBs) for PAH mixtures. ESBs are compared to other sediment quality g...
Sparganothis fruitworm degree-day benchmarks provide key treatment timings for cranberry IPM
USDA-ARS?s Scientific Manuscript database
Degree-day benchmarks indicate discrete biological events in the development of insect pests. For the Sparganothis fruitworm, we have isolated all key development events and linked them to degree-day accumulations. These degree-day accumulations can greatly improve treatment timings for cranberry ...
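As a worked example of the underlying arithmetic (with a hypothetical base temperature and event threshold, not the published Sparganothis values), degree-days accumulate daily using the simple averaging method:

```python
BASE_C = 10.0    # hypothetical developmental threshold, deg C

def daily_degree_days(t_min, t_max, base=BASE_C):
    return max(0.0, (t_min + t_max) / 2.0 - base)

daily_temps = [(8, 18), (10, 22), (12, 26), (11, 24)]   # (min, max) per day
accumulated = 0.0
for t_min, t_max in daily_temps:
    accumulated += daily_degree_days(t_min, t_max)
    if accumulated >= 30.0:    # hypothetical treatment-timing benchmark
        print(f"benchmark reached at {accumulated:.1f} degree-days")
        break
```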
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.D.; Kornreich, D.E.
Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation as modified in the second year renewal application includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) single medium searchlight problem (SLP) and (b) two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) point source in arbitrary geometry, (b) single medium SLP, and (c) two-adjacent half-space SLP. Task 3 on code verification includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.
Quality Applications to the Classroom of Tomorrow.
ERIC Educational Resources Information Center
Branson, Robert K.; Buckner, Terrelle
1995-01-01
Discusses the concept of quality in relation to educational programs. Highlights include quality as a process rather than as excellence; education's relationship to the community and to business and industry; the need for a mission statement, including desired outcomes; horizontal and vertical integration; and benchmarking. (LRW)
Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen
2017-01-01
Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children’s strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line, positively affects third and fifth graders’ NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values, would have a positive effect on younger children’s NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders’ NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children’s benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children’s age and familiarity with the number range, these additional external benchmarks might need to be labeled. PMID:28713302
Edwards, Roger A.; Dee, Deborah; Umer, Amna; Perrine, Cria G.; Shealy, Katherine R.; Grummer-Strawn, Laurence M.
2015-01-01
Background A substantial proportion of US maternity care facilities engage in practices that are not evidence-based and that interfere with breastfeeding. The CDC Survey of Maternity Practices in Infant Nutrition and Care (mPINC) showed significant variation in maternity practices among US states. Objective The purpose of this article is to use benchmarking techniques to identify states within relevant peer groups that were top performers on mPINC survey indicators related to breastfeeding support. Methods We used 11 indicators of breastfeeding-related maternity care from the 2011 mPINC survey and benchmarking techniques to organize and compare hospital-based maternity practices across the 50 states and Washington, DC. We created peer categories for benchmarking first by region (grouping states by West, Midwest, South, and Northeast) and then by size (grouping states by the number of maternity facilities and dividing each region into approximately equal halves based on the number of facilities). Results Thirty-four states had scores high enough to serve as benchmarks, and 32 states had scores low enough to reflect the lowest score gap from the benchmark on at least 1 indicator. No state served as the benchmark on more than 5 indicators and no state was furthest from the benchmark on more than 7 indicators. The small peer group benchmarks in the South, West, and Midwest were better than the large peer group benchmarks on 91%, 82%, and 36% of the indicators, respectively. In the West large, the Midwest large, the Midwest small, and the South large peer groups, 4–6 benchmarks showed that less than 50% of hospitals have ideal practice in all states. Conclusion The evaluation presents benchmarks for peer group state comparisons that provide potential and feasible targets for improvement. PMID:24394963
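The benchmarking scheme itself is straightforward: within each peer group, the top score on an indicator sets the benchmark, and every other state's gap is measured from it. A sketch with invented states, peer groups, and scores:

```python
scores = {    # (state, peer_group): indicator score (invented)
    ("A", "South-large"): 71, ("B", "South-large"): 88,
    ("C", "South-large"): 64, ("D", "West-small"): 93,
    ("E", "West-small"): 58,
}

# Benchmark per peer group = best score in that group.
benchmarks = {}
for (state, group), score in scores.items():
    if group not in benchmarks or score > benchmarks[group][1]:
        benchmarks[group] = (state, score)

for (state, group), score in sorted(scores.items()):
    bench_state, bench = benchmarks[group]
    print(f"{state} ({group}): score {score}, benchmark {bench} "
          f"set by {bench_state}, gap {bench - score}")
```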
Conceptual Models, Choices, and Benchmarks for Building Quality Work Cultures.
ERIC Educational Resources Information Center
Acker-Hocevar, Michele
1996-01-01
The two models in Florida's Educational Quality Benchmark System represent a new way of thinking about developing schools' work culture. The Quality Performance System Model identifies nine dimensions of work within a quality system. The Change Process Model provides a theoretical framework for changing existing beliefs, attitudes, and behaviors…
Learning Probe: Benchmarking for Excellence. Questionnaire. Second Edition.
ERIC Educational Resources Information Center
Owen, Jane; Yarrow, David; Appleby, Alex
This document is a questionnaire designed for work-based learning providers. It is a diagnostic benchmarking tool developed to give organizations a snapshot of their current state. Following a brief introduction, there are instructions for filling in the questionnaire, which includes both open-ended response and scoring according to a…
Children's Services Statistical Neighbour Benchmarking Tool. Practitioner User Guide
ERIC Educational Resources Information Center
National Foundation for Educational Research, 2007
2007-01-01
Statistical neighbour models provide one method for benchmarking progress. For each local authority (LA), these models designate a number of other LAs deemed to have similar characteristics. These designated LAs are known as statistical neighbours. Any LA may compare its performance (as measured by various indicators) against its statistical…
IT-benchmarking of clinical workflows: concept, implementation, and evaluation.
Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula
2014-01-01
Due to the emerging evidence of health IT as both an opportunity and a risk for clinical workflows, health IT must undergo continuous measurement of its efficacy and efficiency. IT-benchmarks are a proven means of providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. 199 chief information officers (CIOs) took part in the benchmarking; their hospitals were assigned to reference groups of similar size and ownership drawn from a total of 259 hospitals. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs for getting a general overview, advancing IT, preparing negotiations with board members, and arguing for a new IT project.
Object-Oriented Implementation of the NAS Parallel Benchmarks using Charm++
NASA Technical Reports Server (NTRS)
Krishnan, Sanjeev; Bhandarkar, Milind; Kale, Laxmikant V.
1996-01-01
This report describes experiences with implementing the NAS Computational Fluid Dynamics benchmarks using a parallel object-oriented language, Charm++. Our main objective in implementing the NAS CFD kernel benchmarks was to develop a code that could be used to easily experiment with different domain decomposition strategies and dynamic load balancing. We also wished to leverage the object-orientation provided by the Charm++ parallel object-oriented language, to develop reusable abstractions that would simplify the process of developing parallel applications. We first describe the Charm++ parallel programming model and the parallel object array abstraction, then go into detail about each of the Scalar Pentadiagonal (SP) and Lower/Upper Triangular (LU) benchmarks, along with performance results. Finally we conclude with an evaluation of the methodology used.
Performance Measures, Benchmarking and Value.
ERIC Educational Resources Information Center
McGregor, Felicity
This paper discusses performance measurement in university libraries, based on examples from the University of Wollongong (UoW) in Australia. The introduction highlights the integration of information literacy into the curriculum and the outcomes of a 1998 UoW student satisfaction survey. The first section considers performance indicators in…
ETO - ENGINEERING TRADE-OFFS (SYSTEMS ANALYSIS BRANCH, SUSTAINABLE TECHNOLOGY DIVISION, NRMRL)
The ETO - Engineering Trade-Offs program aims to develop a new, integrated decision-making approach to compare and contrast two or more states of being: a benchmark and an alternative, a change in a production process, or alternative processes or products. ETO highlights the difference in...
Long-term integrating samplers for indoor air and sub slab soil gas at VI sites
Vapor intrusion (VI) site assessments are plagued by substantial spatial and temporal variability that makes exposure and risk assessment difficult. Most risk-based decision making for volatile organic compound (VOC) exposure in the indoor environment is based on health benchmark...
Peng, Bo; Kowalski, Karol
2017-09-12
The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategies for integral tensors can significantly reduce the numerical overhead and consequently the time-to-solution of these methods. In this work, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set, N_b, ranging from ∼100 up to ∼2,000, the observed numerical scaling of our implementation is O(N_b^(2.5-3)), versus the O(N_b^(3-4)) cost of performing a single CD of the two-electron integral tensor in most other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log10(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of the coupled cluster formalism employing single and double excitations (CCSD) on several benchmark systems, including the C60 molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can generally be set to 10^-4 to 10^-3 to give an acceptable compromise between efficiency and accuracy.
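The compound decomposition lends itself to a compact illustration. The sketch below runs a pivoted incomplete Cholesky factorization followed by a truncated SVD on a small random positive semidefinite matrix standing in for the unfolded (ij|kl) tensor; it is a toy under those assumptions, not the authors' scalable implementation:

```python
import numpy as np

def pivoted_incomplete_cholesky(M, tol=1e-6):
    """Pivoted incomplete Cholesky: M ~= L @ L.T for symmetric PSD M,
    stopping once the largest remaining diagonal error drops below tol."""
    d = np.diag(M).astype(float).copy()
    cols = []
    while d.max() > tol:
        p = int(np.argmax(d))            # pivot on largest residual diagonal
        v = M[:, p].astype(float).copy()
        for l in cols:
            v -= l * l[p]                # remove contribution of earlier pivots
        v /= np.sqrt(d[p])
        cols.append(v)
        d -= v * v                       # update residual diagonal
    return np.column_stack(cols)

# Stand-in for the unfolded integral tensor: any symmetric PSD matrix works.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
M = A @ A.T                              # rank-30 PSD "integral" matrix

L = pivoted_incomplete_cholesky(M)

# Follow-up truncated SVD compresses the Cholesky factor further.
U, s, _ = np.linalg.svd(L, full_matrices=False)
keep = s > 1e-6 * s[0]
B = U[:, keep] * s[keep]                 # M ~= B @ B.T with fewer vectors

print(L.shape, B.shape, np.abs(M - B @ B.T).max())
```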
2015-06-01
...are positioned on the outer ASW screen to protect an HVU (high-value unit) from submarine attacks. This baseline scenario provides a standardized benchmark.
Mean velocity and turbulence measurements in a 90 deg curved duct with thin inlet boundary layer
NASA Technical Reports Server (NTRS)
Crawford, R. A.; Peters, C. E.; Steinhoff, J.; Hornkohl, J. O.; Nourinejad, J.; Ramachandran, K.
1985-01-01
The experimental database established by this investigation of the flow in a large rectangular turning duct is of benchmark quality. The experimental Reynolds numbers, Dean numbers, and boundary layer characteristics are significantly different from previous benchmark curved-duct experimental parameters. This investigation extends the experimental database to higher Reynolds numbers and thinner entrance boundary layers. The 5% to 10% thick boundary layers, based on duct half-width, result in a large region of near-potential flow in the duct core surrounded by developing boundary layers with large crossflows. The turbulent entrance boundary layer case at Re_d = 328,000 provides an incompressible flowfield which approaches real turbine blade cascade characteristics. The results of this investigation provide a challenging benchmark database for computational fluid dynamics code development.
Serious injuries: an additional indicator to fatalities for road safety benchmarking.
Shen, Yongjun; Hermans, Elke; Bao, Qiong; Brijs, Tom; Wets, Geert
2015-01-01
Almost all of the current road safety benchmarking studies focus entirely on fatalities, which, however, represent only one measure of the magnitude of the road safety problem. The main objective of this article was to investigate the possibility of including the number of serious injuries in addition to the number of fatalities for road safety benchmarking and to further illuminate its impact on the countries' rankings. We introduced the technique of data envelopment analysis (DEA) to the road safety domain and developed a DEA-based road safety model (DEA-RS) in this study. Moreover, we outlined different types of possible weight restrictions and adopted 2 of them to indicate the relationship between road fatalities and serious injuries for the sake of rational benchmarking. One was a relative weight restriction based on the information of their shadow price, and the other was a virtual weight restriction using a priori knowledge about the importance level of these 2 aspects. By computing the most optimal road safety risk scores of 10 European countries based on the different models, we found that the United Kingdom was the only best-performing country regardless of which model was used. However, countries such as The Netherlands, Sweden, and Switzerland were no longer best-performing when the serious injuries were integrated. By contrast, Spain, which ranked almost at the bottom among all of the countries when only the number of road fatalities was considered, became a relatively well-performing country when integrating its number of serious injuries in the evaluation. In general, whether a country's road safety ranking improved or deteriorated, most of the countries achieved a higher risk score when the number of serious injuries was included, which implied that compared to the road fatalities, more policy attention has to be paid to improve the situation of serious injuries in most countries. Given the importance of considering the serious injuries in addition to the fatalities for international benchmarking of road safety, the proposed model (i.e., the DEA-RS model with weight restrictions) turned out to be effective in deriving reasonable results. We are thereby also inspired to apply this kind of model to a more complete road safety benchmarking practice in the future when the data on, for example, the number of slight injuries, the degree of property damage, and the number of crashes are ready (i.e., comparable) to use.
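As a rough illustration of DEA-style benchmarking with a weight restriction, the sketch below scores each country with its own most favorable weights subject to a virtual weight restriction tying fatalities to serious injuries. It is a simplified linear program with invented data, not the paper's exact DEA-RS formulation:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (illustrative, not the paper's): per-country road fatalities,
# serious injuries, and exposure (million inhabitants).
fatal = np.array([3.0, 5.5, 2.8, 6.1])
serious = np.array([40.0, 35.0, 55.0, 70.0])
expo = np.array([1.0, 1.2, 0.9, 1.5])

Y = np.vstack([fatal / expo, serious / expo])   # 2 x n risk indicators

def risk_score(j0, ratio=10.0):
    """Most-favorable-weights risk score for country j0.

    Minimize u . Y[:, j0] subject to u . Y[:, j] >= 1 for every country j
    (so the best performer scores exactly 1), plus a virtual weight
    restriction u_fatal >= ratio * u_serious expressing that one fatality
    weighs at least `ratio` serious injuries."""
    c = Y[:, j0]                                  # objective: own weighted risk
    A_ub = np.vstack([-Y.T,                       # u.Y_j >= 1  ->  -u.Y_j <= -1
                      [[-1.0, ratio]]])           # ratio*u2 - u1 <= 0
    b_ub = np.concatenate([-np.ones(Y.shape[1]), [0.0]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    return res.fun

scores = [risk_score(j) for j in range(Y.shape[1])]
print(np.round(scores, 3))   # 1.0 marks the benchmark country
```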
Benchmarking infrastructure for mutation text mining
Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo
2014-01-01
Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
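The division of labor the infrastructure describes (RDF for annotations, SPARQL for metrics) can be sketched in a few lines with rdflib. The namespace, predicates, and mutation labels below are hypothetical; the project's actual OWL ontology defines its own vocabulary:

```python
from rdflib import Graph, Namespace, Literal, URIRef

# Hypothetical vocabulary for illustration only.
EX = Namespace("http://example.org/mutation#")

g = Graph()
gold = [("doc1", "E545K"), ("doc1", "H1047R"), ("doc2", "V600E")]
pred = [("doc1", "E545K"), ("doc2", "V600E"), ("doc2", "G12D")]

# Load gold-standard and system annotations into one RDF graph.
for src, annotations in (("gold", gold), ("pred", pred)):
    for doc, mut in annotations:
        ann = URIRef(f"http://example.org/{src}/{doc}/{mut}")
        g.add((ann, EX.inDocument, Literal(doc)))
        g.add((ann, EX.mentionsMutation, Literal(mut)))
        g.add((ann, EX.annotationSet, Literal(src)))

# SPARQL counts true positives: predicted annotations matched by a gold one.
tp = len(g.query("""
    PREFIX ex: <http://example.org/mutation#>
    SELECT ?doc ?mut WHERE {
        ?p ex:annotationSet "pred" ; ex:inDocument ?doc ; ex:mentionsMutation ?mut .
        ?g ex:annotationSet "gold" ; ex:inDocument ?doc ; ex:mentionsMutation ?mut .
    }"""))
precision, recall = tp / len(pred), tp / len(gold)
print(f"precision={precision:.2f} recall={recall:.2f}")
```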
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greiner, Miles
Radial hydride formation in high-burnup used fuel cladding has the potential to radically reduce its ductility and suitability for long-term storage and eventual transport. To avoid this formation, the maximum post-reactor temperature must remain sufficiently low to limit the cladding hoop stress, so that hydrogen from the existing circumferential hydrides will not dissolve and become available to re-precipitate into radial hydrides under the slow cooling conditions during drying, transfer, and early dry-cask storage. The objective of this research is to develop and experimentally benchmark computational fluid dynamics simulations of heat transfer in post-pool-storage drying operations, when high-burnup fuel cladding is likely to experience its highest temperature. These benchmarked tools can play a key role in evaluating dry cask storage systems for extended storage of high-burnup fuels and post-storage transportation, including fuel retrievability. The benchmarked tools will be used to aid the design of efficient drying processes, as well as to estimate variations of surface temperatures as a means of inferring helium integrity inside the canister or cask. This work will be conducted effectively because the principal investigator has experience developing these types of simulations and has constructed a test facility that can be used to benchmark them.
Vitecek, Simon; Kučinić, Mladen; Previšić, Ana; Živić, Ivana; Stojanović, Katarina; Keresztes, Lujza; Bálint, Miklós; Hoppeler, Felicitas; Waringer, Johann; Graf, Wolfram; Pauls, Steffen U
2017-06-06
Taxonomy offers precise species identification and delimitation and thus provides basic information for biological research, e.g. through assessment of species richness. The importance of molecular taxonomy, i.e., the identification and delimitation of taxa based on molecular markers, has increased in the past decade. Recently developed exploratory tools now allow estimating species-level diversity in multi-locus molecular datasets. Here we use molecular species delimitation tools that quantify differences in intra- and interspecific variability of loci, quantify divergence times within and between species, or perform coalescent species tree inference to estimate species-level entities in molecular genetic datasets. We benchmark results from these methods against 14 morphologically readily differentiable species of a well-defined subgroup of the diverse Drusinae subfamily (Trichoptera, Limnephilidae). Using a 3798 bp (6 loci) molecular data set, we aim to corroborate a geographically isolated new species by integrating comparative morphological studies and molecular taxonomy. Our results indicate that only multi-locus species delimitation provides taxonomically relevant information. The data further corroborate the new species Drusus zivici sp. nov. We provide differential diagnostic characters and describe the male, female and larva of this new species and discuss diversity patterns of Drusinae in the Balkans. We further discuss the potential and significance of molecular species delimitation. Finally we argue that enhancing collaborative integrative taxonomy will accelerate assessment of global diversity and completion of reference libraries for applied fields, e.g., conservation and biomonitoring.
NASA Astrophysics Data System (ADS)
Stone, Christopher P.; Alferman, Andrew T.; Niemeyer, Kyle E.
2018-05-01
Accurate and efficient methods for solving stiff ordinary differential equations (ODEs) are a critical component of turbulent combustion simulations with finite-rate chemistry. The ODEs governing the chemical kinetics at each mesh point are decoupled by operator-splitting, allowing each to be solved concurrently. An efficient ODE solver must then take into account the available thread and instruction-level parallelism of the underlying hardware, especially on many-core coprocessors, as well as the numerical efficiency. A stiff Rosenbrock and a nonstiff Runge-Kutta ODE solver are both implemented using the single instruction, multiple thread (SIMT) and single instruction, multiple data (SIMD) paradigms within OpenCL. Both methods solve multiple ODEs concurrently within the same instruction stream. The performance of these parallel implementations was measured on three chemical kinetic models of increasing size across several multicore and many-core platforms. Two separate benchmarks were conducted to clearly determine any performance advantage offered by either method. The first benchmark measured the run-time of evaluating the right-hand-side source terms in parallel, and the second benchmark integrated a series of constant-pressure, homogeneous reactors using the Rosenbrock and Runge-Kutta solvers. The right-hand-side evaluations with SIMD parallelism on the host multicore Xeon CPU and many-core Xeon Phi coprocessor performed approximately three times faster than the baseline multithreaded C++ code. The SIMT parallel model on the host and Phi was 13-35% slower than the baseline, while the SIMT model on the NVIDIA Kepler GPU provided approximately the same performance as the SIMD model on the Phi. The runtimes for both ODE solvers decreased significantly with the SIMD implementations on the host CPU (2.5-2.7×) and Xeon Phi coprocessor (4.7-4.9×) compared to the baseline parallel code. The SIMT implementations on the GPU ran 1.5-1.6 times faster than the baseline multithreaded CPU code; however, this was significantly slower than the SIMD versions on the host CPU or the Xeon Phi. The performance difference between the three platforms was attributed to thread divergence caused by the adaptive step sizes within the ODE integrators. Analysis showed that the wider vector width of the GPU incurs a higher level of divergence than the narrower vector units of the Sandy Bridge CPU or Xeon Phi. The significant performance improvement provided by the SIMD parallel strategy motivates further research into ODE solver methods that are both SIMD-friendly and computationally efficient.
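The SIMD strategy amounts to advancing many independent reactors in lock-step under one instruction stream. A minimal numpy analogue, using toy first-order kinetics and a fixed-step RK4 integrator (the adaptive step sizes of the paper's solvers are exactly what causes the divergence discussed above), might look like this:

```python
import numpy as np

def rhs(y, k):
    # Toy first-order kinetics dy/dt = -k*y per reactor (vectorized),
    # standing in for the paper's finite-rate chemistry source terms.
    return -k * y

def rk4_batch(y0, k, dt, nsteps):
    """Classic fixed-step RK4 applied to a whole batch of reactors at once;
    every reactor takes the same step, so the lanes never diverge."""
    y = y0.copy()
    for _ in range(nsteps):
        k1 = rhs(y, k)
        k2 = rhs(y + 0.5 * dt * k1, k)
        k3 = rhs(y + 0.5 * dt * k2, k)
        k4 = rhs(y + dt * k3, k)
        y += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

y0 = np.ones(1024)                       # 1024 reactors advance in lock-step
k = np.random.uniform(0.5, 2.0, 1024)    # per-reactor rate constants
print(rk4_batch(y0, k, dt=1e-2, nsteps=100)[:4])
```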
NASA Technical Reports Server (NTRS)
Hall, Callie; Arnone, Robert
2006-01-01
The NASA Applied Sciences Program seeks to transfer NASA data, models, and knowledge into the hands of end-users by forming links with partner agencies and associated decision support tools (DSTs). Through the NASA REASoN (Research, Education and Applications Solutions Network) Cooperative Agreement, the Oceanography Division of the Naval Research Laboratory (NRLSSC) is developing new products through the integration of data from NASA Earth-Sun System assets with coastal ocean forecast models and other available data to enhance coastal management in the Gulf of Mexico. The recipient federal agency for this research effort is the National Oceanic and Atmospheric Administration (NOAA). The contents of this report detail the effort to further the goals of the NASA Applied Sciences Program by demonstrating the use of NASA satellite products combined with data-assimilating ocean models to provide near real-time information to maritime users and coastal managers of the Gulf of Mexico. This effort provides new and improved capabilities for monitoring, assessing, and predicting the coastal environment. Coastal managers can exploit these capabilities through enhanced DSTs at federal, state, and local agencies. The project addresses three major issues facing coastal managers: 1) Harmful Algal Blooms (HABs); 2) hypoxia; and 3) freshwater fluxes to the coastal ocean. A suite of ocean products capable of describing Ocean Weather is assembled on a daily basis as the foundation for this semi-operational multiyear effort. This continuous real-time capability gives decision makers a new ability to monitor both normal and anomalous coastal ocean conditions with a steady flow of satellite and ocean model data. Furthermore, as the baseline data sets are used more extensively and the customer list grows, customer feedback is obtained and additional customized products are developed and provided to decision makers. Continual feedback between researchers and customers, answered with new and improved products, is required. This document details the methods by which these coastal ocean products are produced, including the data flow, distribution, and verification. Product applications and the degree to which these products are used successfully within NOAA and coordinated with the Mississippi Department of Marine Resources (MDMR) are benchmarked.
The implementation of interconception care in two community health settings: lessons learned.
Handler, Arden; Rankin, Kristin M; Peacock, Nadine; Townsell, Stephanie; McGlynn, Andrea; Issel, L Michele
2013-01-01
This study reports on an evaluation of the implementation of a pilot interconceptional care program (ICCP) in Chicago and the experiences of the participants in their first postpartum year. A longitudinal, multi-method approach was used to gather data to measure success in achieving project benchmarks and to gain insights into women's experiences after an adverse pregnancy outcome. The ICCP interventions were provided in two different health care settings. Low-income African-American women with a prior adverse pregnancy outcome were recruited to participate. Data on services delivered are available for 220 women; linked interview data are also available for 99 of these women. The ICCP focused on the integration of social services, family planning, and medical care provided through a team approach. An interview questionnaire asked detailed information about interconceptional health status, attitudes, and behaviors. A services database documented all services delivered to each participant. Key informant interviews were conducted with the ICCP project staff. Simple frequencies were generated. Chi-square and t-tests were used to compare participants and benchmarks at the two different sites. The planned delivery of interventions based on women's unique interconceptional health needs was often replaced by efforts to address women's socioeconomic needs. Although medical care remained important, participants viewed themselves as healthy and did not view medical care as a priority. Women's perceptions of contraceptive effectiveness were not always in sync with clinical knowledge. Interconceptional care is a complex process of matching interventions and services to meet women's unique needs, including their socioeconomic needs.
Weißenborn, Marina; Schulz, Martin; Kraft, Manuel; Haefeli, Walter E; Seidling, Hanna M
2018-06-21
Collaboration between general practitioners (GPs) and community pharmacists (CPs) is essential to ensure safe and effective patient care. However, collaboration in primary care is not standardized and varies greatly. This review aims to highlight projects on professional collaboration in ambulatory care in Germany and to identify promising approaches and successful benchmarks that should be considered for future projects. A systematic literature search was performed based on the PRISMA guidelines to identify articles focusing on professional collaboration between GPs and pharmacists. A total of 542 articles were retrieved. Six potential premises for successful cooperation projects were identified: the GP and CP knowing each other (I), involvement of both health care providers in the project planning (II), sharing of experience or concerns during regular joint meetings, enabling continuing evaluation and adaptation (III), ensuring (technical) feasibility (IV), particularly by providing incentives (V), and by integrating these projects into existing health care structures (VI). Only a few studies have been published in scientific journals. There was no standardized assessment of how the participants perceived their collaboration and how it facilitates their daily work, even when the study aimed to evaluate GP-CP collaboration. Successful cooperation between GPs and CPs in daily routine care was often characterized by personal contact and longtime relationships. Therefore, collaborative teaching sessions at university might establish sympathy and mutual understanding right from the beginning. There is a strong need to establish standardized tools to evaluate collaboration in future projects and to enable comparability of different studies. © Georg Thieme Verlag KG Stuttgart · New York.
Experimental Creep Life Assessment for the Advanced Stirling Convertor Heater Head
NASA Technical Reports Server (NTRS)
Krause, David L.; Kalluri, Sreeramesh; Shah, Ashwin R.; Korovaichuk, Igor
2010-01-01
The United States Department of Energy is planning to develop the Advanced Stirling Radioisotope Generator (ASRG) for the National Aeronautics and Space Administration (NASA) for potential use on future space missions. The ASRG provides substantial efficiency and specific power improvements over radioisotope power systems of heritage designs. The ASRG would use General Purpose Heat Source modules as energy sources and the free-piston Advanced Stirling Convertor (ASC) to convert heat into electrical energy. Lockheed Martin Corporation of Valley Forge, Pennsylvania, is integrating the ASRG systems, and Sunpower, Inc., of Athens, Ohio, is designing and building the ASC. NASA Glenn Research Center of Cleveland, Ohio, manages the Sunpower contract and provides technology development in several areas for the ASC. One area is reliability assessment for the ASC heater head, a critical pressure vessel within which heat is converted into mechanical oscillation of a displacer piston. For high system efficiency, the ASC heater head operates at very high temperature (850 C) and therefore is fabricated from an advanced heat-resistant nickel-based superalloy, Microcast MarM-247. Since the use of MarM-247 in a thin-walled pressure vessel is atypical, much effort is required to assure that the system will operate reliably for its design life of 17 years. One life-limiting structural response for this application is creep; creep deformation is the accumulation of time-dependent inelastic strain under sustained loading. If allowed to progress, the deformation eventually results in creep rupture. Since creep material properties are not available in the open literature, a detailed creep life assessment effort for the ASC heater head is underway. This paper presents an overview of that creep life assessment approach, including the reliability-based creep criteria developed from coupon testing and the associated heater head deterministic and probabilistic analyses. The approach also includes direct benchmark experimental creep assessment. This element provides high-fidelity creep testing of prototypical heater head test articles to investigate the relevant material issues and multiaxial stress state. Benchmark testing provides the data required to evaluate the complex life assessment methodology and to validate that analysis. Results from current benchmark heater head tests and newly developed experimental methods are presented. In the concluding remarks, the test results are shown to compare favorably with the creep strain predictions and are the first experimental evidence for a robust ASC heater head creep life.
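For context, secondary (steady-state) creep is commonly modeled with a Norton-type power law relating the creep strain rate to stress and temperature; the abstract does not say which constitutive model the assessment uses, so the relation below is illustrative only:

```latex
\dot{\varepsilon}_{\mathrm{cr}} = A\,\sigma^{n}\,\exp\!\left(-\frac{Q}{RT}\right)
```

Here A and n are material constants, \sigma is the applied stress, Q is the creep activation energy, R is the gas constant, and T is the absolute temperature; integrating this rate over the mission life gives an accumulated creep strain of the kind such assessments compare against rupture criteria.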
A Better Benchmark Assessment: Multiple-Choice versus Project-Based
ERIC Educational Resources Information Center
Peariso, Jamon F.
2006-01-01
The purpose of this literature review and Ex Post Facto descriptive study was to determine which type of benchmark assessment, multiple-choice or project-based, provides the best indication of general success on the history portion of the CST (California Standards Tests). The result of the study indicates that although the project-based benchmark…
Benchmarking: A Study of School and School District Effect and Efficiency.
ERIC Educational Resources Information Center
Swanson, Austin D.; Engert, Frank
The "New York State School Report Card" provides a vehicle for benchmarking with respect to student achievement. In this study, additional tools were developed for making external comparisons with respect to achievement, and tools were added for assessing fiscal policy and efficiency. Data from school years 1993-94 through 1995-96 were…
Short-Term Field Study Programs: A Holistic and Experiential Approach to Learning
ERIC Educational Resources Information Center
Long, Mary M.; Sandler, Dennis M.; Topol, Martin T.
2017-01-01
For business schools, AACSB and Middle States' call for more experiential learning is one reason to provide study abroad programs. Universities must attend to the demand for continuous improvement and employ metrics to benchmark and evaluate their relative standing among peer institutions. One such benchmark is the National Survey of Student…
ARL Physics Web Pages: An Evaluation by Established, Transitional and Emerging Benchmarks.
ERIC Educational Resources Information Center
Duffy, Jane C.
2002-01-01
Provides an overview of characteristics among Association of Research Libraries (ARL) physics Web pages. Examines current academic Web literature and from that develops six benchmarks to measure physics Web pages: ease of navigation; logic of presentation; representation of all forms of information; engagement of the discipline; interactivity of…
Benchmarking Reference Desk Service in Academic Health Science Libraries: A Preliminary Survey.
ERIC Educational Resources Information Center
Robbins, Kathryn; Daniels, Kathleen
2001-01-01
This preliminary study was designed to benchmark patron perceptions of reference desk services at academic health science libraries, using a standard questionnaire. Responses were compared to determine the library that provided the highest-quality service overall and along five service dimensions. All libraries were rated very favorably, but none…
Developing a Benchmark Tool for Sustainable Consumption: An Iterative Process
ERIC Educational Resources Information Center
Heiskanen, E.; Timonen, P.; Nissinen, A.; Gronroos, J.; Honkanen, A.; Katajajuuri, J. -M.; Kettunen, J.; Kurppa, S.; Makinen, T.; Seppala, J.; Silvenius, F.; Virtanen, Y.; Voutilainen, P.
2007-01-01
This article presents the development process of a consumer-oriented, illustrative benchmarking tool enabling consumers to use the results of environmental life cycle assessment (LCA) to make informed decisions. LCA provides a wealth of information on the environmental impacts of products, but its results are very difficult to present concisely…
Benchmarking for maximum value.
Baldwin, Ed
2009-03-01
Speaking at the most recent Healthcare Estates conference, Ed Baldwin, of international built asset consultancy EC Harris LLP, examined the role of benchmarking and market-testing--two of the key methods used to evaluate the quality and cost-effectiveness of hard and soft FM services provided under PFI healthcare schemes to ensure they are offering maximum value for money.
A Critical Thinking Benchmark for a Department of Agricultural Education and Studies
ERIC Educational Resources Information Center
Perry, Dustin K.; Retallick, Michael S.; Paulsen, Thomas H.
2014-01-01
Due to an ever changing world where technology seemingly provides endless answers, today's higher education students must master a new skill set reflecting an emphasis on critical thinking, problem solving, and communications. The purpose of this study was to establish a departmental benchmark for critical thinking abilities of students majoring…
The Vulnerability Framework Integrates Various Models of Generating Surplus Revenue
ERIC Educational Resources Information Center
Maniaci, Vincent
2004-01-01
Budgets operationalize the strategic planning process, and institutions must have surplus revenue to be able to cope with future operations. There are three approaches to generate surplus revenue: increased revenue, decreased cost, and reallocation of resources. Extending their earlier work, where they established strategic benchmarks for annual…
Engineering Education as a Complex System
ERIC Educational Resources Information Center
Gattie, David K.; Kellam, Nadia N.; Schramski, John R.; Walther, Joachim
2011-01-01
This paper presents a theoretical basis for cultivating engineering education as a complex system that will prepare students to think critically and make decisions with regard to poorly understood, ill-structured issues. Integral to this theoretical basis is a solution space construct developed and presented as a benchmark for evaluating…
Reaching a Representative Sample of College Students: A Comparative Analysis
ERIC Educational Resources Information Center
Giovenco, Daniel P.; Gundersen, Daniel A.; Delnevo, Cristine D.
2016-01-01
Objective: To explore the feasibility of a random-digit dial (RDD) cellular phone survey in order to reach a national and representative sample of college students. Methods: Demographic distributions from the 2011 National Young Adult Health Survey (NYAHS) were benchmarked against enrollment numbers from the Integrated Postsecondary Education…
Educating Next Generation Nuclear Criticality Safety Engineers at the Idaho National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. D. Bess; J. B. Briggs; A. S. Garcia
2011-09-01
One of the challenges in educating our next generation of nuclear safety engineers is the limitation of opportunities to receive significant experience or hands-on training prior to graduation. Such training is generally restricted to on-the-job-training before this new engineering workforce can adequately provide assessment of nuclear systems and establish safety guidelines. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) can provide students and young professionals the opportunity to gain experience and enhance critical engineering skills. The ICSBEP and IRPhEP publish annual handbooks that contain evaluations of experiments along withmore » summarized experimental data and peer-reviewed benchmark specifications to support the validation of neutronics codes, nuclear cross-section data, and the validation of reactor designs. Participation in the benchmark process not only benefits those who use these Handbooks within the international community, but provides the individual with opportunities for professional development, networking with an international community of experts, and valuable experience to be used in future employment. Traditionally students have participated in benchmarking activities via internships at national laboratories, universities, or companies involved with the ICSBEP and IRPhEP programs. Additional programs have been developed to facilitate the nuclear education of students while participating in the benchmark projects. These programs include coordination with the Center for Space Nuclear Research (CSNR) Next Degree Program, the Collaboration with the Department of Energy Idaho Operations Office to train nuclear and criticality safety engineers, and student evaluations as the basis for their Master's thesis in nuclear engineering.« less
Quality assurance, benchmarking, assessment and mutual international recognition of qualifications.
Hobson, R; Rolland, S; Rotgans, J; Schoonheim-Klein, M; Best, H; Chomyszyn-Gajewska, M; Dymock, D; Essop, R; Hupp, J; Kundzina, R; Love, R; Memon, R A; Moola, M; Neumann, L; Ozden, N; Roth, K; Samwel, P; Villavicencio, J; Wright, P; Harzer, W
2008-02-01
The aim of this report is to provide guidance to assist in the international convergence of quality assurance, benchmarking and assessment systems to improve dental education. Proposals are developed for mutual recognition of qualifications, to aid international movement and exchange of staff and students including and supporting developing countries. Quality assurance is the responsibility of all staff involved in dental education and involves three levels: internal, institutional and external. Benchmarking information provides a subject framework. Benchmarks are useful for a variety of purposes including design and validation of programmes, examination and review; they can also strengthen the accreditation process undertaken by professional and statutory bodies. Benchmark information can be used by institutions as part of their programme approval process, to set degree standards. The standards should be developed by the dental academic community through formal groups of experts. Assessment outcomes of student learning are a measure of the quality of the learning programme. The goal of an effective assessment strategy should be that it provides the starting point for students to adopt a positive approach to effective and competent practice, reflective and lifelong learning. All assessment methods should be evidence based or based upon research. Mutual recognition of professional qualifications means that qualifications gained in one country (the home country) are recognized in another country (the host country). It empowers movement of skilled workers, which can help resolve skills shortages within participating countries. These proposals are not intended to be either exhaustive or prescriptive; they are purely for guidance and derived from the identification of what is perceived to be 'best practice'.
SU-E-T-148: Benchmarks and Pre-Treatment Reviews: A Study of Quality Assurance Effectiveness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowenstein, J; Nguyen, H; Roll, J
Purpose: To determine the impact benchmarks and pre-treatment reviews have on improving the quality of submitted clinical trial data. Methods: Benchmarks are used to evaluate a site's ability to develop a treatment that meets a specific protocol's treatment guidelines prior to placing their first patient on the protocol. A pre-treatment review is an actual patient placed on the protocol in which the dosimetry and contour volumes are evaluated to be per protocol guidelines prior to allowing the beginning of the treatment. A key component of these QA mechanisms is that sites are provided timely feedback to educate them on how to plan per the protocol and prevent protocol deviations on patients accrued to a protocol. For both benchmarks and pre-treatment reviews a dose volume analysis (DVA) was performed using MIM Software™. For pre-treatment reviews a volume contour evaluation was also performed. Results: IROC Houston performed a QA effectiveness analysis of a protocol which required both benchmarks and pre-treatment reviews. In 70 percent of the patient cases submitted, the benchmark played an effective role in assuring that the pre-treatment review of the cases met protocol requirements. The 35 percent of sites that failed the benchmark subsequently modified their planning technique to pass the benchmark before being allowed to submit a patient for pre-treatment review. However, in 30 percent of the submitted cases the pre-treatment review failed, and the majority of those (71 percent) failed the DVA. 20 percent of sites submitting patients failed to correct the dose volume discrepancies indicated by the benchmark case. Conclusion: Benchmark cases and pre-treatment reviews can be an effective QA tool to educate sites on protocol guidelines and to minimize deviations. Without the benchmark cases it is possible that 65 percent of the cases undergoing a pre-treatment review would have failed to meet the protocol's requirements. Support: U24-CA-180803.
NASA Astrophysics Data System (ADS)
D'Alessandro, Valerio; Binci, Lorenzo; Montelpare, Sergio; Ricci, Renato
2018-01-01
Open-source CFD codes provide suitable environments for implementing and testing the low-dissipative algorithms typically used to simulate turbulence. In this research work we developed CFD solvers for incompressible flows based on high-order explicit and diagonally implicit Runge-Kutta (RK) schemes for time integration. In particular, an iterated PISO-like procedure based on the Rhie-Chow correction was used to handle pressure-velocity coupling within each implicit RK stage. For the explicit approach, a projected scheme was used to avoid the "checkerboard" effect. The above-mentioned approaches were also extended to flow problems involving heat transfer. It is worth noting that the numerical technology available in the OpenFOAM library was used for space discretization. In this work, we additionally explore the reliability and effectiveness of the proposed implementations by computing several unsteady flow benchmarks; we also show that the numerical diffusion due to the time integration approach is completely canceled using the solution techniques proposed here.
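To illustrate the stage structure of a diagonally implicit RK scheme of the kind described, here is a scalar-equation sketch of a two-stage, second-order SDIRK method; in the solvers above, the closed-form stage solve would be replaced by an iterated PISO-like pressure-velocity solve:

```python
import numpy as np

# Two-stage, second-order SDIRK scheme (gamma = 1 - 1/sqrt(2)) applied to
# the scalar test equation y' = lam*y (illustrative only).
gam = 1.0 - 1.0 / np.sqrt(2.0)
A = np.array([[gam, 0.0],
              [1.0 - gam, gam]])
b = np.array([1.0 - gam, gam])

def sdirk2_step(y, lam, dt):
    k = np.zeros(2)
    for i in range(2):
        explicit = y + dt * sum(A[i, j] * k[j] for j in range(i))
        # Implicit stage equation k_i = lam*(explicit + dt*gam*k_i),
        # solvable in closed form for this linear problem.
        k[i] = lam * explicit / (1.0 - dt * gam * lam)
    return y + dt * (b @ k)

y, lam, dt = 1.0, -2.0, 0.1
for _ in range(10):
    y = sdirk2_step(y, lam, dt)
print(y, np.exp(lam * 1.0))   # numerical vs. exact solution at t = 1
```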
Decibel: The Relational Dataset Branching System
Maddox, Michael; Goehring, David; Elmore, Aaron J.; Madden, Samuel; Parameswaran, Aditya; Deshpande, Amol
2017-01-01
As scientific endeavors and data analysis become increasingly collaborative, there is a need for data management systems that natively support the versioning or branching of datasets to enable concurrent analysis, cleaning, integration, manipulation, or curation of data across teams of individuals. Common practice for sharing and collaborating on datasets involves creating or storing multiple copies of the dataset, one for each stage of analysis, with no provenance information tracking the relationships between these datasets. This results not only in wasted storage, but also makes it challenging to track and integrate modifications made by different users to the same dataset. In this paper, we introduce the Relational Dataset Branching System, Decibel, a new relational storage system with built-in version control designed to address these shortcomings. We present our initial design for Decibel and provide a thorough evaluation of three versioned storage engine designs that focus on efficient query processing with minimal storage overhead. We also develop an exhaustive benchmark to enable the rigorous testing of these and future versioned storage engine designs. PMID:28149668
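The core idea of branch-aware storage, keeping per-branch deltas rather than full copies, can be sketched briefly. This is a toy key-value model under our own naming, not Decibel's relational storage engine design:

```python
# Minimal copy-on-write sketch of dataset branching in the spirit of Decibel
# (illustrative only; deletions and merges are omitted).

class VersionedDataset:
    def __init__(self):
        self.versions = {"master": ({}, None)}    # branch -> (delta, parent)

    def branch(self, parent, name):
        self.versions[name] = ({}, parent)        # new branch stores only deltas

    def put(self, branch, key, row):
        self.versions[branch][0][key] = row

    def get(self, branch, key):
        while branch is not None:                 # walk the branch lineage
            delta, parent = self.versions[branch]
            if key in delta:
                return delta[key]
            branch = parent
        raise KeyError(key)

db = VersionedDataset()
db.put("master", 1, {"name": "raw"})
db.branch("master", "cleaning")
db.put("cleaning", 1, {"name": "cleaned"})
print(db.get("master", 1), db.get("cleaning", 1))  # master is unaffected
```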
Evaluation of the Pool Critical Assembly Benchmark with Explicitly-Modeled Geometry using MCNP6
Kulesza, Joel A.; Martz, Roger Lee
2017-03-01
Despite being one of the most widely used benchmarks for qualifying light water reactor (LWR) radiation transport methods and data, no benchmark calculation of the Oak Ridge National Laboratory (ORNL) Pool Critical Assembly (PCA) pressure vessel wall benchmark facility (PVWBF) using MCNP6 with explicitly modeled core geometry exists. As such, this paper provides results for such an analysis. First, a criticality calculation is used to construct the fixed source term. Next, ADVANTG-generated variance reduction parameters are used within the final MCNP6 fixed source calculations. These calculations provide unadjusted dosimetry results using three sets of dosimetry reaction cross sections of varyingmore » ages (those packaged with MCNP6, from the IRDF-2002 multi-group library, and from the ACE-formatted IRDFF v1.05 library). These results are then compared to two different sets of measured reaction rates. The comparison agrees in an overall sense within 2% and on a specific reaction- and dosimetry location-basis within 5%. Except for the neptunium dosimetry, the individual foil raw calculation-to-experiment comparisons usually agree within 10% but is typically greater than unity. Finally, in the course of developing these calculations, geometry that has previously not been completely specified is provided herein for the convenience of future analysts.« less
Wirtz, Veronika J; Santa-Ana-Tellez, Yared; Trout, Clinton H; Kaplan, Warren A
2012-12-01
Public sector price analyses of antiretroviral (ARV) medicines can provide relevant information to detect ARV procurement procedures that do not obtain competitive market prices. Price benchmarks provide a useful tool for programme managers and policy makers to support such planning and policy measures. The aim of the study was to develop regional and global price benchmarks which can be used to analyse public-sector price variability of ARVs in low- and middle-income countries, using the procurement prices of Latin America and the Caribbean (LAC) countries in 2008 as an example. We used the Global Price Reporting Mechanism (GPRM) database, provided by the World Health Organization (WHO), for 13 LAC countries' ARV procurements to analyse the procurement prices of four first-line and three second-line ARV combinations in 2008. First, a cross-sectional analysis was conducted to compare ARV combination prices. Second, four different price 'benchmarks' were created and we estimated the additional number of patients who could have been treated in each country if the ARV combinations studied were purchased at the various reference ('benchmark') prices. Large price variations exist for first- and second-line ARV combinations between countries in the LAC region. Most countries in the LAC region could be treating between 1.17 and 3.8 times more patients if procurement prices were closer to the lowest regional generic price. For all second-line combinations, a price closer to the lowest regional innovator prices or to the global median transaction price for lower-middle-income countries would also result in treating up to nearly five times more patients. Rational allocation of financial resources, informed in part by price benchmarking, and careful planning by policy makers and programme managers can help a country negotiate lower ARV procurement prices and should form part of a sustainable procurement policy.
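The "additional patients treatable" estimate reduces to a ratio of the paid price to the benchmark price. A minimal sketch with invented figures:

```python
# Back-of-the-envelope version of the benchmark comparison described above
# (all numbers are invented for illustration).
spend = {"A": 1_000_000, "B": 750_000}   # USD spent on one ARV combination
paid_price = {"A": 240.0, "B": 410.0}    # USD per patient-year actually paid
benchmark = 180.0                        # e.g. lowest regional generic price

for country in spend:
    treated = spend[country] / paid_price[country]
    potential = spend[country] / benchmark
    print(f"{country}: could treat {potential / treated:.2f}x more patients "
          f"at the benchmark price")
```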
Benchmarking short sequence mapping tools
2013-01-01
Background The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, currently available tools make different compromises between mapping accuracy and speed. Moreover, many important aspects are overlooked while comparing the performance of a newly developed tool to the state of the art. Therefore, there is a need for an objective evaluation method that covers all the aspects. In this work, we introduce a benchmarking suite to extensively analyze sequencing tools with respect to various aspects and provide an objective comparison. Results We applied our benchmarking tests on 9 well known mapping tools, namely, Bowtie, Bowtie2, BWA, SOAP2, MAQ, RMAP, GSNAP, Novoalign, and mrsFAST (mrFAST) using synthetic data and real RNA-Seq data. MAQ and RMAP are based on building hash tables for the reads, whereas the remaining tools are based on indexing the reference genome. The benchmarking tests reveal the strengths and weaknesses of each tool. The results show that no single tool outperforms all others in all metrics. However, Bowtie maintained the best throughput for most of the tests while BWA performed better for longer read lengths. The benchmarking tests are not restricted to the mentioned tools and can be further applied to others. Conclusion The mapping process is still a hard problem that is affected by many factors. In this work, we provided a benchmarking suite that reveals and evaluates the different factors affecting the mapping process. Still, there is no tool that outperforms all of the others in all the tests. Therefore, the end user should clearly specify their needs in order to choose the tool that provides the best results. PMID:23758764
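A core metric in such a suite is the fraction of simulated reads a mapper places at (or near) their true origin, measured alongside throughput. A minimal sketch with synthetic placements (names and tolerance are ours):

```python
import time

def score_mapper(mapped, truth, tol=5):
    """Fraction of reads placed within tol bases of their true position."""
    hits = sum(abs(mapped[r] - truth[r]) <= tol for r in truth if r in mapped)
    return hits / len(truth)

# Synthetic ground truth: read name -> true position on the reference.
truth = {f"read{i}": i * 100 for i in range(1000)}
# Simulated mapper output: ~10% of reads are misplaced by 30 bases.
mapped = {r: p + (30 if r.endswith("7") else 0) for r, p in truth.items()}

t0 = time.perf_counter()
acc = score_mapper(mapped, truth)
print(f"accuracy={acc:.3f}, scored in {time.perf_counter() - t0:.4f}s")
```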
NCSP IER 422 CED-3b Documentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, Jesson D.; Cutler, Theresa Elizabeth; Bahran, Rian Mustafa
2017-11-22
A Subcritical Copper-Reflected α-phase Plutonium (SCRαP) integral benchmark experiment has been designed and measured. In this experiment, multiplication is approximated using correlated neutron data from a detector system consisting of 3He tubes inside high density polyethylene (HDPE). Measurements were performed on various subcritical experimental configurations consisting of a weapons-grade plutonium sphere surrounded by different Cu thicknesses. In addition to the proposed base experimental configurations with Cu, additional configurations were performed with the plutonium sphere nested in various thicknesses of interleaved HDPE spherical shells mixed in with the Cu shells. The HDPE is intended to provide fast neutron moderation and reflection, resulting in additional measurements with differing multiplication, spectra, and nuclear data sensitivity.
Delta-ray Production in MCNP 6.2.0
NASA Astrophysics Data System (ADS)
Anderson, C.; McKinney, G.; Tutt, J.; James, M.
Secondary electrons in the form of delta-rays, also referred to as knock-on electrons, have been a feature of MCNP for electron and positron transport for over 20 years. While MCNP6 now includes transport for a suite of heavy-ions and charged particles from its integration with MCNPX, the production of delta-rays was still limited to electron and positron transport. In the newest release of MCNP6, version 6.2.0, delta-ray production has now been extended for all energetic charged particles. The basis of this production is the analytical formulation from Rossi and ICRU Report 37. This paper discusses the MCNP6 heavy charged-particle implementation and provides production results for several benchmark/test problems.
Recent advances and remaining challenges for the spectroscopic detection of explosive threats.
Fountain, Augustus W; Christesen, Steven D; Moon, Raphael P; Guicheteau, Jason A; Emmons, Erik D
2014-01-01
In 2010, the U.S. Army initiated a program through the Edgewood Chemical Biological Center to identify viable spectroscopic signatures of explosives and initiate environmental persistence, fate, and transport studies for trace residues. These studies were ultimately designed to integrate these signatures into algorithms and experimentally evaluate sensor performance for explosives and precursor materials in existing chemical point and standoff detection systems. Accurate and validated optical cross sections and signatures are critical in benchmarking spectroscopic-based sensors. This program has provided important information for the scientists and engineers currently developing trace-detection solutions to the homemade explosive problem. With this information, the sensitivity of spectroscopic methods for explosives detection can now be quantitatively evaluated before the sensor is deployed and tested.
IMAGESEER - IMAGEs for Education and Research
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline; Grubb, Thomas; Milner, Barbara
2012-01-01
IMAGESEER is a new Web portal that brings easy access to NASA image data for non-NASA researchers, educators, and students. The IMAGESEER Web site and database are specifically designed to be utilized by the university community, to enable teaching image processing (IP) techniques on NASA data, as well as to provide reference benchmark data to validate new IP algorithms. Along with the data and a Web user interface front-end, basic knowledge of the application domains, benchmark information, and specific NASA IP challenges (or case studies) are provided.
A high performance scientific cloud computing environment for materials simulations
NASA Astrophysics Data System (ADS)
Jorissen, K.; Vila, F. D.; Rehr, J. J.
2012-09-01
We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
Pound, Catherine M.; Moreau, Katherine A.; Ward, Natalie; Eady, Kaylee; Writer, Hilary
2015-01-01
Background Research training is essential to the development of well-rounded physicians. Although many pediatric residency programs require residents to complete a research project, it is often challenging to integrate research training into educational programs. Objective We aimed to develop an innovative research program for pediatric residents, called the Scholarly Activity Guidance and Evaluation (SAGE) program. Methods We developed a competency-based program which establishes benchmarks for pediatric residents, while providing ongoing academic mentorship. Results Feedback from residents and their research supervisors about the SAGE program has been positive. Preliminary evaluation data have shown that all final-year residents have met or exceeded program expectations. Conclusions By providing residents with this supportive environment, we hope to influence their academic career paths, increase their research productivity, promote evidence-based practice, and ultimately, positively impact health outcomes. PMID:26059213
Standardised Benchmarking in the Quest for Orthologs
Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe
2016-01-01
The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882
Benchmarking CRISPR on-target sgRNA design.
Yan, Jifang; Chuai, Guohui; Zhou, Chi; Zhu, Chenyu; Yang, Jing; Zhang, Chao; Gu, Feng; Xu, Han; Wei, Jia; Liu, Qi
2017-02-15
CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats)-based gene editing has been widely implemented in various cell types and organisms. A major challenge in the effective application of the CRISPR system is the need to design highly efficient single-guide RNA (sgRNA) with minimal off-target cleavage. Several tools are available for sgRNA design, but few have been systematically compared. In our opinion, benchmarking the performance of the available tools and indicating their applicable scenarios are important issues. Moreover, whether the reported sgRNA design rules are reproducible across different sgRNA libraries, cell types and organisms remains unclear. In our study, a systematic and unbiased benchmark of sgRNA efficacy prediction was performed on nine representative on-target design tools, based on six benchmark data sets covering five different cell types. The benchmark study presented here provides novel quantitative insights into the available CRISPR tools. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Toburen, L. H.; McLawhorn, S. L.; McLawhorn, R. A.; Carnes, K. D.; Dingfelder, M.; Shinpaugh, J. L.
2013-01-01
Absolute doubly differential electron emission yields were measured from thin films of amorphous solid water (ASW) after the transmission of 6 MeV protons and 19 MeV (1 MeV/nucleon) fluorine ions. The ASW films were frozen on thin (1-μm) copper foils cooled to approximately 50 K. Electrons emitted from the films were detected as a function of angle in both the forward and backward direction and as a function of the film thickness. Electron energies were determined by measuring the ejected electron time of flight, a technique that optimizes the accuracy of measuring low-energy electron yields, where the effects of molecular environment on electron transport are expected to be most evident. Relative electron emission yields were normalized to an absolute scale by comparison of the integrated total yields for proton-induced electron emission from the copper substrate to values published previously. The absolute doubly differential yields from ASW are presented along with integrated values, providing single differential and total electron emission yields. These data may provide benchmark tests of Monte Carlo track structure codes commonly used for assessing the effects of radiation quality on biological effectiveness. PMID:20681805
Field Assessment of Energy Audit Tools for Retrofit Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, J.; Bohac, D.; Nelson, C.
2013-07-01
This project focused on the use of home energy ratings as a tool to promote energy retrofits in existing homes. A home energy rating provides a quantitative appraisal of a home's asset performance, usually compared to a benchmark such as the average energy use of similar homes in the same region. Home rating systems can help motivate homeowners in several ways. Ratings can clearly communicate a home's achievable energy efficiency potential, provide a quantitative assessment of energy savings after retrofits are completed, and show homeowners how they rate compared to their neighbors, thus creating an incentive to conform to a social standard. An important consideration is how rating tools for the retrofit market will integrate with existing home energy service programs. For residential programs that target energy savings only, home visits should be focused on key efficiency measures for that home. In order to gain wide adoption, a rating tool must be easily integrated into the field process, demonstrate consistency and reasonable accuracy to earn the trust of home energy technicians, and have a low monetary cost and time hurdle for homeowners. Along with the Home Energy Score, this project also evaluated the energy modeling performance of SIMPLE and REM/Rate.
Capital planning for clinical integration.
Grauman, Daniel M; Neff, Gerald; Johnson, Molly Martha
2011-04-01
When assessing the financial implications of a physician alignment and clinical integration initiative, a hospital should measure the initiative's potential ROI, perhaps best using a combination of net present value and payback period. The hospital should compare its own historical and projected performance with rating agency median benchmarks for key financial indicators of profitability, debt service, capital and cash flow, and liquidity. The hospital should also consider potential indirect benefits, such as retained outpatient/ancillary revenue, increased inpatient revenue, improved cost control, and improved quality and reporting transparency.
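For concreteness, a minimal sketch of the two ROI measures recommended above, computed for an invented cash-flow profile (an up-front outlay followed by net inflows); all figures and the discount rate are illustrative assumptions:

    # Net present value and (undiscounted) payback period for a hypothetical
    # physician-alignment initiative: outlay in year 0, net inflows afterwards.
    def npv(rate, cash_flows):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    def payback_period(cash_flows):
        cumulative = 0.0
        for t, cf in enumerate(cash_flows):
            cumulative += cf
            if cumulative >= 0:
                return t          # first year the cumulative cash turns positive
        return None               # investment not recovered within the horizon

    flows = [-5_000_000, 1_200_000, 1_500_000, 1_800_000, 2_000_000]  # invented
    print(npv(0.08, flows), payback_period(flows))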
McGillis Hall, Linda; Peterson, Jessica; Baker, G Ross; Brown, Adalsteinn D; Pink, George H; McKillop, Ian; Daniel, Imtiaz; Pedersen, Cheryl
2008-01-01
This study examined relationships between financial indicators for nurse staffing and indicators of organizational system integration and change. These indicators, along with hospital location and type, were examined in relation to the nursing financial indicators. Results showed that different indicators predicted each of the outcome variables. Nursing care hours were predicted by hospital type, geographic location, and the system indicators. Both nursing and patient care hours were significantly related to dissemination and benchmarking of clinical data.
Schaffter, Thomas; Marbach, Daniel; Floreano, Dario
2011-08-15
Over the last decade, numerous methods have been developed for inference of regulatory networks from gene expression data. However, accurate and systematic evaluation of these methods is hampered by the difficulty of constructing adequate benchmarks and the lack of tools for a differentiated analysis of network predictions on such benchmarks. Here, we describe a novel and comprehensive method for in silico benchmark generation and performance profiling of network inference methods available to the community as an open-source software called GeneNetWeaver (GNW). In addition to the generation of detailed dynamical models of gene regulatory networks to be used as benchmarks, GNW provides a network motif analysis that reveals systematic prediction errors, thereby indicating potential ways of improving inference methods. The accuracy of network inference methods is evaluated using standard metrics such as precision-recall and receiver operating characteristic curves. We show how GNW can be used to assess the performance and identify the strengths and weaknesses of six inference methods. Furthermore, we used GNW to provide the international Dialogue for Reverse Engineering Assessments and Methods (DREAM) competition with three network inference challenges (DREAM3, DREAM4 and DREAM5). GNW is available at http://gnw.sourceforge.net along with its Java source code, user manual and supporting data. Supplementary data are available at Bioinformatics online. dario.floreano@epfl.ch.
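The accuracy metrics named above are the standard ones; as a minimal sketch of how such curves are computed for a predicted network, using scikit-learn on invented edge scores (GNW itself is a Java tool and is not involved here):

    import numpy as np
    from sklearn.metrics import auc, precision_recall_curve, roc_curve

    # Invented example: true regulatory edges (1/0) and a method's confidence
    # score for each candidate edge, flattened from the adjacency matrix.
    y_true = np.array([1, 0, 0, 1, 1, 0, 0, 0, 1, 0])
    scores = np.array([0.9, 0.4, 0.3, 0.7, 0.6, 0.2, 0.5, 0.1, 0.8, 0.35])

    precision, recall, _ = precision_recall_curve(y_true, scores)
    fpr, tpr, _ = roc_curve(y_true, scores)
    print("AUPR =", auc(recall, precision), "AUROC =", auc(fpr, tpr))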
The Medical Library Association Benchmarking Network: development and implementation
Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C.; Smith, Bernie Todd
2006-01-01
Objective: This article explores the development and implementation of the Medical Library Association (MLA) Benchmarking Network from the initial idea and test survey, to the implementation of a national survey in 2002, to the establishment of a continuing program in 2004. Started as a program for hospital libraries, it has expanded to include other nonacademic health sciences libraries. Methods: The activities and timelines of MLA's Benchmarking Network task forces and editorial board from 1998 to 2004 are described. Results: The Benchmarking Network task forces successfully developed an extensive questionnaire with parameters of size and measures of library activity and published a report of the data collected by September 2002. The data were available to all MLA members in the form of aggregate tables. Utilization of Web-based technologies proved feasible for data intake and interactive display. A companion article analyzes and presents some of the data. MLA has continued to develop the Benchmarking Network with the completion of a second survey in 2004. Conclusions: The Benchmarking Network has provided many small libraries with comparative data to present to their administrators. It is a challenge for the future to convince all MLA members to participate in this valuable program. PMID:16636702
Benchmarking homogenization algorithms for monthly data
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratiannil, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.
2013-09-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including i) the centered root mean square error relative to the true homogeneous values at various averaging scales, ii) the error in linear trend estimates, and iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can currently perform as well as manual ones.
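As an illustration of metric (i), a centered root mean square error removes each series' own mean before comparing, so a constant offset between the homogenized and true series does not contribute; a minimal numpy sketch on synthetic data (the exact HOME implementation may differ):

    import numpy as np

    def centered_rmse(homogenized, truth):
        # RMSE after removing each series' mean: constant offsets are ignored.
        a = homogenized - homogenized.mean()
        b = truth - truth.mean()
        return np.sqrt(np.mean((a - b) ** 2))

    rng = np.random.default_rng(1)
    truth = rng.normal(size=120)                    # synthetic monthly anomalies
    homog = truth + 0.5 + rng.normal(0, 0.1, 120)   # biased, noisy reconstruction
    print(centered_rmse(homog, truth))              # ~0.1: the 0.5 bias is ignored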
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-03-09
This work represents a first-of-its-kind successful application of advanced numerical methods to solving realistic two-phase flow problems with a two-fluid, six-equation two-phase flow model. These advanced numerical methods include a high-resolution spatial discretization scheme on staggered grids, high-order fully implicit time integration schemes, and the Jacobian-free Newton–Krylov (JFNK) method as the nonlinear solver. The computer code developed in this work has been extensively validated against existing experimental flow boiling data in vertical pipes and rod bundles, which cover wide ranges of experimental conditions, such as pressure, inlet mass flux, wall heat flux and exit void fraction. An additional code-to-code benchmark with the RELAP5-3D code further verifies the correct code implementation. The combined methods employed in this work exhibit strong robustness in solving two-phase flow problems even when phase appearance (boiling) and realistic discrete flow regimes are considered. Transitional flow regimes used in existing system analysis codes, normally introduced to overcome numerical difficulty, were completely removed in this work. This in turn provides the possibility of utilizing more sophisticated flow regime maps in the future to further improve simulation accuracy.
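A minimal sketch of the JFNK interface, using SciPy's matrix-free Newton-Krylov solver on a toy two-equation residual (the paper's discretized two-fluid system is far more involved; this only conveys that the solver needs residual evaluations, never an assembled Jacobian):

    import numpy as np
    from scipy.optimize import newton_krylov

    def residual(x):
        # Toy nonlinear residual F(x) = 0, standing in for the discretized
        # two-fluid equations. The Jacobian is never formed; its action on
        # Krylov vectors is approximated by finite differences of F.
        return np.array([
            x[0] ** 2 + x[1] - 3.0,
            x[0] - x[1] ** 2 + 1.0,
        ])

    x0 = np.array([1.0, 1.0])
    solution = newton_krylov(residual, x0, f_tol=1e-10)
    print(solution, residual(solution))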
A homology-based pipeline for global prediction of post-translational modification sites
NASA Astrophysics Data System (ADS)
Chen, Xiang; Shi, Shao-Ping; Xu, Hao-Dong; Suo, Sheng-Bao; Qiu, Jian-Ding
2016-05-01
The pathways of protein post-translational modifications (PTMs) have been shown to play particularly important roles in almost any biological process. Identification of PTM substrates, along with information on the exact sites, is fundamental for fully understanding or controlling biological processes. Alternative computational strategies would help to annotate PTMs in a high-throughput manner. Traditional algorithms are suited to identifying the common organisms and tissues that have a complete PTM atlas or extensive experimental data, while annotation of rare PTMs in most organisms remains a clear challenge. To this end, we have developed a novel homology-based pipeline named PTMProber that allows identification of potential modification sites for most of the proteomes lacking PTM data. A cross-promotion E-value (CPE) is used as a stringent benchmark in our pipeline to evaluate homology to known modification sites. Independent validation tests show that PTMProber achieves over 58.8% recall with high precision under the CPE benchmark. Comparisons with other machine-learning tools show that the PTMProber pipeline performs better on general predictions. We have also developed a web-based tool integrating this pipeline at http://bioinfo.ncu.edu.cn/PTMProber/index.aspx. In addition to pre-constructed PTM prediction models, the website provides extended functionality to allow users to customize models.
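The homology-transfer idea behind such pipelines can be caricatured in a few lines: map a known modified site onto a query protein when the site's flanking sequence matches. The window, offset, and exact-match criterion below are toy assumptions; PTMProber uses real alignments scored by the cross-promotion E-value.

    # Toy sketch of homology-based PTM site transfer (exact-window match only;
    # the real pipeline aligns sequences and applies a CPE significance cutoff).
    known_sites = {("MKTAYIAKQR", 5): "phosphorylation"}  # (flank window, offset)

    def transfer_sites(query):
        hits = []
        for (window, offset), ptm in known_sites.items():
            pos = query.find(window)
            if pos != -1:
                hits.append((pos + offset, ptm))   # position of modified residue
        return hits

    print(transfer_sites("GGMKTAYIAKQRLLD"))       # [(7, 'phosphorylation')]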
DOE Office of Scientific and Technical Information (OSTI.GOV)
MILLS, EVAN; MATHEW, PAUL; STOUFER, MARTIN
2016-10-06
EnergyIQ, the first "action-oriented" benchmarking tool for non-residential buildings, provides a standardized opportunity assessment based on benchmarking results, along with decision-support information to help refine action plans. EnergyIQ offers a wide array of benchmark metrics, with visual as well as tabular displays. These include energy, costs, greenhouse-gas emissions, and a large array of characteristics (e.g., building components or operational strategies). The tool supports cross-sectional benchmarking for comparing the user's building to its peers at one point in time, as well as longitudinal benchmarking for tracking the performance of an individual building or enterprise portfolio over time. Based on user inputs, the tool generates a list of opportunities and recommended actions. Users can then explore the "Decision Support" module for helpful information on how to refine action plans, create design-intent documentation, and implement improvements. This includes information on best practices, links to other energy analysis tools, and more. A variety of databases are available within EnergyIQ from which users can specify peer groups for comparison. Using the tool, these data can be visually browsed and used as a backdrop against which to view a variety of energy benchmarking metrics for the user's own building. Users can save their project information and return at a later date to continue their exploration. The initial database is the California Commercial End-Use Survey (CEUS), which provides details on energy use and characteristics for about 2800 buildings (and 62 building types). CEUS is likely the most thorough survey of its kind ever conducted. The tool is built as a web service. The EnergyIQ web application is written in JSP with pervasive use of JavaScript and CSS2. EnergyIQ also supports a SOAP-based web service to allow the flow of queries and data to occur with non-browser implementations. Data are stored in an Oracle 10g database. References: Mills, Mathew, Brook and Piette. 2008. "Action Oriented Benchmarking: Concepts and Tools." Energy Engineering, Vol. 105, No. 4, pp. 21-40. LBNL-358E; Mathew, Mills, Bourassa, Brook. 2008. "Action-Oriented Benchmarking: Using the CEUS Database to Benchmark Commercial Buildings in California." Energy Engineering, Vol. 105, No. 5, pp. 6-18. LBNL-502E.
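The cross-sectional comparison described above amounts to placing a building's metric within the distribution of its peer group; a minimal sketch with invented energy-use-intensity values (not CEUS data or EnergyIQ code):

    import numpy as np

    # Invented peer-group energy use intensities (kBtu/ft2-yr) and one
    # user building; report the building's percentile rank among its peers.
    peers = np.array([55.0, 62.0, 48.0, 71.0, 66.0, 59.0, 80.0, 52.0, 64.0, 58.0])
    my_building = 61.0

    percentile = 100.0 * np.mean(peers < my_building)
    print(f"Building uses more energy than {percentile:.0f}% of its peers")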
On the predictability of land surface fluxes from meteorological variables
NASA Astrophysics Data System (ADS)
Haughton, Ned; Abramowitz, Gab; Pitman, Andy J.
2018-01-01
Previous research has shown that land surface models (LSMs) perform poorly compared with relatively simple empirical models over a wide range of metrics and environments. Atmospheric driving data appear to provide information about land surface fluxes that LSMs are not fully utilising. Here, we further quantify the information available in the meteorological forcing data that are used by LSMs for predicting land surface fluxes, by interrogating FLUXNET data and extending the benchmarking methodology used in previous experiments. We show that substantial performance improvement is possible for empirical models using meteorological data alone, with no explicit vegetation or soil properties, thus setting lower bounds on a priori expectations of LSM performance. The process also identifies the key meteorological variables that provide predictive power. We provide an ensemble of empirical benchmarks that are simple to reproduce and span a range of behaviours and predictive performance, acting as a baseline benchmark set for future studies. We reanalyse previously published LSM simulations and show that there is more diversity between LSMs than previously indicated, although it remains unclear why LSMs broadly perform so much worse than simple empirical models.
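A sketch of an empirical benchmark in the spirit described above: a regression from meteorological drivers alone to a surface flux, evaluated out of sample. All data here are invented; the paper's benchmark ensemble is fitted to FLUXNET observations.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(42)
    met = rng.normal(size=(500, 3))   # stand-ins for e.g. shortwave, air temp, humidity
    flux = 2.0 * met[:, 0] + 0.5 * met[:, 1] + rng.normal(0, 0.3, 500)  # synthetic flux

    bench = LinearRegression().fit(met[:250], flux[:250])      # train on half
    print("benchmark R^2, held-out:", bench.score(met[250:], flux[250:]))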
Schad, Friedemann; Thronicke, Anja; Merkle, Antje; Steele, Megan L; Kröz, Matthias; Herbstreit, Cornelia; Matthes, Harald
2018-01-01
In recent decades, the concept of integrative medicine has attracted growing interest among patients and professionals. At the Gemeinschaftskrankenhaus Havelhöhe (GKH), a hospital specialized in anthroposophical medicine, a breast cancer center (BCC) has been successfully certified for more than 5 years. The objective of the present study was to analyze how integrative strategies were implemented in the daily care of primary breast cancer patients. Clinical, demographic, and follow-up data as well as information on non-pharmacological interventions were analyzed. In addition, BCC quality measures were compared with data from the National Breast Cancer Benchmarking Report 2016. Between 2011 and 2016, 741 primary breast cancer patients (median age 57.4 years) were treated at the GKH BCC. 91.5% of the patients had Union for International Cancer Control (UICC) stage 0, I, II, or III disease and 8.2% were in UICC stage IV. 97% of the patients underwent surgery, 53% radiation, 38% hormone therapy, and 25% received cytostatic drugs. 96% of the patients received non-pharmacological interventions and 32% received Viscum album L. Follow-up was performed in up to 93% of the patients 2 years after first diagnosis. Compared to nationwide benchmarking BCCs, the GKH BCC met the requirements on the central items. The results of the present study show that the integrative therapies offered by the concept of anthroposophical medicine can be implemented in the daily care and treatment of a certified BCC. However, as national guidelines on integrative concepts in oncology are missing, further studies are needed for a systematic evaluation of integrative treatment and care concepts in this field. © 2018 The Author(s). Published by S. Karger GmbH, Freiburg.
ERIC Educational Resources Information Center
Hearn, Jessica E.
2015-01-01
Principal preparation programs in Kentucky can use the items in the Dispositions, Dimensions, and Functions for School Leaders (EPSB, 2008) as mastery benchmarks to quantify incoming Educational Specialist (Ed.S) students' perceived level of mastery. This can serve both internal and external purposes by providing diagnostic feedback to students…
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 4 2010-10-01 2010-10-01 false Coverage of rural health clinic and federally... clinic and federally qualified health center (FQHC) services. If a State provides benchmark or benchmark... otherwise, to rural health clinic services and FQHC services as defined in subparagraphs (B) and (C) of...
ERIC Educational Resources Information Center
Sampson, K. A.; Johnston, L.; Comer, K.; Brogt, E.
2016-01-01
Summative and benchmarking surveys to measure the postgraduate student research experience are well reported in the literature. While useful, we argue that local instruments that provide formative resources with an academic development focus are also required. If higher education institutions are to move beyond the identification of issues and…
Benchmark Analysis of Career and Technical Education in Lenawee County. Final Report.
ERIC Educational Resources Information Center
Hollenbeck, Kevin
The career and technical education (CTE) provided in grades K-12 in the county's vocational-technical center and 12 local public school districts of Lenawee County, Michigan, was benchmarked with respect to its attention to career development. Data were collected from the following sources: structured interviews with a number of key respondents…
Developing a Benchmarking Process in Perfusion: A Report of the Perfusion Downunder Collaboration
Baker, Robert A.; Newland, Richard F.; Fenton, Carmel; McDonald, Michael; Willcox, Timothy W.; Merry, Alan F.
2012-01-01
Abstract: Improving and understanding clinical practice is an appropriate goal for the perfusion community. The Perfusion Downunder Collaboration has established a multi-center, perfusion-focused database aimed at achieving these goals through the development of quantitative quality indicators for clinical improvement through benchmarking. Data were collected using the Perfusion Downunder Collaboration database from procedures performed in eight Australian and New Zealand cardiac centers between March 2007 and February 2011. At the Perfusion Downunder Meeting in 2010, it was agreed by consensus to report quality indicators (QIs) for glucose level, arterial outlet temperature, and pCO2 management during cardiopulmonary bypass. The values chosen for each QI were: blood glucose ≥4 mmol/L and ≤10 mmol/L; arterial outlet temperature ≤37°C; and arterial blood gas pCO2 ≥35 and ≤45 mmHg. The QI data were used to derive benchmarks using the Achievable Benchmark of Care (ABC™) methodology, identifying the incidence of QIs at the best-performing centers. Five thousand four hundred and sixty-five procedures were evaluated to derive QI and benchmark data. The incidence of the blood glucose QI ranged from 37–96% of procedures, with a benchmark value of 90%. The arterial outlet temperature QI occurred in 16–98% of procedures, with a benchmark of 94%; the arterial pCO2 QI occurred in 21–91%, with a benchmark value of 80%. We have derived QIs and benchmark calculations for the management of several key aspects of cardiopulmonary bypass to provide a platform for improving the quality of perfusion practice. PMID:22730861
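The ABC methodology cited above is commonly described as ranking providers by a Bayesian-adjusted performance fraction, (x+1)/(n+2), and pooling the best performers until they cover at least 10% of all cases; the benchmark is the pooled rate. A rough sketch on invented per-center counts follows (the Collaboration's exact implementation may differ).

    # Achievable Benchmark of Care (ABC) sketch on invented data:
    # x = procedures meeting the QI, n = procedures performed, per center.
    centers = [(182, 200), (88, 100), (240, 300), (45, 90), (350, 500)]  # (x, n)

    # Rank centers by the adjusted performance fraction (x+1)/(n+2), then pool
    # the top centers until they account for at least 10% of all procedures.
    ranked = sorted(centers, key=lambda c: (c[0] + 1) / (c[1] + 2), reverse=True)
    total = sum(n for _, n in centers)
    x_sum = n_sum = 0
    for x, n in ranked:
        x_sum, n_sum = x_sum + x, n_sum + n
        if n_sum >= 0.10 * total:
            break
    print(f"ABC benchmark = {100 * x_sum / n_sum:.1f}%")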
Radiation shielding quality assurance
NASA Astrophysics Data System (ADS)
Um, Dallsun
For radiation shielding quality assurance, the validity and reliability of the neutron transport code MCNP, now one of the most widely used radiation shielding analysis codes, were checked against a large set of benchmark experiments. As practical examples, the following studies were also performed in this thesis. An integral neutron transport experiment to measure the effect of neutron streaming in iron and void was performed with the Dog-Legged Void Assembly at Knolls Atomic Power Laboratory in 1991. Neutron flux was measured at six different locations with methane detectors and a BF-3 detector. The main purpose of the measurements was to provide a benchmark against which various neutron transport calculation tools could be compared. Those data were used to verify the Monte Carlo Neutron & Photon transport code, MCNP, with a model constructed for the assembly. Experimental and calculated results were compared in two ways: as the total integrated value of neutron fluxes over the neutron energy range from 10 keV to 2 MeV, and as the neutron spectrum across that energy range. The two agree within the statistical error of +/-20%. MCNP results were also compared with those of TORT, a three-dimensional discrete ordinates code developed by Oak Ridge National Laboratory. The MCNP results are superior to the TORT results at all detector locations except one. This shows that MCNP is a very powerful tool for the analysis of neutron transport through iron and air, and further that it could be used as a powerful tool for radiation shielding analysis. As one application of the analysis of variance (ANOVA) to neutron and gamma transport problems, uncertainties in the calculated values of critical k were evaluated by ANOVA on the statistical data.
Visual-Vestibular Conflict Detection Depends on Fixation.
Garzorz, Isabelle T; MacNeilage, Paul R
2017-09-25
Visual and vestibular signals are the primary sources of sensory information for self-motion. Conflict among these signals can be seriously debilitating, resulting in vertigo [1], inappropriate postural responses [2], and motion, simulator, or cyber sickness [3-8]. Despite this significance, the mechanisms mediating conflict detection are poorly understood. Here we model conflict detection simply as crossmodal discrimination with benchmark performance limited by variabilities of the signals being compared. In a series of psychophysical experiments conducted in a virtual reality motion simulator, we measure these variabilities and assess conflict detection relative to this benchmark. We also examine the impact of eye movements on visual-vestibular conflict detection. In one condition, observers fixate a point that is stationary in the simulated visual environment by rotating the eyes opposite head rotation, thereby nulling retinal image motion. In another condition, eye movement is artificially minimized via fixation of a head-fixed fixation point, thereby maximizing retinal image motion. Visual-vestibular integration performance is also measured, similar to previous studies [9-12]. We observe that there is a tradeoff between integration and conflict detection that is mediated by eye movements. Minimizing eye movements by fixating a head-fixed target leads to optimal integration but highly impaired conflict detection. Minimizing retinal motion by fixating a scene-fixed target improves conflict detection at the cost of impaired integration performance. The common tendency to fixate scene-fixed targets during self-motion [13] may indicate that conflict detection is typically a higher priority than the increase in precision of self-motion estimation that is obtained through integration. Copyright © 2017 Elsevier Ltd. All rights reserved.
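Under the standard cue-combination framework the abstract invokes, the tradeoff can be written out. A sketch, assuming independent Gaussian visual and vestibular noise with variances sigma_V^2 and sigma_Ve^2 (notation mine, not the authors'):

    \[ \sigma^{2}_{\mathrm{int}} = \frac{\sigma^{2}_{V}\,\sigma^{2}_{Ve}}{\sigma^{2}_{V} + \sigma^{2}_{Ve}} \qquad \text{(optimal integrated estimate)} \]

    \[ \sigma_{\mathrm{conflict}} = \sqrt{\sigma^{2}_{V} + \sigma^{2}_{Ve}} \qquad \text{(benchmark crossmodal discrimination threshold)} \]

Pooling the cues drives the integrated variance below either single-cue variance, whereas detecting a conflict rests on the difference of the two estimates, whose variance is the sum of the single-cue variances; better integration and better conflict detection therefore pull in opposite directions, consistent with the eye-movement-mediated tradeoff reported above.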
NASA Astrophysics Data System (ADS)
Rodrigues, M.; Patricio, V.; Rothberg, B.; Sanchez-Janssen, R.; Vale Asari, N.
We present the first results of our observational project 'Starfish' (STellar Population From Integrated Spectrum). The goal of this project is to calibrate, for the first time, the properties of stellar populations derived from integrated spectra against the same properties derived from direct imaging of the stellar populations in the same set of galaxies. These properties include the star-formation history (SFH), stellar mass, age, and metallicity. To date, such calibrations have been demonstrated only in star clusters and globular clusters with single stellar populations, not in complex and composite objects such as galaxies. We are currently constructing a library of integrated spectra of 38 nearby dwarf galaxies, obtained with GEMINI/GMOS-N&S (25 h) and VLT/VIMOS-IFU (43 h). These are to be compared with color-magnitude diagrams (CMDs) of the same galaxies constructed from archival HST imaging sensitive to at least 1.5 magnitudes below the tip of the red giant branch. From this comparison we will assess the systematics and uncertainties of integrated spectral techniques. The spectral library will be made publicly available to the community via a dedicated web page and the Vizier database. This dataset will provide a unique benchmark for testing fitting procedures and stellar population models for both nearby and distant galaxies. http://www.sc.eso.org/~marodrig/Starfish/
[Benchmarking and other functions of ROM: back to basics].
Barendregt, M
2015-01-01
Since 2011, outcome data in Dutch mental health care have been collected on a national scale. This has led to confusion about the position of benchmarking in the system known as routine outcome monitoring (ROM). To provide insight into the various objectives and uses of aggregated outcome data, a qualitative review was performed and the findings were analysed. Benchmarking is a strategy for finding best practices and for improving efficacy, and it belongs to the domain of quality management. Benchmarking involves comparing outcome data by means of instrumentation and is relatively tolerant with regard to the validity of the data. Although benchmarking is a function of ROM, it must be differentiated from other functions of ROM. Clinical management, public accountability, research, payment for performance and information for patients are all functions of ROM which require different ways of data feedback and which make different demands on the validity of the underlying data. Benchmarking is often wrongly regarded as simply a synonym for 'comparing institutions'. It is, however, a method which includes many more factors; it can be used to improve quality, has a more flexible approach to the validity of outcome data, and is less concerned than other ROM functions about funding and the amount of information given to patients.
Ang, Darwin; McKenney, Mark; Norwood, Scott; Kurek, Stanley; Kimbrell, Brian; Liu, Huazhi; Ziglar, Michele; Hurst, James
2015-09-01
Improving clinical outcomes of trauma patients is a challenging problem at a statewide level, particularly if data from the state's registry are not publicly available. Promotion of optimal care throughout the state is not possible unless clinical benchmarks are available for comparison. Using publicly available administrative data from the State Department of Health and the Agency for Healthcare Research and Quality (AHRQ) patient safety indicators (PSIs), we sought to create a statewide method for benchmarking trauma mortality while also identifying a pattern of unique complications that have an independent influence on mortality. Data for this study were obtained from the State of Florida Agency for Health Care Administration. Adult trauma patients were identified by International Classification of Diseases, Ninth Revision codes defined by the state. Multivariate logistic regression was used to create a predictive inpatient expected mortality model. The expected value of PSIs was computed using the multivariate model and the beta coefficients provided by the AHRQ. Case-mix-adjusted mortality results were reported as observed-to-expected (O/E) ratios to examine mortality, PSIs, failure to prevent complications, and failure to rescue from death. There were 50,596 trauma patients evaluated during the study period. The overall fit of the expected mortality model was very strong, with a c-statistic of 0.93. Twelve of 25 trauma centers had O/E ratios <1, that is, better than expected. Nine statewide PSIs had failure-to-prevent O/E ratios higher than expected. Five statewide PSIs had failure-to-rescue O/E ratios higher than expected. The PSI that had the strongest influence on trauma mortality for the state was PSI no. 9, perioperative hemorrhage or hematoma. Mortality could be further substratified by PSI complications at the hospital level. AHRQ PSIs can have an integral role in an adjusted benchmarking method that screens at-risk trauma centers in the state for higher-than-expected mortality. Stratifying mortality based on failure-to-prevent PSIs may identify areas of needed improvement at a statewide level. Copyright © 2015 Elsevier Inc. All rights reserved.
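A minimal sketch of the benchmarking arithmetic on invented data: fit a logistic mortality-risk model, sum predicted probabilities to get each center's expected deaths, and report O/E (the study's model, built from state administrative data and AHRQ PSI beta coefficients, is of course richer):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))                          # invented case-mix features
    y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 2))))   # invented in-hospital deaths
    center = rng.integers(0, 4, size=1000)                  # invented center labels

    model = LogisticRegression().fit(X, y)
    p = model.predict_proba(X)[:, 1]                        # predicted death risk

    for c in range(4):
        mask = center == c
        observed, expected = y[mask].sum(), p[mask].sum()
        print(f"center {c}: O/E = {observed / expected:.2f}")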
A Feature-Reinforcement-Based Approach for Supporting Poly-Lingual Category Integration
NASA Astrophysics Data System (ADS)
Wei, Chih-Ping; Chen, Chao-Chi; Cheng, Tsang-Hsiang; Yang, Christopher C.
Document-category integration (or category integration for short) is fundamental to many e-commerce applications, including information integration along supply chains and information aggregation by intermediaries. Because of the trend of globalization, the requirement for category integration has been extended from monolingual to poly-lingual settings. Poly-lingual category integration (PLCI) aims to integrate two document catalogs, each of which consists of documents written in a mix of languages. Several category integration techniques have been proposed in the literature, but these techniques focus only on monolingual category integration rather than PLCI. In this study, we propose a feature-reinforcement-based PLCI (namely, FR-PLCI) technique that takes into account the master documents of all languages when integrating source documents (in the source catalog) written in a specific language into the master catalog. Using the monolingual category integration (MnCI) technique as a performance benchmark, our empirical evaluation results show that our proposed FR-PLCI technique achieves better integration accuracy than MnCI does in both English and Chinese category integration tasks.
PID controller tuning using metaheuristic optimization algorithms for benchmark problems
NASA Astrophysics Data System (ADS)
Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.
2017-11-01
This paper addresses finding optimal PID controller parameters using particle swarm optimization (PSO), the genetic algorithm (GA) and the simulated annealing (SA) algorithm. The algorithms were developed through simulation of a chemical process and an electrical system, and the PID controller was tuned. Two different fitness functions, the Integral Time Absolute Error (ITAE) and time-domain specifications, were chosen and applied in PSO, GA and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled tank system and a DC motor. Finally, a comparative study has been done of the different algorithms based on best cost, number of iterations and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
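A minimal sketch of one of the three metaheuristics named above: PSO tuning PID gains on a toy first-order plant, with ITAE as the fitness function. The plant, discretization, and all PSO constants are illustrative assumptions, not the paper's benchmark systems.

    import numpy as np

    def itae_cost(gains, dt=0.01, t_end=5.0, tau=1.0, k_plant=2.0):
        # Unit-step response of G(s) = k/(tau*s + 1) under PID control,
        # simulated with forward Euler; returns ITAE = integral of t*|e| dt.
        kp, ki, kd = gains
        y = integ = cost = 0.0
        prev_err = 1.0
        for i in range(int(t_end / dt)):
            t = i * dt
            err = 1.0 - y                          # unit step setpoint
            integ += err * dt
            deriv = (err - prev_err) / dt
            u = kp * err + ki * integ + kd * deriv
            y += dt * (-y + k_plant * u) / tau     # first-order plant dynamics
            prev_err = err
            cost += t * abs(err) * dt
            if abs(y) > 1e6:                       # unstable gains: bail out
                return 1e9
        return cost

    def pso(cost_fn, lo, hi, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5):
        rng = np.random.default_rng(0)
        x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_cost = np.array([cost_fn(p) for p in x])
        gbest = pbest[pbest_cost.argmin()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            costs = np.array([cost_fn(p) for p in x])
            better = costs < pbest_cost
            pbest[better], pbest_cost[better] = x[better], costs[better]
            gbest = pbest[pbest_cost.argmin()].copy()
        return gbest, pbest_cost.min()

    gains, best = pso(itae_cost, lo=np.zeros(3), hi=np.array([10.0, 10.0, 2.0]))
    print("Kp, Ki, Kd =", gains, "ITAE =", best)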
MC21 analysis of the MIT PWR benchmark: Hot zero power results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly III, D. J.; Aviles, B. N.; Herman, B. R.
2013-07-01
MC21 Monte Carlo results have been compared with hot zero power measurements from an operating pressurized water reactor (PWR), as specified in a new full core PWR performance benchmark from the MIT Computational Reactor Physics Group. Included in the comparisons are axially integrated full core detector measurements, axial detector profiles, control rod bank worths, and temperature coefficients. Power depressions from grid spacers are seen clearly in the MC21 results. Application of Coarse Mesh Finite Difference (CMFD) acceleration within MC21 has been accomplished, resulting in a significant reduction of inactive batches necessary to converge the fission source. CMFD acceleration has also been shown to work seamlessly with the Uniform Fission Site (UFS) variance reduction method. (authors)
NASA Astrophysics Data System (ADS)
Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Asilar, E.; Bergauer, T.; Brandstetter, J.; Brondolin, E.; Dragicevic, M.; Erö, J.; Flechl, M.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Hartl, C.; Hörmann, N.; Hrubec, J.; Jeitler, M.; König, A.; Krätschmer, I.; Liko, D.; Matsushita, T.; Mikulec, I.; Rabady, D.; Rad, N.; Rahbaran, B.; Rohringer, H.; Schieck, J.; Strauss, J.; Treberer-Treberspurg, W.; Waltenberger, W.; Wulz, C.-E.; Mossolov, V.; Shumeiko, N.; Suarez Gonzalez, J.; Alderweireldt, S.; De Wolf, E. A.; Janssen, X.; Lauwers, J.; Van De Klundert, M.; Van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Van Spilbeeck, A.; Abu Zeid, S.; Blekman, F.; D'Hondt, J.; Daci, N.; De Bruyn, I.; Deroover, K.; Heracleous, N.; Lowette, S.; Moortgat, S.; Moreels, L.; Olbrechts, A.; Python, Q.; Tavernier, S.; Van Doninck, W.; Van Mulders, P.; Van Parijs, I.; Brun, H.; Caillol, C.; Clerbaux, B.; De Lentdecker, G.; Delannoy, H.; Fasanella, G.; Favart, L.; Goldouzian, R.; Grebenyuk, A.; Karapostoli, G.; Lenzi, T.; Léonard, A.; Luetic, J.; Maerschalk, T.; Marinov, A.; Randle-conde, A.; Seva, T.; Vander Velde, C.; Vanlaer, P.; Yonamine, R.; Zenoni, F.; Zhang, F.; Cimmino, A.; Cornelis, T.; Dobur, D.; Fagot, A.; Garcia, G.; Gul, M.; Poyraz, D.; Salva, S.; Schöfbeck, R.; Sharma, A.; Tytgat, M.; Van Driessche, W.; Yazgan, E.; Zaganidis, N.; Bakhshiansohi, H.; Beluffi, C.; Bondu, O.; Brochet, S.; Bruno, G.; Caudron, A.; De Visscher, S.; Delaere, C.; Delcourt, M.; Francois, B.; Giammanco, A.; Jafari, A.; Jez, P.; Komm, M.; Lemaitre, V.; Magitteri, A.; Mertens, A.; Musich, M.; Nuttens, C.; Piotrzkowski, K.; Quertenmont, L.; Selvaggi, M.; Vidal Marono, M.; Wertz, S.; Beliy, N.; Aldá Júnior, W. L.; Alves, F. L.; Alves, G. A.; Brito, L.; Hensel, C.; Moraes, A.; Pol, M. E.; Rebello Teles, P.; Belchior Batista Das Chagas, E.; Carvalho, W.; Chinellato, J.; Custódio, A.; Da Costa, E. M.; Da Silveira, G. G.; De Jesus Damiao, D.; De Oliveira Martins, C.; Fonseca De Souza, S.; Huertas Guativa, L. M.; Malbouisson, H.; Matos Figueiredo, D.; Mora Herrera, C.; Mundim, L.; Nogima, H.; Prado Da Silva, W. L.; Santoro, A.; Sznajder, A.; Tonelli Manganote, E. J.; Vilela Pereira, A.; Ahuja, S.; Bernardes, C. A.; Dogra, S.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Mercadante, P. G.; Moon, C. S.; Novaes, S. F.; Padula, Sandra S.; Romero Abad, D.; Ruiz Vargas, J. C.; Aleksandrov, A.; Hadjiiska, R.; Iaydjiev, P.; Rodozov, M.; Stoykova, S.; Sultanov, G.; Vutova, M.; Dimitrov, A.; Glushkov, I.; Litov, L.; Pavlov, B.; Petkov, P.; Fang, W.; Ahmad, M.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Chen, M.; Chen, Y.; Cheng, T.; Jiang, C. H.; Leggat, D.; Liu, Z.; Romeo, F.; Shaheen, S. M.; Spiezia, A.; Tao, J.; Wang, C.; Wang, Z.; Zhang, H.; Zhao, J.; Ban, Y.; Chen, G.; Li, Q.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Xu, Z.; Avila, C.; Cabrera, A.; Chaparro Sierra, L. F.; Florez, C.; Gomez, J. P.; González Hernández, C. F.; Ruiz Alvarez, J. D.; Sanabria, J. C.; Godinovic, N.; Lelas, D.; Puljak, I.; Ribeiro Cipriano, P. M.; Sculac, T.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Ferencek, D.; Kadija, K.; Micanovic, S.; Sudic, L.; Susa, T.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Rykaczewski, H.; Finger, M.; Finger, M.; Carrera Jarrin, E.; Abdelalim, A. 
A.; Mohammed, Y.; Salama, E.; Calpas, B.; Kadastik, M.; Murumaa, M.; Perrini, L.; Raidal, M.; Tiko, A.; Veelken, C.; Eerola, P.; Pekkanen, J.; Voutilainen, M.; Härkönen, J.; Karimäki, V.; Kinnunen, R.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Tuominiemi, J.; Tuovinen, E.; Wendland, L.; Talvitie, J.; Tuuva, T.; Besancon, M.; Couderc, F.; Dejardin, M.; Denegri, D.; Fabbro, B.; Faure, J. L.; Favaro, C.; Ferri, F.; Ganjour, S.; Ghosh, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Kucher, I.; Locci, E.; Machet, M.; Malcles, J.; Rander, J.; Rosowsky, A.; Titov, M.; Zghiche, A.; Abdulsalam, A.; Antropov, I.; Baffioni, S.; Beaudette, F.; Busson, P.; Cadamuro, L.; Chapon, E.; Charlot, C.; Davignon, O.; Granier de Cassagnac, R.; Jo, M.; Lisniak, S.; Miné, P.; Nguyen, M.; Ochando, C.; Ortona, G.; Paganini, P.; Pigard, P.; Regnard, S.; Salerno, R.; Sirois, Y.; Strebler, T.; Yilmaz, Y.; Zabi, A.; Agram, J.-L.; Andrea, J.; Aubin, A.; Bloch, D.; Brom, J.-M.; Buttignol, M.; Chabert, E. C.; Chanon, N.; Collard, C.; Conte, E.; Coubez, X.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Le Bihan, A.-C.; Skovpen, K.; Van Hove, P.; Gadrat, S.; Beauceron, S.; Bernet, C.; Boudoul, G.; Bouvier, E.; Carrillo Montoya, C. A.; Chierici, R.; Contardo, D.; Courbon, B.; Depasse, P.; El Mamouni, H.; Fan, J.; Fay, J.; Gascon, S.; Gouzevitch, M.; Grenier, G.; Ille, B.; Lagarde, F.; Laktineh, I. B.; Lethuillier, M.; Mirabito, L.; Pequegnot, A. L.; Perries, S.; Popov, A.; Sabes, D.; Sordini, V.; Vander Donckt, M.; Verdier, P.; Viret, S.; Toriashvili, T.; Tsamalaidze, Z.; Autermann, C.; Beranek, S.; Feld, L.; Heister, A.; Kiesel, M. K.; Klein, K.; Lipinski, M.; Ostapchuk, A.; Preuten, M.; Raupach, F.; Schael, S.; Schomakers, C.; Schulte, J. F.; Schulz, J.; Verlage, T.; Weber, H.; Zhukov, V.; Albert, A.; Brodski, M.; Dietz-Laursonn, E.; Duchardt, D.; Endres, M.; Erdmann, M.; Erdweg, S.; Esch, T.; Fischer, R.; Güth, A.; Hamer, M.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Knutzen, S.; Merschmeyer, M.; Meyer, A.; Millet, P.; Mukherjee, S.; Olschewski, M.; Padeken, K.; Pook, T.; Radziej, M.; Reithler, H.; Rieger, M.; Scheuch, F.; Sonnenschein, L.; Teyssier, D.; Thüer, S.; Cherepanov, V.; Flügge, G.; Haj Ahmad, W.; Hoehle, F.; Kargoll, B.; Kress, T.; Künsken, A.; Lingemann, J.; Müller, T.; Nehrkorn, A.; Nowack, A.; Nugent, I. M.; Pistone, C.; Pooth, O.; Stahl, A.; Aldaya Martin, M.; Asawatangtrakuldee, C.; Beernaert, K.; Behnke, O.; Behrens, U.; Bin Anuar, A. A.; Borras, K.; Campbell, A.; Connor, P.; Contreras-Campana, C.; Costanza, F.; Diez Pardos, C.; Dolinska, G.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Eren, E.; Gallo, E.; Garay Garcia, J.; Geiser, A.; Gizhko, A.; Grados Luyando, J. M.; Gunnellini, P.; Harb, A.; Hauk, J.; Hempel, M.; Jung, H.; Kalogeropoulos, A.; Karacheban, O.; Kasemann, M.; Keaveney, J.; Kleinwort, C.; Korol, I.; Krücker, D.; Lange, W.; Lelek, A.; Leonard, J.; Lipka, K.; Lobanov, A.; Lohmann, W.; Mankel, R.; Melzer-Pellmann, I.-A.; Meyer, A. B.; Mittag, G.; Mnich, J.; Mussgiller, A.; Ntomari, E.; Pitzl, D.; Placakyte, R.; Raspereza, A.; Roland, B.; Sahin, M. Ö.; Saxena, P.; Schoerner-Sadenius, T.; Seitz, C.; Spannagel, S.; Stefaniuk, N.; Van Onsem, G. P.; Walsh, R.; Wissing, C.; Blobel, V.; Centis Vignali, M.; Draeger, A. 
R.; Dreyer, T.; Garutti, E.; Gonzalez, D.; Haller, J.; Hoffmann, M.; Junkes, A.; Klanner, R.; Kogler, R.; Kovalchuk, N.; Lapsien, T.; Lenz, T.; Marchesini, I.; Marconi, D.; Meyer, M.; Niedziela, M.; Nowatschin, D.; Pantaleo, F.; Peiffer, T.; Perieanu, A.; Poehlsen, J.; Sander, C.; Scharf, C.; Schleper, P.; Schmidt, A.; Schumann, S.; Schwandt, J.; Stadie, H.; Steinbrück, G.; Stober, F. M.; Stöver, M.; Tholen, H.; Troendle, D.; Usai, E.; Vanelderen, L.; Vanhoefer, A.; Vormwald, B.; Barth, C.; Baus, C.; Berger, J.; Butz, E.; Chwalek, T.; Colombo, F.; De Boer, W.; Dierlamm, A.; Fink, S.; Friese, R.; Giffels, M.; Gilbert, A.; Goldenzweig, P.; Haitz, D.; Hartmann, F.; Heindl, S. M.; Husemann, U.; Katkov, I.; Lobelle Pardo, P.; Maier, B.; Mildner, H.; Mozer, M. U.; Müller, Th.; Plagge, M.; Quast, G.; Rabbertz, K.; Röcker, S.; Roscher, F.; Schröder, M.; Shvetsov, I.; Sieber, G.; Simonis, H. J.; Ulrich, R.; Wagner-Kuhr, J.; Wayand, S.; Weber, M.; Weiler, T.; Williamson, S.; Wöhrmann, C.; Wolf, R.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Giakoumopoulou, V. A.; Kyriakis, A.; Loukas, D.; Topsis-Giotis, I.; Kesisoglou, S.; Panagiotou, A.; Saoulidou, N.; Tziaferi, E.; Evangelou, I.; Flouris, G.; Foudas, C.; Kokkas, P.; Loukas, N.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Filipovic, N.; Bencze, G.; Hajdu, C.; Hidas, P.; Horvath, D.; Sikler, F.; Veszpremi, V.; Vesztergombi, G.; Zsigmond, A. J.; Beni, N.; Czellar, S.; Karancsi, J.; Makovec, A.; Molnar, J.; Szillasi, Z.; Bartók, M.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Bahinipati, S.; Choudhury, S.; Mal, P.; Mandal, K.; Nayak, A.; Sahoo, D. K.; Sahoo, N.; Swain, S. K.; Bansal, S.; Beri, S. B.; Bhatnagar, V.; Chawla, R.; Bhawandeep, U.; Kalsi, A. K.; Kaur, A.; Kaur, M.; Kumar, R.; Kumari, P.; Mehta, A.; Mittal, M.; Singh, J. B.; Walia, G.; Kumar, Ashok; Bhardwaj, A.; Choudhary, B. C.; Garg, R. B.; Keshri, S.; Malhotra, S.; Naimuddin, M.; Nishu, N.; Ranjan, K.; Sharma, R.; Sharma, V.; Bhattacharya, R.; Bhattacharya, S.; Chatterjee, K.; Dey, S.; Dutt, S.; Dutta, S.; Ghosh, S.; Majumdar, N.; Modak, A.; Mondal, K.; Mukhopadhyay, S.; Nandan, S.; Purohit, A.; Roy, A.; Roy, D.; Roy Chowdhury, S.; Sarkar, S.; Sharan, M.; Thakur, S.; Behera, P. K.; Chudasama, R.; Dutta, D.; Jha, V.; Kumar, V.; Mohanty, A. K.; Netrakanti, P. K.; Pant, L. M.; Shukla, P.; Topkar, A.; Aziz, T.; Dugad, S.; Kole, G.; Mahakud, B.; Mitra, S.; Mohanty, G. B.; Parida, B.; Sur, N.; Sutar, B.; Banerjee, S.; Bhowmik, S.; Dewanjee, R. K.; Ganguly, S.; Guchait, M.; Jain, Sa.; Kumar, S.; Maity, M.; Majumder, G.; Mazumdar, K.; Sarkar, T.; Wickramage, N.; Chauhan, S.; Dube, S.; Hegde, V.; Kapoor, A.; Kothekar, K.; Rane, A.; Sharma, S.; Behnamian, H.; Chenarani, S.; Eskandari Tadavani, E.; Etesami, S. M.; Fahim, A.; Khakzad, M.; Mohammadi Najafabadi, M.; Naseri, M.; Paktinat Mehdiabadi, S.; Rezaei Hosseinabadi, F.; Safarzadeh, B.; Zeinali, M.; Felcini, M.; Grunewald, M.; Abbrescia, M.; Calabria, C.; Caputo, C.; Colaleo, A.; Creanza, D.; Cristella, L.; De Filippis, N.; De Palma, M.; Fiore, L.; Iaselli, G.; Maggi, G.; Maggi, M.; Miniello, G.; My, S.; Nuzzo, S.; Pompili, A.; Pugliese, G.; Radogna, R.; Ranieri, A.; Selvaggi, G.; Silvestris, L.; Venditti, R.; Verwilligen, P.; Abbiendi, G.; Battilana, C.; Bonacorsi, D.; Braibant-Giacomelli, S.; Brigliadori, L.; Campanini, R.; Capiluppi, P.; Castro, A.; Cavallo, F. R.; Chhibra, S. S.; Codispoti, G.; Cuffiani, M.; Dallavalle, G. 
M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Montanari, A.; Navarria, F. L.; Perrotta, A.; Rossi, A. M.; Rovelli, T.; Siroli, G. P.; Tosi, N.; Albergo, S.; Chiorboli, M.; Costa, S.; Di Mattia, A.; Giordano, F.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Gori, V.; Lenzi, P.; Meschini, M.; Paoletti, S.; Sguazzoni, G.; Viliani, L.; Benussi, L.; Bianco, S.; Fabbri, F.; Piccolo, D.; Primavera, F.; Calvelli, V.; Ferro, F.; Lo Vetere, M.; Monge, M. R.; Robutti, E.; Tosi, S.; Brianza, L.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Ghezzi, A.; Govoni, P.; Malberti, M.; Malvezzi, S.; Manzoni, R. A.; Marzocchi, B.; Menasce, D.; Moroni, L.; Paganoni, M.; Pedrini, D.; Pigazzini, S.; Ragazzi, S.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; De Nardo, G.; Di Guida, S.; Esposito, M.; Fabozzi, F.; Iorio, A. O. M.; Lanza, G.; Lista, L.; Meola, S.; Paolucci, P.; Sciacca, C.; Thyssen, F.; Azzi, P.; Bacchetta, N.; Benato, L.; Bisello, D.; Boletti, A.; Carlin, R.; Carvalho Antunes De Oliveira, A.; Checchia, P.; Dall'Osso, M.; De Castro Manzano, P.; Dorigo, T.; Dosselli, U.; Gasparini, F.; Gasparini, U.; Gozzelino, A.; Lacaprara, S.; Margoni, M.; Meneguzzo, A. T.; Pazzini, J.; Pozzobon, N.; Ronchese, P.; Simonetto, F.; Torassa, E.; Zanetti, M.; Zotto, P.; Zucchetta, A.; Zumerle, G.; Braghieri, A.; Magnani, A.; Montagna, P.; Ratti, S. P.; Re, V.; Riccardi, C.; Salvini, P.; Vai, I.; Vitulo, P.; Alunni Solestizi, L.; Bilei, G. M.; Ciangottini, D.; Fanò, L.; Lariccia, P.; Leonardi, R.; Mantovani, G.; Menichelli, M.; Saha, A.; Santocchia, A.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Bernardini, J.; Boccali, T.; Castaldi, R.; Ciocci, M. A.; Dell'Orso, R.; Donato, S.; Fedi, G.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Martini, L.; Messineo, A.; Palla, F.; Rizzi, A.; Savoy-Navarro, A.; Spagnolo, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Barone, L.; Cavallari, F.; Cipriani, M.; D'imperio, G.; Del Re, D.; Diemoz, M.; Gelli, S.; Longo, E.; Margaroli, F.; Meridiani, P.; Organtini, G.; Paramatti, R.; Preiato, F.; Rahatlou, S.; Rovelli, C.; Santanastasio, F.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Bartosik, N.; Bellan, R.; Biino, C.; Cartiglia, N.; Costa, M.; Covarelli, R.; Degano, A.; Demaria, N.; Finco, L.; Kiani, B.; Mariotti, C.; Maselli, S.; Mazza, G.; Migliore, E.; Monaco, V.; Monteil, E.; Obertino, M. M.; Pacher, L.; Pastrone, N.; Pelliccioni, M.; Pinna Angioni, G. L.; Ravera, F.; Romero, A.; Rotondo, F.; Ruspa, M.; Sacchi, R.; Sola, V.; Solano, A.; Staiano, A.; Traczyk, P.; Belforte, S.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; La Licata, C.; Schizzi, A.; Zanetti, A.; Kim, D. H.; Kim, G. N.; Kim, M. S.; Lee, S.; Lee, S. W.; Oh, Y. D.; Sekmen, S.; Son, D. C.; Yang, Y. C.; Lee, A.; Kim, H.; Brochero Cifuentes, J. A.; Kim, T. J.; Cho, S.; Choi, S.; Go, Y.; Gyun, D.; Ha, S.; Hong, B.; Jo, Y.; Kim, Y.; Lee, B.; Lee, K.; Lee, K. S.; Lee, S.; Lim, J.; Park, S. K.; Roh, Y.; Almond, J.; Kim, J.; Lee, H.; Oh, S. B.; Radburn-Smith, B. C.; Seo, S. h.; Yang, U. K.; Yoo, H. D.; Yu, G. B.; Choi, M.; Kim, H.; Kim, J. H.; Lee, J. S. H.; Park, I. C.; Ryu, G.; Ryu, M. S.; Choi, Y.; Goh, J.; Hwang, C.; Lee, J.; Yu, I.; Dudenas, V.; Juodagalvis, A.; Vaitkus, J.; Ahmed, I.; Ibrahim, Z. A.; Komaragiri, J. R.; Md Ali, M. A. B.; Mohamad Idris, F.; Wan Abdullah, W. A. T.; Yusli, M. 
N.; Zolkapli, Z.; Castilla-Valdez, H.; De La Cruz-Burelo, E.; Heredia-De La Cruz, I.; Hernandez-Almada, A.; Lopez-Fernandez, R.; Magaña Villalba, R.; Mejia Guisao, J.; Sanchez-Hernandez, A.; Carrillo Moreno, S.; Oropeza Barrera, C.; Vazquez Valencia, F.; Carpinteyro, S.; Pedraza, I.; Salazar Ibarguen, H. A.; Uribe Estrada, C.; Morelos Pineda, A.; Krofcheck, D.; Butler, P. H.; Ahmad, A.; Ahmad, M.; Hassan, Q.; Hoorani, H. R.; Khan, W. A.; Saddique, A.; Shah, M. A.; Shoaib, M.; Waqas, M.; Bialkowska, H.; Bluj, M.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Romanowska-Rybinska, K.; Szleper, M.; Zalewski, P.; Bunkowski, K.; Byszuk, A.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Olszewski, M.; Walczak, M.; Bargassa, P.; Beirão Da Cruz E Silva, C.; Di Francesco, A.; Faccioli, P.; Ferreira Parracho, P. G.; Gallinaro, M.; Hollar, J.; Leonardo, N.; Lloret Iglesias, L.; Nemallapudi, M. V.; Rodrigues Antunes, J.; Seixas, J.; Toldaiev, O.; Vadruccio, D.; Varela, J.; Vischia, P.; Belotelov, I.; Bunin, P.; Golutvin, I.; Gorbunov, I.; Karjavin, V.; Kozlov, G.; Lanev, A.; Malakhov, A.; Matveev, V.; Moisenz, P.; Palichik, V.; Perelygin, V.; Savina, M.; Shmatov, S.; Shulha, S.; Skatchkov, N.; Smirnov, V.; Voytishin, N.; Zarubin, A.; Chtchipounov, L.; Golovtsov, V.; Ivanov, Y.; Kim, V.; Kuznetsova, E.; Murzin, V.; Oreshkin, V.; Sulimov, V.; Vorobyev, A.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Karneyeu, A.; Kirsanov, M.; Krasnikov, N.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Spiridonov, A.; Toms, M.; Vlasov, E.; Zhokin, A.; Bylinkin, A.; Chistov, R.; Danilov, M.; Rusinov, V.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Leonidov, A.; Rusakov, S. V.; Terkulov, A.; Baskakov, A.; Belyaev, A.; Boos, E.; Bunichev, V.; Dubinin, M.; Dudko, L.; Klyukhin, V.; Kodolova, O.; Lokhtin, I.; Miagkov, I.; Obraztsov, S.; Perfilov, M.; Petrushanko, S.; Savrin, V.; Snigirev, A.; Blinov, V.; Skovpen, Y.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Elumakhov, D.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Krychkine, V.; Petrov, V.; Ryutin, R.; Sobol, A.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Cirkovic, P.; Devetak, D.; Dordevic, M.; Milosevic, J.; Rekovic, V.; Alcaraz Maestre, J.; Barrio Luna, M.; Calvo, E.; Cerrada, M.; Chamizo Llatas, M.; Colino, N.; De La Cruz, B.; Delgado Peris, A.; Escalante Del Valle, A.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Flix, J.; Fouz, M. C.; Garcia-Abia, P.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Navarro De Martino, E.; Pérez-Calero Yzquierdo, A.; Puerta Pelayo, J.; Quintario Olmeda, A.; Redondo, I.; Romero, L.; Soares, M. S.; de Trocóniz, J. F.; Missiroli, M.; Moran, D.; Cuevas, J.; Fernandez Menendez, J.; Gonzalez Caballero, I.; González Fernández, J. R.; Palencia Cortezon, E.; Sanchez Cruz, S.; Suárez Andrés, I.; Vizan Garcia, J. M.; Cabrillo, I. J.; Calderon, A.; Castiñeiras De Saa, J. R.; Curras, E.; Fernandez, M.; Garcia-Ferrero, J.; Gomez, G.; Lopez Virto, A.; Marco, J.; Martinez Rivero, C.; Matorras, F.; Piedra Gomez, J.; Rodrigo, T.; Ruiz-Jimeno, A.; Scodellaro, L.; Trevisani, N.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Auffray, E.; Auzinger, G.; Bachtis, M.; Baillon, P.; Ball, A. 
H.; Barney, D.; Bloch, P.; Bocci, A.; Bonato, A.; Botta, C.; Camporesi, T.; Castello, R.; Cepeda, M.; Cerminara, G.; D'Alfonso, M.; d'Enterria, D.; Dabrowski, A.; Daponte, V.; David, A.; De Gruttola, M.; De Roeck, A.; Di Marco, E.; Dobson, M.; Dorney, B.; du Pree, T.; Duggan, D.; Dünser, M.; Dupont, N.; Elliott-Peisert, A.; Fartoukh, S.; Franzoni, G.; Fulcher, J.; Funk, W.; Gigi, D.; Gill, K.; Girone, M.; Glege, F.; Gulhan, D.; Gundacker, S.; Guthoff, M.; Hammer, J.; Harris, P.; Hegeman, J.; Innocente, V.; Janot, P.; Kieseler, J.; Kirschenmann, H.; Knünz, V.; Kornmayer, A.; Kortelainen, M. J.; Kousouris, K.; Krammer, M.; Lange, C.; Lecoq, P.; Lourenço, C.; Lucchini, M. T.; Malgeri, L.; Mannelli, M.; Martelli, A.; Meijers, F.; Merlin, J. A.; Mersi, S.; Meschi, E.; Moortgat, F.; Morovic, S.; Mulders, M.; Neugebauer, H.; Orfanelli, S.; Orsini, L.; Pape, L.; Perez, E.; Peruzzi, M.; Petrilli, A.; Petrucciani, G.; Pfeiffer, A.; Pierini, M.; Racz, A.; Reis, T.; Rolandi, G.; Rovere, M.; Ruan, M.; Sakulin, H.; Sauvan, J. B.; Schäfer, C.; Schwick, C.; Seidel, M.; Sharma, A.; Silva, P.; Sphicas, P.; Steggemann, J.; Stoye, M.; Takahashi, Y.; Tosi, M.; Treille, D.; Triossi, A.; Tsirou, A.; Veckalns, V.; Veres, G. I.; Wardle, N.; Wöhri, H. K.; Zagozdzinska, A.; Zeuner, W. D.; Bertl, W.; Deiters, K.; Erdmann, W.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; Kotlinski, D.; Langenegger, U.; Rohe, T.; Bachmair, F.; Bäni, L.; Bianchini, L.; Casal, B.; Dissertori, G.; Dittmar, M.; Donegà, M.; Grab, C.; Heidegger, C.; Hits, D.; Hoss, J.; Kasieczka, G.; Lecomte, P.; Lustermann, W.; Mangano, B.; Marionneau, M.; Martinez Ruiz del Arbol, P.; Masciovecchio, M.; Meinhard, M. T.; Meister, D.; Micheli, F.; Musella, P.; Nessi-Tedaldi, F.; Pandolfi, F.; Pata, J.; Pauss, F.; Perrin, G.; Perrozzi, L.; Quittnat, M.; Rossini, M.; Schönenberger, M.; Starodumov, A.; Tavolaro, V. R.; Theofilatos, K.; Wallny, R.; Aarrestad, T. K.; Amsler, C.; Caminada, L.; Canelli, M. F.; De Cosa, A.; Galloni, C.; Hinzmann, A.; Hreus, T.; Kilminster, B.; Ngadiuba, J.; Pinna, D.; Rauco, G.; Robmann, P.; Salerno, D.; Yang, Y.; Candelise, V.; Doan, T. H.; Jain, Sh.; Khurana, R.; Konyushikhin, M.; Kuo, C. M.; Lin, W.; Lu, Y. J.; Pozdnyakov, A.; Yu, S. S.; Kumar, Arun; Chang, P.; Chang, Y. H.; Chang, Y. W.; Chao, Y.; Chen, K. F.; Chen, P. H.; Dietz, C.; Fiori, F.; Hou, W.-S.; Hsiung, Y.; Liu, Y. F.; Lu, R.-S.; Miñano Moya, M.; Paganis, E.; Psallidas, A.; Tsai, J. f.; Tzeng, Y. M.; Asavapibhop, B.; Singh, G.; Srimanobhas, N.; Suwonjandee, N.; Adiguzel, A.; Cerci, S.; Damarseckin, S.; Demiroglu, Z. S.; Dozen, C.; Dumanoglu, I.; Girgis, S.; Gokbulut, G.; Guler, Y.; Hos, I.; Kangal, E. E.; Kara, O.; Kayis Topaksu, A.; Kiminsu, U.; Oglakci, M.; Onengut, G.; Ozdemir, K.; Sunar Cerci, D.; Topakli, H.; Turkcapar, S.; Zorbakir, I. S.; Zorbilmez, C.; Bilin, B.; Bilmis, S.; Isildak, B.; Karapinar, G.; Yalvac, M.; Zeyrek, M.; Gülmez, E.; Kaya, M.; Kaya, O.; Yetkin, E. A.; Yetkin, T.; Cakir, A.; Cankocak, K.; Sen, S.; Grynyov, B.; Levchuk, L.; Sorokin, P.; Aggleton, R.; Ball, F.; Beck, L.; Brooke, J. J.; Burns, D.; Clement, E.; Cussans, D.; Flacher, H.; Goldstein, J.; Grimes, M.; Heath, G. P.; Heath, H. F.; Jacob, J.; Kreczko, L.; Lucas, C.; Newbold, D. M.; Paramesvaran, S.; Poll, A.; Sakuma, T.; Seif El Nasr-storey, S.; Smith, D.; Smith, V. J.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. M.; Calligaris, L.; Cieri, D.; Cockerill, D. J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Olaiya, E.; Petyt, D.; Shepherd-Themistocleous, C. H.; Thea, A.; Tomalin, I. 
R.; Williams, T.; Baber, M.; Bainbridge, R.; Buchmuller, O.; Bundock, A.; Burton, D.; Casasso, S.; Citron, M.; Colling, D.; Corpe, L.; Dauncey, P.; Davies, G.; De Wit, A.; Della Negra, M.; Di Maria, R.; Dunne, P.; Elwood, A.; Futyan, D.; Haddad, Y.; Hall, G.; Iles, G.; James, T.; Lane, R.; Laner, C.; Lucas, R.; Lyons, L.; Magnan, A.-M.; Malik, S.; Mastrolorenzo, L.; Nash, J.; Nikitenko, A.; Pela, J.; Penning, B.; Pesaresi, M.; Raymond, D. M.; Richards, A.; Rose, A.; Seez, C.; Summers, S.; Tapper, A.; Uchida, K.; Vazquez Acosta, M.; Virdee, T.; Wright, J.; Zenz, S. C.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Leslie, D.; Reid, I. D.; Symonds, P.; Teodorescu, L.; Turner, M.; Borzou, A.; Call, K.; Dittmann, J.; Hatakeyama, K.; Liu, H.; Pastika, N.; Charaf, O.; Cooper, S. I.; Henderson, C.; Rumerio, P.; West, C.; Arcaro, D.; Avetisyan, A.; Bose, T.; Gastler, D.; Rankin, D.; Richardson, C.; Rohlf, J.; Sulak, L.; Zou, D.; Benelli, G.; Berry, E.; Cutts, D.; Garabedian, A.; Hakala, J.; Heintz, U.; Hogan, J. M.; Jesus, O.; Laird, E.; Landsberg, G.; Mao, Z.; Narain, M.; Piperov, S.; Sagir, S.; Spencer, E.; Syarif, R.; Breedon, R.; Breto, G.; Burns, D.; Calderon De La Barca Sanchez, M.; Chauhan, S.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Erbacher, R.; Flores, C.; Funk, G.; Gardner, M.; Ko, W.; Lander, R.; Mclean, C.; Mulhearn, M.; Pellett, D.; Pilot, J.; Shalhout, S.; Smith, J.; Squires, M.; Stolp, D.; Tripathi, M.; Wilbur, S.; Yohay, R.; Cousins, R.; Everaerts, P.; Florent, A.; Hauser, J.; Ignatenko, M.; Saltzberg, D.; Takasugi, E.; Valuev, V.; Weber, M.; Burt, K.; Clare, R.; Ellison, J.; Gary, J. W.; Ghiasi Shirazi, S. M. A.; Hanson, G.; Heilman, J.; Jandir, P.; Kennedy, E.; Lacroix, F.; Long, O. R.; Olmedo Negrete, M.; Paneva, M. I.; Shrinivas, A.; Si, W.; Wei, H.; Wimpenny, S.; Yates, B. R.; Branson, J. G.; Cerati, G. B.; Cittolin, S.; Derdzinski, M.; Gerosa, R.; Holzner, A.; Klein, D.; Krutelyov, V.; Letts, J.; Olivito, D.; Padhi, S.; Pieri, M.; Sani, M.; Sharma, V.; Tadel, M.; Vartak, A.; Wasserbaech, S.; Welke, C.; Wood, J.; Würthwein, F.; Yagil, A.; Zevi Della Porta, G.; Bhandari, R.; Bradmiller-Feld, J.; Campagnari, C.; Dishaw, A.; Dutta, V.; Flowers, K.; Franco Sevilla, M.; Geffert, P.; George, C.; Golf, F.; Gouskos, L.; Gran, J.; Heller, R.; Incandela, J.; Mccoll, N.; Mullin, S. D.; Ovcharova, A.; Richman, J.; Stuart, D.; Suarez, I.; Yoo, J.; Anderson, D.; Apresyan, A.; Bendavid, J.; Bornheim, A.; Bunn, J.; Chen, Y.; Duarte, J.; Lawhorn, J. M.; Mott, A.; Newman, H. B.; Pena, C.; Spiropulu, M.; Vlimant, J. R.; Xie, S.; Zhu, R. Y.; Andrews, M. B.; Azzolini, V.; Ferguson, T.; Paulini, M.; Russ, J.; Sun, M.; Vogel, H.; Vorobiev, I.; Cumalat, J. P.; Ford, W. T.; Jensen, F.; Johnson, A.; Krohn, M.; Mulholland, T.; Stenson, K.; Wagner, S. R.; Alexander, J.; Chaves, J.; Chu, J.; Dittmer, S.; Mcdermott, K.; Mirman, N.; Nicolas Kaufman, G.; Patterson, J. R.; Rinkevicius, A.; Ryd, A.; Skinnari, L.; Soffi, L.; Tan, S. M.; Tao, Z.; Thom, J.; Tucker, J.; Wittich, P.; Zientek, M.; Winn, D.; Abdullin, S.; Albrow, M.; Apollinari, G.; Banerjee, S.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Bolla, G.; Burkett, K.; Butler, J. N.; Cheung, H. W. K.; Chlebana, F.; Cihangir, S.; Cremonesi, M.; Elvira, V. D.; Fisk, I.; Freeman, J.; Gottschalk, E.; Gray, L.; Green, D.; Grünendahl, S.; Gutsche, O.; Hare, D.; Harris, R. 
M.; Hasegawa, S.; Hirschauer, J.; Hu, Z.; Jayatilaka, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Klima, B.; Kreis, B.; Lammel, S.; Linacre, J.; Lincoln, D.; Lipton, R.; Liu, M.; Liu, T.; Lopes De Sá, R.; Lykken, J.; Maeshima, K.; Magini, N.; Marraffino, J. M.; Maruyama, S.; Mason, D.; McBride, P.; Merkel, P.; Mrenna, S.; Nahn, S.; Newman-Holmes, C.; O'Dell, V.; Pedro, K.; Prokofyev, O.; Rakness, G.; Ristori, L.; Sexton-Kennedy, E.; Soha, A.; Spalding, W. J.; Spiegel, L.; Stoynev, S.; Strobbe, N.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vernieri, C.; Verzocchi, M.; Vidal, R.; Wang, M.; Weber, H. A.; Whitbeck, A.; Acosta, D.; Avery, P.; Bortignon, P.; Bourilkov, D.; Brinkerhoff, A.; Carnes, A.; Carver, M.; Curry, D.; Das, S.; Field, R. D.; Furic, I. K.; Konigsberg, J.; Korytov, A.; Ma, P.; Matchev, K.; Mei, H.; Milenovic, P.; Mitselmakher, G.; Rank, D.; Shchutska, L.; Sperka, D.; Thomas, L.; Wang, J.; Wang, S.; Yelton, J.; Linn, S.; Markowitz, P.; Martinez, G.; Rodriguez, J. L.; Ackert, A.; Adams, J. R.; Adams, T.; Askew, A.; Bein, S.; Diamond, B.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Khatiwada, A.; Prosper, H.; Santra, A.; Weinberg, M.; Baarmand, M. M.; Bhopatkar, V.; Colafranceschi, S.; Hohlmann, M.; Noonan, D.; Roy, T.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Berry, D.; Betts, R. R.; Bucinskaite, I.; Cavanaugh, R.; Evdokimov, O.; Gauthier, L.; Gerber, C. E.; Hofman, D. J.; Kurt, P.; O'Brien, C.; Sandoval Gonzalez, I. D.; Turner, P.; Varelas, N.; Wang, H.; Wu, Z.; Zakaria, M.; Zhang, J.; Bilki, B.; Clarida, W.; Dilsiz, K.; Durgut, S.; Gandrajula, R. P.; Haytmyradov, M.; Khristenko, V.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Snyder, C.; Tiras, E.; Wetzel, J.; Yi, K.; Anderson, I.; Blumenfeld, B.; Cocoros, A.; Eminizer, N.; Fehling, D.; Feng, L.; Gritsan, A. V.; Maksimovic, P.; Martin, C.; Osherson, M.; Roskes, J.; Sarica, U.; Swartz, M.; Xiao, M.; Xin, Y.; You, C.; Al-bataineh, A.; Baringer, P.; Bean, A.; Boren, S.; Bowen, J.; Bruner, C.; Castle, J.; Forthomme, L.; Kenny, R. P., III; Kropivnitskaya, A.; Majumder, D.; Mcbrayer, W.; Murray, M.; Sanders, S.; Stringer, R.; Tapia Takaki, J. D.; Wang, Q.; Ivanov, A.; Kaadze, K.; Khalil, S.; Maravin, Y.; Mohammadi, A.; Saini, L. K.; Skhirtladze, N.; Toda, S.; Rebassoo, F.; Wright, D.; Anelli, C.; Baden, A.; Baron, O.; Belloni, A.; Calvert, B.; Eno, S. C.; Ferraioli, C.; Gomez, J. A.; Hadley, N. J.; Jabeen, S.; Kellogg, R. G.; Kolberg, T.; Kunkle, J.; Lu, Y.; Mignerey, A. C.; Ricci-Tam, F.; Shin, Y. H.; Skuja, A.; Tonjes, M. B.; Tonwar, S. C.; Abercrombie, D.; Allen, B.; Apyan, A.; Barbieri, R.; Baty, A.; Bi, R.; Bierwagen, K.; Brandt, S.; Busza, W.; Cali, I. A.; Demiragli, Z.; Di Matteo, L.; Gomez Ceballos, G.; Goncharov, M.; Hsu, D.; Iiyama, Y.; Innocenti, G. M.; Klute, M.; Kovalskyi, D.; Krajczar, K.; Lai, Y. S.; Lee, Y.-J.; Levin, A.; Luckey, P. D.; Marini, A. C.; Mcginn, C.; Mironov, C.; Narayanan, S.; Niu, X.; Paus, C.; Roland, C.; Roland, G.; Salfeld-Nebgen, J.; Stephans, G. S. F.; Sumorok, K.; Tatar, K.; Varma, M.; Velicanu, D.; Veverka, J.; Wang, J.; Wang, T. W.; Wyslouch, B.; Yang, M.; Zhukova, V.; Benvenuti, A. C.; Chatterjee, R. M.; Evans, A.; Finkel, A.; Gude, A.; Hansen, P.; Kalafut, S.; Kao, S. C.; Kubota, Y.; Lesko, Z.; Mans, J.; Nourbakhsh, S.; Ruckstuhl, N.; Rusack, R.; Tambe, N.; Turkewitz, J.; Acosta, J. G.; Oliveros, S.; Avdeeva, E.; Bartek, R.; Bloom, K.; Claes, D. 
R.; Dominguez, A.; Fangmeier, C.; Gonzalez Suarez, R.; Kamalieddin, R.; Kravchenko, I.; Malta Rodrigues, A.; Meier, F.; Monroy, J.; Siado, J. E.; Snow, G. R.; Stieger, B.; Alyari, M.; Dolen, J.; George, J.; Godshalk, A.; Harrington, C.; Iashvili, I.; Kaisen, J.; Kharchilava, A.; Kumar, A.; Parker, A.; Rappoccio, S.; Roozbahani, B.; Alverson, G.; Barberis, E.; Baumgartel, D.; Hortiangtham, A.; Massironi, A.; Morse, D. M.; Nash, D.; Orimoto, T.; Teixeira De Lima, R.; Trocino, D.; Wang, R.-J.; Wood, D.; Bhattacharya, S.; Hahn, K. A.; Kubik, A.; Kumar, A.; Low, J. F.; Mucia, N.; Odell, N.; Pollack, B.; Schmitt, M. H.; Sung, K.; Trovato, M.; Velasco, M.; Dev, N.; Hildreth, M.; Hurtado Anampa, K.; Jessop, C.; Karmgard, D. J.; Kellams, N.; Lannon, K.; Marinelli, N.; Meng, F.; Mueller, C.; Musienko, Y.; Planer, M.; Reinsvold, A.; Ruchti, R.; Smith, G.; Taroni, S.; Wayne, M.; Wolf, M.; Woodard, A.; Alimena, J.; Antonelli, L.; Brinson, J.; Bylsma, B.; Durkin, L. S.; Flowers, S.; Francis, B.; Hart, A.; Hill, C.; Hughes, R.; Ji, W.; Liu, B.; Luo, W.; Puigh, D.; Winer, B. L.; Wulsin, H. W.; Cooperstein, S.; Driga, O.; Elmer, P.; Hardenbrook, J.; Hebda, P.; Lange, D.; Luo, J.; Marlow, D.; Medvedeva, T.; Mei, K.; Mooney, M.; Olsen, J.; Palmer, C.; Piroué, P.; Stickland, D.; Tully, C.; Zuranski, A.; Malik, S.; Barker, A.; Barnes, V. E.; Folgueras, S.; Gutay, L.; Jha, M. K.; Jones, M.; Jung, A. W.; Jung, K.; Miller, D. H.; Neumeister, N.; Shi, X.; Sun, J.; Svyatkovskiy, A.; Wang, F.; Xie, W.; Xu, L.; Parashar, N.; Stupak, J.; Adair, A.; Akgun, B.; Chen, Z.; Ecklund, K. M.; Geurts, F. J. M.; Guilbaud, M.; Li, W.; Michlin, B.; Northup, M.; Padley, B. P.; Redjimi, R.; Roberts, J.; Rorie, J.; Tu, Z.; Zabel, J.; Betchart, B.; Bodek, A.; de Barbaro, P.; Demina, R.; Duh, Y. t.; Ferbel, T.; Galanti, M.; Garcia-Bellido, A.; Han, J.; Hindrichs, O.; Khukhunaishvili, A.; Lo, K. H.; Tan, P.; Verzetti, M.; Agapitos, A.; Chou, J. P.; Contreras-Campana, E.; Gershtein, Y.; Gómez Espinosa, T. A.; Halkiadakis, E.; Heindl, M.; Hidas, D.; Hughes, E.; Kaplan, S.; Kunnawalkam Elayavalli, R.; Kyriacou, S.; Lath, A.; Nash, K.; Saka, H.; Salur, S.; Schnetzer, S.; Sheffield, D.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Foerster, M.; Heideman, J.; Riley, G.; Rose, K.; Spanier, S.; Thapa, K.; Bouhali, O.; Celik, A.; Dalchenko, M.; De Mattia, M.; Delgado, A.; Dildick, S.; Eusebi, R.; Gilmore, J.; Huang, T.; Juska, E.; Kamon, T.; Mueller, R.; Pakhotin, Y.; Patel, R.; Perloff, A.; Perniè, L.; Rathjens, D.; Rose, A.; Safonov, A.; Tatarinov, A.; Ulmer, K. A.; Akchurin, N.; Cowden, C.; Damgov, J.; De Guio, F.; Dragoiu, C.; Dudero, P. R.; Faulkner, J.; Gurpinar, E.; Kunori, S.; Lamichhane, K.; Lee, S. W.; Libeiro, T.; Peltola, T.; Undleeb, S.; Volobouev, I.; Wang, Z.; Delannoy, A. G.; Greene, S.; Gurrola, A.; Janjam, R.; Johns, W.; Maguire, C.; Melo, A.; Ni, H.; Sheldon, P.; Tuo, S.; Velkovska, J.; Xu, Q.; Arenton, M. W.; Barria, P.; Cox, B.; Goodell, J.; Hirosky, R.; Ledovskoy, A.; Li, H.; Neu, C.; Sinthuprasith, T.; Sun, X.; Wang, Y.; Wolfe, E.; Xia, F.; Clarke, C.; Harr, R.; Karchin, P. E.; Lamichhane, P.; Sturdy, J.; Belknap, D. A.; Dasu, S.; Dodd, L.; Duric, S.; Gomber, B.; Grothe, M.; Herndon, M.; Hervé, A.; Klabbers, P.; Lanaro, A.; Levine, A.; Long, K.; Loveless, R.; Ojalvo, I.; Perry, T.; Pierro, G. A.; Polese, G.; Ruggles, T.; Savin, A.; Smith, N.; Smith, W. H.; Taylor, D.; Woods, N.; CMS Collaboration
2017-10-01
A search for heavy narrow resonances decaying into four-lepton final states has been performed using proton-proton collision data at √s = 8 TeV collected by the CMS experiment, corresponding to an integrated luminosity of 19.7 fb⁻¹. No excess of events over the standard model background expectation is observed. Upper limits for a benchmark model on the product of cross section and branching fraction for the production of these heavy narrow resonances are presented. The limit excludes leptophobic Z′ bosons with masses below 2.5 TeV within the benchmark model. This is the first result to constrain a leptophobic Z′ resonance in the four-lepton channel.
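For orientation, limits of this kind relate an observed event yield to the product of cross section and branching fraction through the integrated luminosity and the signal acceptance times efficiency; a schematic version in standard notation (not taken from the paper itself) is:

```latex
N_{\mathrm{sig}} \;=\; \sigma(pp \to \mathrm{Z}')\,\mathcal{B}(\mathrm{Z}' \to 4\ell)\,\mathcal{L}_{\mathrm{int}}\,(A \times \varepsilon)
```

so an upper limit on the signal yield at a given resonance mass maps directly onto the quoted upper limit on the cross section times branching fraction.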
Data Intensive Systems (DIS) Benchmark Performance Summary
2003-08-01
models assumed by today's conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture radar (SAR) codes, large-scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high-speed distributed interactive and data-intensive simulations, and data-oriented problems characterized by pointer-based and other highly irregular data structures.
Teaching and Research in Mid-Career Management Education: Function and Fusion
ERIC Educational Resources Information Center
Quinn, Bríd C.
2016-01-01
The apparent disconnect between teaching and research has implications for both curricular content and pedagogic practice and has particular salience in the field of mid-career education. To overcome this disconnect, faculty endeavour to integrate teaching and research. Pressure to do so stems from many sources. Benchmarks of professional…
Aligning the SEA's Compliance Responsibilities and Performance Objectives. Benchmark. No. 3
ERIC Educational Resources Information Center
Nafziger, D.; Jochim, A.
2013-01-01
Assuring that state and local agencies comply with the requirements of Federal and state programs has been a central feature of SEAs. The purposes of compliance requirements--ensuring both fiscal integrity and that targeted groups receive intended benefits--are important. While such requirements are often well-intentioned, too often they can act…
Debugging and Analysis of Large-Scale Parallel Programs
1989-09-01
Przybylski, T. Riordan, C. Rowen, and D. Van't Hof, "A CMOS RISC Processor with Integrated System Functions," in Proc. of the 1986 COMPCON, IEEE, March 1986 … "…Sequencers," Communications of the ACM, 22(2):115-123, 1979. [Richardson, 1988] Rick Richardson, "Dhrystone 2.1 Benchmark," Usenet Distribution.
Management Education Benchmarking Designing Customized and Flexible MBA Programs
ERIC Educational Resources Information Center
Hall, Owen P., Jr.; Young, Terry W.
2007-01-01
To meet the challenges of the 21st century B-schools are revising curriculum, delivery and outcome assessment modalities. Today, the proportion of electives and other specialty offerings in many MBA programs now constitutes more than 50% of the total curriculum. However, this focus on customization, integration and flexibility is not without its…
Scalable randomized benchmarking of non-Clifford gates
NASA Astrophysics Data System (ADS)
Cross, Andrew; Magesan, Easwar; Bishop, Lev; Smolin, John; Gambetta, Jay
Randomized benchmarking is a widely used experimental technique to characterize the average error of quantum operations. Benchmarking procedures that scale to enable characterization of n-qubit circuits rely on efficient procedures for manipulating those circuits and, as such, have been limited to subgroups of the Clifford group. However, universal quantum computers require additional, non-Clifford gates to approximate arbitrary unitary transformations. We define a scalable randomized benchmarking procedure over n-qubit unitary matrices that correspond to protected non-Clifford gates for a class of stabilizer codes. We present efficient methods for representing and composing group elements, sampling them uniformly, and synthesizing corresponding poly(n)-sized circuits. The procedure provides experimental access to two independent parameters that together characterize the average gate fidelity of a group element. We acknowledge support from ARO under Contract W911NF-14-1-0124.
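As background, standard randomized benchmarking (the construction this work generalizes beyond the Clifford group) fits the average sequence fidelity to an exponential decay in the sequence length m; a common parametrization, in standard notation rather than the authors', is:

```latex
\bar{F}(m) \;=\; A\,p^{m} + B, \qquad r \;=\; \frac{(d-1)(1-p)}{d}, \qquad d = 2^{n}
```

where A and B absorb state-preparation and measurement errors, and the decay parameter p yields the average error rate r independently of those errors.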
Haptic simulation framework for determining virtual dental occlusion.
Wu, Wen; Chen, Hui; Cen, Yuhai; Hong, Yang; Khambay, Balvinder; Heng, Pheng Ann
2017-04-01
The surgical treatment of many dentofacial deformities is often complex due to its three-dimensional nature. Determining the dental occlusion in its most stable position is essential for the success of the treatment. Computer-aided virtual planning on an individualized, patient-specific 3D model can help formulate the surgical plan and predict the surgical change. However, in current computer-aided planning systems, it is not possible to determine the dental occlusion of the digital models in an intuitive way during virtual surgical planning because of the absence of haptic feedback. In this paper, a physically based haptic simulation framework is proposed, which can provide surgeons with intuitive haptic feedback to determine the dental occlusion of the digital models in their most stable position. To provide physically realistic force feedback when the dental models contact each other during the search process, a contact model is proposed to describe the dynamic and collision properties of the dental models during alignment. The simulated impulse/contact-based forces are integrated into the unified simulation framework. A validation study was conducted on fifteen sets of virtual dental models chosen at random and covering a wide range of the dental relationships found clinically. The dental occlusions obtained by an expert were employed as a benchmark against which to compare the virtual occlusion results. The mean translational and angular deviations of the virtual occlusion results from the benchmark were small. The experimental results show the validity of our method. The simulated forces can provide valuable insights for determining the virtual dental occlusion. The findings of this work and the validation of the proposed concept lead the way toward full virtual surgical planning on patient-specific virtual models, allowing fully customized treatment plans for the surgical correction of dentofacial deformities.
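The abstract does not give the contact model's equations; as a hedged illustration of the general idea of impulse/contact-based force feedback, a minimal penalty-style spring-damper sketch (all constants and names are illustrative, not the authors' model) might look like:

```python
import numpy as np

def contact_force(depth, rate, normal, stiffness=2000.0, damping=5.0):
    """Penalty-style contact force: a spring-damper along the contact normal.

    depth: penetration depth (m); rate: penetration rate (m/s, positive
    when penetrating deeper); normal: unit contact normal.
    Stiffness and damping values are arbitrary, for illustration only.
    """
    if depth <= 0.0:
        return np.zeros(3)                        # models are not in contact
    magnitude = stiffness * depth + damping * rate
    return max(magnitude, 0.0) * np.asarray(normal)  # push the models apart

# Example: 0.5 mm penetration, approaching at 1 mm/s, normal along +z
print(contact_force(0.0005, 0.001, [0.0, 0.0, 1.0]))
```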
User-centered virtual environment assessment and design for cognitive rehabilitation applications
NASA Astrophysics Data System (ADS)
Fidopiastis, Cali Michael
Virtual environment (VE) design for cognitive rehabilitation necessitates a new methodology to ensure the validity of the resulting rehabilitation assessment. We propose that benchmarking the VE system technology utilizing a user-centered approach should precede the VE construction. Further, user performance baselines should be measured throughout testing as a control for adaptive effects that may confound the metrics chosen to evaluate the rehabilitation treatment. To support these claims we present data obtained from two modules of a user-centered head-mounted display (HMD) assessment battery, specifically resolution visual acuity and stereoacuity. Resolution visual acuity and stereoacuity assessments provide information about the image quality achieved by an HMD based upon its unique system parameters. When applying a user-centered approach, we were able to quantify limitations in the VE system components (e.g., low microdisplay resolution) and separately point to user characteristics (e.g., changes in dark focus) that may introduce error in the evaluation of VE based rehabilitation protocols. Based on these results, we provide guidelines for calibrating and benchmarking HMDs. In addition, we discuss potential extensions of the assessment to address higher level usability issues. We intend to test the proposed framework within the Human Experience Modeler (HEM), a testbed created at the University of Central Florida to evaluate technologies that may enhance cognitive rehabilitation effectiveness. Preliminary results of a feasibility pilot study conducted with a memory impaired participant showed that the HEM provides the control and repeatability needed to conduct such technology comparisons. Further, the HEM affords the opportunity to integrate new brain imaging technologies (i.e., functional Near Infrared Imaging) to evaluate brain plasticity associated with VE based cognitive rehabilitation.
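To make the resolution-acuity benchmark concrete: the best-case acuity an HMD can support follows from the angular size of a single pixel. A small sketch under the usual assumption that a 20/20 optotype stroke subtends 1 arcmin (the display numbers are hypothetical, not the HMD tested in the study):

```python
def hmd_acuity_limit(h_pixels, h_fov_deg):
    """Rough best-case Snellen denominator imposed by display resolution.

    Treats one pixel as the smallest resolvable detail; a 20/20 stroke
    subtends 1 arcmin, so the denominator scales with arcmin per pixel.
    """
    arcmin_per_pixel = (h_fov_deg * 60.0) / h_pixels
    return 20.0 * arcmin_per_pixel

# Hypothetical microdisplay: 800 pixels across a 40-degree field of view
print(f"Best-case acuity ~ 20/{hmd_acuity_limit(800, 40):.0f}")  # 20/60
```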
NASA Astrophysics Data System (ADS)
Carlson, D. F.; Novelli, G.; Guigand, C.; Özgökmen, T.; Fox-Kemper, B.; Molemaker, M. J.
2016-02-01
The Consortium for Advanced Research on the Transport of Hydrocarbon in the Environment (CARTHE) will carry out the LAgrangian Submesoscale ExpeRiment (LASER) to study the role of small-scale processes in the transport and dispersion of oil and passive tracers. The Ship-Tethered Aerostat Remote Sensing System (STARRS) was developed to produce observational estimates of small-scale surface dispersion in the open ocean. STARRS is built around a high-lift-capacity (30 kg) helium-filled aerostat and is equipped with a high resolution digital camera. An integrated GNSS receiver and inertial navigation system permit direct geo-rectification of the imagery. Thousands of drift cards deployed in the field of view of STARRS and tracked over time provide the first observational estimates of small-scale (1-500 m) surface dispersion in the open ocean. The STARRS imagery will be combined with GPS-tracked surface drifter trajectories, shipboard observations, and aerial surveys of sea surface temperature in the DeSoto Canyon. In addition to obvious applications to oil spill modelling, the STARRS observations will provide essential benchmarks for high resolution numerical models.
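The dispersion statistic itself is not spelled out in the abstract; a common choice for such drift-card data is the mean squared pair separation as a function of time. A minimal sketch (not CARTHE's processing code):

```python
import numpy as np

def relative_dispersion(positions):
    """Mean squared pair separation D^2(t) from tracked positions.

    positions: array of shape (n_times, n_particles, 2), in metres
    (e.g. geo-rectified drift-card coordinates at each image time).
    """
    n_t, n_p, _ = positions.shape
    d2 = np.zeros(n_t)
    for t in range(n_t):
        diffs = positions[t, :, None, :] - positions[t, None, :, :]
        sq_sep = (diffs ** 2).sum(axis=-1)
        iu = np.triu_indices(n_p, k=1)          # each pair counted once
        d2[t] = sq_sep[iu].mean()
    return d2

# Two cards separating along x at three image times
pos = np.array([[[0, 0], [10, 0]], [[0, 0], [16, 0]], [[0, 0], [22, 0]]], float)
print(relative_dispersion(pos))                 # [100. 256. 484.]
```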
The mass storage testing laboratory at GSFC
NASA Technical Reports Server (NTRS)
Venkataraman, Ravi; Williams, Joel; Michaud, David; Gu, Heng; Kalluri, Atri; Hariharan, P. C.; Kobler, Ben; Behnke, Jeanne; Peavey, Bernard
1998-01-01
Industry-wide benchmarks exist for measuring the performance of processors (SPECmarks), and of database systems (Transaction Processing Council). Despite storage having become the dominant item in computing and IT (Information Technology) budgets, no such common benchmark is available in the mass storage field. Vendors and consultants provide services and tools for capacity planning and sizing, but these do not account for the complete set of metrics needed in today's archives. The availability of automated tape libraries, high-capacity RAID systems, and high-bandwidth interconnectivity between processor and peripherals has led to demands for services which traditional file systems cannot provide. File Storage and Management Systems (FSMS), which began to be marketed in the late 80's, have helped to some extent with large tape libraries, but their use has introduced additional parameters affecting performance. The aim of the Mass Storage Test Laboratory (MSTL) at Goddard Space Flight Center is to develop a test suite that includes not only a comprehensive check list to document a mass storage environment but also benchmark code. Benchmark code is being tested which will provide measurements for both baseline systems, i.e. applications interacting with peripherals through the operating system services, and for combinations involving an FSMS. The benchmarks are written in C, and are easily portable. They are initially being aimed at the UNIX Open Systems world. Measurements are being made using a Sun Ultra 170 Sparc with 256MB memory running Solaris 2.5.1 with the following configuration: 4mm tape stacker on SCSI 2 Fast/Wide; 4GB disk device on SCSI 2 Fast/Wide; and Sony Petaserve on Fast/Wide differential SCSI 2.
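The MSTL benchmarks themselves are written in C; purely to illustrate the "baseline system" measurement they describe (an application driving a peripheral through ordinary operating-system services), here is a sketch of a sequential-write throughput test:

```python
import os
import time

def sequential_write_throughput(path, total_mb=64, block_kb=256):
    """Time a sequential write through the OS and report MB/s.

    A conceptual stand-in for the C baseline benchmarks described above,
    not the MSTL code itself.
    """
    block = b"\0" * (block_kb * 1024)
    n_blocks = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())       # include device time, not just page cache
    return total_mb / (time.perf_counter() - start)

print(f"{sequential_write_throughput('/tmp/bench.dat'):.1f} MB/s")
```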
Treatment planning for spinal radiosurgery : A competitive multiplatform benchmark challenge.
Moustakis, Christos; Chan, Mark K H; Kim, Jinkoo; Nilsson, Joakim; Bergman, Alanah; Bichay, Tewfik J; Palazon Cano, Isabel; Cilla, Savino; Deodato, Francesco; Doro, Raffaela; Dunst, Jürgen; Eich, Hans Theodor; Fau, Pierre; Fong, Ming; Haverkamp, Uwe; Heinze, Simon; Hildebrandt, Guido; Imhoff, Detlef; de Klerck, Erik; Köhn, Janett; Lambrecht, Ulrike; Loutfi-Krauss, Britta; Ebrahimi, Fatemeh; Masi, Laura; Mayville, Alan H; Mestrovic, Ante; Milder, Maaike; Morganti, Alessio G; Rades, Dirk; Ramm, Ulla; Rödel, Claus; Siebert, Frank-Andre; den Toom, Wilhelm; Wang, Lei; Wurster, Stefan; Schweikard, Achim; Soltys, Scott G; Ryu, Samuel; Blanck, Oliver
2018-05-25
To investigate the quality of treatment plans of spinal radiosurgery derived from different planning and delivery systems. The comparisons include robotic delivery and intensity modulated arc therapy (IMAT) approaches. Multiple centers with equal systems were used to reduce bias based on individuals' planning abilities. The study used a series of three complex spine lesions to maximize the difference in plan quality among the various approaches. Internationally recognized experts in the field of treatment planning and spinal radiosurgery from 12 centers with various treatment planning systems participated. For a complex spinal lesion, the results were compared against a previously published benchmark plan derived for CyberKnife radiosurgery (CKRS) using circular cones only. For two additional cases, one with multiple small lesions infiltrating three vertebrae and a single vertebra lesion treated with integrated boost, the results were compared against a benchmark plan generated using a best practice guideline for CKRS. All plans were rated based on a previously established ranking system. All 12 centers reached equality with (n = 4) or outperformed (n = 8) the benchmark plan. For the multiple-lesion and single-vertebra lesion plans, only 5 and 3 of the 12 centers, respectively, reached equality with or outperformed the best-practice benchmark plan. However, the absolute differences in target and critical structure dosimetry were small and strongly planner-dependent rather than system-dependent. Overall, gantry-based IMAT with simple planning techniques (two coplanar arcs) produced faster treatments and significantly outperformed static gantry intensity modulated radiation therapy (IMRT) and multileaf collimator (MLC) or non-MLC CKRS treatment plan quality regardless of the system (mean rank out of 4 was 1.2 vs. 3.1, p = 0.002). High plan quality for complex spinal radiosurgery was achieved among all systems and all participating centers in this planning challenge. This study concludes that simple IMAT techniques can generate significantly better plan quality compared to previously established CKRS benchmarks.
Schachter, Michael E; Romann, Alexandra; Djurdev, Ognjenka; Levin, Adeera; Beaulieu, Monica
2013-08-29
Early referral and management of high-risk chronic kidney disease may prevent or delay the need for dialysis. Automatic eGFR reporting has increased demand for out-patient nephrology consultations and, in some cases, prolonged queues. In Canada, a national task force suggested the development of waiting time targets, which has not been done for nephrology. We sought to describe waiting time for outpatient nephrology consultations in British Columbia (BC). Data collection occurred in 2 phases: 1) Baseline Description (Jan 18-28, 2010) and 2) Post Waiting Time Benchmark-Introduction (Jan 16-27, 2012). Waiting time was defined as the interval from receipt of referral letters to assessment. Using a modified Delphi process, Nephrologists and Family Physicians (FP) developed waiting time targets for commonly referred conditions through meetings and surveys. Rules were developed to weigh in nephrologists', FPs', and patients' perspectives in order to generate waiting time benchmarks. Targets consider comorbidities, eGFR, BP and albuminuria. Referred conditions were assigned a priority score between 1 and 4. BC nephrologists were encouraged to centrally triage referrals to see the first available nephrologist. Waiting time benchmarks were simultaneously introduced to guide patient scheduling. A post-intervention waiting time evaluation was then repeated. In 2010 and 2012, 43/52 (83%) and 46/57 (81%) of BC nephrologists participated. Waiting time decreased from 98 (IQR 44-157) to 64 (IQR 21-120) days from 2010 to 2012 (p < 0.001), despite no change in referral eGFR, demographics, or number of office hours per week. Waiting time improved most for high-priority patients. An integrated, Provincial initiative to measure wait times, develop waiting benchmarks, and engage physicians in active waiting time management was associated with improved access to nephrologists in BC. Improvements in waiting time were most marked for the highest-priority patients, which suggests that benchmarks had an influence on triaging behavior. Further research is needed to determine whether this effect is sustainable.
Integrating the Next Generation Science Standards (NGSS) into K-6 teacher training and curricula
NASA Astrophysics Data System (ADS)
Pinter, S.; Carlson, S. J.
2017-12-01
The Next Generation Science Standards is an initiative, adopted by 26 states, to set national education standards that are "rich in content and practice, arranged in a coherent manner across disciplines and grades to provide all students an internationally benchmarked science education." Educators now must integrate these standards into existing curricula. Many grade-school (K-6) teachers face a particularly daunting task, as they traditionally were not required to teach science, or taught it only at a rudimentary level. The majority of K-6 teachers enter teaching from non-science disciplines, making this transition even more difficult. Since the NGSS emphasizes integrated and coherent progression of knowledge from grade to grade, prospective K-6 teachers must be able to deliver science with confidence and enthusiasm to their students. CalTeach/MAST (Mathematics and Science Teaching Program) at the University of California, Davis, has created a two-quarter sequence of integrated science courses for undergraduate students majoring in non-STEM disciplines and intending to pursue multiple-subject K-6 credentials. The UCD integrated science course provides future primary school teachers with a basic, but comprehensive background in the physical and earth/space sciences. Key tools are taught for improving teaching methods, investigating complex science ideas, and solving problems relevant to students' life experiences that require scientific or technological knowledge. This approach allows prospective K-6 teachers to explore more effectively the connections between the disciplinary core ideas, crosscutting concepts, and scientific and engineering practices, as outlined in the NGSS. In addition, they develop a core set of science teaching skills based on inquiry activities and guided lab discussions. With this course, we deliver a solid science background to prospective K-6 teachers and facilitate their ability to teach science following the standards as articulated in the NGSS.
The Encyclopedia of Life v2: Providing Global Access to Knowledge About Life on Earth
2014-01-01
Abstract The Encyclopedia of Life (EOL, http://eol.org) aims to provide unprecedented global access to a broad range of information about life on Earth. It currently contains 3.5 million distinct pages for taxa and provides content for 1.3 million of those pages. The content is primarily contributed by EOL content partners (providers) that have a more limited geographic, taxonomic or topical scope. EOL aggregates these data and automatically integrates them based on associated scientific names and other classification information. EOL also provides interfaces for curation and direct content addition. All materials in EOL are either in the public domain or licensed under a Creative Commons license. In addition to the web interface, EOL is also accessible through an Application Programming Interface. In this paper, we review recent developments added for Version 2 of the web site and subsequent releases through Version 2.2, which have made EOL more engaging, personal, accessible and internationalizable. We outline the core features and technical architecture of the system. We summarize milestones achieved so far by EOL to present results of the current system implementation and establish benchmarks upon which to judge future improvements. We have shown that it is possible to successfully integrate large amounts of descriptive biodiversity data from diverse sources into a robust, standards-based, dynamic, and scalable infrastructure. Increasing global participation and the emergence of EOL-powered applications demonstrate that EOL is becoming a significant resource for anyone interested in biological diversity. PMID:24891832
ART/Ada design project, phase 1. Task 3 report: Test plan
NASA Technical Reports Server (NTRS)
Allen, Bradley P.
1988-01-01
The plan is described for the integrated testing and benchmarking of the Phase 1 Ada-based ESBT Design Research Project. The integration testing is divided into two phases: (1) the modules that do not rely on the Ada code generated by the Ada Generator are tested before the Ada Generator is implemented; and (2) all modules are integrated and tested with the Ada code generated by the Ada Generator. Its performance and size, as well as its functionality, are verified in this phase. The target platform is a DEC Ada compiler on VAX mini-computers and VAX stations running the VMS operating system.
Fingerprinting sea-level variations in response to continental ice loss: a benchmark exercise
NASA Astrophysics Data System (ADS)
Barletta, Valentina R.; Spada, Giorgio; Riva, Riccardo E. M.; James, Thomas S.; Simon, Karen M.; van der Wal, Wouter; Martinec, Zdenek; Klemann, Volker; Olsson, Per-Anders; Hagedoorn, Jan; Stocchi, Paolo; Vermeersen, Bert
2013-04-01
Understanding the response of the Earth to the waxing and waning ice sheets is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements to the projections of future sea level trends in response to climate change. All the processes accompanying Glacial Isostatic Adjustment (GIA) can be described solving the so-called Sea Level Equation (SLE), an integral equation that accounts for the interactions between the ice sheets, the solid Earth, and the oceans. Modern approaches to the SLE are based on various techniques that range from purely analytical formulations to fully numerical methods. Here we present the results of a benchmark exercise of independently developed codes designed to solve the SLE. The study involves predictions of current sea level changes due to present-day ice mass loss. In spite of the differences in the methods employed, the comparison shows that a significant number of GIA modellers can reproduce their sea-level computations within 2% for well-defined, large-scale present-day ice mass changes. Smaller and more detailed loads need further, dedicated benchmarking and high-resolution computation. This study shows how the details of the implementation and the input specifications are an important, and often underappreciated, aspect. Hence this represents a step toward the assessment of reliability of sea level projections obtained with benchmarked SLE codes.
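For reference, the SLE in its classical Farrell-Clark form can be written schematically as follows (standard GIA notation; the benchmarked codes differ in how they discretize and solve it):

```latex
S(\omega, t) \;=\; \frac{\rho_i}{\gamma}\, G_s \otimes_i I \;+\; \frac{\rho_w}{\gamma}\, G_s \otimes_o S \;+\; S^{E}
```

where S is the sea-level change, I the ice thickness history, G_s the sea-level Green's function, ⊗_i and ⊗_o denote convolutions over the ice- and ocean-covered regions, γ is surface gravity, and S^E is a spatially uniform term enforcing mass conservation. The equation is implicit in S, which is precisely why independently developed numerical solvers benefit from benchmarking.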
ERIC Educational Resources Information Center
Galladian, Carol
2013-01-01
The purpose of this quantitative ex post facto study was to provide a description of the student engagement of commuter students attending a large urban public university located in a mid-Atlantic state using the five National Survey of Student Engagement (NSSE) benchmarks of student engagement. In addition, the study examined the relationship…
Benchmarking of DFLAW Solid Secondary Wastes and Processes with UK/Europe Counterparts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Elvie E.; Swanberg, David J.; Surman, J.
This report provides information and background on UK solid wastes and waste processes that are similar to those which will be generated by the Direct-Feed Low Activity Waste (DFLAW) facilities at Hanford. The aim is to further improve the design case for stabilizing and immobilizing solid secondary wastes, establish international benchmarking, and review possibilities for innovation.
ERIC Educational Resources Information Center
Halton, Michael J.
2003-01-01
Teachers in Ireland fear that benchmarking in the context of the present review of pay and conditions for all public service workers camouflages a shift of concern away from the development of the individual student to concern for the quality of the educational process provided by schools. A recent dispute between secondary teachers and the Irish…
ERIC Educational Resources Information Center
Raska, David
2014-01-01
This research explores and tests the effect of an innovative performance feedback practice--feedback supplemented with web-based peer benchmarking--through a lens of social cognitive framework for self-regulated learning. The results suggest that providing performance feedback with references to exemplary peer output is positively associated with…
ERIC Educational Resources Information Center
Snow, Amie B.; Morris, Darrell; Perney, Jan
2018-01-01
We examined which of two instruments (Text Reading and Comprehension inventory [TRC] or a traditional informal reading inventory [IRI]) provides the more valid assessment of a primary-grade student's reading instructional level. The TRC is currently the required, benchmark reading assessment for students in grades K-3 in the state of North…
Academic Productivity in Psychiatry: Benchmarks for the H-Index.
MacMaster, Frank P; Swansburg, Rose; Rittenbach, Katherine
2017-08-01
Bibliometrics play an increasingly critical role in the assessment of faculty for promotion and merit increases. Bibliometrics is the statistical analysis of publications, aimed at evaluating their impact. The objective of this study is to describe h-index and citation benchmarks in academic psychiatry. Faculty lists were acquired from online resources for all academic departments of psychiatry listed as having residency training programs in Canada (as of June 2016). Potential authors were then searched on Web of Science (Thomson Reuters) for their corresponding h-index and total number of citations. The sample included 1683 faculty members in academic psychiatry departments. Restricting the sample to those with a rank of assistant, associate, or full professor resulted in 1601 faculty members (assistant = 911, associate = 387, full = 303). h-index and total citations differed significantly by academic rank. Both were highest in the full professor rank, followed by associate, then assistant. The range in each, however, was large. This study provides the initial benchmarks for the h-index and total citations in academic psychiatry. Regardless of any controversies or criticisms of bibliometrics, they are increasingly influencing promotion, merit increases, and grant support. As such, benchmarking by specialties is needed in order to provide needed context.
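The h-index itself is simple to compute from a citation list, which is part of why it is so widely used in promotion and merit decisions; a minimal implementation:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Hypothetical faculty member with six papers
print(h_index([25, 8, 5, 3, 3, 0]))  # -> 3 (three papers with >= 3 citations)
```

Because the list is sorted in descending order, the comparison c >= rank holds exactly for the first h entries, so the count equals the h-index.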
Lee, A S; Colagiuri, S; Flack, J R
2018-04-06
We developed and implemented a national audit and benchmarking programme to describe the clinical status of people with diabetes attending specialist diabetes services in Australia. The Australian National Diabetes Information Audit and Benchmarking (ANDIAB) initiative was established as a quality audit activity. De-identified data on demographic, clinical, biochemical and outcome items were collected from specialist diabetes services across Australia to provide cross-sectional data on people with diabetes attending specialist centres at least biennially during the years 1998 to 2011. In total, 38 155 sets of data were collected over the eight ANDIAB audits. Each ANDIAB audit achieved its primary objective to collect, collate, analyse, audit and report clinical diabetes data in Australia. Each audit resulted in the production of a pooled data report, as well as individual site reports allowing comparison and benchmarking against other participating sites. The ANDIAB initiative resulted in the largest cross-sectional national de-identified dataset describing the clinical status of people with diabetes attending specialist diabetes services in Australia. ANDIAB showed that people treated by specialist services had a high burden of diabetes complications. This quality audit activity provided a framework to guide planning of healthcare services. © 2018 Diabetes UK.
featsel: A framework for benchmarking of feature selection algorithms and cost functions
NASA Astrophysics Data System (ADS)
Reis, Marcelo S.; Estrela, Gustavo; Ferreira, Carlos Eduardo; Barrera, Junior
In this paper, we introduce featsel, a framework for benchmarking of feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency purposes. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. Besides, this framework already comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples, in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.
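featsel itself is C++ with Perl tooling; purely as a conceptual sketch of what "dealing with the search space as a Boolean lattice" means, here is a toy exhaustive walk over feature subsets with a made-up cost function (illustrative only, not featsel's API):

```python
from itertools import combinations

def best_subset(features, cost):
    """Exhaustively evaluate every node of the Boolean lattice of subsets."""
    best, best_cost = frozenset(), float("inf")
    for k in range(len(features) + 1):
        for subset in combinations(features, k):
            c = cost(frozenset(subset))
            if c < best_cost:
                best, best_cost = frozenset(subset), c
    return best, best_cost

# Toy cost: penalize subset size plus any missing "informative" feature
informative = {"f1", "f3"}
cost = lambda s: len(s) + 5 * len(informative - s)
print(best_subset(["f1", "f2", "f3", "f4"], cost))  # (frozenset({'f1','f3'}), 2)
```

Real benchmarking then compares how closely, and how quickly, heuristic algorithms approach the exhaustively optimal node as the lattice grows.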
BACT Simulation User Guide (Version 7.0)
NASA Technical Reports Server (NTRS)
Waszak, Martin R.
1997-01-01
This report documents the structure and operation of a simulation model of the Benchmark Active Control Technology (BACT) Wind-Tunnel Model. The BACT system was designed, built, and tested at NASA Langley Research Center as part of the Benchmark Models Program and was developed to perform wind-tunnel experiments to obtain benchmark quality data to validate computational fluid dynamics and computational aeroelasticity codes, to verify the accuracy of current aeroservoelasticity design and analysis tools, and to provide an active controls testbed for evaluating new and innovative control algorithms for flutter suppression and gust load alleviation. The BACT system has been especially valuable as a control system testbed.
Benchmarking neuromorphic vision: lessons learnt from computer vision
Tan, Cheston; Lallee, Stephane; Orchard, Garrick
2015-01-01
Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision. PMID:26528120
Benchmarking facilities providing care: An international overview of initiatives
Thonon, Frédérique; Watson, Jonathan; Saghatchian, Mahasti
2015-01-01
We performed a literature review of existing benchmarking projects of health facilities to explore (1) the rationales for those projects, (2) the motivation for health facilities to participate, (3) the indicators used and (4) the success and threat factors linked to those projects. We studied both peer-reviewed and grey literature. We examined 23 benchmarking projects of different medical specialities. The majority of projects used a mix of structure, process and outcome indicators. For some projects, participants had a direct or indirect financial incentive to participate (such as reimbursement by Medicaid/Medicare or litigation costs related to quality of care). A positive impact was reported for most projects, mainly in terms of improvement of practice and adoption of guidelines and, to a lesser extent, improvement in communication. Only 1 project reported positive impact in terms of clinical outcomes. Success factors and threats are linked to both the benchmarking process (such as organisation of meetings, link with existing projects) and indicators used (such as adjustment for diagnostic-related groups). The results of this review will help coordinators of a benchmarking project to set it up successfully. PMID:26770800
Bauer, Matthias R; Ibrahim, Tamer M; Vogel, Simon M; Boeckler, Frank M
2013-06-24
The application of molecular benchmarking sets helps to assess the actual performance of virtual screening (VS) workflows. To improve the efficiency of structure-based VS approaches, the selection and optimization of various parameters can be guided by benchmarking. With the DEKOIS 2.0 library, we aim to further extend and complement the collection of publicly available decoy sets. Based on BindingDB bioactivity data, we provide 81 new and structurally diverse benchmark sets for a wide variety of different target classes. To ensure a meaningful selection of ligands, we address several issues that can be found in bioactivity data. We have improved our previously introduced DEKOIS methodology with enhanced physicochemical matching, now including the consideration of molecular charges, as well as a more sophisticated elimination of latent actives in the decoy set (LADS). We evaluate the docking performance of Glide, GOLD, and AutoDock Vina with our data sets and highlight existing challenges for VS tools. All DEKOIS 2.0 benchmark sets will be made accessible at http://www.dekois.com.
Benchmarking forensic mental health organizations.
Coombs, Tim; Taylor, Monica; Pirkis, Jane
2011-04-01
This paper describes the forensic mental health forums that were conducted as part of the National Mental Health Benchmarking Project (NMHBP). These forums encouraged participating organizations to compare their performance on a range of key performance indicators (KPIs) with that of their peers. Four forensic mental health organizations took part in the NMHBP. Representatives from these organizations attended eight benchmarking forums at which they documented their performance against previously agreed KPIs. They also undertook three special projects which explored some of the factors that might explain inter-organizational variation in performance. The inter-organizational range for many of the indicators was substantial. Observing this led participants to conduct the special projects to explore three factors which might help explain the variability - seclusion practices, delivery of community mental health services, and provision of court liaison services. The process of conducting the special projects gave participants insights into the practices and structures employed by their counterparts, and provided them with some important lessons for quality improvement. The forensic mental health benchmarking forums have demonstrated that benchmarking is feasible and likely to be useful in improving service performance and quality.
'Wasteaware' benchmark indicators for integrated sustainable waste management in cities.
Wilson, David C; Rodic, Ljiljana; Cowing, Michael J; Velis, Costas A; Whiteman, Andrew D; Scheinberg, Anne; Vilches, Recaredo; Masterson, Darragh; Stretz, Joachim; Oelz, Barbara
2015-01-01
This paper addresses a major problem in international solid waste management, which is twofold: a lack of data, and a lack of consistent data to allow comparison between cities. The paper presents an indicator set for integrated sustainable waste management (ISWM) in cities both North and South, to allow benchmarking of a city's performance, comparing cities and monitoring developments over time. It builds on pioneering work for UN-Habitat's solid waste management in the World's cities. The comprehensive analytical framework of a city's solid waste management system is divided into two overlapping 'triangles' - one comprising the three physical components, i.e. collection, recycling, and disposal, and the other comprising three governance aspects, i.e. inclusivity; financial sustainability; and sound institutions and proactive policies. The indicator set includes essential quantitative indicators as well as qualitative composite indicators. This updated and revised 'Wasteaware' set of ISWM benchmark indicators is the cumulative result of testing various prototypes in more than 50 cities around the world. This experience confirms the utility of indicators in allowing comprehensive performance measurement and comparison of both 'hard' physical components and 'soft' governance aspects; and in prioritising 'next steps' in developing a city's solid waste management system, by identifying both local strengths that can be built on and weak points to be addressed. The Wasteaware ISWM indicators are applicable to a broad range of cities with very different levels of income and solid waste management practices. Their wide application as a standard methodology will help to fill the historical data gap. Copyright © 2014 Elsevier Ltd. All rights reserved.
An automated protocol for performance benchmarking a widefield fluorescence microscope.
Halter, Michael; Bier, Elianna; DeRose, Paul C; Cooksey, Gregory A; Choquette, Steven J; Plant, Anne L; Elliott, John T
2014-11-01
Widefield fluorescence microscopy is a highly used tool for visually assessing biological samples and for quantifying cell responses. Despite its widespread use in high content analysis and other imaging applications, few published methods exist for evaluating and benchmarking the analytical performance of a microscope. Easy-to-use benchmarking methods would facilitate the use of fluorescence imaging as a quantitative analytical tool in research applications, and would aid the determination of instrumental method validation for commercial product development applications. We describe and evaluate an automated method to characterize a fluorescence imaging system's performance by benchmarking the detection threshold, saturation, and linear dynamic range to a reference material. The benchmarking procedure is demonstrated using two different materials as the reference material, uranyl-ion-doped glass and Schott 475 GG filter glass. Both are suitable candidate reference materials that are homogeneously fluorescent and highly photostable, and the Schott 475 GG filter glass is currently commercially available. In addition to benchmarking the analytical performance, we also demonstrate that the reference materials provide for accurate day to day intensity calibration. Published 2014 Wiley Periodicals Inc. This article is a US government work and, as such, is in the public domain in the United States of America.
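As a rough sketch of how a linear dynamic range might be extracted from such a benchmark (fit the low-signal response, then find where the measured response departs from the fit; the tolerance and anchoring choices below are assumptions, not the published protocol):

```python
import numpy as np

def linear_dynamic_range(exposure, intensity, tol=0.05):
    """Return the exposure range over which the response stays linear.

    Fits a line to the lowest quartile of exposures (assumed linear) and
    reports the contiguous range whose relative residuals stay within tol.
    """
    exposure, intensity = np.asarray(exposure), np.asarray(intensity)
    n0 = max(2, len(exposure) // 4)
    slope, intercept = np.polyfit(exposure[:n0], intensity[:n0], 1)
    predicted = slope * exposure + intercept
    ok = np.abs(intensity - predicted) <= tol * np.abs(predicted)
    end = len(ok) if ok.all() else int(np.argmin(ok))  # first departure
    end = max(end, 1)
    return exposure[0], exposure[end - 1]
```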
Puton, Tomasz; Kozlowski, Lukasz P.; Rother, Kristian M.; Bujnicki, Janusz M.
2013-01-01
We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks. PMID:23435231
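Benchmarks of this kind typically score predictions by comparing predicted and reference base pairs; a minimal version of the two standard measures (CompaRNA's exact definitions may differ in detail):

```python
def pair_accuracy(reference, predicted):
    """Sensitivity and PPV over base pairs, given as (i, j) tuples, i < j."""
    ref, pred = set(reference), set(predicted)
    tp = len(ref & pred)                 # correctly predicted pairs
    sensitivity = tp / len(ref) if ref else 0.0
    ppv = tp / len(pred) if pred else 0.0
    return sensitivity, ppv

ref = {(0, 20), (1, 19), (2, 18)}
pred = {(0, 20), (1, 19), (3, 17)}
print(pair_accuracy(ref, pred))          # (0.667, 0.667), approximately
```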
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, Azizan; Lasternas, Bertrand; Alschuler, Elena
The American Recovery and Reinvestment Act stimulus funding of 2009 for smart grid projects resulted in the tripling of smart-meter deployment. In 2012, the Green Button initiative provided utility customers with access to their real-time energy usage. The availability of finely granular data provides an enormous potential for energy data analytics and energy benchmarking. The sheer volume of time-series utility data from a large number of buildings also poses challenges in data collection, quality control, and database management for rigorous and meaningful analyses. In this paper, we will describe a building portfolio-level data analytics tool for operational optimization, business investment, and policy assessment using 15-minute to monthly interval utility data. The analytics tool is developed on top of the U.S. Department of Energy's Standard Energy Efficiency Data (SEED) platform, an open source software application that manages energy performance data of large groups of buildings. To support the significantly large volume of granular interval data, we integrated a parallel time-series database with the existing relational database. The time-series database improves on the current utility data input, focusing on real-time data collection, storage, analytics and data quality control. The fully integrated data platform supports APIs for utility apps development by third-party software developers. These apps will provide actionable intelligence for building owners and facilities managers. Unlike a commercial system, this platform is an open source platform funded by the U.S. Government, accessible to the public, researchers and other developers, to support initiatives in reducing building energy consumption.
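The paper's own ingestion pipeline is not reproduced here; as a hedged sketch of the kind of quality control that finely granular interval data requires before benchmarking (a strict time grid, physical-range checks, and roll-up to coarser intervals), using pandas:

```python
import pandas as pd

def qc_interval_data(series):
    """Basic QC for 15-minute interval meter data (illustrative only).

    series: pd.Series of kWh readings indexed by timestamp.
    Returns hourly means plus a flag series marking gaps and bad reads.
    """
    grid = pd.date_range(series.index.min(), series.index.max(), freq="15min")
    s = series.reindex(grid)          # missing intervals become NaN
    s = s.mask(s < 0)                 # drop non-physical negative reads
    flags = s.isna()                  # gaps and rejected values, for review
    hourly = s.resample("60min").mean()
    return hourly, flags
```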
NASA Astrophysics Data System (ADS)
Moon, Hongsik
What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited by the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation undertaken to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared on performance using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.
MIPS bacterial genomes functional annotation benchmark dataset.
Tetko, Igor V; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Fobo, Gisela; Ruepp, Andreas; Antonov, Alexey V; Surmeli, Dimitrij; Mewes, Hans-Werner
2005-05-15
Any development of new methods for automatic functional annotation of proteins according to their sequences requires high-quality benchmark data as well as tedious preparatory work to generate the sequence parameters required as input for the machine learning methods. Different program settings and incompatible protocols make comparison of the analyzed methods difficult. The MIPS Bacterial Functional Annotation Benchmark dataset (MIPS-BFAB) is a new, high-quality resource comprising four bacterial genomes manually annotated according to the MIPS functional catalogue (FunCat). The resource includes precalculated sequence parameters, such as sequence similarity scores, InterPro domain composition, and other parameters that can be used to develop and benchmark methods for functional annotation of bacterial protein sequences. These data are provided in XML format and can be used by scientists who are not necessarily experts in genome annotation. BFAB is available at http://mips.gsf.de/proj/bfab
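A hypothetical reader for a BFAB-style XML record is sketched below; the element names are invented for illustration and are not the actual BFAB schema:

```python
# Hypothetical parser for a BFAB-like XML record (element names invented),
# turning precalculated sequence parameters into a feature dictionary that
# could feed a machine learning method for functional annotation.
import xml.etree.ElementTree as ET

doc = """<protein id="b0001">
  <funcat>01.05.25</funcat>
  <interpro>IPR000001</interpro>
  <similarity target="b0002" score="132.5"/>
</protein>"""

root = ET.fromstring(doc)
features = {
    "funcat": root.findtext("funcat"),
    "domains": [d.text for d in root.findall("interpro")],
    "sim_scores": [float(s.get("score")) for s in root.findall("similarity")],
}
print(features)
```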
NASA Astrophysics Data System (ADS)
Trindade, B. C.; Reed, P. M.
2017-12-01
Growing access to and the reduced cost of computing power in recent years have promoted rapid development and application of multi-objective water supply portfolio planning. As this trend continues, there is a pressing need for flexible risk-based simulation frameworks and improved algorithm benchmarking for emerging classes of water supply planning and management problems. This work contributes the Water Utilities Management and Planning (WUMP) model: a generalizable and open source simulation framework designed to capture how water utilities can minimize operational and financial risks by regionally coordinating planning and management choices, i.e., making more efficient and coordinated use of restrictions, water transfers, and financial hedging combined with possible construction of new infrastructure. We introduce the WUMP simulation framework as part of a new multi-objective benchmark problem for the planning and management of regionally integrated water utility companies. In this problem, a group of fictitious water utilities seek to balance the use of the mentioned reliability-driven actions (e.g., restrictions, water transfers, and infrastructure pathways) against their inherent financial risks. Several traits make this problem ideal as a benchmark, namely the presence of (1) strong non-linearities and discontinuities in the Pareto front, caused by the step-wise nature of the decision-making formulation and by the abrupt addition of storage through infrastructure construction, (2) noise due to the stochastic nature of the streamflows and water demands, and (3) non-separability resulting from the cooperative formulation of the problem, in which decisions made by one stakeholder may substantially impact others. Both the open source WUMP simulation framework and its demonstration in a challenging benchmarking example hold value for promoting broader advances in urban water supply portfolio planning for regions confronting change.
A benchmark for subduction zone modeling
NASA Astrophysics Data System (ADS)
van Keken, P.; King, S.; Peacock, S.
2003-04-01
Our understanding of subduction zones hinges critically on the ability to discern their thermal structure and dynamics. Computational modeling has become an essential complementary approach to observational and experimental studies. The accurate modeling of subduction zones is challenging due to the unique geometry, complicated rheological description, and influence of fluid and melt formation. The complicated physics causes problems for the accurate numerical solution of the governing equations. As a consequence, it is essential for the subduction zone community to be able to evaluate the abilities and limitations of various modeling approaches. The participants of a workshop on the modeling of subduction zones, held at the University of Michigan at Ann Arbor, MI, USA in 2002, formulated a number of case studies to be developed into a benchmark similar to previous mantle convection benchmarks (Blankenbach et al., 1989; Busse et al., 1991; Van Keken et al., 1997). Our initial benchmark focuses on the dynamics of the mantle wedge and investigates three different rheologies: constant viscosity, diffusion creep, and dislocation creep. In addition, we investigate the ability of codes to accurately model dynamic pressure and advection-dominated flows. Proceedings of the workshop and the formulation of the benchmark are available at www.geo.lsa.umich.edu/~keken/subduction02.html. We strongly encourage interested research groups to participate in this benchmark. At Nice 2003 we will provide an update and a first set of benchmark results. Interested researchers are encouraged to contact one of the authors for further details.
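For the constant-viscosity wedge case, an analytic Batchelor corner-flow solution exists and is the natural reference; schematically, the Stokes stream function has the form

$$ \psi(r,\theta) = r \left( A\sin\theta + B\cos\theta + C\,\theta\sin\theta + D\,\theta\cos\theta \right), $$

with the constants A-D set by the wedge angle and the imposed slab velocity. This is a standard textbook result quoted here for orientation, not taken from the benchmark specification itself.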
A novel all-fiber optic flow cytometer technology for Point-of-Care and Remote Environments
NASA Astrophysics Data System (ADS)
Mermut, Ozzy
Traditional flow cytometry designs tend to be bulky systems with a complex optical-fluidic sub-system and often require trained personnel for operation. This makes them difficult to readily translate to remote-site testing applications. A new compact and portable fiber-optic flow cell (FOFC) technology has been developed at INO. We designed and engineered a specialty optical fiber through which a square hole is transversally bored by laser micromachining. A capillary is fitted into that hole to flow analyte within the fiber's square cross-section for detection and counting. With demonstrated performance benchmarks potentially comparable to commercial flow cytometers, our FOFC provides several advantages compared to classic free-space configurations, e.g., sheathless flow, low cost, a reduced number of optical components, no need for alignment (performed during fabrication only), ease of use, miniaturization, portability, and robustness. This sheathless configuration, based on a fiber-optic flow module, renders this cytometer amenable to space-grade microgravity environments. We present our recent results for an all-fiber approach to achieve a miniature FOFC to translate flow cytometry from the bench to a portable, point-of-care device for deployment in remote settings. Our unique fiber approach provides the capability to illuminate a large surface with a uniform intensity distribution, independently of the initial beam shape originating from the light source, and without loss of optical power. The CVs and sensitivities are measured and compared to industry benchmarks. Finally, the integration of LEDs enables several advantages in cost, compactness, and wavelength availability.
NASA Astrophysics Data System (ADS)
Heimann, M.; Prentice, I. C.; Foley, J.; Hickler, T.; Kicklighter, D. W.; McGuire, A. D.; Melillo, J. M.; Ramankutty, N.; Sitch, S.
2001-12-01
Models of biophysical and biogeochemical processes are being used, either offline or in coupled climate-carbon cycle (C4) models, to assess climate- and CO2-induced feedbacks on atmospheric CO2. Observations of atmospheric CO2 concentration, and supplementary tracers including O2 concentrations and isotopes, offer unique opportunities to evaluate the large-scale behaviour of models. Global patterns, temporal trends, and interannual variability of the atmospheric CO2 concentration and its seasonal cycle provide crucial benchmarks for simulations of regionally integrated net ecosystem exchange; flux measurements by eddy correlation allow a far more demanding model test at the ecosystem scale than conventional indicators, such as measurements of annual net primary production; and large-scale manipulations, such as the Duke Forest Free Air Carbon Enrichment (FACE) experiment, give a standard to evaluate modelled phenomena such as ecosystem-level CO2 fertilization. Model runs including historical changes of CO2, climate, and land use allow comparison with regional-scale monthly CO2 balances as inferred from atmospheric measurements. Such comparisons are providing grounds for some confidence in current models, while pointing to processes that may still be inadequately treated. Current plans focus on (1) continued benchmarking of land process models against flux measurements across ecosystems and experimental findings on the ecosystem-level effects of enhanced CO2, reactive N inputs, and temperature; (2) improved representation of land use, forest management, and crop metabolism in models; and (3) a strategy for the evaluation of C4 models in a historical observational context.
Delta-ray Production in MCNP 6.2.0
Anderson, Casey Alan; McKinney, Gregg Walter; Tutt, James Robert; ...
2017-10-26
Secondary electrons in the form of delta-rays, also referred to as knock-on electrons, have been a feature of MCNP for electron and positron transport for over 20 years. While MCNP6 now includes transport for a suite of heavy ions and charged particles from its integration with MCNPX, the production of delta-rays was still limited to electron and positron transport. In the newest release of MCNP6, version 6.2.0, delta-ray production has been extended to all energetic charged particles. The basis of this production is the analytical formulation from Rossi and ICRU Report 37. This paper discusses the MCNP6 heavy charged-particle implementation and provides production results for several benchmark/test problems.
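For orientation, a commonly quoted form of the knock-on electron (delta-ray) production spectrum for a heavy charged particle of charge z and velocity beta traversing a medium (cf. Rossi's formulation as summarized in PDG reviews; the exact MCNP6 implementation should be taken from the paper itself) is

$$ \frac{d^2 N}{dT\,dx} = \frac{1}{2} K z^{2} \frac{Z}{A} \frac{1}{\beta^{2}} \frac{F(T)}{T^{2}}, \qquad F(T) \approx 1 - \beta^{2} \frac{T}{T_{\max}}, $$

where T is the delta-ray kinetic energy, T_max is the maximum energy transfer in a single collision, and K = 4 pi N_A r_e^2 m_e c^2 ~ 0.307 MeV mol^-1 cm^2.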
Guidelines for the undergraduate psychology major: Version 2.0.
2016-01-01
The APA Guidelines for the Undergraduate Psychology Major: Version 2.0 (henceforth Guidelines 2.0; APA, 2013) represents a national effort to describe and develop high-quality undergraduate programs in psychology. The task force charged with the revision of the original guidelines for the undergraduate major examined the success of the document's implementation and made changes to reflect emerging best practices and to integrate psychology's work with benchmarking scholarship in higher education. Guidelines 2.0 abandoned the original distinction drawn between psychology-focused skills and psychology skills that enhance liberal arts development. Instead, Guidelines 2.0 describes five inclusive goals for the undergraduate psychology major and two developmental levels of student learning outcomes. Suggestions for assessment planning are provided for each of the five learning goals. (c) 2016 APA, all rights reserved.
Aagaard, Brad T.; Knepley, M.G.; Williams, C.A.
2013-01-01
We employ a domain decomposition approach with Lagrange multipliers to implement fault slip in a finite-element code, PyLith, for use in both quasi-static and dynamic crustal deformation applications. This integrated approach to solving both quasi-static and dynamic simulations leverages common finite-element data structures and implementations of various boundary conditions, discretization schemes, and bulk and fault rheologies. We have developed a custom preconditioner for the Lagrange multiplier portion of the system of equations that provides excellent scalability with problem size compared to conventional additive Schwarz methods. We demonstrate application of this approach using benchmarks for both quasi-static viscoelastic deformation and dynamic spontaneous rupture propagation that verify the numerical implementation in PyLith.
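Schematically, enforcing fault slip d through Lagrange multipliers lambda yields a saddle-point system of the form below (a generic sketch of the structure, not PyLith's exact matrices):

$$ \begin{pmatrix} K & L^{T} \\ L & 0 \end{pmatrix} \begin{pmatrix} u \\ \lambda \end{pmatrix} = \begin{pmatrix} f \\ d \end{pmatrix} $$

The zero block is what makes conventional additive Schwarz preconditioners struggle and motivates a custom preconditioner targeting the Lagrange multiplier (Schur complement) portion of the system.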
A modular case-mix classification system for medical rehabilitation illustrated.
Stineman, M G; Granger, C V
1997-01-01
The authors present a modular set of patient classification systems designed for medical rehabilitation that predict resource use and outcomes for clinically similar groups of individuals. The systems, based on the Functional Independence Measure, are referred to as Function-Related Groups (FIM-FRGs). Using data from 23,637 lower extremity fracture patients from 458 inpatient medical rehabilitation facilities, 1995 benchmarks are provided and illustrated for length of stay, functional outcome, and discharge to home and skilled nursing facilities (SNFs). The FIM-FRG modules may be used in parallel to study interactions between resource use and quality and could ultimately yield an integrated strategy for payment and outcomes measurement. This could position the rehabilitation community to take a pioneering role in the application of outcomes-based clinical indicators.
Integrating Empirical-Modeling Approaches to Improve Understanding of Terrestrial Ecology Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarthy, Heather; Luo, Yiqi; Wullschleger, Stan D
Recent decades have seen tremendous increases in the quantity of empirical ecological data collected by individual investigators, as well as through research networks such as FLUXNET (Baldocchi et al., 2001). At the same time, advances in computer technology have facilitated the development and implementation of large and complex land surface and ecological process models. Separately, each of these information streams provides useful, but imperfect, information about ecosystems. To develop the best scientific understanding of ecological processes, and most accurately predict how ecosystems may cope with global change, integration of empirical and modeling approaches is necessary. However, true integration, in which models inform empirical research, which in turn informs models (Fig. 1), is not yet common in ecological research (Luo et al., 2011). The goal of this workshop, sponsored by the Department of Energy, Office of Science, Biological and Environmental Research (BER) program, was to bring together members of the empirical and modeling communities to exchange ideas and discuss scientific practices for increasing empirical-model integration, and to explore infrastructure and/or virtual network needs for institutionalizing empirical-model integration (Yiqi Luo, University of Oklahoma, Norman, OK, USA). The workshop included presentations and small group discussions that covered topics ranging from model-assisted experimental design to data-driven modeling (e.g., benchmarking and data assimilation) to infrastructure needs for empirical-model integration. Ultimately, three central questions emerged. How can models be used to inform experiments and observations? How can experimental and observational results be used to inform models? What are effective strategies to promote empirical-model integration?
Evaluating Biology Achievement Scores in an ICT Integrated PBL Environment
ERIC Educational Resources Information Center
Osman, Kamisah; Kaur, Simranjeet Judge
2014-01-01
Students' achievement in Biology is often regarded as a benchmark for evaluating the mode of teaching and learning in higher education. Problem-based learning (PBL) is an approach in which students solve a problem in collaborative groups. Eighty samples were involved in this study. The samples were divided into three groups: ICT…
ERIC Educational Resources Information Center
de los Ríos-Carmenado, I.; Sastre-Merino, Susana; Fernández Jiménez, Consuelo; Núñez del Río, Mª Cristina; Reyes Pozo, Encarnación; García Arjona, Noemi
2016-01-01
The European Higher Education Area (EHEA) represents a challenge to university teachers to adapt their assessment systems, directing them towards continuous assessment. The integration of competence-based learning as an educational benchmark has also led to a perspective more focused on student and with complex learning situations closer to…
Drafting a Customized Tech Plan: Throw out Yesterday's Creaky Model
ERIC Educational Resources Information Center
Solomon, Gwen
2004-01-01
Today, states and districts are zeroing in on standards-based learning and high-stakes test scores--even benchmarking results in advance of the school year. Technology planning is--or should be--a key part of any such learning design. With careful planning for integration, districts can be helped to more successfully address standards and, in…
Dilemmas in Examining Understanding of Nature of Science in Vietnam
ERIC Educational Resources Information Center
Thao-Do, Thi Phuong; Yuenyong, Chokchai
2017-01-01
Scholars have shown that nature of science (NOS) has made certain contributions to science teaching and learning. Nonetheless, what, how, and how much NOS should be integrated into the science curriculum of each country cannot follow a single benchmark, due to the influence of culture and society. Before employing NOS in a new context, it should be carefully studied. In…
ERIC Educational Resources Information Center
Wall, Andrew; Frost, Robert; Smith, Ryan; Keeling, Richard
2008-01-01
Although datasets such as the Integrated Postsecondary Data System are available as inputs to higher education funding formulas, these datasets can be unreliable, incomplete, or unresponsive to criteria identified by state education officials. State formulas do not always match the state's economic and human capital goals. This article analyzes…
Benchmarks and Reliable DFT Results for Spin Gaps of Small Ligand Fe(II) Complexes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Suhwan; Kim, Min-Cheol; Sim, Eunji
2017-05-01
All-electron fixed-node diffusion Monte Carlo provides benchmark spin gaps for four Fe(II) octahedral complexes. Standard quantum chemical methods (semilocal DFT and CCSD(T)) fail badly for the energy difference between their high- and low-spin states. Density-corrected DFT is both significantly more accurate and reliable, and it yields a consistent prediction for the Fe-Porphyrin complex.
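Schematically, the quantity being benchmarked is the adiabatic spin gap, and density-corrected DFT (in its common Hartree-Fock-density variant) evaluates the approximate functional on the HF density rather than the self-consistent one:

$$ \Delta E_{\mathrm{HL}} = E_{\mathrm{HS}} - E_{\mathrm{LS}}, \qquad E^{\mathrm{DC\text{-}DFT}} \approx E^{\mathrm{DFT}}[\rho^{\mathrm{HF}}]. $$

This is a generic statement of the approach for orientation; the paper should be consulted for the exact protocol used.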
ERIC Educational Resources Information Center
Shaw-Elgin, Linda; Jackson, Jane; Kurkowski, Bob; Riehl, Lori; Syvertson, Karen; Whitney, Linda
This document outlines the performance standards for visual arts in North Dakota public schools, grades K-12. Four levels of performance are provided for each benchmark by North Dakota educators for K-4, 5-8, and 9-12 grade levels. Level 4 describes advanced proficiency; Level 3, proficiency; Level 2, partial proficiency; and Level 1, novice. Each…
Albert Schuler; Urs Buehlmann
2003-01-01
This paper describes benchmarking activities undertaken to provide a basis for comparing the U.S. wood furniture industry with other nations that have a globally competitive furniture manufacturing industry. The second part of this paper outlines and discusses strategies that have the potential to help the U.S. furniture industry survive and thrive in a global business...
A benchmark study of the sea-level equation in GIA modelling
NASA Astrophysics Data System (ADS)
Martinec, Zdenek; Klemann, Volker; van der Wal, Wouter; Riva, Riccardo; Spada, Giorgio; Simon, Karen; Blank, Bas; Sun, Yu; Melini, Daniele; James, Tom; Bradley, Sarah
2017-04-01
The sea-level load in glacial isostatic adjustment (GIA) is described by the so-called sea-level equation (SLE), which represents the mass redistribution between ice sheets and oceans on a deforming earth. Various levels of complexity of the SLE have been proposed in the past, ranging from a simple mean global sea level (the so-called eustatic sea level) to a load with a deforming ocean bottom, migrating coastlines, and a changing shape of the geoid. Several approaches to solving the SLE have been derived, from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, there has been no systematic intercomparison among the solvers through which the methods may be validated. The goal of this paper is to present a series of benchmark experiments designed for testing and comparing numerical implementations of the SLE. Our approach starts with simple load cases, even though the benchmark will not result in GIA predictions for a realistic loading scenario. In the longer term we aim for a benchmark with a realistic loading scenario, and also for benchmark solutions with rotational feedback. The current benchmark uses an earth model for which Love numbers have been computed and benchmarked in Spada et al. (2011). In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found can often be attributed to the different approximations inherent in the various algorithms. Literature: G. Spada, V. R. Barletta, V. Klemann, R. E. M. Riva, Z. Martinec, P. Gasperini, B. Lund, D. Wolf, L. L. A. Vermeersen, and M. A. King, 2011. A benchmark study for glacial isostatic adjustment codes. Geophys. J. Int. 185: 106-132, doi:10.1111/j.1365-
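In one common compact statement (conventions vary between the participating solvers), the SLE determines the sea-level change S on the ocean function O from the geoid change N and the vertical bedrock displacement U, with a spatially constant term c(t) enforcing conservation of the combined ice-ocean mass:

$$ S = \mathcal{O}\,(N - U + c), \qquad c(t) = -\frac{m_I(t)}{\rho_w A_O} - \frac{1}{A_O}\int_{\mathcal{O}} (N - U)\, dA, $$

where m_I is the ice mass change, rho_w the water density, and A_O the ocean area; migrating coastlines and rotational feedback add further terms beyond this sketch.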
Reactor Pressure Vessel Fracture Analysis Capabilities in Grizzly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spencer, Benjamin; Backman, Marie; Chakraborty, Pritam
2015-03-01
Efforts have been underway to develop fracture mechanics capabilities in the Grizzly code to enable it to be used to perform deterministic fracture assessments of degraded reactor pressure vessels (RPVs). Development in prior years resulted in a capability to calculate J-integrals. For this application, these are used to calculate stress intensity factors for cracks, which in turn feed deterministic linear elastic fracture mechanics (LEFM) assessments of fracture in degraded RPVs. The J-integral can only be used to evaluate stress intensity factors for axis-aligned flaws, because it can only recover the stress intensity factor for pure Mode I loading. Off-axis flaws are subjected to mixed-mode loading. For this reason, work has continued to expand the set of fracture mechanics capabilities to permit the evaluation of off-axis flaws. This report documents the following work to enhance Grizzly's engineering fracture mechanics capabilities for RPVs: • Interaction integral and T-stress: To obtain mixed-mode stress intensity factors, a capability to evaluate interaction integrals for 2D or 3D flaws has been developed. A T-stress evaluation capability has been developed to evaluate the constraint at crack tips in 2D or 3D. Initial verification testing of these capabilities is documented here. • Benchmarking for axis-aligned flaws: Grizzly's capabilities to evaluate stress intensity factors for axis-aligned flaws have been benchmarked against calculations for the same conditions in FAVOR. • Off-axis flaw demonstration: The newly developed interaction integral capabilities are demonstrated in an application to calculate the mixed-mode stress intensity factors for off-axis flaws. • Other code enhancements: Other enhancements to the thermomechanics capabilities that relate to the solution of the engineering RPV fracture problem are documented here.
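For reference, the standard LEFM relation behind the interaction-integral approach (quoted from textbook conventions, not necessarily Grizzly's exact implementation): superposing the actual fields (1) with an auxiliary crack-tip field (2) of known stress intensity factors gives, in plane strain,

$$ I^{(1,2)} = \frac{2}{E^{*}}\left(K_{I}^{(1)} K_{I}^{(2)} + K_{II}^{(1)} K_{II}^{(2)}\right), \qquad E^{*} = \frac{E}{1-\nu^{2}}, $$

so choosing a pure Mode I auxiliary state with unit K_I isolates the actual K_I, and a pure Mode II auxiliary state isolates K_II. This decoupling is what makes mixed-mode evaluation of off-axis flaws possible.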
The Star Schema Benchmark and Augmented Fact Table Indexing
NASA Astrophysics Data System (ADS)
O'Neil, Patrick; O'Neil, Elizabeth; Chen, Xuedong; Revilak, Stephen
We provide a benchmark measuring star schema queries retrieving data from a fact table with Where clause column restrictions on dimension tables. Clustering is crucial to performance with modern disk technology, since retrievals with filter factors down to 0.0005 are now performed most efficiently by sequential table search rather than by indexed access. DB2’s Multi-Dimensional Clustering (MDC) provides methods to "dice" the fact table along a number of orthogonal "dimensions", but only when these dimensions are columns in the fact table. The diced cells cluster fact rows on several of these "dimensions" at once so queries restricting several such columns can access crucially localized data, with much faster query response. Unfortunately, columns of dimension tables of a star schema are not usually represented in the fact table. In this paper, we show a simple way to adjoin physical copies of dimension columns to the fact table, dicing data to effectively cluster query retrieval, and explain how such dicing can be achieved on database products other than DB2. We provide benchmark measurements to show successful use of this methodology on three commercial database products.
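The core idea can be sketched in a few lines of SQL, here driven from Python with a hypothetical toy schema: physically copying a dimension column into the fact table lets a star query apply its restriction directly to the (diced) fact table instead of joining through the dimension:

```python
# Toy sketch (invented schema) of adjoining a dimension column to the fact
# table: fact_sales carries a physical copy of dim_date.d_year, so the
# restriction below is answered by a fact-table scan alone, with no join.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dim_date (d_key INTEGER PRIMARY KEY, d_year INTEGER)")
con.execute("""CREATE TABLE fact_sales (
    d_key INTEGER REFERENCES dim_date(d_key),
    revenue REAL,
    d_year INTEGER)""")  # adjoined physical copy of dim_date.d_year

con.execute("INSERT INTO dim_date VALUES (1, 1997), (2, 1998)")
con.execute("INSERT INTO fact_sales VALUES (1, 10.0, 1997), (2, 20.0, 1998)")

# Without the copy, this predicate would need a join through dim_date;
# with it, the restriction applies directly to the clustered fact table.
rows = con.execute(
    "SELECT SUM(revenue) FROM fact_sales WHERE d_year = 1997").fetchall()
print(rows)
```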
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.
A search for heavy narrow resonances decaying into four-lepton final states from cascade decays of a Z' boson has been performed using proton-proton collision data at $\sqrt{s}$ = 8 TeV collected by the CMS experiment, corresponding to an integrated luminosity of 19.7 inverse femtobarns. No excess of events over the standard model background expectation is observed. Upper limits for a benchmark model on the product of cross section and branching fraction for the production of these heavy narrow resonances are presented. The limit excludes leptophobic Z' bosons with masses below 2.5 TeV within the benchmark model. This is the first result to constrain a leptophobic Z' resonance in the four-lepton channel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vega, Richard Manuel; Parm, Edward J.; Griffin, Patrick J.
2015-07-01
This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community's ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the Polyethylene-Lead-Graphite (PLG) bucket, a reference neutron benchmark field. The field is described and an "a priori" calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this "a priori" spectrum. The results of 37 integral dosimetry measurements in the neutron field are reported.
Analysis of b quark pair production signal from neutral 2HDM Higgs bosons at future linear colliders
NASA Astrophysics Data System (ADS)
Hashemi, Majid; MahdaviKhorrami, Mostafa
2018-06-01
In this paper, the b quark pair production events are analyzed as a source of neutral Higgs bosons of the two Higgs doublet model type I at linear colliders. The production mechanism is $e^{+}e^{-} \to Z^{(*)} \to HA \to b\bar{b}b\bar{b}$, assuming a fully hadronic final state. The aim of the analysis is to identify both the CP-even and CP-odd Higgs bosons in different benchmark points accommodating moderate boson masses. Due to the pair production of Higgs bosons, the analysis is most suitable for a linear collider operating at $\sqrt{s} = 1$ TeV. Results show that in the selected benchmark points, signal peaks are observable in the b-jet pair invariant mass distributions at an integrated luminosity of 500 fb$^{-1}$.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnold H. Kritz
PTRANSP, which is the predictive version of the TRANSP code, was developed in a collaborative effort involving the Princeton Plasma Physics Laboratory, General Atomics Corporation, Lawrence Livermore National Laboratory, and Lehigh University. The PTRANSP/TRANSP suite of codes is the premier integrated tokamak modeling software in the United States. A production service for PTRANSP/TRANSP simulations is maintained at the Princeton Plasma Physics Laboratory; the server has a simple command line client interface and is subscribed to by about 100 researchers from tokamak projects in the US, Europe, and Asia. This service produced nearly 13,000 PTRANSP/TRANSP simulations in the four-year period FY 2005 through FY 2008. Major archives of TRANSP results are maintained at PPPL, MIT, General Atomics, and JET. Recent utilization, counting experimental analysis simulations as well as predictive simulations, more than doubled from slightly over 2,000 simulations per year in FY 2005 and FY 2006 to over 4,300 simulations per year in FY 2007 and FY 2008. PTRANSP predictive simulations applied to ITER increased eightfold, from 30 simulations per year in FY 2005 and FY 2006 to 240 simulations per year in FY 2007 and FY 2008, accounting for more than half of combined PTRANSP/TRANSP service CPU resource utilization in FY 2008. PTRANSP studies focused on ITER played a key role in journal articles. Examples of validation studies carried out for momentum transport in PTRANSP simulations were presented at the 2008 IAEA conference. The increase in the number of PTRANSP simulations has continued (more than 7,000 TRANSP/PTRANSP simulations in 2010), and results of PTRANSP simulations appear in conference proceedings, for example the 2010 IAEA conference, and in peer-reviewed papers. PTRANSP provides a bridge to the Fusion Simulation Program (FSP) and to the future of integrated modeling. Through years of widespread usage, each of the many parts of the PTRANSP suite of codes has been thoroughly validated against experimental data and benchmarked against other codes. At the same time, architectural modernizations are improving the modularity of the PTRANSP code base. The NUBEAM neutral beam and fusion products fast ion model, the Plasma State data repository (developed originally in the SWIM SciDAC project and adapted for use in PTRANSP), and other components are already shared with the SWIM, FACETS, and CPES SciDAC FSP prototype projects. Thus, the PTRANSP code is already serving as a bridge between our present integrated modeling capability and future capability. As the Fusion Simulation Program builds toward the facility currently available in the PTRANSP suite of codes, early versions of the FSP core plasma model will need to be benchmarked against PTRANSP simulations. This will be necessary to build user confidence in FSP, but this benchmarking can only be done if PTRANSP itself is maintained and developed.
The Requirements and Design of the Rapid Prototyping Capabilities System
NASA Astrophysics Data System (ADS)
Haupt, T. A.; Moorhead, R.; O'Hara, C.; Anantharaj, V.
2006-12-01
The Rapid Prototyping Capabilities (RPC) system will provide the capability to rapidly evaluate innovative methods of linking science observations. To this end, the RPC will provide the capability to integrate the software components and tools needed to evaluate the use of a wide variety of current and future NASA sensors, numerical models, research results, model outputs, and knowledge, collectively referred to as "resources". It is assumed that the resources are geographically distributed, and thus the RPC will provide support for the location transparency of the resources. The RPC system must provide support for: (1) discovery, semantic understanding, and secure access and transport mechanisms for data products available from the known data providers; (2) data assimilation and geo-processing tools for all data transformations needed to match given data products to the model input requirements; (3) model management, including catalogs of models and model metadata, and mechanisms for creating environments for model execution; and (4) tools for model output analysis and model benchmarking. The challenge involves developing a cyberinfrastructure: a coordinated aggregate of software, hardware, and other technologies necessary to facilitate RPC experiments, together with the human expertise needed to provide an integrated, "end-to-end" platform supporting the RPC objectives. Such aggregation is to be achieved through a horizontal integration of loosely coupled services. The cyberinfrastructure comprises several software layers. At the bottom, the Grid fabric encompasses network protocols, optical networks, computational resources, storage devices, and sensors. At the top, applications use workload managers to coordinate their access to physical resources. Applications are not tightly bound to a single physical resource. Instead, they bind dynamically to resources (i.e., they are provisioned) via a common grid infrastructure layer. For the RPC system, the cyberinfrastructure must support organizing computations (or "data transformations" in general) into complex workflows with resource discovery, automatic resource allocation, monitoring, and provenance preservation, as well as aggregating heterogeneous, distributed data into knowledge databases. Such service orchestration is the responsibility of the "collective services" layer. For RPC, this layer will be based on the Java Business Integration (JBI, JSR-208) specification, a standards-based integration platform that combines messaging, web services, data transformation, and intelligent routing to reliably connect and coordinate the interaction of significant numbers of diverse applications (plug-in components) across organizational boundaries. The JBI concept is a new approach to integration that can provide the underpinnings for a loosely coupled, highly distributed integration network that can scale beyond the limits of currently used hub-and-spoke brokers. This presentation discusses the requirements, design, and early prototype of the NASA-sponsored RPC system under development at Mississippi State University, demonstrating the integration of data provisioning mechanisms, data transformation tools, and computational models into a single interoperable system enabling rapid execution of RPC experiments.
Benchmarking Controlled Trial--a novel concept covering all observational effectiveness studies.
Malmivaara, Antti
2015-06-01
The Benchmarking Controlled Trial (BCT) is a novel concept which covers all observational studies aiming to assess effectiveness. BCTs provide evidence of the comparative effectiveness between health service providers, and of effectiveness due to particular features of the health and social care systems. BCTs complement randomized controlled trials (RCTs) as the sources of evidence on effectiveness. This paper presents a definition of the BCT; compares the position of BCTs in assessing effectiveness with that of RCTs; presents a checklist for assessing methodological validity of a BCT; and pilot-tests the checklist with BCTs published recently in the leading medical journals.
TerraFERMA: Harnessing Advanced Computational Libraries in Earth Science
NASA Astrophysics Data System (ADS)
Wilson, C. R.; Spiegelman, M.; van Keken, P.
2012-12-01
Many important problems in Earth sciences can be described by non-linear coupled systems of partial differential equations. These "multi-physics" problems include thermo-chemical convection in Earth and planetary interiors, interactions of fluids and magmas with the Earth's mantle and crust, and coupled flow of water and ice. These problems are of interest to a large community of researchers but are complicated to model and understand. Much of this complexity stems from the nature of multi-physics, where small changes in the coupling between variables or constitutive relations can lead to radical changes in behavior, which in turn affect critical computational choices such as discretizations, solvers, and preconditioners. Making progress in understanding such coupled systems requires a computational framework where multi-physics problems can be described at a high level while maintaining the flexibility to easily modify the solution algorithm. Fortunately, recent advances in computational science provide a basis for implementing such a framework. Here we present the Transparent Finite Element Rapid Model Assembler (TerraFERMA), which leverages several advanced open-source libraries for core functionality. FEniCS (fenicsproject.org) provides a high-level language for describing the weak forms of coupled systems of equations and an automatic code generator that produces finite element assembly code. PETSc (www.mcs.anl.gov/petsc) provides a wide range of scalable linear and non-linear solvers that can be composed into effective multi-physics preconditioners. SPuD (amcg.ese.ic.ac.uk/Spud) is an application-neutral options system that provides both human- and machine-readable interfaces based on a single xml schema. Our software integrates these libraries and provides the user with a framework for exploring multi-physics problems. A single options file fully describes the problem, including all equations, coefficients, and solver options. Custom compiled applications are generated from this file but share an infrastructure for services common to all models, e.g. diagnostics, checkpointing, and global non-linear convergence monitoring. This maximizes code reusability, reliability, and longevity, ensuring that scientific results and the methods used to acquire them are transparent and reproducible. TerraFERMA has been tested against many published geodynamic benchmarks, including 2D/3D thermal convection problems, the subduction zone benchmarks, and benchmarks for magmatic solitary waves. It is currently being used in the investigation of reactive cracking phenomena with applications to carbon sequestration, but we will principally discuss its use in modeling the migration of fluids in subduction zones. Subduction zones require an understanding of the highly nonlinear interactions of fluids with solids and thus provide an excellent scientific driver for the development of multi-physics software.
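To give the flavor of the high-level weak-form language referred to above, here is a minimal legacy-FEniCS (dolfin) Poisson example; TerraFERMA composes much richer coupled systems through the same machinery, so this is illustrative rather than TerraFERMA code:

```python
# Minimal legacy-FEniCS (dolfin) example: declare a weak form symbolically
# and let the library generate and run the finite element assembly code.
from fenics import *

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "P", 1)

u = TrialFunction(V)
v = TestFunction(V)
f = Constant(1.0)

a = dot(grad(u), grad(v)) * dx   # bilinear form of the Poisson problem
L = f * v * dx                   # linear form (source term)

bc = DirichletBC(V, Constant(0.0), "on_boundary")
u_h = Function(V)
solve(a == L, u_h, bc)           # PETSc solvers run underneath
```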
Integrating primary care with occupational health services: a success story.
Griffith, Karen; Strasser, Patricia B
2010-12-01
This article describes the process used by a large U.S. manufacturing company to successfully integrate full-service primary care centers at two locations. The company believed that by providing employees with health promotion and disease prevention services, including screening, early diagnosis, and uncomplicated illness treatment, its health care costs could be significantly reduced while saving employees money. To accurately demonstrate the cost-effectiveness of adding primary care to existing occupational health services, a thorough financial analysis projected the return on investment (ROI) of the program. Decisions were made about center size, the scope of services, and staffing. A critical part of the ROI analysis involved evaluating employee health claim data to identify the actual cost of health care services for each center and the projected costs if the services were provided on-site. The pilot initiative included constructing two on-site health center facilities staffed with primary care physicians, nurse practitioners, physical therapists, and other health care professionals. Key outcome metrics from the pilot clinics exceeded goals in three of four categories. In addition, clinic use after 12 months far exceeded benchmarks for similar clinics. Most importantly, the pilot clinics were operating with a positive cash flow within the first year and demonstrated an increasingly positive ROI. Copyright 2010, SLACK Incorporated.
Adaptive Modulation for DFIG and STATCOM With High-Voltage Direct Current Transmission.
Tang, Yufei; He, Haibo; Ni, Zhen; Wen, Jinyu; Huang, Tingwen
2016-08-01
This paper develops an adaptive modulation approach for power system control based on the approximate/adaptive dynamic programming method, namely goal representation heuristic dynamic programming (GrHDP). In particular, we focus on the fault recovery problem of a doubly fed induction generator (DFIG)-based wind farm and a static synchronous compensator (STATCOM) with high-voltage direct current (HVDC) transmission. In this design, the online GrHDP-based controller provides three adaptive supplementary control signals to the DFIG controller, STATCOM controller, and HVDC rectifier controller, respectively. The mechanism is to observe the system states and their derivatives and then provide supplementary control to the plant according to the utility function. With the GrHDP design, the controller can adaptively develop an internal goal representation signal according to the observed power system states, and therefore achieve more effective learning and modulation. Our control approach is validated on a wind-power-integrated benchmark system with two areas connected by HVDC transmission lines. Compared with classical direct HDP and proportional-integral control, our GrHDP approach demonstrates improved transient stability under system faults. Moreover, experiments under different system operating conditions with signal transmission delays are also carried out to further verify the effectiveness and robustness of the proposed approach.
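A structural sketch of the three-network GrHDP arrangement described above (not the authors' implementation): a goal network shapes an internal reinforcement signal, a critic values the state conditioned on that signal, and an actor emits the supplementary control; training would follow standard HDP-style gradient updates, which are omitted here:

```python
# Schematic GrHDP forward pass with three tiny networks (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_state, n_hidden = 6, 8

def net(w1, w2, x):
    """Tiny two-layer tanh network."""
    return np.tanh(w2 @ np.tanh(w1 @ x))

w_goal = [rng.normal(size=(n_hidden, n_state)), rng.normal(size=(1, n_hidden))]
w_critic = [rng.normal(size=(n_hidden, n_state + 1)), rng.normal(size=(1, n_hidden))]
w_actor = [rng.normal(size=(n_hidden, n_state)), rng.normal(size=(1, n_hidden))]

x = rng.normal(size=n_state)                 # observed power-system states
s = net(*w_goal, x)                          # internal goal/reinforcement signal
J = net(*w_critic, np.concatenate([x, s]))   # value estimate conditioned on s
u = net(*w_actor, x)                         # supplementary control signal
print(s, J, u)
```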
Griffin, Brian M.; Larson, Vincent E.
2016-11-25
Microphysical processes, such as the formation, growth, and evaporation of precipitation, interact with variability and covariances (e.g., fluxes) in moisture and heat content. For instance, evaporation of rain may produce cold pools, which in turn may trigger fresh convection and precipitation. These effects are usually omitted or else crudely parameterized at subgrid scales in weather and climate models. A more formal approach is pursued here, based on predictive, horizontally averaged equations for the variances, covariances, and fluxes of moisture and heat content. These higher-order moment equations contain microphysical source terms. The microphysics terms can be integrated analytically, given a suitably simple warm-rain microphysics scheme and an approximate assumption about the multivariate distribution of cloud-related and precipitation-related variables. Performing the integrations provides exact expressions within an idealized context. A large-eddy simulation (LES) of a shallow precipitating cumulus case is performed here, and it indicates that the microphysical effects on (co)variances and fluxes can be large. In some budgets and altitude ranges, they are dominant terms. The analytic expressions for the integrals are implemented in a single-column, higher-order closure model. Interactive single-column simulations agree qualitatively with the LES. The analytic integrations form a parameterization of microphysical effects in their own right, and they also serve as benchmark solutions that can be compared to non-analytic integration methods.
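As a schematic example of the kind of budget meant here (not the paper's exact equations), the horizontally averaged total-water variance equation acquires a covariance-with-microphysics source term,

$$ \frac{\partial\, \overline{r_t'^2}}{\partial t} = \cdots + 2\,\overline{r_t'\, S_{r_t}'}, $$

where S_{r_t} denotes the microphysical source/sink of total water; the paper's contribution is evaluating such covariance terms analytically under the assumed multivariate subgrid distribution.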
Cryo-EM Data Are Superior to Contact and Interface Information in Integrative Modeling.
de Vries, Sjoerd J; Chauvot de Beauchêne, Isaure; Schindler, Christina E M; Zacharias, Martin
2016-02-23
Protein-protein interactions carry out a large variety of essential cellular processes. Cryo-electron microscopy (cryo-EM) is a powerful technique for the modeling of protein-protein interactions at a wide range of resolutions, and recent developments have caused a revolution in the field. At low resolution, cryo-EM maps can drive integrative modeling of the interaction, assembling existing structures into the map. Other experimental techniques can provide information on the interface or on the contacts between the monomers in the complex. This inevitably raises the question regarding which type of data is best suited to drive integrative modeling approaches. Systematic comparison of the prediction accuracy and specificity of the different integrative modeling paradigms is unavailable to date. Here, we compare EM-driven, interface-driven, and contact-driven integrative modeling paradigms. Models were generated for the protein docking benchmark using the ATTRACT docking engine and evaluated using the CAPRI two-star criterion. At 20 Å resolution, EM-driven modeling achieved a success rate of 100%, outperforming the other paradigms even with perfect interface and contact information. Therefore, even very low resolution cryo-EM data is superior in predicting heterodimeric and heterotrimeric protein assemblies. Our study demonstrates that a force field is not necessary, cryo-EM data alone is sufficient to accurately guide the monomers into place. The resulting rigid models successfully identify regions of conformational change, opening up perspectives for targeted flexible remodeling. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
3D J-Integral Capability in Grizzly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spencer, Benjamin; Backman, Marie; Chakraborty, Pritam
2014-09-01
This report summarizes work done to develop a capability to evaluate fracture contour J-integrals in 3D in the Grizzly code. In the current fiscal year, a previously developed 2D implementation of a J-integral evaluation capability has been extended to work in 3D and to include terms due both to mechanically induced strains and to gradients in thermal strains. This capability has been verified against a benchmark solution on a model of a curved crack front in 3D. The thermal term in this integral has been verified against a benchmark problem with a thermal gradient. These developments are part of a larger effort to develop Grizzly as a tool that can be used to predict the evolution of aging processes in nuclear power plant systems, structures, and components, and to assess their capacity after being subjected to those aging processes. The capabilities described here have been developed to enable evaluations of Mode-I stress intensity factors on axis-aligned flaws in reactor pressure vessels. These can be compared with the fracture toughness of the material to determine whether a pre-existing flaw would begin to propagate during a postulated pressurized thermal shock accident. This report includes a demonstration calculation to show how Grizzly is used to perform a deterministic assessment of such flaw propagation in a degraded reactor pressure vessel under pressurized thermal shock conditions. The stress intensity is calculated from J, and the toughness is computed using the fracture master curve and the degraded ductile-to-brittle transition temperature.
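For orientation, the classical 2D contour form of the J-integral and its LEFM link to the Mode-I stress intensity factor in plane strain are

$$ J = \int_{\Gamma}\left( W\,dy - T_i \frac{\partial u_i}{\partial x}\, ds \right), \qquad K_I = \sqrt{J\,E^{*}}, \quad E^{*} = \frac{E}{1-\nu^{2}}, $$

where W is the strain energy density and T_i the traction on the contour Gamma. These are standard textbook relations; the 3D domain-integral form implemented in Grizzly generalizes this and adds the thermal-strain gradient terms described above.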
Research on IoT-based water environment benchmark data acquisition management
NASA Astrophysics Data System (ADS)
Yan, Bai; Xue, Bai; Ling, Lin; Jin, Huang; Ren, Liu
2017-11-01
Over the past more than 30 years of reform and opening up, China's economy has developed at full speed. However, this rapid growth is constrained by resource exhaustion and environmental pollution, and green, sustainable development has become a common goal for all humans. As part of environmental resources, water resources face problems such as pollution and shortage that hinder sustainable development. The top priority in water resources protection and research is to manage the basic data on water resources, which form the footstone and scientific foundation of water environment management. Based on studies of the aquatic organisms in the Yangtze River Basin, the Yellow River Basin, the Liaohe River Basin, and the five lake areas, this paper puts forward an IoT-based water environment benchmark data management platform that converts measured parameters to electric signals via chemical probe identification and then sends the benchmark test data of the water environment to node servers. The management platform will provide data and theoretical support for environmental chemistry, toxicology, ecology, etc., promote research in the environmental sciences, lay a solid foundation for comprehensive and systematic research on China's regional environment characteristics, biotoxicity effects, and environment criteria, and provide objective data for compiling water environment benchmark standards.
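A hypothetical sensor-node client illustrating the described flow, with the endpoint URL and payload fields invented for illustration:

```python
# Hypothetical IoT sensor-node client: a digitized chemical-probe reading is
# pushed to a node server for benchmark-data management. The endpoint and
# payload schema below are placeholders, not the platform's actual API.
import json
import urllib.request

reading = {
    "station": "yangtze-017",
    "parameter": "dissolved_oxygen_mg_per_L",
    "value": 7.4,
    "ts": "2017-06-01T08:15:00Z",
}

req = urllib.request.Request(
    "http://example.org/api/benchmark-data",   # placeholder node server
    data=json.dumps(reading).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment on a live network
```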
Practice Benchmarking in the Age of Targeted Auditing
Langdale, Ryan P.; Holland, Ben F.
2012-01-01
The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists. PMID:23598847
Equilibrium Partitioning Sediment Benchmarks (ESBs) for the ...
This document describes procedures to determine the concentrations of nonionic organic chemicals in sediment interstitial waters. In previous ESB documents, the general equilibrium partitioning (EqP) approach was chosen for the derivation of sediment benchmarks because it accounts for the varying bioavailability of chemicals in different sediments and allows for the incorporation of the appropriate biological effects concentration. This provides for the derivation of benchmarks that are causally linked to the specific chemical, applicable across sediments, and appropriately protective of benthic organisms. This equilibrium partitioning sediment benchmark (ESB) document was prepared by scientists from the Atlantic Ecology Division, Mid-Continent Ecology Division, and Western Ecology Division, the Office of Water, and private consultants. The document describes procedures to determine the interstitial water concentrations of nonionic organic chemicals in contaminated sediments. Based on these concentrations, guidance is provided on the derivation of toxic units to assess whether the sediments are likely to cause adverse effects to benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it is based on the concentrations of chemical(s) that are known to be harmful and bioavailable in the environment. This document, and five others published over the last nine years, will be useful for the Program Offices, including Superfund, a
Benchmarking and Self-Assessment in the Wine Industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galitsky, Christina; Radspieler, Anthony; Worrell, Ernst
2005-12-01
Not all industrial facilities have the staff or the opportunity to perform a detailed audit of their operations. The lack of knowledge of energy efficiency opportunities provides an important barrier to improving efficiency. Benchmarking programs in the U.S. and abroad have been shown to improve knowledge of the energy performance of industrial facilities and buildings and to fuel energy management practices. Benchmarking provides a fair way to compare the energy intensity of plants, while accounting for structural differences (e.g., the mix of products produced, climate conditions) between different facilities. In California, the winemaking industry is not only one of the economic pillars of the economy; it is also a large energy consumer, with a considerable potential for energy-efficiency improvement. Lawrence Berkeley National Laboratory and Fetzer Vineyards developed the first benchmarking tool for the California wine industry, called "BEST (Benchmarking and Energy and water Savings Tool) Winery". BEST Winery enables a winery to compare its energy efficiency to a best practice reference winery. Besides overall performance, the tool enables the user to evaluate the impact of implementing efficiency measures. The tool facilitates strategic planning of efficiency measures, based on the estimated impact of the measures, their costs and savings. The tool will raise awareness of current energy intensities and offer an efficient way to evaluate the impact of future efficiency measures.
Optimal orientation in flows: providing a benchmark for animal movement strategies.
McLaren, James D; Shamoun-Baranes, Judy; Dokter, Adriaan M; Klaassen, Raymond H G; Bouten, Willem
2014-10-06
Animal movements in air and water can be strongly affected by experienced flow. While various flow-orientation strategies have been proposed and observed, their performance in variable flow conditions remains unclear. We apply control theory to establish a benchmark for time-minimizing (optimal) orientation. We then define optimal orientation for movement in steady flow patterns and, using dynamic wind data, for short-distance mass movements of thrushes (Turdus sp.) and 6000 km non-stop migratory flights by great snipes, Gallinago media. Relative to the optimal benchmark, we assess the efficiency (travel speed) and reliability (success rate) of three generic orientation strategies: full compensation for lateral drift, vector orientation (single-heading movement) and goal orientation (continually heading towards the goal). Optimal orientation is characterized by detours to regions of high flow support, especially when flow speeds approach and exceed the animal's self-propelled speed. In strong predictable flow (short distance thrush flights), vector orientation adjusted to flow on departure is nearly optimal, whereas for unpredictable flow (inter-continental snipe flights), only goal orientation was near-optimally reliable and efficient. Optimal orientation provides a benchmark for assessing efficiency of responses to complex flow conditions, thereby offering insight into adaptive flow-orientation across taxa in the light of flow strength, predictability and navigation capacity.
Measuring efficiency among US federal hospitals.
Harrison, Jeffrey P; Meyer, Sean
2014-01-01
This study evaluates the efficiency of federal hospitals, specifically those hospitals administered by the US Department of Veterans Affairs and the US Department of Defense. Hospital executives, health care policymakers, taxpayers, and federal hospital beneficiaries benefit from studies that improve hospital efficiency. This study uses data envelopment analysis to evaluate a panel of 165 federal hospitals in 2007 and 157 of the same hospitals again in 2011. Results indicate that overall efficiency in federal hospitals improved from 81% in 2007 to 86% in 2011. The number of federal hospitals operating on the efficiency frontier decreased slightly from 25 in 2007 to 21 in 2011. The higher efficiency score clearly documents that federal hospitals are becoming more efficient in the management of resources. From a policy perspective, this study highlights the economic importance of encouraging increased efficiency throughout the health care industry. This research examines benchmarking strategies to improve the efficiency of hospital services to federal beneficiaries. Through the use of strategies such as integrated information systems, consolidation of services, transaction-cost economics, and focusing on preventative health care, these organizations have been able to provide quality service while maintaining fiscal responsibility. In addition, the research documented the characteristics of those federal hospitals that were found to be on the Efficiency Frontier. These hospitals serve as benchmarks for less efficient federal hospitals as they develop strategies for improvement.
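For readers unfamiliar with the machinery behind such studies, below is a minimal sketch of the standard input-oriented CCR DEA model using scipy, on toy hospital data; the paper's actual input/output specification is not reproduced here:

```python
# Input-oriented CCR DEA: minimize theta s.t. a convex combination of peers
# uses no more than theta * inputs of hospital k and at least its outputs.
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 hospitals, 2 inputs (beds, FTEs), 1 output (adjusted discharges).
X = np.array([[120, 300], [150, 340], [100, 260], [180, 420]], float)
Y = np.array([[5000], [5200], [4800], [5100]], float)

def ccr_efficiency(k):
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.zeros(1 + n); c[0] = 1.0                  # variables: [theta, lambdas]
    A_ub, b_ub = [], []
    for i in range(m):                               # sum(l*x) <= theta * x_k
        A_ub.append(np.r_[-X[k, i], X[:, i]]); b_ub.append(0.0)
    for r in range(s):                               # sum(l*y) >= y_k
        A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[k, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

for k in range(4):
    print(f"hospital {k}: efficiency = {ccr_efficiency(k):.3f}")
```

Hospitals scoring 1.0 sit on the efficiency frontier and serve as the benchmarks discussed above.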
NASA Astrophysics Data System (ADS)
Liu, Lei; Li, Zhi-Guo; Dai, Jia-Yu; Chen, Qi-Feng; Chen, Xiang-Rong
2018-06-01
Comprehensive knowledge of physical properties such as the equation of state (EOS), proton exchange, dynamic structures, diffusion coefficients, and viscosities of hydrogen-deuterium mixtures with densities from 0.1 to 5 g/cm3 and temperatures from 1 to 50 kK has been presented via quantum molecular dynamics (QMD) simulations. The existing multi-shock experimental EOS provides an important benchmark to evaluate exchange-correlation functionals. The comparison of simulations with experiments indicates that a nonlocal van der Waals density functional (vdW-DF1) produces excellent results. Fraction analysis of molecules using a weighted integral over pair distribution functions was performed. A dissociation diagram together with a boundary where the proton exchange (H2 + D2 ⇌ 2HD) occurs was generated, which shows evidence that the HD molecules form as the H2 and D2 molecules are almost 50% dissociated. The mechanism of proton exchange can be interpreted as a process of dissociation followed by recombination. The ionic structures at extreme conditions were analyzed by the effective coordination number model. High-order cluster, circle, and chain structures can be found in the strongly coupled warm dense regime. The present QMD diffusion coefficients and viscosities can be used to benchmark two analytical one-component plasma (OCP) models: the Coulomb and Yukawa OCP models.
An International Survey of Veterinary Students to Assess Their Use of Online Learning Resources.
Gledhill, Laura; Dale, Vicki H M; Powney, Sonya; Gaitskell-Phillips, Gemma H L; Short, Nick R M
Today's veterinary students have access to a wide range of online resources that support self-directed learning. To develop a benchmark of current global student practice in e-learning, this study measured self-reported access to, and use of, these resources by students internationally. An online survey was designed and promoted via veterinary student mailing lists and international organizations, resulting in 1,070 responses. Analysis of survey data indicated that students now use online resources in a wide range of ways to support their learning. Students reported that access to online veterinary learning resources was now integral to their studies. Almost all students reported using open educational resources (OERs). Ownership of smartphones was widespread, and the majority of respondents agreed that the use of mobile devices, or m-learning, was essential. Social media were highlighted as important for collaborating with peers and sharing knowledge. Constraints to e-learning principally related to poor or absent Internet access and limited institutional provision of computer facilities. There was significant geographical variation, with students from less developed countries disadvantaged by limited access to technology and networks. In conclusion, the survey provides an international benchmark on the range and diversity in terms of access to, and use of, online learning resources by veterinary students globally. It also highlights the inequalities of access among students in different parts of the world.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Yidong; Andrs, David; Martineau, Richard Charles
This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow, with an outline of the requisite constitutive relations. A second-order finite volume method used for solving compressible fluid flow problems is presented next. A Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration is also presented. The multi-fluid formulation is under development; although it is not yet complete, BIGHORN has been designed to handle multi-fluid problems. Due to the flexibility in the underlying MOOSE framework, BIGHORN is quite extensible, and can accommodate both multi-species and multi-phase formulations. This document also presents a suite of verification and validation benchmark test problems for BIGHORN. The intent of this suite is to provide baseline comparison data that demonstrates the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods, and to suggest best practices when using BIGHORN.
Solem, Caitlyn T; Kwon, Youngmin; Shah, Ruchit M; Aly, Abdalla; Botteman, Marc F
2018-02-05
The Quality-Adjusted Time Without Symptoms or Toxicity (Q-TWiST) has been used to evaluate the clinical benefits and risks of oncology treatments. However, limited information is available to interpret and contextualize Q-TWiST results. Areas covered: A systematic review of the Q-TWiST literature was conducted to provide contextualizing benchmarks for future studies. 51 articles with 81 unique Q-TWiST comparisons were identified. The mean (95% CI) and median absolute Q-TWiST gains for treatment versus control arms were 2.78 (1.82-3.73) months and 2.20 months across all cancers, respectively. The mean (median) relative Q-TWiST gains were 7.8% (7.2%) across all cancers. Most (88%) studies reported positive gains. The percentages of studies with relative Q-TWiST gains ≥10% (i.e., a clinically important difference) and ≥15% (i.e., a clearly clinically important difference) were 40.0% and 22.7%, respectively. Expert commentary: The relevance of Q-TWiST in assessing the net clinical benefits of cancer therapy has not diminished, despite an arguably low number of published studies. The interest in such assessment is highlighted by the recent emergence of oncology value frameworks. The Q-TWiST should be compelling to clinicians as it integrates clinical information (i.e., toxicity, relapse/progression, and survival) and patient preferences for each of these states into a single meaningful index.
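The Q-TWiST index itself is a utility-weighted sum of time spent in three health states. A worked toy example (utility weights and state durations invented) shows how the absolute and relative gains discussed above are computed:

```python
# Worked sketch of the Q-TWiST computation; all numbers are invented examples.
def q_twist(tox, twist, rel, u_tox=0.5, u_rel=0.5):
    # Months with toxicity (TOX) and with relapse/progression (REL) are
    # discounted by utility weights; time without symptoms or toxicity
    # (TWiST) counts at full value.
    return u_tox * tox + twist + u_rel * rel

treat   = q_twist(tox=3.0, twist=14.0, rel=4.0)   # 17.5 quality-adjusted months
control = q_twist(tox=2.0, twist=11.0, rel=5.0)   # 14.5 quality-adjusted months
os_control = 18.0   # control-arm overall survival (months), for the relative gain

gain = treat - control
print(f"absolute gain {gain:.1f} mo; relative gain {gain / os_control:.1%}")
# >= 10% reads as clinically important, >= 15% as clearly clinically important.
```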
Benchmarking wastewater treatment plants under an eco-efficiency perspective.
Lorenzo-Toja, Yago; Vázquez-Rowe, Ian; Amores, María José; Termes-Rifé, Montserrat; Marín-Navarro, Desirée; Moreira, María Teresa; Feijoo, Gumersindo
2016-10-01
The new ISO 14045 framework is expected to slowly start shifting the definition of eco-efficiency toward a life-cycle perspective, using Life Cycle Assessment (LCA) as the environmental impact assessment method together with a system value assessment method for the economic analysis. In the present study, a set of 22 wastewater treatment plants (WWTPs) in Spain were analyzed on the basis of eco-efficiency criteria, using LCA and Life Cycle Costing (LCC) as the system value assessment method. The study is intended to be useful to decision-makers in the wastewater treatment sector, since the combined method provides an alternative scheme for analyzing the relationship between environmental impacts and costs. Two midpoint impact categories, global warming and eutrophication potential, as well as an endpoint single-score indicator were used for the environmental assessment, while LCC was used for the value assessment. Results demonstrated that substantial differences can be observed between WWTPs depending on a wide range of factors such as plant configuration, plant size, or even legal discharge limits. Based on these results, a benchmarking of wastewater treatment facilities was performed by creating a specific classification and certification scheme. The proposed eco-label for rating WWTPs is based on the integration of the three environmental indicators and an economic indicator calculated within the study under the new eco-efficiency framework.
Prediction of Gas Injection Performance for Heterogeneous Reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blunt, Martin J.; Orr, Franklin M.
This report describes research carried out in the Department of Petroleum Engineering at Stanford University from September 1997 - September 1998 under the second year of a three-year grant from the Department of Energy on the "Prediction of Gas Injection Performance for Heterogeneous Reservoirs." The research effort is an integrated study of the factors affecting gas injection, from the pore scale to the field scale, and involves theoretical analysis, laboratory experiments, and numerical simulation. The original proposal described research in four areas: (1) pore-scale modeling of three-phase flow in porous media; (2) laboratory experiments and analysis of factors influencing gas injection performance at the core scale, with an emphasis on the fundamentals of three-phase flow; (3) benchmark simulations of gas injection at the field scale; and (4) development of a streamline-based reservoir simulator. Each stage of the research is planned to provide input and insight into the next stage, such that at the end we should have an integrated understanding of the key factors affecting field-scale displacements.
2018-01-01
Selective digestive decontamination (SDD, topical antibiotic regimens applied to the respiratory tract) appears effective for preventing ventilator-associated pneumonia (VAP) in intensive care unit (ICU) patients. However, potential contextual effects of SDD on Staphylococcus aureus infections in the ICU remain unclear. The incidences of S. aureus ventilator-associated pneumonia (S. aureus VAP), VAP overall, and S. aureus bacteremia within the component (control and intervention) groups of 27 SDD studies were benchmarked against 115 observational groups. Component groups from 66 studies of various interventions other than SDD provided additional points of reference. In the 27 SDD study control groups, the mean S. aureus VAP incidence is 9.6% (95% CI; 6.9–13.2) versus a benchmark of 4.8% (95% CI; 4.2–5.6) derived from the 115 observational groups. In nine SDD study control groups the mean S. aureus bacteremia incidence is 3.8% (95% CI; 2.1–5.7) versus a benchmark of 2.1% (95% CI; 1.1–4.1) derived from 10 observational groups. The incidences of S. aureus VAP and S. aureus bacteremia within the control groups of SDD studies are each higher than the literature-derived benchmarks. Paradoxically, within the SDD intervention groups, the incidences of both S. aureus VAP and VAP overall are more similar to the benchmarks. PMID:29300363
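The benchmarking arithmetic amounts to summarizing incidence proportions across study groups with a confidence interval. A simplified sketch follows, with toy counts and a plain normal approximation rather than the paper's meta-analytic machinery:

```python
# Mean incidence across groups with a normal-approximation 95% CI (toy data).
import math

def mean_incidence_ci(events, totals):
    props = [e / n for e, n in zip(events, totals)]
    m = sum(props) / len(props)
    # Standard error of the mean of the group-level proportions.
    se = math.sqrt(sum((p - m) ** 2 for p in props) / (len(props) - 1) / len(props))
    return m, m - 1.96 * se, m + 1.96 * se

control_groups   = ([12, 9, 15, 7], [120, 100, 140, 90])        # events, n
benchmark_groups = ([5, 6, 4, 7, 3], [110, 130, 95, 150, 80])
for name, (e, n) in [("SDD controls", control_groups),
                     ("observational benchmark", benchmark_groups)]:
    m, lo, hi = mean_incidence_ci(e, n)
    print(f"{name}: {m:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```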
Principles for Developing Benchmark Criteria for Staff Training in Responsible Gambling.
Oehler, Stefan; Banzer, Raphaela; Gruenerbl, Agnes; Malischnig, Doris; Griffiths, Mark D; Haring, Christian
2017-03-01
One approach to minimizing the negative consequences of excessive gambling is staff training to reduce the rate of development of new cases of harm or disorder among customers. The primary goal of the present study was to assess suitable benchmark criteria for the training of gambling employees at casinos and lottery retailers. The study utilised the Delphi Method, a survey with one qualitative and two quantitative phases. A total of 21 invited international experts in the responsible gambling field participated in all three phases. A total of 75 performance indicators were outlined and assigned to six categories: (1) criteria of content, (2) modelling, (3) qualification of trainer, (4) framework conditions, (5) sustainability and (6) statistical indicators. Nine of the 75 indicators were rated as very important by 90% or more of the experts. Unanimous support for importance was given to indicators such as (1) comprehensibility and (2) concrete action-guidance for dealing with problem gamblers. Additionally, the study examined the implementation of benchmarking, when it should be conducted, and who should be responsible. Results indicated that benchmarking should be conducted regularly, every 1-2 years, and that one institution should be clearly defined and primarily responsible for benchmarking. The results of the present study provide a basis for developing benchmark criteria for staff training in responsible gambling.
NASA Technical Reports Server (NTRS)
Pedretti, Kevin T.; Fineberg, Samuel A.; Kutler, Paul (Technical Monitor)
1997-01-01
A variety of different network technologies and topologies are currently being evaluated as part of the Whitney Project. This paper reports on the implementation and performance of a Fast Ethernet network configured in a 4x4 2D torus topology in a testbed cluster of 'commodity' Pentium Pro PCs. Several benchmarks were used for performance evaluation: an MPI point to point message passing benchmark, an MPI collective communication benchmark, and the NAS Parallel Benchmarks version 2.2 (NPB2). Our results show that for point to point communication on an unloaded network, the hub and 1 hop routes on the torus have about the same bandwidth and latency. However, the bandwidth decreases and the latency increases on the torus for each additional route hop. Collective communication benchmarks show that the torus provides roughly four times more aggregate bandwidth and eight times faster MPI barrier synchronizations than a hub based network for 16 processor systems. Finally, the SOAPBOX benchmarks, which simulate real-world CFD applications, generally demonstrated substantially better performance on the torus than on the hub. In the few cases the hub was faster, the difference was negligible. In total, our experimental results lead to the conclusion that for Fast Ethernet networks, the torus topology has better performance and scales better than a hub based network.
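A present-day equivalent of the point-to-point benchmark described above is easy to reproduce. The sketch below uses mpi4py rather than the original C-based MPI harness; the message size and repetition count are arbitrary choices:

```python
# Ping-pong point-to-point MPI benchmark sketch.
# Run with, e.g.: mpiexec -n 2 python pingpong.py
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nbytes, reps = 1 << 20, 50
buf = np.zeros(nbytes, dtype=np.uint8)

comm.Barrier()
t0 = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1); comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0); comm.Send(buf, dest=0)
t1 = time.perf_counter()

if rank == 0:
    # Each rep moves the message out and back, so 2 * nbytes per rep.
    bw = 2 * nbytes * reps / (t1 - t0) / 1e6
    print(f"round-trip avg {(t1 - t0) / reps * 1e6:.1f} us, {bw:.1f} MB/s")
```

Sweeping `nbytes` over several decades reproduces the bandwidth/latency trade the paper measured per route hop.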
A Simplified Approach for the Rapid Generation of Transient Heat-Shield Environments
NASA Technical Reports Server (NTRS)
Wurster, Kathryn E.; Zoby, E. Vincent; Mills, Janelle C.; Kamhawi, Hilmi
2007-01-01
A simplified approach has been developed whereby transient entry heating environments are reliably predicted based upon a limited set of benchmark radiative and convective solutions. Heating, pressure and shear-stress levels, non-dimensionalized by an appropriate parameter at each benchmark condition are applied throughout the entry profile. This approach was shown to be valid based on the observation that the fully catalytic, laminar distributions examined were relatively insensitive to altitude as well as velocity throughout the regime of significant heating. In order to establish a best prediction by which to judge the results that can be obtained using a very limited benchmark set, predictions based on a series of benchmark cases along a trajectory are used. Solutions which rely only on the limited benchmark set, ideally in the neighborhood of peak heating, are compared against the resultant transient heating rates and total heat loads from the best prediction. Predictions based on using two or fewer benchmark cases at or near the trajectory peak heating condition, yielded results to within 5-10 percent of the best predictions. Thus, the method provides transient heating environments over the heat-shield face with sufficient resolution and accuracy for thermal protection system design and also offers a significant capability to perform rapid trade studies such as the effect of different trajectories, atmospheres, or trim angle of attack, on convective and radiative heating rates and loads, pressure, and shear-stress levels.
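The scaling idea can be illustrated numerically. In the sketch below, the assumptions are mine, not the paper's exact correlations: a single benchmark solution near peak heating anchors the whole transient profile, scaled with a Sutton-Graves-like sqrt(rho)*V^3 stagnation-point dependence over invented trajectory data:

```python
# One benchmark CFD/radiation solution scaled along an entry trajectory
# (illustrative correlation and data, not the paper's values).
import numpy as np

t   = np.array([0., 20., 40., 60., 80.])              # time, s
rho = np.array([1e-5, 8e-5, 4e-4, 9e-4, 1.2e-3])      # density, kg/m^3
V   = np.array([7500., 7200., 6500., 5200., 3800.])   # velocity, m/s

scale = np.sqrt(rho) * V**3
k = np.argmax(scale)                 # benchmark point chosen near peak heating
q_bench = 85.0                       # W/cm^2 from the detailed benchmark solution
q = q_bench * scale / scale[k]       # transient heating via non-dimensional scaling

heat_load = np.trapz(q, t)           # integrated heat load, J/cm^2
print(q.round(1), f"load = {heat_load:.0f} J/cm^2")
```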
‘Wasteaware’ benchmark indicators for integrated sustainable waste management in cities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, David C., E-mail: waste@davidcwilson.com; Rodic, Ljiljana; Cowing, Michael J.
Highlights: • Solid waste management (SWM) is a key utility service, but data is often lacking. • Measuring their SWM performance helps a city establish priorities for action. • The Wasteaware benchmark indicators: measure both technical and governance aspects. • Have been developed over 5 years and tested in more than 50 cities on 6 continents. • Enable consistent comparison between cities and countries and monitoring progress. - Abstract: This paper addresses a major problem in international solid waste management, which is twofold: a lack of data, and a lack of consistent data to allow comparison between cities. The paper presents an indicator set for integrated sustainable waste management (ISWM) in cities both North and South, to allow benchmarking of a city’s performance, comparing cities and monitoring developments over time. It builds on pioneering work for UN-Habitat’s solid waste management in the World’s cities. The comprehensive analytical framework of a city’s solid waste management system is divided into two overlapping ‘triangles’ – one comprising the three physical components, i.e. collection, recycling, and disposal, and the other comprising three governance aspects, i.e. inclusivity; financial sustainability; and sound institutions and proactive policies. The indicator set includes essential quantitative indicators as well as qualitative composite indicators. This updated and revised ‘Wasteaware’ set of ISWM benchmark indicators is the cumulative result of testing various prototypes in more than 50 cities around the world. This experience confirms the utility of indicators in allowing comprehensive performance measurement and comparison of both ‘hard’ physical components and ‘soft’ governance aspects; and in prioritising ‘next steps’ in developing a city’s solid waste management system, by identifying both local strengths that can be built on and weak points to be addressed. The Wasteaware ISWM indicators are applicable to a broad range of cities with very different levels of income and solid waste management practices. Their wide application as a standard methodology will help to fill the historical data gap.
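A minimal sketch of how such a mixed quantitative/qualitative indicator set can be scored follows; the categories mirror the two triangles above, but the weights, grades, and values are invented:

```python
# Toy scoring of a Wasteaware-style indicator set (invented values).
physical = {
    "collection_coverage": 0.95,   # fraction of households served
    "controlled_disposal": 0.70,   # fraction of waste reaching controlled sites
    "recycling_rate": 0.25,
}
GRADES = {"L": 0.0, "M": 0.5, "H": 1.0}   # ordinal grades for composite indicators
governance = {"inclusivity": "M",
              "financial_sustainability": "H",
              "institutions_policies": "M"}

phys_score = sum(physical.values()) / len(physical)
gov_score = sum(GRADES[g] for g in governance.values()) / len(governance)
print(f"physical score {phys_score:.2f}, governance score {gov_score:.2f}")
```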
Schilling, Lisa; Chase, Alide; Kehrli, Sommer; Liu, Amy Y; Stiefel, Matt; Brentari, Ruth
2010-11-01
By 2004, senior leaders at Kaiser Permanente, the largest not-for-profit health plan in the United States, recognizing variations across service areas in quality, safety, service, and efficiency, began developing a performance improvement (PI) system to realize best-in-class quality performance across all 35 medical centers. MEASURING SYSTEMWIDE PERFORMANCE: In 2005, a Web-based data dashboard, "Big Q," which tracks the performance of each medical center and service area against external benchmarks and internal goals, was created. PLANNING FOR PI AND BENCHMARKING PERFORMANCE: In 2006, Kaiser Permanente national and regional leaders continued planning the PI system, and in 2007, quality, medical group, operations, and information technology leaders benchmarked five high-performing organizations to identify the capabilities required to achieve consistent best-in-class organizational performance. THE PI SYSTEM: The PI system addresses six capabilities: leadership priority setting, a systems approach to improvement, measurement capability, a learning organization, improvement capacity, and a culture of improvement. PI "deep experts" (mentors) consult with national, regional, and local leaders, and more than 500 improvement advisors are trained to manage portfolios of 90-120 day improvement initiatives at medical centers. Between the second quarter of 2008 and the first quarter of 2009, performance across all Kaiser Permanente medical centers improved on the Big Q metrics. The lessons learned in implementing and sustaining PI as it becomes fully integrated into all levels of Kaiser Permanente can be generalized to other health care systems, hospitals, and other health care organizations.
Challenges in physician supply planning: the case of Belgium.
Stordeur, Sabine; Léonard, Christian
2010-12-08
Planning human resources for health (HRH) is a complex process for policy-makers and, as a result, many countries worldwide swing from surplus to shortage. In-depth case studies can help appraise the challenges encountered and the solutions implemented. This paper has two objectives: to identify the key challenges in HRH planning in Belgium and to formulate recommendations for effective HRH planning, on the basis of the Belgian case study and lessons drawn from an international benchmarking. In Belgium, a numerus clausus, set up in 1997 and effective in 2004, aims to limit the total number of physicians working in the curative sector. The assumption of a positive relationship between physician density and health care utilization was a major argument in favor of medical supply restrictions. This new regulation did not improve recurrent challenges such as specialty imbalances, with uncovered needs particularly among general practitioners, and geographical maldistribution. New difficulties also emerged. In particular, limiting the national training of HRH turned out to be ineffective within the open European workforce market. The lack of integration of policies affecting HRH was noteworthy. We describe in the paper what strategies were developed to address those challenges in Belgium and in neighboring countries. Planning the medical workforce involves determining the numbers, mix, and distribution of health providers that will be required at some identified future point in time. To succeed in their task, health policy planners have to take a broader perspective on the healthcare system. Focusing on numbers is too restrictive, and adopting innovative policies learned from benchmarking without integration and coordination is unfruitful. Evolving towards strategic planning is essential to control the effects of the complex factors impacting human resources. This evolution requires effective monitoring of all key factors affecting supply and demand, a dynamic approach, and a system-level perspective, considering all healthcare professionals and integrating manpower planning with workforce development. To engage in evidence-based action, policy-makers need a global manpower picture, from their own country and abroad, as well as reliable and comparable manpower databases allowing proper analysis and planning of the workforce.
Ruffle, Betsy; Henderson, James; Murphy-Hagan, Clare; Kirkwood, Gemma; Wolf, Frederick; Edwards, Deborah A
2018-01-01
A probabilistic risk assessment (PRA) was performed to evaluate the range of potential baseline and postremedy health risks to fish consumers at the Portland Harbor Superfund Site (the "Site"). The analysis focused on risks of consuming fish resident to the Site containing polychlorinated biphenyls (PCBs), given that this exposure scenario and contaminant are the primary basis for the US Environmental Protection Agency's (USEPA's) selected remedy per the January 2017 Record of Decision (ROD). The PRA used probability distributions fit to the same data sets used in the deterministic baseline human health risk assessment (BHHRA), as well as recent sediment and fish tissue data, to evaluate the range and likelihood of current baseline cancer risks and noncancer hazards for anglers. Areas of elevated PCBs in sediment were identified on the basis of a geospatial evaluation of the surface sediment data, and the ranges of risks and hazards associated with pre- and postremedy conditions were calculated. The analysis showed that less active remediation (targeted to areas with the highest concentrations) compared to the remedial alternative selected by USEPA in the ROD can achieve USEPA's interim risk management benchmarks (cancer risk of 10^-4 and noncancer hazard index [HI] of 10) immediately postremediation for the vast majority of subsistence anglers that consume smallmouth bass (SMB) fillet tissue. In addition, the same targeted remedy achieves USEPA's long-term benchmarks (10^-5 and HI of 1) for the majority of recreational anglers. Additional sediment remediation would result in negligible additional risk reduction due to the influence of background. The PRA approach applied here provides a simple but adaptive framework for the analysis of risks and remedial options focused on variability in exposures. It can be updated and refined with new data to evaluate and reduce uncertainty, improve understanding of the Site and target populations, and foster informed remedial decision making. Integr Environ Assess Manag 2018;14:63-78. © 2017 The Authors. Integrated Environmental Assessment and Management published by Wiley Periodicals, Inc. on behalf of Society of Environmental Toxicology & Chemistry (SETAC).
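A stripped-down sketch of the exposure-model class used in such PRAs appears below; the distributions, intake parameters, and slope factor are illustrative placeholders, not the Site's values:

```python
# Monte Carlo sketch of a fish-consumption cancer-risk model (toy parameters).
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
conc = rng.lognormal(mean=np.log(0.1), sigma=0.8, size=n)   # PCB in fillet, mg/kg
ir   = rng.lognormal(mean=np.log(20.0), sigma=0.6, size=n)  # fish intake, g/day
ef, ed, bw, at = 365.0, 30.0, 80.0, 70.0 * 365.0            # d/yr, yr, kg, d
csf = 2.0                                                   # slope factor, (mg/kg-d)^-1

cdi = conc * (ir / 1000.0) * ef * ed / (bw * at)            # chronic daily intake
risk = csf * cdi
for p in (50, 90, 95):
    print(f"P{p} cancer risk = {np.percentile(risk, p):.1e}")
print("fraction above 1e-4:", (risk > 1e-4).mean())
```

Reporting the full percentile curve, rather than a single deterministic point, is what lets the PRA say which share of anglers meets the 10^-4 and 10^-5 benchmarks.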
Wilkinson, David; Schafer, Jennifer; Hewett, David; Eley, Diann; Swanson, Dave
2014-01-01
To report pilot results for international benchmarking of learning outcomes among 426 final-year medical students at the University of Queensland (UQ), Australia. Students took the International Foundations of Medicine (IFOM) Clinical Sciences Exam (CSE) developed by the National Board of Medical Examiners, USA, as a required formative assessment. The IFOM CSE comprises 160 multiple-choice questions in medicine, surgery, obstetrics, paediatrics and mental health, taken over 4.5 hours. Outcomes included significant implementation issues; IFOM scores benchmarked against International Comparison Group (ICG) scores and United States Medical Licensing Exam (USMLE) Step 2 Clinical Knowledge (CK) scores; and the correlation of IFOM scores with UQ medical degree cumulative grade point average (GPA). Implementation as an online exam, under university-mandated conditions, was successful. The mean IFOM score was 531.3 (range 200-779). The UQ cohort performed better (31% scored below 500) than the ICG (55% below 500). However, 49% of the UQ cohort did not meet the USMLE Step 2 CK minimum score. The correlation between IFOM scores and UQ cumulative GPA was reasonable at 0.552 (p < 0.001). International benchmarking is feasible and provides a variety of useful benchmarking opportunities.
An approach to radiation safety department benchmarking in academic and medical facilities.
Harvey, Richard P
2015-02-01
Based on anecdotal evidence and networking with colleagues at other facilities, it has become evident that some radiation safety departments are not adequately staffed and that radiation safety professionals need to increase their staffing levels. Discussions with management regarding radiation safety department staffing often lead to similar conclusions. Management acknowledges the Radiation Safety Officer (RSO) or Director of Radiation Safety's concern but asks the RSO to provide benchmarking and justification for additional full-time equivalents (FTEs). The RSO must determine a method to benchmark and justify additional staffing needs while struggling to maintain a safe and compliant radiation safety program. Benchmarking and justification are extremely important tools that are commonly used to demonstrate the need for increased staffing in other disciplines, and they can likewise be used by radiation safety professionals. Parameters that most RSOs would expect to be positive predictors of radiation safety staff size generally prove to be so, and these can be emphasized in benchmarking and justification report summaries. Facilities with large radiation safety departments tend to have large numbers of authorized users, be broad-scope programs, be subject to increased controls regulations, have large clinical operations, have significant numbers of academic radiation-producing machines, and have laser safety responsibilities.
Coalescent: an open-science framework for importance sampling in coalescent theory.
Tewari, Susanta; Spouge, John L
2015-01-01
Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve the statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks for importance sampling, researchers often struggle to translate new sampling schemes computationally or to benchmark them against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time, following the infinite-sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature and discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation", available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2^8 programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
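Outside any specific coalescent proposal, the self-normalized importance-sampling estimator and the effective sample size that the authors weigh against running time look like this on a generic Gaussian toy problem:

```python
# Generic importance-sampling sketch (not a coalescent proposal): estimate a
# tail probability under a target p by sampling from a proposal q.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
x = rng.normal(2.0, 1.5, size=n)                     # draws from proposal q

def log_p(x):                                        # target density N(0, 1)
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_q(x):                                        # proposal density N(2, 1.5)
    return -0.5 * ((x - 2.0) / 1.5) ** 2 - np.log(1.5 * np.sqrt(2 * np.pi))

w = np.exp(log_p(x) - log_q(x))                      # importance weights
f = (x > 1.0).astype(float)                          # quantity of interest
estimate = np.sum(w * f) / np.sum(w)                 # self-normalized estimator
ess = np.sum(w) ** 2 / np.sum(w**2)                  # effective sample size
print(f"P(X>1) ~ {estimate:.4f} (exact 0.1587), ESS = {ess:.0f} of {n}")
```

The paper's point carries over directly: two schemes with the same ESS can differ sharply in cost per sample, so ESS alone misranks them.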
Forecasting daily patient volumes in the emergency department.
Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L
2008-02-01
Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
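The benchmark the authors favor, a regression on calendar variables, is simple to sketch. The toy example below fits day-of-week dummies plus a linear trend by least squares and scores a 30-day holdout; the arrival series is synthetic, not ED data:

```python
# Calendar-variable regression benchmark for daily volumes (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(730)
dow = days % 7
volume = (180 + 0.02 * days
          + np.array([15, 5, 0, -2, 3, -20, -25])[dow]   # weekly pattern
          + rng.normal(0, 8, size=days.size))

X = np.column_stack([np.ones_like(days, float), days,
                     np.eye(7)[dow][:, 1:]])             # drop one dummy as baseline
beta, *_ = np.linalg.lstsq(X[:700], volume[:700], rcond=None)

pred = X[700:] @ beta                                    # 30-day-ahead forecasts
mae = np.abs(pred - volume[700:]).mean()
print(f"holdout MAE = {mae:.1f} patients/day")
```

Site-specific special-day effects and residual autocorrelation, which the study recommends adding, enter as extra columns and an error model on top of this baseline.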
Segmentation of Object Outlines into Parts: A Large-Scale Integrative Study
ERIC Educational Resources Information Center
De Winter, Joeri; Wagemans, Johan
2006-01-01
In this study, a large number of observers (N=201) were asked to segment a collection of outlines derived from line drawings of everyday objects (N=88). This data set was then used as a benchmark to evaluate current models of object segmentation. All of the previously proposed rules of segmentation found support in our results. For example,…
ERIC Educational Resources Information Center
Delaunay, Christian J.; Blodgett, Mark S.
2005-01-01
Traditional IB programs have received mixed reviews from the corporate world. With this in mind, the Suffolk GMBA was benchmarked against the leading international business programs. The Suffolk GMBA was designed to be different and to ascertain the global environment in which business operates. A unique feature of the GMBA curriculum detailed in…
A Data Mining Approach to Improve Re-Accessibility and Delivery of Learning Knowledge Objects
ERIC Educational Resources Information Center
Sabitha, Sai; Mehrotra, Deepti; Bansal, Abhay
2014-01-01
Today Learning Management Systems (LMS) have become an integral part of learning mechanism of both learning institutes and industry. A Learning Object (LO) can be one of the atomic components of LMS. A large amount of research is conducted into identifying benchmarks for creating Learning Objects. Some of the major concerns associated with LO are…
Robust performance of multiple tasks by a mobile robot
NASA Technical Reports Server (NTRS)
Beckerman, Martin; Barnett, Deanna L.; Dickens, Mike; Weisbin, Charles R.
1989-01-01
While there have been many successful mobile robot experiments, only a few papers have addressed issues pertaining to the range of applicability, or robustness, of robotic systems. The purpose of this paper is to report results of a series of benchmark experiments done to determine and quantify the robustness of an integrated hardware and software system of a mobile robot.
Predicting drug-target interactions by dual-network integrated logistic matrix factorization
NASA Astrophysics Data System (ADS)
Hao, Ming; Bryant, Stephen H.; Wang, Yanli
2017-01-01
In this work, we propose a dual-network integrated logistic matrix factorization (DNILMF) algorithm to predict potential drug-target interactions (DTI). The prediction procedure consists of four steps: (1) inferring new drug/target profiles and constructing the profile kernel matrix; (2) diffusing the drug profile kernel matrix with the drug structure kernel matrix; (3) diffusing the target profile kernel matrix with the target sequence kernel matrix; and (4) building the DNILMF model and smoothing new drug/target predictions based on their neighbors. We compare our algorithm with the state-of-the-art method on the benchmark dataset. Results indicate that the DNILMF algorithm outperforms previously reported approaches in terms of AUPR (area under the precision-recall curve) and AUC (area under the receiver operating characteristic curve) based on 5 trials of 10-fold cross-validation. We conclude that the performance improvement depends not only on the proposed objective function, but also on the nonlinear diffusion technique used, which is important but understudied in the DTI prediction field. In addition, we also compile a new DTI dataset to increase the diversity of currently available benchmark datasets. The top prediction results for the new dataset are confirmed by experimental studies or supported by other computational research.
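The core of step (4), logistic matrix factorization on a 0/1 interaction matrix, can be sketched in a few lines; the data below are toy values, and the kernel-diffusion terms of steps (2)-(3) are omitted:

```python
# Bare-bones logistic matrix factorization for DTI-style binary matrices.
import numpy as np

rng = np.random.default_rng(3)
Y = (rng.random((30, 20)) < 0.1).astype(float)   # toy drug x target interactions
k, lr, lam = 8, 0.05, 0.1
U = 0.1 * rng.standard_normal((30, k))           # drug latent factors
V = 0.1 * rng.standard_normal((20, k))           # target latent factors

for _ in range(500):
    P = 1.0 / (1.0 + np.exp(-U @ V.T))           # predicted interaction probs
    G = Y - P                                    # log-likelihood gradient term
    U += lr * (G @ V - lam * U)                  # gradient ascent with L2 penalty
    V += lr * (G.T @ U - lam * V)

scores = 1.0 / (1.0 + np.exp(-U @ V.T))
print("mean score on known interactions:", scores[Y == 1].mean().round(3))
```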
Reverse Engineering Validation using a Benchmark Synthetic Gene Circuit in Human Cells
Kang, Taek; White, Jacob T.; Xie, Zhen; Benenson, Yaakov; Sontag, Eduardo; Bleris, Leonidas
2013-01-01
Multi-component biological networks are often understood incompletely, in large part due to the lack of reliable and robust methodologies for network reverse engineering and characterization. As a consequence, developing automated and rigorously validated methodologies for unraveling the complexity of biomolecular networks in human cells remains a central challenge to life scientists and engineers. Today, when it comes to experimental and analytical requirements, there exists a great deal of diversity in reverse engineering methods, which renders the independent validation and comparison of their predictive capabilities difficult. In this work we introduce an experimental platform customized for the development and verification of reverse engineering and pathway characterization algorithms in mammalian cells. Specifically, we stably integrate a synthetic gene network in human kidney cells and use it as a benchmark for validating reverse engineering methodologies. The network, which is orthogonal to endogenous cellular signaling, contains a small set of regulatory interactions that can be used to quantify the reconstruction performance. By performing successive perturbations to each modular component of the network and comparing protein and RNA measurements, we study the conditions under which we can reliably reconstruct the causal relationships of the integrated synthetic network. PMID:23654266
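The logic such a benchmark validates can be caricatured in code: perturb each node, record who moves, and call signed edges from the steady-state responses. The 3-gene network below is invented, and the example deliberately exposes the direct-versus-indirect ambiguity that makes reconstruction hard:

```python
# Perturbation-response sketch of network reconstruction (toy 3-gene circuit).
import numpy as np

A = np.array([[0.0, 0.0, 0.0],     # ground truth: gene0 represses gene1 (-0.8),
              [-0.8, 0.0, 0.0],    # gene1 activates gene2 (+0.9)
              [0.0, 0.9, 0.0]])

def steady_state(knockdown=None):
    u = np.ones(3)
    if knockdown is not None:
        u[knockdown] = 0.2                       # partial knockdown of one node
    x = np.ones(3)
    for _ in range(200):                         # relax to the fixed point
        x = 0.5 * x + 0.5 * (u + A @ x)
    return x

base = steady_state()
for g in range(3):
    resp = steady_state(knockdown=g) - base
    resp[g] = 0.0                                # ignore the perturbed node itself
    # A knockdown response has the opposite sign of the regulatory effect.
    edges = [(g, j, -np.sign(r)) for j, r in enumerate(resp) if abs(r) > 1e-3]
    print(f"perturb gene {g}: called edges {edges}")
# Note the spurious gene0 -> gene2 call: separating direct from indirect
# effects is exactly what a known benchmark circuit lets one quantify.
```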
NASA Astrophysics Data System (ADS)
Yang, Xudong; Sun, Lingyu; Zhang, Cheng; Li, Lijun; Dai, Zongmiao; Xiong, Zhenkai
2018-03-01
The application of polymer composites as a substitute for metal is an effective approach to reducing vehicle weight. However, the final performance of composite structures is determined not only by the material types, structural designs, and manufacturing processes, but also by their mutual constraints. Hence, an integrated "material-structure-process-performance" method is proposed for the conceptual and detailed design of composite components. The material selection is based on the principles of composite mechanics, such as the rule of mixtures for laminates. The design of component geometry, dimensions, and stacking sequence is determined by parametric modeling and size optimization. The selection of process parameters is based on multi-physical-field simulation. The stiffness and modal constraint conditions were obtained from numerical analysis of the metal benchmark under typical load conditions. The optimal design was found by multidisciplinary optimization. Finally, the proposed method was validated by an application case of an automotive hatchback using carbon fiber reinforced polymer. Compared with the metal benchmark, the weight of the composite component is reduced by 38.8%, while its torsional and bending stiffness increase by 3.75% and 33.23%, respectively, and the first natural frequency increases by 44.78%.
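A toy version of the size-optimization step illustrates the mass-versus-stiffness trade the workflow handles; the beam model and every number below are invented, not the paper's hatchback case:

```python
# Minimize laminate mass subject to a minimum bending-stiffness constraint
# (invented rectangular-section model, SI units).
import numpy as np
from scipy.optimize import minimize

rho, E, width, length = 1600.0, 60e9, 0.3, 1.0   # CFRP-like density and modulus

def mass(t):          # t[0] = laminate thickness, m
    return rho * width * length * t[0]

def stiffness(t):     # bending stiffness E*I of a rectangular section
    return E * width * t[0] ** 3 / 12.0

res = minimize(mass, x0=[0.004],
               constraints=[{"type": "ineq",
                             "fun": lambda t: stiffness(t) - 50.0}],
               bounds=[(0.001, 0.02)])
print(f"optimal thickness {res.x[0]*1000:.2f} mm, mass {mass(res.x):.2f} kg")
```

The full method adds stacking-sequence variables and modal constraints, but the structure of the problem is the same.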
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Nicholas R.; Carlsen, Brett W.; Dixon, Brent W.
Dynamic fuel cycle simulation tools are intended to model holistic transient nuclear fuel cycle scenarios. As with all simulation tools, fuel cycle simulators require verification through unit tests, benchmark cases, and integral tests. Model validation is a vital aspect as well. Although comparative studies have been performed, there is no comprehensive unit test and benchmark library for fuel cycle simulator tools. The objective of this paper is to identify the "must test" functionalities of a fuel cycle simulator tool within the context of specific problems of interest to the Fuel Cycle Options Campaign within the U.S. Department of Energy's Office of Nuclear Energy. The approach in this paper identifies the features needed to cover the range of promising fuel cycle options identified in the DOE-NE Fuel Cycle Evaluation and Screening (E&S) and categorizes these features to facilitate prioritization. Features were categorized as essential functions, integrating features, and exemplary capabilities. One objective of this paper is to propose a library of unit tests applicable to each of the essential functions. Another underlying motivation for this paper is to encourage an international dialog on the functionalities and standard test methods for fuel cycle simulator tools.
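As a hypothetical example of what one entry in such a unit-test library could look like, consider a mass-balance check on a trivial separation model; the function, element labels, and tolerances are invented for illustration:

```python
# Hypothetical unit test for an essential fuel-cycle function: mass balance
# across a simple separation (reprocessing) step.
import unittest

def separate(feed_kg, recovery):
    """Split a feed stream into product and waste by elemental recovery fraction."""
    product = {el: m * recovery.get(el, 0.0) for el, m in feed_kg.items()}
    waste = {el: m - product[el] for el, m in feed_kg.items()}
    return product, waste

class TestMassBalance(unittest.TestCase):
    def test_mass_is_conserved(self):
        feed = {"U": 950.0, "Pu": 10.0, "FP": 40.0}
        product, waste = separate(feed, {"U": 0.995, "Pu": 0.99})
        for el in feed:
            self.assertAlmostEqual(product[el] + waste[el], feed[el], places=9)

    def test_unrecovered_elements_go_to_waste(self):
        _, waste = separate({"FP": 40.0}, {"U": 0.995})
        self.assertEqual(waste["FP"], 40.0)

if __name__ == "__main__":
    unittest.main()
```

Tests of this shape, one per essential function, are what would make cross-simulator comparisons reproducible rather than ad hoc.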