Sample records for benchmark active controls

  1. Multirate Flutter Suppression System Design for the Benchmark Active Controls Technology Wing. Part 2; Methodology Application Software Toolbox

    NASA Technical Reports Server (NTRS)

    Mason, Gregory S.; Berg, Martin C.; Mukhopadhyay, Vivek

    2002-01-01

    To study the effectiveness of various control system design methodologies, the NASA Langley Research Center initiated the Benchmark Active Controls Project. In this project, the various methodologies were applied to design a flutter suppression system for the Benchmark Active Controls Technology (BACT) Wing. This report is the user's manual for the software toolbox developed at the University of Washington to design a multirate flutter suppression control law for the BACT wing.

  2. Semi-active control of a cable-stayed bridge under multiple-support excitations.

    PubMed

    Dai, Ze-Bing; Huang, Jin-Zhi; Wang, Hong-Xia

    2004-03-01

    This paper presents a semi-active strategy for seismic protection of a benchmark cable-stayed bridge with consideration of multiple-support excitations. In this control strategy, magnetorheological (MR) dampers are proposed as control devices, and an LQG clipped-optimal control algorithm is employed. An active control strategy, shown in previous research to perform well at controlling the benchmark bridge when uniform earthquake motion was assumed, is also used in this study to control this benchmark bridge with consideration of multiple-support excitations. The performance of the active control system is compared to that of the presented semi-active control strategy. Because the MR fluid damper is a controllable energy-dissipation device that cannot add mechanical energy to the structural system, the proposed control strategy is fail-safe in that bounded-input, bounded-output stability of the controlled structure is guaranteed. The numerical results demonstrate that the performance of the presented control design is nearly the same as that of the active control system, and that the MR dampers can effectively be used to control seismically excited cable-stayed bridges with multiple-support excitations.
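
    The clipping logic at the heart of an LQG clipped-optimal scheme is simple enough to sketch. As a hedged illustration (the function name, signature, and values below are hypothetical, not taken from the paper): an ideal controller such as LQG computes a desired damper force, and the command voltage is switched between zero and its maximum depending on whether increasing the damping can drive the measured MR damper force toward that desired force.

```python
def clipped_optimal_voltage(f_desired, f_measured, v_max):
    """Clipped-optimal command for a semi-active MR damper (illustrative).

    The damper only dissipates energy, so the controller cannot command an
    arbitrary force. Voltage is set to the maximum only when increasing the
    damping force moves the measured force toward the force requested by the
    ideal (e.g. LQG) controller; otherwise the voltage is zero.
    """
    # Heaviside-style clipping: raise the voltage only if the desired force
    # exceeds the measured force and both act in the same direction.
    if (f_desired - f_measured) * f_measured > 0:
        return v_max
    return 0.0
```

    For example, if the ideal controller requests 10 kN while the damper currently produces 5 kN in the same direction, the command is the full voltage; if it requests less force than is currently produced, the voltage drops to zero and the damper force decays passively.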

  3. Multirate Flutter Suppression System Design for the Benchmark Active Controls Technology Wing. Part 1; Theory and Design Procedure

    NASA Technical Reports Server (NTRS)

    Mason, Gregory S.; Berg, Martin C.; Mukhopadhyay, Vivek

    2002-01-01

    To study the effectiveness of various control system design methodologies, the NASA Langley Research Center initiated the Benchmark Active Controls Project. In this project, the various methodologies were applied to design a flutter suppression system for the Benchmark Active Controls Technology (BACT) Wing. This report describes a project at the University of Washington to design a multirate flutter suppression system for the BACT wing. The objective of the project was twofold: first, to develop a methodology for designing robust multirate compensators, and second, to demonstrate the methodology by applying it to the design of a multirate flutter suppression system for the BACT wing.

  4. BACT Simulation User Guide (Version 7.0)

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1997-01-01

    This report documents the structure and operation of a simulation model of the Benchmark Active Control Technology (BACT) Wind-Tunnel Model. The BACT system was designed, built, and tested at NASA Langley Research Center as part of the Benchmark Models Program and was developed to perform wind-tunnel experiments to obtain benchmark quality data to validate computational fluid dynamics and computational aeroelasticity codes, to verify the accuracy of current aeroservoelasticity design and analysis tools, and to provide an active controls testbed for evaluating new and innovative control algorithms for flutter suppression and gust load alleviation. The BACT system has been especially valuable as a control system testbed.

  5. Benchmark Simulation Model No 2: finalisation of plant layout and default control strategy.

    PubMed

    Nopens, I; Benedetti, L; Jeppsson, U; Pons, M-N; Alex, J; Copp, J B; Gernaey, K V; Rosen, C; Steyer, J-P; Vanrolleghem, P A

    2010-01-01

    The COST/IWA Benchmark Simulation Model No 1 (BSM1) has been available for almost a decade. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the research work related to the benchmark simulation models has resulted in more than 300 publications worldwide demonstrates the interest in and need for such tools within the research community. Recent efforts within the IWA Task Group on "Benchmarking of control strategies for WWTPs" have focused on an extension of the benchmark simulation model. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently, includes both pretreatment of wastewater and the processes describing sludge treatment. The motivation for the extension is the increasing interest and need to operate and control wastewater treatment systems not only at an individual process level but also on a plant-wide basis. To facilitate the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows for long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In this paper, the finalised plant layout is summarised and, as was done for BSM1, a default control strategy is proposed. A demonstration of how BSM2 can be used to evaluate control strategies is also given.
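
    A default control strategy in this family of benchmarks typically centres on dissolved-oxygen (DO) control of an aerated reactor. The sketch below shows the general idea with a PI controller manipulating the oxygen transfer coefficient KLa on a one-tank oxygen balance; all parameter values are invented for illustration and are not the BSM2 defaults.

```python
sp = 2.0                # DO setpoint (g O2/m3)
kp, ki = 5.0, 20.0      # PI gains (hypothetical tuning)
so, xi = 0.5, 0.0       # initial DO concentration and integrator state
dt = 1e-3               # time step (d)

for _ in range(2000):   # simulate 2 days
    e = sp - so
    kla = kp * e + xi                        # PI control law
    kla_sat = min(max(kla, 0.0), 240.0)      # actuator limits on KLa (1/d)
    if kla == kla_sat:                       # integrate only when unsaturated
        xi += ki * e * dt                    # (basic anti-windup)
    # One-tank oxygen mass balance: transfer toward saturation (8 g/m3)
    # minus a constant oxygen uptake rate (60 g/m3/d).
    so += (kla_sat * (8.0 - so) - 60.0) * dt
```

    After the transient, the DO settles at the setpoint and KLa settles near the steady-state value of 10/d implied by the assumed uptake rate, since 10 × (8 − 2) = 60.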

  6. WWTP dynamic disturbance modelling--an essential module for long-term benchmarking development.

    PubMed

    Gernaey, K V; Rosen, C; Jeppsson, U

    2006-01-01

    Intensive use of the benchmark simulation model No. 1 (BSM1), a protocol for objective comparison of the effectiveness of control strategies in biological nitrogen removal activated sludge plants, has also revealed a number of limitations. Preliminary definitions of the long-term benchmark simulation model No. 1 (BSM1_LT) and the benchmark simulation model No. 2 (BSM2) have been made to extend BSM1 for evaluation of process monitoring methods and plant-wide control strategies, respectively. Influent-related disturbances for BSM1_LT/BSM2 are to be generated with a model, and this paper provides a general overview of the modelling methods used. Typical influent dynamic phenomena generated with the BSM1_LT/BSM2 influent disturbance model, including diurnal, weekend, seasonal and holiday effects, as well as rainfall, are illustrated with simulation results. As a result of the work described in this paper, a proposed influent model/file has been released to the benchmark developers for evaluation purposes. Pending this evaluation, a final BSM1_LT/BSM2 influent disturbance model definition is foreseen. Preliminary simulations with dynamic influent data generated by the influent disturbance model indicate that default BSM1 activated sludge plant control strategies will need extensions for BSM1_LT/BSM2 to efficiently handle 1 year of influent dynamics.

  7. A determination of the external forces required to move the benchmark active controls testing model in pure plunge and pure pitch

    NASA Technical Reports Server (NTRS)

    D'Cruz, Jonathan

    1993-01-01

    In view of the strong need for a well-documented set of experimental data which is suitable for the validation and/or calibration of modern Computational Fluid Dynamics codes, the Benchmark Models Program was initiated by the Structural Dynamics Division of the NASA Langley Research Center. One of the models in the program, the Benchmark Active Controls Testing Model, consists of a rigid wing of rectangular planform with a NACA 0012 profile and three control surfaces (a trailing-edge control surface, a lower-surface spoiler, and an upper-surface spoiler). The model is affixed to a flexible mount system which allows only plunging and/or pitching motion. An approximate analytical determination of the forces required to move this model, with its control surfaces fixed, in pure plunge and pure pitch at a number of test conditions is included. This provides a good indication of the type of actuator system required to generate the aerodynamic data resulting from pure plunging and pure pitching motion, in which much interest was expressed. The analysis makes use of previously obtained numerical results.

  8. Modeling the Benchmark Active Control Technology Wind-Tunnel Model for Active Control Design Applications

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1998-01-01

    This report describes the formulation of a model of the dynamic behavior of the Benchmark Active Controls Technology (BACT) wind tunnel model for active control design and analysis applications. The model is formed by combining the equations of motion for the BACT wind tunnel model with actuator models and a model of wind tunnel turbulence. The primary focus of this report is the development of the equations of motion from first principles by using Lagrange's equations and the principle of virtual work. A numerical form of the model is generated by making use of parameters obtained from both experiment and analysis. Comparisons between experimental and analytical data obtained from the numerical model show excellent agreement and suggest that simple coefficient-based aerodynamics are sufficient to accurately characterize the aeroelastic response of the BACT wind tunnel model. The equations of motion developed herein have been used to aid in the design and analysis of a number of flutter suppression controllers that have been successfully implemented.
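
    The coefficient-based aeroelastic modeling described above can be illustrated with a minimal two-degree-of-freedom typical section (plunge and pitch). This is a hedged sketch only: the parameter values and the steady aerodynamic model are invented for illustration, not the BACT equations, which also include unsteady effects, actuator dynamics, and tunnel turbulence.

```python
import numpy as np

# Illustrative (non-BACT) nondimensional typical-section parameters.
M = np.array([[1.0, 0.25],   # plunge mass and static unbalance
              [0.25, 0.5]])  # static unbalance and pitch inertia
K = np.array([[0.4, 0.0],    # plunge stiffness
              [0.0, 1.0]])   # pitch stiffness

def aero_stiffness(q):
    """Steady coefficient-based aerodynamics: lift proportional to q*CLa*alpha
    acts in plunge, and an offset e of the aerodynamic centre ahead of the
    elastic axis produces a destabilizing nose-up pitching moment."""
    cla, e = 2.0 * np.pi, 0.1
    return q * cla * np.array([[0.0, -1.0],
                               [0.0,  e]])

def max_growth_rate(q):
    """Largest real part of the aeroelastic eigenvalues at dynamic pressure q.
    State vector is [h, alpha, h_dot, alpha_dot]."""
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [np.linalg.solve(M, aero_stiffness(q) - K), np.zeros((2, 2))]])
    return np.linalg.eigvals(A).real.max()
```

    Sweeping q upward makes the two mode frequencies approach each other until a root crosses into the right half-plane; with these illustrative numbers the system is neutrally stable at q = 0 and clearly unstable by q = 1.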

  9. Benchmark simulation Model no 2 in Matlab-simulink: towards plant-wide WWTP control strategy evaluation.

    PubMed

    Vrecko, D; Gernaey, K V; Rosen, C; Jeppsson, U

    2006-01-01

    In this paper, implementation of the Benchmark Simulation Model No 2 (BSM2) within Matlab-Simulink is presented. The BSM2 is developed for plant-wide WWTP control strategy evaluation on a long-term basis. It consists of a pre-treatment process, an activated sludge process and sludge treatment processes. Extended evaluation criteria are proposed for plant-wide control strategy assessment. Default open-loop and closed-loop strategies are also proposed to be used as references with which to compare other control strategies. Simulations indicate that the BSM2 is an appropriate tool for plant-wide control strategy evaluation.

  10. Development of risk-based nanomaterial groups for occupational exposure control

    NASA Astrophysics Data System (ADS)

    Kuempel, E. D.; Castranova, V.; Geraci, C. L.; Schulte, P. A.

    2012-09-01

    Given the almost limitless variety of nanomaterials, it will be virtually impossible to assess the possible occupational health hazard of each nanomaterial individually. The development of science-based hazard and risk categories for nanomaterials is needed for decision-making about exposure control practices in the workplace. A possible strategy would be to select representative (benchmark) materials from various mode of action (MOA) classes, evaluate the hazard and develop risk estimates, and then apply a systematic comparison of new nanomaterials with the benchmark materials in the same MOA class. Poorly soluble particles are used here as an example to illustrate quantitative risk assessment methods for possible benchmark particles and occupational exposure control groups, given mode of action and relative toxicity. Linking such benchmark particles to specific exposure control bands would facilitate the translation of health hazard and quantitative risk information to the development of effective exposure control practices in the workplace. A key challenge is obtaining sufficient dose-response data, based on standard testing, to systematically evaluate the nanomaterials' physical-chemical factors influencing their biological activity. Categorization processes involve both science-based analyses and default assumptions in the absence of substance-specific information. Utilizing data and information from related materials may facilitate initial determinations of exposure control systems for nanomaterials.

  11. All inclusive benchmarking.

    PubMed

    Ellis, Judith

    2006-07-01

    The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement, and to promote its acceptance as an integral and effective part of benchmarking activity in health services. The Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being used effectively by some frontline staff. However, use is inconsistent, with the value of the tool kit, or the support that clinical practice benchmarking requires to be effective, not always recognized or provided by National Health Service managers, who are absorbed in the use of quantitative benchmarking approaches and the measurability of comparative performance data. This review of the published benchmarking literature was conducted through an ever-narrowing search strategy, commencing with benchmarking within the quality improvement literature, moving through to benchmarking activity in health services, and considering not only published examples of benchmarking approaches and models but also web-based benchmarking data. This supported identification of how benchmarking approaches have developed and been used, remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative, and specifically performance, benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and applied to the health service (Bullivant 1998). The literature is also mainly descriptive in its support of the effectiveness of benchmarking activity, and although this does not seem to have restricted its popularity in quantitative activity, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks mean that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach which needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.

  12. Transonic Flutter Suppression Control Law Design, Analysis and Wind-Tunnel Results

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1999-01-01

    The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on steady-state differential game theory, is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws, when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.

  13. Benchmark simulation model no 2: general protocol and exploratory case studies.

    PubMed

    Jeppsson, U; Pons, M-N; Nopens, I; Alex, J; Copp, J B; Gernaey, K V; Rosen, C; Steyer, J-P; Vanrolleghem, P A

    2007-01-01

    Over a decade ago, the concept of objectively evaluating the performance of control strategies by simulating them using a standard model implementation was introduced for activated sludge wastewater treatment plants. The resulting Benchmark Simulation Model No 1 (BSM1) has been the basis for a significant new development that is reported on here: Rather than only evaluating control strategies at the level of the activated sludge unit (bioreactors and secondary clarifier) the new BSM2 now allows the evaluation of control strategies at the level of the whole plant, including primary clarifier and sludge treatment with anaerobic sludge digestion. In this contribution, the decisions that have been made over the past three years regarding the models used within the BSM2 are presented and argued, with particular emphasis on the ADM1 description of the digester, the interfaces between activated sludge and digester models, the included temperature dependencies and the reject water storage. BSM2-implementations are now available in a wide range of simulation platforms and a ring test has verified their proper implementation, consistent with the BSM2 definition. This guarantees that users can focus on the control strategy evaluation rather than on modelling issues. Finally, for illustration, twelve simple operational strategies have been implemented in BSM2 and their performance evaluated. Results show that it is an interesting control engineering challenge to further improve the performance of the BSM2 plant (which is the whole idea behind benchmarking) and that integrated control (i.e. acting at different places in the whole plant) is certainly worthwhile to achieve overall improvement.

  14. Modeling the Benchmark Active Control Technology Wind-Tunnel Model for Application to Flutter Suppression

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1996-01-01

    This paper describes the formulation of a model of the dynamic behavior of the Benchmark Active Controls Technology (BACT) wind-tunnel model for application to design and analysis of flutter suppression controllers. The model is formed by combining the equations of motion for the BACT wind-tunnel model with actuator models and a model of wind-tunnel turbulence. The primary focus of this paper is the development of the equations of motion from first principles using Lagrange's equations and the principle of virtual work. A numerical form of the model is generated using values for parameters obtained from both experiment and analysis. A unique aspect of the BACT wind-tunnel model is that it has upper- and lower-surface spoilers for active control. Comparisons with experimental frequency responses and other data show excellent agreement and suggest that simple coefficient-based aerodynamics are sufficient to accurately characterize the aeroelastic response of the BACT wind-tunnel model. The equations of motion developed herein have been used to assist the design and analysis of a number of flutter suppression controllers that have been successfully implemented.

  15. A Study of Fixed-Order Mixed Norm Designs for a Benchmark Problem in Structural Control

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Calise, Anthony J.; Hsu, C. C.

    1998-01-01

    This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full-order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of the H2 design to unmodelled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full-order compensators that are robust to both unmodelled dynamics and parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.

  16. Transonic Flutter Suppression Control Law Design, Analysis and Wind Tunnel Results

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1999-01-01

    The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on steady-state differential game theory, is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws, when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.

  18. Transonic Flutter Suppression Control Law Design Using Classical and Optimal Techniques with Wind-Tunnel Results

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1999-01-01

    The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on the steady state differential game theory is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.

  19. Parameter Estimation of Actuators for Benchmark Active Control Technology (BACT) Wind Tunnel Model with Analysis of Wear and Aerodynamic Loading Effects

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Fung, Jimmy

    1998-01-01

    This report describes the development of transfer function models for the trailing-edge and upper and lower spoiler actuators of the Benchmark Active Control Technology (BACT) wind tunnel model for application to control system analysis and design. A simple nonlinear least-squares parameter estimation approach is applied to determine transfer function parameters from frequency response data. Unconstrained quasi-Newton minimization of weighted frequency response error was employed to estimate the transfer function parameters. An analysis of the behavior of the actuators over time to assess the effects of wear and aerodynamic load by using the transfer function models is also presented. The frequency responses indicate consistent actuator behavior throughout the wind tunnel test and only slight degradation in effectiveness due to aerodynamic hinge loading. The resulting actuator models have been used in design, analysis, and simulation of controllers for the BACT to successfully suppress flutter over a wide range of conditions.
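
    The estimation technique described above, nonlinear least squares on frequency-response data with unconstrained quasi-Newton minimization, can be sketched on a synthetic second-order model. This is an assumption-laden illustration: the model structure, weights, and data below are invented, not the report's actual BACT actuator models or wind-tunnel measurements.

```python
import numpy as np
from scipy.optimize import minimize

def h_model(params, w):
    """Second-order transfer function H(jw) = wn^2 / ((jw)^2 + 2*zeta*wn*(jw) + wn^2)."""
    wn, zeta = params
    jw = 1j * w
    return wn**2 / (jw**2 + 2.0 * zeta * wn * jw + wn**2)

def weighted_error(params, w, h_meas, weight):
    """Weighted sum of squared complex frequency-response errors."""
    return np.sum(weight * np.abs(h_model(params, w) - h_meas) ** 2)

# Synthetic "measured" frequency response from known parameters
# (wn = 120 rad/s, zeta = 0.6) standing in for wind-tunnel data.
w = np.linspace(1.0, 400.0, 200)
h_meas = h_model((120.0, 0.6), w)
weight = np.ones_like(w)

# Unconstrained quasi-Newton (BFGS) minimization of the weighted
# frequency-response error, starting from a rough initial guess.
res = minimize(weighted_error, x0=(100.0, 0.5),
               args=(w, h_meas, weight), method="BFGS")
wn_hat, zeta_hat = res.x
```

    With noise-free synthetic data the fit recovers the generating parameters; on real frequency-response data the weights would typically emphasize the frequency band where the model must be accurate.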

  20. Medical school benchmarking - from tools to programmes.

    PubMed

    Wilkinson, Tim J; Hudson, Judith N; McColl, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. This paper applies a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that posed five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  1. Multirate flutter suppression system design for the Benchmark Active Controls Technology Wing

    NASA Technical Reports Server (NTRS)

    Berg, Martin C.; Mason, Gregory S.

    1994-01-01

    To study the effectiveness of various control system design methodologies, the NASA Langley Research Center initiated the Benchmark Active Controls Project. In this project, the various methodologies will be applied to design a flutter suppression system for the Benchmark Active Controls Technology (BACT) Wing (also called the PAPA wing). Eventually, the designs will be implemented in hardware and tested on the BACT wing in a wind tunnel. This report describes a project at the University of Washington to design a multirate flutter suppression system for the BACT wing. The objective of the project was twofold: first, to develop a methodology for designing robust multirate compensators, and second, to demonstrate the methodology by applying it to the design of a multirate flutter suppression system for the BACT wing. The contributions of this project are (1) development of an algorithm for synthesizing robust low-order multirate control laws (the algorithm is capable of synthesizing a single compensator which stabilizes both the nominal plant and multiple plant perturbations); (2) development of a multirate design methodology, and supporting software, for modeling, analyzing and synthesizing multirate compensators; and (3) design of a multirate flutter suppression system for NASA's BACT wing which satisfies the specified design criteria. This report describes each of these contributions in detail. Section 2.0 discusses our design methodology. Section 3.0 details the results of our multirate flutter suppression system design for the BACT wing. Finally, Section 4.0 presents our conclusions and suggestions for future research. The body of the report focuses primarily on the results. The associated theoretical background appears in the three technical papers that are included as Attachments 1-3. Attachment 4 is a user's manual for the software that is key to our design methodology.

  2. Evaluation of control strategies using an oxidation ditch benchmark.

    PubMed

    Abusam, A; Keesman, K J; Spanjers, H; van Straten, G; Meinema, K

    2002-01-01

    This paper presents validation and implementation results of a benchmark developed for a specific full-scale oxidation ditch wastewater treatment plant. A benchmark is a standard simulation procedure that can be used as a tool in evaluating various control strategies proposed for wastewater treatment plants. It is based on model and performance criteria development. Testing of this benchmark, by comparing benchmark predictions to real measurements of the electrical energy consumptions and amounts of disposed sludge for a specific oxidation ditch WWTP, has shown that it can (reasonably) be used for evaluating the performance of this WWTP. Subsequently, the validated benchmark was then used in evaluating some basic and advanced control strategies. Some of the interesting results obtained are the following: (i) influent flow splitting ratio, between the first and the fourth aerated compartments of the ditch, has no significant effect on the TN concentrations in the effluent, and (ii) for evaluation of long-term control strategies, future benchmarks need to be able to assess settlers' performance.

  3. Active control of silver nanoparticles spacing using dielectrophoresis for surface-enhanced Raman scattering.

    PubMed

    Chrimes, Adam F; Khoshmanesh, Khashayar; Stoddart, Paul R; Kayani, Aminuddin A; Mitchell, Arnan; Daima, Hemant; Bansal, Vipul; Kalantar-zadeh, Kourosh

    2012-05-01

    We demonstrate an active microfluidic platform that integrates dielectrophoresis for the control of silver nanoparticle spacing as the particles flow in a liquid channel. By carefully controlling the nanoparticle spacing, we can effectively increase the surface-enhanced Raman scattering (SERS) signal intensity by augmenting the number of SERS-active hot-spots, while avoiding irreversible aggregation of the particles. The system is benchmarked using dipicolinate (2,6-pyridinedicarboxylic acid) (DPA), which is a biomarker of Bacillus anthracis. The validity of the results is discussed using several complementary characterization scenarios.

  4. Test Cases for the Benchmark Active Controls: Spoiler and Control Surface Oscillations and Flutter

    NASA Technical Reports Server (NTRS)

    Bennett, Robert M.; Scott, Robert C.; Wieseman, Carol D.

    2000-01-01

    As a portion of the Benchmark Models Program at NASA Langley, a simple generic model was developed for active controls research and was called BACT for Benchmark Active Controls Technology model. This model was based on the previously-tested Benchmark Models rectangular wing with the NACA 0012 airfoil section that was mounted on the Pitch and Plunge Apparatus (PAPA) for flutter testing. The BACT model had an upper surface spoiler, a lower surface spoiler, and a trailing edge control surface for use in flutter suppression and dynamic response excitation. Previous experience with flutter suppression indicated a need for measured control surface aerodynamics for accurate control law design. Three different types of flutter instability boundaries had also been determined for the NACA 0012/PAPA model: a classical flutter boundary, a transonic stall flutter boundary at angle of attack, and a plunge instability near M = 0.9. Therefore, an extensive set of steady and control surface oscillation data was generated spanning the range of the three types of instabilities. This information was subsequently used to design control laws to suppress each flutter instability. There have been three tests of the BACT model. The objective of the first test, TDT Test 485, was to generate a data set of steady and unsteady control surface effectiveness data, and to determine the open loop dynamic characteristics of the control systems including the actuators. Unsteady pressures, loads, and transfer functions were measured. The other two tests, TDT Test 502 and TDT Test 518, were primarily oriented towards active controls research, but some data supplementary to the first test were obtained. Dynamic response of the flexible system to control surface excitation and open loop flutter characteristics were determined during Test 502. Loads were not measured during the last two tests. During these tests, a database of over 3000 data sets was obtained.
A reasonably extensive subset of the data sets from the first two tests has been chosen for Test Cases for computational comparisons concentrating on static conditions and cases with harmonically oscillating control surfaces. Several flutter Test Cases from both tests have also been included. Some aerodynamic comparisons with the BACT data have been made using computational fluid dynamics codes at the Navier-Stokes level (and in the accompanying chapter SC). Some mechanical and active control studies have been presented. In this report several Test Cases are selected to illustrate trends for a variety of different conditions with emphasis on transonic flow effects. Cases for static angles of attack, static trailing-edge and upper-surface spoiler deflections are included for a range of conditions near those for the oscillation cases. Cases for trailing-edge control and upper-surface spoiler oscillations for a range of Mach numbers, angles of attack, and static control deflections are included. Cases for all three types of flutter instability are selected. In addition, some cases are included for dynamic response measurements during forced oscillations of the controls on the flexible mount. An overview of the model and tests is given, and the standard formulary for these data is listed. Some sample data and sample results of calculations are presented. Only the static pressures and the first harmonic real and imaginary parts of the pressures are included in the data for the Test Cases, but digitized time histories have been archived. The data for the Test Cases are also available as separate electronic files.

  5. [The OPTIMISE study (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment). Results for Luxembourg].

    PubMed

    Michel, G

    2012-01-01

    The OPTIMISE study (NCT00681850) has been run in six European countries, including Luxembourg, to prospectively assess the effect of benchmarking on the quality of primary care in patients with type 2 diabetes, using major modifiable vascular risk factors as critical quality indicators. Primary care centers treating type 2 diabetic patients were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). The primary endpoint was the percentage of patients in the benchmarking group achieving pre-set targets of the critical quality indicators: glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein (LDL) cholesterol after 12 months of follow-up. In Luxembourg, more patients in the benchmarking group achieved the target for SBP (40.2% vs. 20%) and for LDL-cholesterol (50.4% vs. 44.2%). In the benchmarking group, 12.9% of patients met all three targets, compared with 8.3% in the control group. In this randomized, controlled study, benchmarking was shown to be an effective tool for improving critical quality indicator targets, which are the principal modifiable vascular risk factors in type 2 diabetes.

  6. Funnel plot control limits to identify poorly performing healthcare providers when there is uncertainty in the value of the benchmark.

    PubMed

    Manktelow, Bradley N; Seaton, Sarah E; Evans, T Alun

    2016-12-01

    There is an increasing use of statistical methods, such as funnel plots, to identify poorly performing healthcare providers. Funnel plots comprise the construction of control limits around a benchmark, and providers with outcomes falling outside the limits are investigated as potential outliers. The benchmark is usually estimated from observed data, but uncertainty in this estimate is usually ignored when constructing control limits. In this paper, the use of funnel plots in the presence of uncertainty in the value of the benchmark is reviewed for outcomes from a Binomial distribution. Two methods to derive the control limits are shown: (i) prediction intervals; (ii) tolerance intervals. Tolerance intervals formally include the uncertainty in the value of the benchmark while prediction intervals do not. The probability properties of 95% control limits derived using each method were investigated through hypothesised scenarios. Neither prediction intervals nor tolerance intervals produce funnel plot control limits that satisfy the nominal probability characteristics when there is uncertainty in the value of the benchmark. This is not necessarily to say that funnel plots have no role to play in healthcare, but that without the development of intervals satisfying the nominal probability characteristics they must be interpreted with care. © The Author(s) 2014.
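    The conventional limits that the paper critiques, i.e. control limits constructed as if the estimated benchmark were known exactly, can be sketched as follows (a minimal illustration using the normal approximation to the Binomial; the function name and the z = 1.96 constant are assumptions for this sketch, not taken from the paper):

    ```python
    import math

    def funnel_limits(p0, n, z=1.96):
        """Approximate 95% funnel plot control limits around a benchmark
        proportion p0 for a provider with n cases. Uncertainty in p0
        itself is ignored, which is exactly the practice the paper
        questions."""
        se = math.sqrt(p0 * (1.0 - p0) / n)
        return max(0.0, p0 - z * se), min(1.0, p0 + z * se)

    # The limits narrow as provider volume grows, giving the funnel shape.
    small = funnel_limits(0.10, 50)     # wide limits for a small provider
    large = funnel_limits(0.10, 5000)   # tight limits for a large provider
    ```

    Providers whose observed proportion falls outside their volume-specific limits would be flagged for investigation; the tolerance-interval approach discussed in the paper would additionally widen these limits to reflect the uncertainty in p0.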

  7. Decoys Selection in Benchmarking Datasets: Overview and Perspectives

    PubMed Central

    Réau, Manon; Langenfeld, Florent; Zagury, Jean-François; Lagarde, Nathalie; Montes, Matthieu

    2018-01-01

    Virtual Screening (VS) is designed to prospectively help identify potential hits, i.e., compounds capable of interacting with a given target and potentially modulating its activity, out of large compound collections. Among the variety of methodologies, it is crucial to select the protocol that is the most adapted to the query/target system under study and that yields the most reliable output. To this aim, the performance of VS methods is commonly evaluated and compared by computing their ability to retrieve active compounds in benchmarking datasets. The benchmarking datasets contain a subset of known active compounds together with a subset of decoys, i.e., assumed non-active molecules. The composition of both the active and the decoy compounds subsets is critical to limit the biases in the evaluation of the VS methods. In this review, we focus on the selection of decoy compounds, which has changed considerably over the years, from randomly selected compounds to highly customized or experimentally validated negative compounds. We first outline the evolution of decoy selection in benchmarking databases, as well as current benchmarking databases that tend to minimize the introduction of biases, and secondly, we propose recommendations for the selection and the design of benchmarking datasets. PMID:29416509

  8. Benchmarking and validation activities within JEFF project

    NASA Astrophysics Data System (ADS)

    Cabellos, O.; Alvarez-Velarde, F.; Angelone, M.; Diez, C. J.; Dyrda, J.; Fiorito, L.; Fischer, U.; Fleming, M.; Haeck, W.; Hill, I.; Ichou, R.; Kim, D. H.; Klix, A.; Kodeli, I.; Leconte, P.; Michel-Sendis, F.; Nunnenmann, E.; Pecchia, M.; Peneliau, Y.; Plompen, A.; Rochman, D.; Romojaro, P.; Stankovskiy, A.; Sublet, J. Ch.; Tamagno, P.; Marck, S. van der

    2017-09-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  9. Fixed-Order Mixed Norm Designs for Building Vibration Control

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Calise, Anthony J.

    2000-01-01

    This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodeled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full order compensators that are robust to both unmodeled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.

  10. NOVEL OXIDANT FOR ELEMENTAL MERCURY CONTROL FROM FLUE GAS

    EPA Science Inventory

    A novel economical oxidant has been developed for elemental mercury (Hg(0)) removal from coal-fired boilers. The oxidant was rigorously tested in a lab-scale fixed-bed system with Norit America's FGD activated carbon (DOE's benchmark sorbent) in a typical PRB subbituminous/l...

  11. Benchmarking is associated with improved quality of care in type 2 diabetes: the OPTIMISE randomized, controlled trial.

    PubMed

    Hermans, Michel P; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-11-01

    To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. Primary end point of HbA1c target was achieved in the benchmarking group by 58.9 vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% patients met the SBP target (P < 0.001); 54.3 vs. 49.7% met the LDL cholesterol target (P = 0.006). Percentages of patients meeting all three targets increased during the study in both groups, with a statistically significant increase observed in the benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%; P < 0.001). In this prospective, randomized, controlled study, benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patient cardiovascular residual risk profile.
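    The between-group comparisons reported above (e.g. 40.0 vs. 30.1% for SBP, P < 0.001) are differences in proportions; a pooled two-proportion z-test of the following form could yield such P values (a hedged sketch only, since the abstract does not state which test was used; the function name and example counts are illustrative):

    ```python
    import math

    def two_proportion_z(x1, n1, x2, n2):
        """z statistic for H0: p1 == p2, using the pooled standard error.
        x1/n1 and x2/n2 are target-achievement counts and group sizes."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)
        se = math.sqrt(pooled * (1.0 - pooled) * (1.0 / n1 + 1.0 / n2))
        return (p1 - p2) / se

    # |z| > 1.96 corresponds to two-sided P < 0.05.
    z = two_proportion_z(400, 1000, 300, 1000)  # hypothetical counts
    ```

    With group sizes in the thousands, as in OPTIMISE, even differences of a few percentage points can cross the significance threshold.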

  12. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    ERIC Educational Resources Information Center

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  13. NOVEL ECONOMICAL HG(0) OXIDATION REAGENT FOR MERCURY EMISSIONS CONTROL FROM COAL-FIRED BOILERS

    EPA Science Inventory

    The authors have developed a novel economical additive for elemental mercury (Hg0) removal from coal-fired boilers. The oxidation reagent was rigorously tested in a lab-scale fixed-bed column with Norit America's FGD activated carbon (DOE's benchmark sorbent) in a typical PRB...

  14. A New Performance Improvement Model: Adding Benchmarking to the Analysis of Performance Indicator Data.

    PubMed

    Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu

    2016-01-01

    A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within statistically accepted limits, that is, show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need to have a more predictable process prompted the need to control variation by an action plan. The action plan was successful, as noted by the shift in the 2014 data, compared to the historical average, and, in addition, the variation was reduced. The model is subject to limitations: for example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
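    Steps 2 and 3 of the model (monitor variation; benchmark only when the process shows no special-cause variation) can be sketched with a simple p-chart check. The 3-sigma rule, function name, and example figures below are illustrative assumptions, not taken from the paper:

    ```python
    import math

    def ready_to_benchmark(rates, sizes):
        """Return True only if every periodic rate lies within 3-sigma
        p-chart limits around the pooled mean, i.e. the indicator shows
        only random variation and is ready for benchmark comparison."""
        events = sum(r * n for r, n in zip(rates, sizes))
        pbar = events / sum(sizes)
        for r, n in zip(rates, sizes):
            sigma = math.sqrt(pbar * (1.0 - pbar) / n)
            if abs(r - pbar) > 3.0 * sigma:
                return False  # special cause present: control it first
        return True

    # Stable monthly infection rates pass; a spike triggers Step 3's
    # "control variation with an action plan" branch instead.
    stable = ready_to_benchmark([0.05, 0.06, 0.05, 0.055], [200] * 4)
    spiked = ready_to_benchmark([0.05, 0.05, 0.25, 0.05], [200] * 4)
    ```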

  15. Effectiveness of Social Marketing Interventions to Promote Physical Activity Among Adults: A Systematic Review.

    PubMed

    Xia, Yuan; Deshpande, Sameer; Bonates, Tiberius

    2016-11-01

    Social marketing managers promote desired behaviors to an audience by making them tangible in the form of environmental opportunities to enhance benefits and reduce barriers. This study proposed "benchmarks," modified from those found in the past literature, that would match important concepts of the social marketing framework and the inclusion of which would ensure behavior change effectiveness. In addition, we analyzed behavior change interventions on a "social marketing continuum" to assess whether the number of benchmarks and the role of specific benchmarks influence the effectiveness of physical activity promotion efforts. A systematic review of social marketing interventions available in academic studies published between 1997 and 2013 revealed 173 conditions in 92 interventions. Findings based on χ², Mallows' Cp, and Logical Analysis of Data tests revealed that the presence of more benchmarks in interventions increased the likelihood of success in promoting physical activity. The presence of more than 3 benchmarks improved the success of the interventions; specifically, all interventions were successful when more than 7.5 benchmarks were present. Further, primary formative research, core product, actual product, augmented product, promotion, and behavioral competition all had a significant influence on the effectiveness of interventions. Social marketing is an effective approach in promoting physical activity among adults when a substantial number of benchmarks are used and when managers understand the audience, make the desired behavior tangible, and promote the desired behavior persuasively.

  16. Benchmarking in national health service procurement in Scotland.

    PubMed

    Walker, Scott; Masson, Ron; Telford, Ronnie; White, David

    2007-11-01

    The paper reports the results of a study on benchmarking activities undertaken by the procurement organization within the National Health Service (NHS) in Scotland, namely National Procurement (previously Scottish Healthcare Supplies Contracts Branch). NHS performance is of course politically important, and benchmarking is increasingly seen as a means to improve performance, so the study was carried out to determine if the current benchmarking approaches could be enhanced. A review of the benchmarking activities used by the private sector, local government and NHS organizations was carried out to establish a framework of the motivations, benefits, problems and costs associated with benchmarking. This framework was used to carry out the research through case studies and a questionnaire survey of NHS procurement organizations both in Scotland and other parts of the UK. Nine of the 16 Scottish Health Boards surveyed reported carrying out benchmarking during the last three years. The findings of the research were that there were similarities in approaches between local government and NHS Scotland Health, but differences between NHS Scotland and other UK NHS procurement organizations. Benefits were seen as significant and it was recommended that National Procurement should pursue the formation of a benchmarking group with members drawn from NHS Scotland and external benchmarking bodies to establish measures to be used in benchmarking across the whole of NHS Scotland.

  17. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    ERIC Educational Resources Information Center

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  18. A Causal-Comparative Study of the Affects of Benchmark Assessments on Middle Grades Science Achievement Scores

    ERIC Educational Resources Information Center

    Galloway, Melissa Ritchie

    2016-01-01

    The purpose of this causal comparative study was to test the theory of assessment that relates benchmark assessments to the Georgia middle grades science Criterion Referenced Competency Test (CRCT) percentages, controlling for schools who do not administer benchmark assessments versus schools who do administer benchmark assessments for all middle…

  19. Demonstration of a tool for automatic learning and re-use of knowledge in the activated sludge process.

    PubMed

    Comas, J; Rodríguez-Roda, I; Poch, M; Gernaey, K V; Rosen, C; Jeppsson, U

    2006-01-01

    Wastewater treatment plant operators encounter complex operational problems related to the activated sludge process and usually respond to these by applying their own intuition and by taking advantage of what they have learnt from past experiences of similar problems. However, previous process experiences are not easy to integrate in numerical control, and new tools must be developed to enable re-use of plant operating experience. The aim of this paper is to investigate the usefulness of a case-based reasoning (CBR) approach to apply learning and re-use of knowledge gained during past incidents to confront actual complex problems through the IWA/COST Benchmark protocol. A case study shows that the proposed CBR system achieves a significant improvement of the benchmark plant performance when facing a high-flow event disturbance.
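    The retrieve step of such a case-based reasoning cycle can be sketched as a nearest-neighbour lookup over stored operating situations (a minimal illustration only; the attribute names, actions, and distance measure are assumptions for this sketch, not the system described in the paper):

    ```python
    def retrieve(case_base, query, k=1):
        """Return the k stored cases whose numeric process descriptors
        are closest (Euclidean distance) to the current situation.
        A real system would normalize attributes so that no single
        descriptor (e.g. inflow) dominates the distance."""
        def dist(case):
            return sum((case["features"][f] - query[f]) ** 2 for f in query) ** 0.5
        return sorted(case_base, key=dist)[:k]

    # Hypothetical past incidents and the operator response that worked.
    case_base = [
        {"features": {"inflow": 20000, "nh4": 2.1}, "action": "increase aeration"},
        {"features": {"inflow": 35000, "nh4": 1.5}, "action": "step-feed to tank 2"},
    ]

    # A new high-flow event retrieves the most similar past case, whose
    # stored action is then reused (and later revised and retained).
    best = retrieve(case_base, {"inflow": 33000, "nh4": 1.6})[0]
    ```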

  20. A high-fidelity airbus benchmark for system fault detection and isolation and flight control law clearance

    NASA Astrophysics Data System (ADS)

    Goupil, Ph.; Puyou, G.

    2013-12-01

    This paper presents a high-fidelity generic twin engine civil aircraft model developed by Airbus for advanced flight control system research. The main features of this benchmark are described to make the reader aware of the model complexity and representativeness. It is a complete representation including the nonlinear rigid-body aircraft model with a full set of control surfaces, actuator models, sensor models, flight control laws (FCL), and pilot inputs. Two applications of this benchmark in the framework of European projects are presented: FCL clearance using optimization and advanced fault detection and diagnosis (FDD).

  1. Impact of quality circles for improvement of asthma care: results of a randomized controlled trial

    PubMed Central

    Schneider, Antonius; Wensing, Michel; Biessecker, Kathrin; Quinzler, Renate; Kaufmann-Kolle, Petra; Szecsenyi, Joachim

    2008-01-01

    Rationale and aims: Quality circles (QCs) are well established as a means of aiding doctors. New quality improvement strategies include benchmarking activities. The aim of this paper was to evaluate the efficacy of QCs for asthma care working either with general feedback or with an open benchmark. Methods: Twelve QCs, involving 96 general practitioners, were organized in a randomized controlled trial. Six worked with traditional anonymous feedback and six with an open benchmark; both had guided discussion from a trained moderator. Forty-three primary care practices agreed to give out questionnaires to patients to evaluate the efficacy of QCs. Results: A total of 256 patients participated in the survey, of whom 185 (72.3%) responded to the follow-up 1 year later. Use of inhaled steroids at baseline was high (69%) and self-management low (asthma education 27%, individual emergency plan 8%, and peak flow meter at home 21%). Guideline adherence in drug treatment increased (P = 0.19), and asthma steps improved (P = 0.02). Delivery of individual emergency plans increased (P = 0.008), and unscheduled emergency visits decreased (P = 0.064). There was no change in asthma education and peak flow meter usage. High medication guideline adherence was associated with reduced emergency visits (OR 0.24; 95% CI 0.07–0.89). Use of theophylline was associated with hospitalization (OR 7.1; 95% CI 1.5–34.3) and emergency visits (OR 4.9; 95% CI 1.6–14.7). There was no difference between traditional and benchmarking QCs. Conclusions: Quality circles working with individualized feedback are effective at improving asthma care. The trial may have been underpowered to detect specific benchmarking effects. Further research is necessary to evaluate strategies for improving the self-management of asthma patients. PMID:18093108

  2. Semi-active friction damper for buildings subject to seismic excitation

    NASA Astrophysics Data System (ADS)

    Mantilla, Juan S.; Solarte, Alexander; Gomez, Daniel; Marulanda, Johannio; Thomson, Peter

    2016-04-01

    Structural control systems are considered an effective alternative for reducing vibrations in civil structures and are classified according to their energy supply requirement: passive, semi-active, active and hybrid. Commonly used structural control systems in buildings are passive friction dampers, which add energy dissipation through damping mechanisms induced by sliding friction between their surfaces. Semi-Active Variable Friction Dampers (SAVFD) allow the optimum efficiency range of friction dampers to be enhanced by controlling the clamping force in real time. This paper describes the development and performance evaluation of a low-cost SAVFD for the reduction of vibrations of structures subject to earthquakes. The SAVFD and a benchmark structural control test structure were experimentally characterized, and analytical models were developed and updated based on the dynamic characterization. Decentralized control algorithms were implemented and tested on a shaking table. Relative displacements and accelerations of the structure controlled with the SAVFD were 80% less than those of the uncontrolled structure.

  3. Benchmarking Is Associated With Improved Quality of Care in Type 2 Diabetes

    PubMed Central

    Hermans, Michel P.; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-01-01

    OBJECTIVE To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. RESEARCH DESIGN AND METHODS Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. RESULTS Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. Primary end point of HbA1c target was achieved in the benchmarking group by 58.9 vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% patients met the SBP target (P < 0.001); 54.3 vs. 49.7% met the LDL cholesterol target (P = 0.006). Percentages of patients meeting all three targets increased during the study in both groups, with a statistically significant increase observed in the benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%; P < 0.001). CONCLUSIONS In this prospective, randomized, controlled study, benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patient cardiovascular residual risk profile. PMID:23846810

  4. The art and science of using routine outcome measurement in mental health benchmarking.

    PubMed

    McKay, Roderick; Coombs, Tim; Duerden, David

    2014-02-01

    To report and critique the application of routine outcome measurement data when benchmarking Australian mental health services. The experience of the authors as participants and facilitators of benchmarking activities is augmented by a review of the literature regarding mental health benchmarking in Australia. Although the published literature is limited, in practice, routine outcome measures, in particular the Health of the National Outcomes Scales (HoNOS) family of measures, are used in a variety of benchmarking activities. Use in exploring similarities and differences in consumers between services and the outcomes of care are illustrated. This requires the rigour of science in data management and interpretation, supplemented by the art that comes from clinical experience, a desire to reflect on clinical practice and the flexibility to use incomplete data to explore clinical practice. Routine outcome measurement data can be used in a variety of ways to support mental health benchmarking. With the increasing sophistication of information development in mental health, the opportunity to become involved in benchmarking will continue to increase. The techniques used during benchmarking and the insights gathered may prove useful to support reflection on practice by psychiatrists and other senior mental health clinicians.

  5. A benchmark for fault tolerant flight control evaluation

    NASA Astrophysics Data System (ADS)

    Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.

    2013-12-01

    A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.

  6. Effects of benchmarking on the quality of type 2 diabetes care: results of the OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study in Greece

    PubMed Central

    Tsimihodimos, Vasilis; Kostapanos, Michael S.; Moulis, Alexandros; Nikas, Nikos; Elisaf, Moses S.

    2015-01-01

    Objectives: To investigate the effect of benchmarking on the quality of type 2 diabetes (T2DM) care in Greece. Methods: The OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study [ClinicalTrials.gov identifier: NCT00681850] was an international multicenter, prospective cohort study. It included physicians randomized 3:1 to either receive benchmarking for glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein cholesterol (LDL-C) treatment targets (benchmarking group) or not (control group). The proportions of patients achieving the targets of the above-mentioned parameters were compared between groups after 12 months of treatment. Also, the proportions of patients achieving those targets at 12 months were compared with baseline in the benchmarking group. Results: In the Greek region, the OPTIMISE study included 797 adults with T2DM (570 in the benchmarking group). At month 12 the proportion of patients within the predefined targets for SBP and LDL-C was greater in the benchmarking compared with the control group (50.6 versus 35.8%, and 45.3 versus 36.1%, respectively). However, these differences were not statistically significant. No difference between groups was noted in the percentage of patients achieving the predefined target for HbA1c. At month 12 the increase in the percentage of patients achieving all three targets was greater in the benchmarking (5.9–15.0%) than in the control group (2.7–8.1%). In the benchmarking group more patients were on target regarding SBP (50.6% versus 29.8%), LDL-C (45.3% versus 31.3%) and HbA1c (63.8% versus 51.2%) at 12 months compared with baseline (p < 0.001 for all comparisons). Conclusion: Benchmarking may be a promising tool for improving the quality of T2DM care. Nevertheless, target achievement rates of each, and of all three, quality indicators were suboptimal, indicating there are still unmet needs in the management of T2DM. PMID:26445642

  7. Benchmarking Ada tasking on tightly coupled multiprocessor architectures

    NASA Technical Reports Server (NTRS)

    Collard, Philippe; Goforth, Andre; Marquardt, Matthew

    1989-01-01

    The development of benchmarks and performance measures for parallel Ada tasking is reported with emphasis on the macroscopic behavior of the benchmark across a set of load parameters. The application chosen for the study was the NASREM model for telerobot control, relevant to many NASA missions. The results of the study demonstrate the potential of parallel Ada in accomplishing the task of developing a control system for a system such as the Flight Telerobotic Servicer using the NASREM framework.

  8. Unusually High Incidences of Staphylococcus aureus Infection within Studies of Ventilator Associated Pneumonia Prevention Using Topical Antibiotics: Benchmarking the Evidence Base

    PubMed Central

    2018-01-01

    Selective digestive decontamination (SDD, topical antibiotic regimens applied to the respiratory tract) appears effective for preventing ventilator associated pneumonia (VAP) in intensive care unit (ICU) patients. However, potential contextual effects of SDD on Staphylococcus aureus infections in the ICU remain unclear. The S. aureus ventilator associated pneumonia (S. aureus VAP), overall VAP and S. aureus bacteremia incidences within component (control and intervention) groups of 27 SDD studies were benchmarked against 115 observational groups. Component groups from 66 studies of various interventions other than SDD provided additional points of reference. In the 27 SDD study control groups, the mean S. aureus VAP incidence is 9.6% (95% CI 6.9–13.2) versus a benchmark of 4.8% (95% CI 4.2–5.6) derived from the 115 observational groups. In nine SDD study control groups, the mean S. aureus bacteremia incidence is 3.8% (95% CI 2.1–5.7) versus a benchmark of 2.1% (95% CI 1.1–4.1) derived from 10 observational groups. The incidences of S. aureus VAP and S. aureus bacteremia within the control groups of SDD studies are each higher than the literature-derived benchmarks. Paradoxically, within the SDD intervention groups, the incidences of both S. aureus VAP and VAP overall are closer to the benchmarks. PMID:29300363
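    The benchmarking step described above can be illustrated with a minimal sketch: screening a single study group's incidence against a literature-derived benchmark using a Wilson score interval. The counts below are hypothetical, and the published analysis relied on meta-analytic summaries across many groups rather than this simple per-group interval.

```python
from math import sqrt

def wilson_ci(events: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion (events / n)."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def exceeds_benchmark(events: int, n: int, benchmark: float) -> bool:
    """Flag a group whose entire CI lies above the benchmark incidence."""
    lo, _ = wilson_ci(events, n)
    return lo > benchmark

# Hypothetical control group: 12 S. aureus VAP cases among 125 patients,
# screened against a 4.8% literature-derived benchmark.
lo, hi = wilson_ci(12, 125)
print(f"incidence 9.6%, 95% CI {lo:.1%}-{hi:.1%}")
print(exceeds_benchmark(12, 125, 0.048))
```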

  9. Benchmarking in pathology: development of an activity-based costing model.

    PubMed

    Burnett, Leslie; Wilson, Roger; Pfeffer, Sally; Lowry, John

    2012-12-01

    Benchmarking in Pathology (BiP) allows pathology laboratories to determine the unit cost of all laboratory tests and procedures, and also provides organisational productivity indices allowing comparisons of performance with other BiP participants. We describe 14 years of progressive enhancement to a BiP program, including the implementation of 'avoidable costs' as the accounting basis for allocation of costs rather than previous approaches using 'total costs'. A hierarchical tree-structured activity-based costing model distributes 'avoidable costs' attributable to the pathology activities component of a pathology laboratory operation. The hierarchical tree model permits costs to be allocated across multiple laboratory sites and organisational structures. This has enabled benchmarking on a number of levels, including test profiles and non-testing related workload activities. The development of methods for dealing with variable cost inputs, allocation of indirect costs using imputation techniques, panels of tests, and blood-bank record keeping, have been successfully integrated into the costing model. A variety of laboratory management reports are produced, including the 'cost per test' of each pathology 'test' output. Benchmarking comparisons may be undertaken at any and all of the 'cost per test' and 'cost per Benchmarking Complexity Unit' level, 'discipline/department' (sub-specialty) level, or overall laboratory/site and organisational levels. We have completed development of a national BiP program. An activity-based costing methodology based on avoidable costs overcomes many problems of previous benchmarking studies based on total costs. The use of benchmarking complexity adjustment permits correction for varying test-mix and diagnostic complexity between laboratories. Use of iterative communication strategies with program participants can overcome many obstacles and lead to innovations.
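    The hierarchical tree-structured allocation described above can be sketched as a small cost tree. This is a speculative toy, not the BiP program's actual model: the node names, volumes, avoidable costs, and complexity weights are invented, and real allocation (imputation of indirect costs, panels, blood bank) is far more detailed.

```python
from dataclasses import dataclass, field

@dataclass
class CostNode:
    """A node in a hierarchical activity-based costing tree.

    Each node's pool of 'avoidable costs' (plus costs inherited from its
    parent) is distributed to children in proportion to their
    complexity-weighted test volumes; leaves yield a cost per test.
    """
    name: str
    avoidable_cost: float = 0.0          # costs incurred at this node
    volume: int = 0                      # tests performed (leaves only)
    complexity: float = 1.0              # complexity units per test
    children: list["CostNode"] = field(default_factory=list)

    def weighted_volume(self) -> float:
        if not self.children:
            return self.volume * self.complexity
        return sum(c.weighted_volume() for c in self.children)

    def cost_per_test(self, inherited: float = 0.0) -> dict[str, float]:
        """Push inherited + local avoidable costs down to leaf tests."""
        pool = inherited + self.avoidable_cost
        if not self.children:
            return {self.name: pool / self.volume}
        total_wv = self.weighted_volume()
        out: dict[str, float] = {}
        for c in self.children:
            out.update(c.cost_per_test(pool * c.weighted_volume() / total_wv))
        return out

# Hypothetical laboratory: two disciplines, three tests.
lab = CostNode("lab", avoidable_cost=60_000, children=[
    CostNode("chemistry", avoidable_cost=30_000, children=[
        CostNode("glucose", volume=10_000, complexity=1.0),
        CostNode("lipids", volume=5_000, complexity=2.0),
    ]),
    CostNode("haematology", avoidable_cost=20_000, children=[
        CostNode("fbc", volume=10_000, complexity=1.0),
    ]),
])
print(lab.cost_per_test())
```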

  10. Paradoxical Acinetobacter-associated ventilator-associated pneumonia incidence rates within prevention studies using respiratory tract applications of topical polymyxin: benchmarking the evidence base.

    PubMed

    Hurley, J C

    2018-04-10

    Regimens containing topical polymyxin appear to be more effective in preventing ventilator-associated pneumonia (VAP) than other methods. To benchmark the incidence rates of Acinetobacter-associated VAP (AAVAP) within component (control and intervention) groups from concurrent controlled studies of polymyxin compared with studies of various VAP prevention methods other than polymyxin (non-polymyxin studies). An AAVAP benchmark was derived using data from 77 observational groups without any VAP prevention method under study. Data from 41 non-polymyxin studies provided additional points of reference. The benchmarking was undertaken by meta-regression using generalized estimating equation methods. Within 20 studies of topical polymyxin, the mean AAVAP was 4.6% [95% confidence interval (CI) 3.0-6.9] and 3.7% (95% CI 2.0-5.3) for control and intervention groups, respectively. In contrast, the AAVAP benchmark was 1.5% (95% CI 1.2-2.0). In the AAVAP meta-regression model, group origin from a trauma intensive care unit (+0.55; +0.16 to +0.94, P = 0.006) or membership of a polymyxin control group (+0.64; +0.21 to +1.31, P = 0.023), but not membership of a polymyxin intervention group (+0.24; -0.37 to +0.84, P = 0.45), were significant positive correlates. The mean incidence of AAVAP within the control groups of studies of topical polymyxin is more than double the benchmark, whereas the incidence rates within the groups of non-polymyxin studies and, paradoxically, polymyxin intervention groups are more similar to the benchmark. These incidence rates, which are paradoxical in the context of an apparent effect against VAP within controlled trials of topical polymyxin-based interventions, force a re-appraisal. Copyright © 2018 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  11. Benchmarking and Hardware-In-The-Loop Operation of a 2014 MAZDA SkyActiv (SAE 2016-01-1007)

    EPA Science Inventory

    Engine performance evaluation in support of LD MTE. EPA used elements of its ALPHA model to apply hardware-in-the-loop (HIL) controls to the SKYACTIV engine test setup to better understand how the engine would operate in a chassis test after being combined with future leading edge tech...

  12. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
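    A minimal sketch of the closed-loop idea, assuming a 1-D unit mass disturbed by an unknown constant bias force: a fixed PD controller is compared against the same controller augmented with an error-driven feedforward weight. The gains, learning rate, and plant are invented for illustration; the benchmark's actual learning rule runs on neuromorphic hardware against a minimal simulation.

```python
def run_trial(learn: bool, steps: int = 2000, dt: float = 0.01) -> float:
    """Track a step target with a 1-D unit mass under an unknown constant
    bias force; optionally adapt a feedforward term with an error-driven
    learning rule. Returns the integrated absolute tracking error."""
    bias = 5.0                     # unknown external force on the plant
    kp, kd, lr = 50.0, 10.0, 10.0  # PD gains and learning rate (assumed)
    x = v = w = 0.0                # position, velocity, learned weight
    target, err_sum = 1.0, 0.0
    for _ in range(steps):
        err = target - x
        u = kp * err - kd * v + w      # control force
        if learn:
            w += lr * err * dt         # error-driven weight update
        v += (u - bias) * dt           # unit mass; bias opposes control
        x += v * dt
        err_sum += abs(err) * dt
    return err_sum

e_fixed = run_trial(learn=False)
e_adapt = run_trial(learn=True)
print(e_fixed, e_adapt)  # adaptation should shrink the error integral
```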

  13. Transaction Processing Performance Council (TPC): State of the Council 2010

    NASA Astrophysics Data System (ADS)

    Nambiar, Raghunath; Wakou, Nicholas; Carman, Forrest; Majdalany, Michael

    The Transaction Processing Performance Council (TPC) is a non-profit corporation founded to define transaction processing and database benchmarks and to disseminate objective, verifiable performance data to the industry. Established in August 1988, the TPC has been integral in shaping the landscape of modern transaction processing and database benchmarks over the past twenty-two years. This paper provides an overview of the TPC's existing benchmark standards and specifications, introduces two new TPC benchmarks under development, and examines the TPC's active involvement in the early creation of additional future benchmarks.

  14. Overview of Goal 1 (Establish Benchmarks for Space-Weather Events) of the National Space Weather Action Plan

    NASA Astrophysics Data System (ADS)

    Jonas, S.; Murtagh, W. J.; Clarke, S. W.

    2017-12-01

    The National Space Weather Action Plan identifies approximately 100 distinct activities across six strategic goals. Many of these activities depend on the identification of a series of benchmarks that describe the physical characteristics of space weather events on or near Earth. My talk will provide an overview of Goal 1 (Establish Benchmarks for Space-Weather Events) of the National Space Weather Action Plan, serving as an introduction to the panel presentations and discussions.

  15. Benchmarking and Its Relevance to the Library and Information Sector. Interim Findings of "Best Practice Benchmarking in the Library and Information Sector," a British Library Research and Development Department Project.

    ERIC Educational Resources Information Center

    Kinnell, Margaret; Garrod, Penny

    This British Library Research and Development Department study assesses current activities and attitudes toward quality management in library and information services (LIS) in the academic sector as well as the commercial/industrial sector. Definitions and types of benchmarking are described, and the relevance of benchmarking to LIS is evaluated.…

  16. Toward benchmarking in catalysis science: Best practices, challenges, and opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bligaard, Thomas; Bullock, R. Morris; Campbell, Charles T.

    Benchmarking is a community-based and (preferably) community-driven activity involving consensus-based decisions on how to make reproducible, fair, and relevant assessments. In catalysis science, important catalyst performance metrics include activity, selectivity, and the deactivation profile, which enable comparisons between new and standard catalysts. Benchmarking also requires careful documentation, archiving, and sharing of methods and measurements, to ensure that the full value of research data can be realized. Beyond these goals, benchmarking presents unique opportunities to advance and accelerate understanding of complex reaction systems by combining and comparing experimental information from multiple, in situ and operando techniques with theoretical insights derived from calculations characterizing model systems. This Perspective describes the origins and uses of benchmarking and its applications in computational catalysis, heterogeneous catalysis, molecular catalysis, and electrocatalysis. It also discusses opportunities and challenges for future developments in these fields.

  17. Toward benchmarking in catalysis science: Best practices, challenges, and opportunities

    DOE PAGES

    Bligaard, Thomas; Bullock, R. Morris; Campbell, Charles T.; ...

    2016-03-07

    Benchmarking is a community-based and (preferably) community-driven activity involving consensus-based decisions on how to make reproducible, fair, and relevant assessments. In catalysis science, important catalyst performance metrics include activity, selectivity, and the deactivation profile, which enable comparisons between new and standard catalysts. Benchmarking also requires careful documentation, archiving, and sharing of methods and measurements, to ensure that the full value of research data can be realized. Beyond these goals, benchmarking presents unique opportunities to advance and accelerate understanding of complex reaction systems by combining and comparing experimental information from multiple, in situ and operando techniques with theoretical insights derived from calculations characterizing model systems. This Perspective describes the origins and uses of benchmarking and its applications in computational catalysis, heterogeneous catalysis, molecular catalysis, and electrocatalysis. It also discusses opportunities and challenges for future developments in these fields.

  18. Outcome Benchmarks for Adaptations of Research-Supported Treatments for Adult Traumatic Stress

    ERIC Educational Resources Information Center

    Rubin, Allen; Parrish, Danielle E.; Washburn, Micki

    2016-01-01

    This article provides benchmark data on within-group effect sizes from published randomized controlled trials (RCTs) that evaluated the efficacy of research-supported treatments (RSTs) for adult traumatic stress. Agencies can compare these benchmarks to their treatment group effect size to inform their decisions as to whether the way they are…

  19. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
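    The patented scheme can be sketched as follows, with a self-refining numerical task standing in for the stored scalable task set. The task choice (midpoint-rule integration, which converges to pi) and the interval length are illustrative assumptions, not the patent's specifics.

```python
import time

def benchmark_rating(interval_s: float = 0.5) -> int:
    """Run a scalable task set for a fixed interval and report progress.

    The task is midpoint-rule integration of f(x) = 4/(1+x^2) on [0, 1]
    (which approaches pi), refined to ever more subintervals; the rating
    is the finest resolution completed before the interval expires.
    """
    deadline = time.monotonic() + interval_s
    n, rating = 1, 0
    while time.monotonic() < deadline:
        h = 1.0 / n
        approx = sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(n)) * h
        rating = n          # degree of progress through the scalable set
        n *= 2              # next task: double the resolution
    return rating

print(benchmark_rating())   # larger ratings on faster machines
```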

  20. Neutron Deep Penetration Calculations in Light Water with Monte Carlo TRIPOLI-4® Variance Reduction Techniques

    NASA Astrophysics Data System (ADS)

    Lee, Yi-Kang

    2017-09-01

    Nuclear decommissioning takes place in several stages because of the radioactivity in the reactor structure materials. A good estimate of the neutron activation products distributed in the reactor structure materials has a direct impact on decommissioning planning and on low-level radioactive waste management. The continuous-energy Monte Carlo radiation transport code TRIPOLI-4 has been applied to radiation protection and shielding analyses. To extend the application of TRIPOLI-4 to nuclear decommissioning activities, both experimental and computational benchmarks are being performed. To calculate the neutron activation of the shielding and structure materials of nuclear facilities, the 3D neutron flux map and energy spectra must first be determined. To perform this type of neutron deep-penetration calculation with a Monte Carlo transport code, variance reduction techniques are necessary to reduce the uncertainty of the neutron activation estimate. In this study, the variance reduction options of the TRIPOLI-4 code were used on the NAIADE 1 light water shielding benchmark. The benchmark document is available from the OECD/NEA SINBAD shielding benchmark database. From this benchmark, a simplified NAIADE 1 water shielding model was first proposed in this work to make code validation easier. Fission neutron transport was determined in light water for penetration up to 50 cm for fast neutrons and up to about 180 cm for thermal neutrons. Measurement and calculation results were benchmarked. Variance reduction options and their performance were discussed and compared.
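    As an illustration of why variance reduction matters for deep penetration, the toy below compares an analog simulation with implicit capture (survival biasing) for transmission through a thick 1-D rod with forward scattering only. This deliberately simplified stand-in is not TRIPOLI-4's actual variance reduction machinery; the cross sections and geometry are invented.

```python
import random
from math import exp, log

def transmission(thickness: float, sigma_t: float, p_abs: float,
                 n: int, implicit_capture: bool, seed: int = 1):
    """Estimate transmission through a 1-D rod (forward scattering only).

    Analog histories die at a collision with probability p_abs and mostly
    score zero for thick rods; with implicit capture every history
    survives, carrying a weight multiplied by the scattering probability,
    so every history can contribute to the deep-penetration tally.
    Returns (mean, variance of the mean).
    """
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(n):
        x, w, score = 0.0, 1.0, 0.0
        while True:
            x += -log(1.0 - rng.random()) / sigma_t  # sample a free flight
            if x >= thickness:
                score = w                      # crossed the rod: tally
                break
            if implicit_capture:
                w *= 1.0 - p_abs               # survive with reduced weight
            elif rng.random() < p_abs:
                break                          # analog absorption: killed
        total += score
        total_sq += score * score
    mean = total / n
    var = (total_sq / n - mean * mean) / n
    return mean, var

exact = exp(-10.0 * 0.5)  # e^{-sigma_a L} with sigma_a = sigma_t * p_abs
a = transmission(10.0, 1.0, 0.5, 20_000, implicit_capture=False)
b = transmission(10.0, 1.0, 0.5, 20_000, implicit_capture=True)
print(exact, a, b)  # same mean, much smaller variance with implicit capture
```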

  1. A semi-active H∞ control strategy with application to the vibration suppression of nonlinear high-rise building under earthquake excitations.

    PubMed

    Yan, Guiyun; Chen, Fuquan; Wu, Yingxiong

    2016-01-01

    Unlike previous research, which mostly focused on linear response control of seismically excited high-rise buildings, this study aims to control the nonlinear seismic response of high-rise buildings. To this end, a semi-active control strategy is presented to suppress the nonlinear vibration, in which an H∞ control algorithm is used and magneto-rheological (MR) dampers are employed as actuators. In this strategy, a modified Kalman-Bucy observer suited to the proposed semi-active strategy is developed to obtain the state vector from the measured semi-active control force and acceleration feedback, taking into account the effects of nonlinearity, disturbance and uncertainty of the controlled system parameters through the observed nonlinear accelerations. The proposed semi-active H∞ control strategy is then applied to the ASCE 20-story benchmark building subjected to earthquake excitation and compared with other control approaches using several control criteria. The results indicate that the proposed semi-active H∞ control strategy provides much better control performance than the semi-active MPC and Clipped-LQG control approaches, and can reduce the nonlinear seismic response and minimize damage in the building. In addition, it enhances the reliability of the control performance compared with an active control strategy. The proposed semi-active H∞ control strategy is thus suitable for suppressing the nonlinear vibration of high-rise buildings.
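    The semi-active constraint, that an MR damper can only dissipate energy, is typically enforced by clipping the force demanded by the primary (here H∞) controller. A minimal sketch of such a clipping rule follows; the capacity bound and sign convention are invented for illustration, and the paper's actual MR damper model is more involved.

```python
def clip_semi_active(f_desired: float, velocity: float, f_max: float) -> float:
    """Clip a desired (e.g. H-infinity) control force to what a
    semi-active damper can deliver: it may only oppose the relative
    velocity across the device, and is bounded by its capacity f_max."""
    if f_desired * velocity < 0:          # dissipative request: admissible
        return max(-f_max, min(f_max, f_desired))
    return 0.0                            # would add energy: switch off

# The device never injects energy: force * velocity <= 0 always.
print(clip_semi_active(-3.0, 2.0, 5.0))   # -3.0 (dissipative, within range)
print(clip_semi_active(4.0, 2.0, 5.0))    # 0.0 (would add energy)
print(clip_semi_active(-8.0, 2.0, 5.0))   # -5.0 (clipped to capacity)
```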

  2. Benchmarking Data Sets for the Evaluation of Virtual Ligand Screening Methods: Review and Perspectives.

    PubMed

    Lagarde, Nathalie; Zagury, Jean-François; Montes, Matthieu

    2015-07-27

    Virtual screening methods are commonly used nowadays in drug discovery processes. However, to ensure their reliability, they have to be carefully evaluated. The evaluation of these methods is often realized in a retrospective way, notably by studying the enrichment of benchmarking data sets. To this purpose, numerous benchmarking data sets were developed over the years, and the resulting improvements led to the availability of high quality benchmarking data sets. However, some points still have to be considered in the selection of the active compounds, decoys, and protein structures to obtain optimal benchmarking data sets.
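    Retrospective evaluation on such benchmarking data sets commonly reports enrichment metrics. Below is a minimal sketch of the enrichment factor computation; the ranked compound labels are invented for illustration.

```python
def enrichment_factor(ranked_labels: list[bool], fraction: float) -> float:
    """Enrichment factor at a given fraction of a score-ranked screen.

    ranked_labels: True for actives, False for decoys, best-scored first.
    EF = (fraction of actives found in the top slice) / (slice fraction).
    """
    n = len(ranked_labels)
    top = max(1, int(n * fraction))
    found = sum(ranked_labels[:top])
    total = sum(ranked_labels)
    return (found / total) / fraction

# Hypothetical screen: 4 actives among 20 compounds, 3 recovered early.
ranking = [True, False, True, True, False] + [False] * 14 + [True]
print(enrichment_factor(ranking, 0.25))
```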

  3. Key performance indicators to benchmark hospital information systems - a delphi study.

    PubMed

    Hübner-Bloder, G; Ammenwerth, E

    2009-01-01

    To identify key performance indicators for hospital information systems (HIS) that can be used for HIS benchmarking. A Delphi survey with one qualitative and two quantitative rounds. Forty-four HIS experts from health care IT practice and academia participated in all three rounds. Seventy-seven performance indicators were identified and organized into eight categories: technical quality, software quality, architecture and interface quality, IT vendor quality, IT support and IT department quality, workflow support quality, IT outcome quality, and IT costs. The highest-ranked indicators relate to clinical workflow support and user satisfaction. Isolated technical indicators or cost indicators were not seen as useful. The experts favored an interdisciplinary group of all stakeholders, led by hospital management, to conduct HIS benchmarking. They proposed benchmarking activities both at regular (annual) intervals and at defined events (for example, after an IT introduction). Most of the experts stated that no HIS benchmarking activities are currently performed in their institutions. In the context of IT governance, IT benchmarking is gaining importance in the healthcare area. The indicators identified reflect the view of health care IT professionals and researchers. Research is needed to further validate and operationalize the key performance indicators, to provide an IT benchmarking framework, and to provide open repositories for comparing the HIS benchmarks of different hospitals.

  4. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part I: Benchmark comparisons of WIMS-D5 and DRAGON cell and control rod parameters with MCNP5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mollerach, R.; Leszczynski, F.; Fink, J.

    2006-07-01

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the previous ten years. Atucha-II is a 745 MWe nuclear station, moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. In the reactor physics area, a revision and update of calculation methods and models was recently carried out, covering cell, supercell (control rod) and core calculations. As a validation of the new models, some benchmark comparisons were made against Monte Carlo calculations with MCNP5. This paper presents comparisons of cell and supercell benchmark problems, based on a slightly idealized model of the Atucha-I core, obtained with the WIMS-D5 and DRAGON codes against MCNP5 results. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II. Cell parameters compared include cell k-infinity, relative power levels of the different rings of fuel rods, and some two-group macroscopic cross sections. Supercell comparisons include supercell k-infinity changes due to the control rods (tubes) of steel and hafnium. (authors)

  5. BENCHMARKING SUSTAINABILITY ENGINEERING EDUCATION

    EPA Science Inventory

    The goals of this project are to develop and apply a methodology for benchmarking curricula in sustainability engineering and to identify individuals active in sustainability engineering education.

  6. [Do you mean benchmarking?].

    PubMed

    Bonnet, F; Solignac, S; Marty, J

    2008-03-01

    The purpose of benchmarking is to establish improvement processes by comparing activities to quality standards. The proposed methodology is illustrated by benchmark business cases performed in medical facilities, on items such as nosocomial infections or the organization of surgical facilities. Moreover, the authors have built a specific graphic tool, enhanced with balanced-score figures and mappings, so that comparison between different anesthesia-reanimation services willing to start an improvement program is easy and relevant. This ready-made application becomes all the more accurate as detailed tariffs of activities are implemented.

  7. Passivity-based Robust Control of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Kelkar, Atul G.; Joshi, Suresh M. (Technical Monitor)

    2000-01-01

    This report provides a brief summary of the research work performed over the duration of the cooperative research agreement between NASA Langley Research Center and Kansas State University. The cooperative agreement, originally set for a duration of three years, was extended by another year through a no-cost extension in order to accomplish the goals of the project. The main objective of the research was to develop passivity-based robust control methodology for passive and non-passive aerospace systems. The first year's research was limited to the investigation of passivity-based methods for the robust control of Linear Time-Invariant (LTI) single-input single-output (SISO), open-loop stable, minimum-phase non-passive systems. The second year's focus was mainly on extending the passivity-based methodology to a larger class of non-passive LTI systems, including unstable and nonminimum-phase SISO systems. For LTI non-passive systems, five different passification methods were developed. The primary effort during years three and four was on the development of passification methodology for MIMO systems, methods for checking the robustness of passification, and synthesis techniques for passifying compensators. For passive LTI systems, an optimal synthesis procedure was also developed for the design of constant-gain positive-real controllers. For nonlinear passive systems, a numerical optimization-based technique was developed for the synthesis of constant- as well as time-varying-gain positive-real controllers. The passivity-based control design methodology developed during this project was demonstrated by application to various benchmark examples. These example systems included the longitudinal model of an F-18 High Alpha Research Vehicle (HARV) for pitch-axis control, NASA's supersonic transport wind tunnel model, the ACC benchmark model, a 1-D acoustic duct model, a piezo-actuated flexible link model, and NASA's Benchmark Active Controls Technology (BACT) Wing model. Some of the stability results for linear passive systems were also extended to nonlinear passive systems. Several publications and conference presentations resulted from this research.

  8. 2-D Circulation Control Airfoil Benchmark Experiments Intended for CFD Code Validation

    NASA Technical Reports Server (NTRS)

    Englar, Robert J.; Jones, Gregory S.; Allan, Brian G.; Lin, John C.

    2009-01-01

    A current NASA Research Announcement (NRA) project being conducted by Georgia Tech Research Institute (GTRI) personnel and NASA collaborators includes the development of Circulation Control (CC) blown airfoils to improve subsonic aircraft high-lift and cruise performance. The emphasis of this program is the development of CC active flow control concepts for high-lift augmentation, drag control, and cruise efficiency. The collaboration includes work by NASA research engineers; CFD validation and flow-physics experimental research are part of NASA's systematic approach to developing design and optimization tools for CC applications to fixed-wing aircraft. The design space for CESTOL-type aircraft is focused on geometries that depend on advanced flow control technologies, including Circulation Control aerodynamics. The ability to consistently predict advanced aircraft performance requires improvements in design tools to include these advanced concepts. Validation of these tools will be based on experimental methods applied to complex flows that go beyond conventional aircraft modeling techniques. This paper focuses on recent and ongoing benchmark high-lift experiments and CFD efforts intended to provide 2-D CFD validation data sets related to NASA's Cruise Efficient Short Take Off and Landing (CESTOL) study. Both the experimental data and related CFD predictions are discussed.

  9. Groundwater-quality data for the Madera/Chowchilla–Kings shallow aquifer study unit, 2013–14: Results from the California GAMA Program

    USGS Publications Warehouse

    Shelton, Jennifer L.; Fram, Miranda S.

    2017-02-03

    Groundwater quality in the 2,390-square-mile Madera/Chowchilla–Kings Shallow Aquifer study unit was investigated by the U.S. Geological Survey from August 2013 to April 2014 as part of the California State Water Resources Control Board Groundwater Ambient Monitoring and Assessment Program’s Priority Basin Project. The study was designed to provide a statistically unbiased, spatially distributed assessment of untreated groundwater quality in the shallow aquifer systems of the Madera, Chowchilla, and Kings subbasins of the San Joaquin Valley groundwater basin. The shallow aquifer system corresponds to the part of the aquifer system generally used by domestic wells and is shallower than the part of the aquifer system generally used by public-supply wells. This report presents the data collected for the study and a brief preliminary description of the results. Groundwater samples were collected from 77 wells and were analyzed for organic constituents, inorganic constituents, selected isotopic and age-dating tracers, and microbial indicators. Most of the wells sampled for this study were private domestic wells. Unlike groundwater from public-supply wells, the groundwater from private domestic wells is not regulated for quality in California and is rarely analyzed for water-quality constituents. To provide context for the sampling results, however, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory benchmarks established for drinking-water quality by the U.S. Environmental Protection Agency, the State of California, and the U.S. Geological Survey. Of the 319 organic constituents assessed in this study (90 volatile organic compounds and 229 pesticides and pesticide degradates), 17 volatile organic compounds and 23 pesticides and pesticide degradates were detected in groundwater samples; concentrations of all but 2 were less than the respective benchmarks. 
    The fumigants 1,2-dibromo-3-chloropropane (DBCP) and 1,2-dibromoethane (EDB) were detected at concentrations above their respective regulatory benchmarks in samples from a total of four wells. Most detections of inorganic constituents were at concentrations or activities less than the respective benchmark levels. Five inorganic constituents were detected in groundwater samples from one or more wells at concentrations or activities greater than their respective regulatory, health-based benchmarks: arsenic, uranium, nitrate, adjusted gross alpha particle activity, and gross beta particle activity. Four inorganic constituents were detected in samples from one or more wells at concentrations or activities greater than their respective non-regulatory, health-based benchmarks: manganese, molybdenum, vanadium, and radon-222. Three inorganic constituents were detected in groundwater samples from one or more wells at concentrations greater than their respective non-regulatory, aesthetic-based benchmarks: iron, sulfate, and total dissolved solids. Microbial indicators (Escherichia coli, total coliform, and enterococci) were analyzed for presence or absence. The presence of Escherichia coli (E. coli) was not detected; the presence of total coliform was detected in samples from 10 of the 72 grid wells for which it was analyzed, and the presence of enterococci was detected in samples from 5 of the 73 grid wells analyzed.

  10. Depollution benchmarks for capacitors, batteries and printed wiring boards from waste electrical and electronic equipment (WEEE).

    PubMed

    Savi, Daniel; Kasser, Ueli; Ott, Thomas

    2013-12-01

    The article compiles and analyses sample data for toxic components removed from waste electronic and electrical equipment (WEEE) from more than 30 recycling companies in Switzerland over the past ten years. According to European and Swiss legislation, toxic components like batteries, capacitors and printed wiring boards have to be removed from WEEE. The control bodies of the Swiss take-back schemes have been monitoring the activities of WEEE recyclers in Switzerland for about 15 years. All recyclers have to provide annual mass balance data for every year of operation. From these data, percentage shares of removed batteries and capacitors are calculated in relation to the amount of each respective WEEE category treated. A rationale is developed for why such an indicator should not be calculated for printed wiring boards. The distributions of these de-pollution indicators are analysed and their suitability for defining lower threshold values and benchmarks for the depollution of WEEE is discussed. Recommendations for benchmarks and threshold values for the removal of capacitors and batteries are given. Copyright © 2013 Elsevier Ltd. All rights reserved.
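    The de-pollution indicator described here, and one plausible way to derive a lower threshold from its distribution, can be sketched as follows. The figures and the choice of a low quantile as the threshold are illustrative assumptions, not the paper's actual method or data.

```python
# Sketch of the WEEE de-pollution indicator: mass of a removed component
# (e.g. batteries) as a percentage of the WEEE mass treated, with a lower
# threshold taken as a low quantile of recyclers' shares. Invented figures.
def removal_share(removed_kg: float, treated_kg: float) -> float:
    """Percentage share of a removed component per mass of WEEE treated."""
    return 100.0 * removed_kg / treated_kg

def lower_threshold(shares, q=0.1):
    """A simple lower benchmark: the q-quantile of the observed shares."""
    s = sorted(shares)
    idx = int(q * (len(s) - 1))
    return s[idx]

# (removed kg, treated kg) per recycler and year, hypothetical
shares = [removal_share(r, t) for r, t in
          [(120, 10000), (95, 10000), (210, 10000), (150, 10000)]]
print(round(lower_threshold(shares), 2))  # lowest-decile share in percent
```

    A recycler reporting a share well below such a threshold would warrant closer inspection of its dismantling practice.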

  11. Benchmark calculation for radioactivity inventory using MAXS library based on JENDL-4.0 and JEFF-3.0/A for decommissioning BWR plants

    NASA Astrophysics Data System (ADS)

    Tanaka, Ken-ichi

    2016-06-01

    We performed a benchmark calculation for the radioactivity induced in the Primary Containment Vessel (PCV) of a Boiling Water Reactor (BWR) using the MAXS library, which was developed by collapsing cross sections with the neutron energy spectra in the PCV of the BWR. Radioactivities due to neutron irradiation were measured using gold (Au) and nickel (Ni) activation foil detectors at thirty locations in the PCV. As the benchmark calculation, we performed activation calculations for the foils with the SCALE5.1/ORIGEN-S code under the irradiation conditions of each foil location. We compared the calculations with the measurements to assess the effectiveness of the MAXS library.

  12. Study rationale and design of OPTIMISE, a randomised controlled trial on the effect of benchmarking on quality of care in type 2 diabetes mellitus.

    PubMed

    Nobels, Frank; Debacker, Noëmi; Brotons, Carlos; Elisaf, Moses; Hermans, Michel P; Michel, Georges; Muls, Erik

    2011-09-22

    To investigate the effect of physician- and patient-specific feedback with benchmarking on the quality of care in adults with type 2 diabetes mellitus (T2DM). Study centres in six European countries were randomised to either a benchmarking or control group. Physicians in both groups received feedback on modifiable outcome indicators (glycated haemoglobin [HbA1c], glycaemia, total cholesterol, high density lipoprotein-cholesterol, low density lipoprotein [LDL]-cholesterol and triglycerides) for each patient at 0, 4, 8 and 12 months, based on the four times yearly control visits recommended by international guidelines. The benchmarking group also received comparative results on three critical quality indicators of vascular risk (HbA1c, LDL-cholesterol and systolic blood pressure [SBP]), checked against the results of their colleagues from the same country, and versus pre-set targets. After 12 months of follow-up, the percentage of patients achieving the pre-determined targets for the three critical quality indicators will be assessed in the two groups. Recruitment was completed in December 2008 with 3994 evaluable patients. This paper discusses the study rationale and design of OPTIMISE, a randomised controlled study that will help assess whether benchmarking is a useful clinical tool for improving outcomes in T2DM in primary care. NCT00681850.

  13. Study rationale and design of OPTIMISE, a randomised controlled trial on the effect of benchmarking on quality of care in type 2 diabetes mellitus

    PubMed Central

    2011-01-01

    Background To investigate the effect of physician- and patient-specific feedback with benchmarking on the quality of care in adults with type 2 diabetes mellitus (T2DM). Methods Study centres in six European countries were randomised to either a benchmarking or control group. Physicians in both groups received feedback on modifiable outcome indicators (glycated haemoglobin [HbA1c], glycaemia, total cholesterol, high density lipoprotein-cholesterol, low density lipoprotein [LDL]-cholesterol and triglycerides) for each patient at 0, 4, 8 and 12 months, based on the four times yearly control visits recommended by international guidelines. The benchmarking group also received comparative results on three critical quality indicators of vascular risk (HbA1c, LDL-cholesterol and systolic blood pressure [SBP]), checked against the results of their colleagues from the same country, and versus pre-set targets. After 12 months of follow-up, the percentage of patients achieving the pre-determined targets for the three critical quality indicators will be assessed in the two groups. Results Recruitment was completed in December 2008 with 3994 evaluable patients. Conclusions This paper discusses the study rationale and design of OPTIMISE, a randomised controlled study that will help assess whether benchmarking is a useful clinical tool for improving outcomes in T2DM in primary care. Trial registration NCT00681850 PMID:21939502

  14. Combining Protein and Strain Engineering for the Production of Glyco-Engineered Horseradish Peroxidase C1A in Pichia pastoris

    PubMed Central

    Capone, Simona; Ćorajević, Lejla; Bonifert, Günther; Murth, Patrick; Maresch, Daniel; Altmann, Friedrich; Herwig, Christoph; Spadiut, Oliver

    2015-01-01

    Horseradish peroxidase (HRP), conjugated to antibodies and lectins, is widely used in medical diagnostics. Since recombinant production of the enzyme is difficult, HRP isolated from the plant is used for these applications. Production in the yeast Pichia pastoris (P. pastoris), the most promising recombinant production platform to date, causes hyperglycosylation of HRP, which in turn complicates conjugation to antibodies and lectins. In this study, we combined protein and strain engineering to obtain an active and stable HRP variant with reduced surface glycosylation. We combined four mutations, each being beneficial for either catalytic activity or thermal stability, and expressed this enzyme variant as well as the unmutated wildtype enzyme in both a P. pastoris benchmark strain and a strain where the native α-1,6-mannosyltransferase (OCH1) was knocked out. Considering productivity in the bioreactor as well as enzyme activity and thermal stability, the mutated HRP variant produced in the P. pastoris benchmark strain turned out to be interesting for medical diagnostics. This variant shows considerable catalytic activity and thermal stability and is less glycosylated, which might allow more controlled and efficient conjugation to antibodies and lectins. PMID:26404235

  15. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  16. Short Activity: A Half; More or Less

    ERIC Educational Resources Information Center

    Russo, James

    2017-01-01

    Benchmarking is an important strategy for comparing the size of fractions. In addition, knowing whether a given fraction is greater or less than a particular benchmark (e.g., one-half) can support students with accurately locating the fraction on a number line. This article offers a game-based activity that engages students in discussions around…
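    The one-half benchmarking strategy described in this activity can be made concrete with Python's exact fractions. This is an illustrative sketch, not part of the original article:

```python
# Classify a fraction as less than, equal to, or greater than the
# one-half benchmark, using exact rational arithmetic.
from fractions import Fraction

def versus_half(num: int, den: int) -> str:
    f, half = Fraction(num, den), Fraction(1, 2)
    if f < half:
        return "less than a half"
    if f > half:
        return "more than a half"
    return "exactly a half"

print(versus_half(3, 8))  # less than a half
print(versus_half(5, 8))  # more than a half
```

    The same comparison is what lets a student place 3/8 left of the midpoint, and 5/8 right of it, on a 0-to-1 number line.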

  17. Benchmarking. Issues in the Design and Implementation of a Benchmarking System for Employment and Training Programs for Young People.

    ERIC Educational Resources Information Center

    Coughlin, David C.; Bielen, Rhonda P.

    This paper has been prepared to assist the United States Department of Labor to explore new approaches to evaluating and measuring the performance of employment and training activities for youth. As one of several tools for evaluating success of local youth training programs, "benchmarking" provides a system for measuring the development…

  18. The Medical Library Association Benchmarking Network: development and implementation.

    PubMed

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C; Smith, Bernie Todd

    2006-04-01

    This article explores the development and implementation of the Medical Library Association (MLA) Benchmarking Network from the initial idea and test survey, to the implementation of a national survey in 2002, to the establishment of a continuing program in 2004. Started as a program for hospital libraries, it has expanded to include other nonacademic health sciences libraries. The activities and timelines of MLA's Benchmarking Network task forces and editorial board from 1998 to 2004 are described. The Benchmarking Network task forces successfully developed an extensive questionnaire with parameters of size and measures of library activity and published a report of the data collected by September 2002. The data were available to all MLA members in the form of aggregate tables. Utilization of Web-based technologies proved feasible for data intake and interactive display. A companion article analyzes and presents some of the data. MLA has continued to develop the Benchmarking Network with the completion of a second survey in 2004. The Benchmarking Network has provided many small libraries with comparative data to present to their administrators. It is a challenge for the future to convince all MLA members to participate in this valuable program.

  19. The Medical Library Association Benchmarking Network: development and implementation*

    PubMed Central

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C.; Smith, Bernie Todd

    2006-01-01

    Objective: This article explores the development and implementation of the Medical Library Association (MLA) Benchmarking Network from the initial idea and test survey, to the implementation of a national survey in 2002, to the establishment of a continuing program in 2004. Started as a program for hospital libraries, it has expanded to include other nonacademic health sciences libraries. Methods: The activities and timelines of MLA's Benchmarking Network task forces and editorial board from 1998 to 2004 are described. Results: The Benchmarking Network task forces successfully developed an extensive questionnaire with parameters of size and measures of library activity and published a report of the data collected by September 2002. The data were available to all MLA members in the form of aggregate tables. Utilization of Web-based technologies proved feasible for data intake and interactive display. A companion article analyzes and presents some of the data. MLA has continued to develop the Benchmarking Network with the completion of a second survey in 2004. Conclusions: The Benchmarking Network has provided many small libraries with comparative data to present to their administrators. It is a challenge for the future to convince all MLA members to participate in this valuable program. PMID:16636702

  20. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The present paper fully decomposes the system into structural and control subsystem designs and produces an improved design. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  1. Depollution benchmarks for capacitors, batteries and printed wiring boards from waste electrical and electronic equipment (WEEE)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savi, Daniel, E-mail: d.savi@umweltchemie.ch; Kasser, Ueli; Ott, Thomas

    Highlights: • We’ve analysed data on the dismantling of electronic and electrical appliances. • Ten years of mass balance data from more than 30 recycling companies have been considered. • Percentages of dismantled batteries, capacitors and PWB have been studied. • Threshold values and benchmarks for batteries and capacitors have been identified. • No benchmark for the dismantling of printed wiring boards should be set. - Abstract: The article compiles and analyses sample data for toxic components removed from waste electronic and electrical equipment (WEEE) from more than 30 recycling companies in Switzerland over the past ten years. According to European and Swiss legislation, toxic components like batteries, capacitors and printed wiring boards have to be removed from WEEE. The control bodies of the Swiss take-back schemes have been monitoring the activities of WEEE recyclers in Switzerland for about 15 years. All recyclers have to provide annual mass balance data for every year of operation. From these data, percentage shares of removed batteries and capacitors are calculated in relation to the amount of each respective WEEE category treated. A rationale is developed for why such an indicator should not be calculated for printed wiring boards. The distributions of these de-pollution indicators are analysed and their suitability for defining lower threshold values and benchmarks for the depollution of WEEE is discussed. Recommendations for benchmarks and threshold values for the removal of capacitors and batteries are given.

  2. Benchmarking of OEM Hybrid Electric Vehicles at NREL: Milestone Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, K. J.; Rajagopalan, A.

    2001-10-26

    A milestone report that describes NREL's progress and activities related to the DOE FY2001 Annual Operating Plan milestone entitled "Benchmark 2 new production or pre-production hybrids with ADVISOR."

  3. ViSAPy: a Python tool for biophysics-based generation of virtual spiking activity for evaluation of spike-sorting algorithms.

    PubMed

    Hagen, Espen; Ness, Torbjørn V; Khosrowshahi, Amir; Sørensen, Christina; Fyhn, Marianne; Hafting, Torkel; Franke, Felix; Einevoll, Gaute T

    2015-04-30

    New, silicon-based multielectrodes comprising hundreds or more electrode contacts offer the possibility to record spike trains from thousands of neurons simultaneously. This potential cannot be realized unless accurate, reliable automated methods for spike sorting are developed, in turn requiring benchmarking data sets with known ground-truth spike times. We here present a general simulation tool for computing benchmarking data for evaluation of spike-sorting algorithms entitled ViSAPy (Virtual Spiking Activity in Python). The tool is based on a well-established biophysical forward-modeling scheme and is implemented as a Python package built on top of the neuronal simulator NEURON and the Python tool LFPy. ViSAPy allows for arbitrary combinations of multicompartmental neuron models and geometries of recording multielectrodes. Three example benchmarking data sets are generated, i.e., tetrode and polytrode data mimicking in vivo cortical recordings and microelectrode array (MEA) recordings of in vitro activity in salamander retinas. The synthesized example benchmarking data mimics salient features of typical experimental recordings, for example, spike waveforms depending on interspike interval. ViSAPy goes beyond existing methods as it includes biologically realistic model noise, synaptic activation by recurrent spiking networks, finite-sized electrode contacts, and allows for inhomogeneous electrical conductivities. ViSAPy is optimized to allow for generation of long time series of benchmarking data, spanning minutes of biological time, by parallel execution on multi-core computers. ViSAPy is an open-ended tool as it can be generalized to produce benchmarking data for arbitrary recording-electrode geometries and with various levels of complexity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
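    One generic way ground-truth data such as ViSAPy's can be used to score a spike sorter is to match sorted spike times to known spike times within a tolerance. The metric and the spike times below are invented for illustration; they are not part of ViSAPy's API.

```python
# Hypothetical evaluation sketch: fraction of ground-truth spikes that a
# sorter recovered, matching each sorted spike to at most one true spike
# within a time tolerance. Times are in milliseconds and invented.
def hit_rate(ground_truth, sorted_spikes, tol_ms=1.0):
    """Fraction of ground-truth spikes matched by a sorted spike."""
    remaining = sorted(sorted_spikes)
    hits = 0
    for t in sorted(ground_truth):
        match = next((s for s in remaining if abs(s - t) <= tol_ms), None)
        if match is not None:
            remaining.remove(match)  # each sorted spike may match only once
            hits += 1
    return hits / len(ground_truth)

truth = [10.0, 25.0, 40.0, 55.0]
found = [10.3, 24.6, 40.8, 70.0]
print(hit_rate(truth, found))  # 3 of 4 true spikes recovered -> 0.75
```

    A fuller evaluation would also count false positives (sorted spikes left unmatched) and track per-neuron confusion, which known ground truth makes possible.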

  4. Material Activation Benchmark Experiments at the NuMI Hadron Absorber Hall in Fermilab

    NASA Astrophysics Data System (ADS)

    Matsumura, H.; Matsuda, N.; Kasugai, Y.; Toyoda, A.; Yashima, H.; Sekimoto, S.; Iwase, H.; Oishi, K.; Sakamoto, Y.; Nakashima, H.; Leveling, A.; Boehnlein, D.; Lauten, G.; Mokhov, N.; Vaziri, K.

    2014-06-01

    In our previous study, double and mirror-symmetric activation peaks found for Al and Au arranged spatially on the back of the Hadron absorber of the NuMI beamline in Fermilab were considerably higher than those expected purely from muon-induced reactions. From material activation benchmark experiments, we conclude that this activation is due to hadrons with energy greater than 3 GeV that had passed downstream through small gaps in the hadron absorber.

  5. Benchmarking to improve the quality of cystic fibrosis care.

    PubMed

    Schechter, Michael S

    2012-11-01

    Benchmarking involves the ascertainment of healthcare programs with the most favorable outcomes as a means to identify and spread effective strategies for delivery of care. The recent interest in the development of patient registries for patients with cystic fibrosis (CF) has been fueled in part by an interest in using them to facilitate benchmarking. This review summarizes reports of how benchmarking has been operationalized in attempts to improve CF care. Although certain goals of benchmarking can be accomplished with an exclusive focus on registry data analysis, benchmarking programs in Germany and the United States have supplemented these data analyses with exploratory interactions and discussions to better understand successful approaches to care and encourage their spread throughout the care network. Benchmarking allows the discovery and facilitates the spread of effective approaches to care. It provides a pragmatic alternative to traditional research methods such as randomized controlled trials, providing insights into methods that optimize delivery of care and allowing judgments about the relative effectiveness of different therapeutic approaches.

  6. Implementing a benchmarking and feedback concept decreases postoperative pain after total knee arthroplasty: A prospective study including 256 patients.

    PubMed

    Benditz, A; Drescher, J; Greimel, F; Zeman, F; Grifka, J; Meißner, W; Völlner, F

    2016-12-05

    Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analysis and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in activity-related pain and 9th in patient satisfaction among 47 anonymized hospitals participating in the benchmarking project. At the end of the study, we had improved to 1st in activity-related pain and 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and increasing patient satisfaction after TKA.

  7. Implementing a benchmarking and feedback concept decreases postoperative pain after total knee arthroplasty: A prospective study including 256 patients

    PubMed Central

    Benditz, A.; Drescher, J.; Greimel, F.; Zeman, F.; Grifka, J.; Meißner, W.; Völlner, F.

    2016-01-01

    Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analysis and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in activity-related pain and 9th in patient satisfaction among 47 anonymized hospitals participating in the benchmarking project. At the end of the study, we had improved to 1st in activity-related pain and 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and increasing patient satisfaction after TKA. PMID:27917911

  8. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Soil and Litter Invertebrates and Heterotrophic Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Will, M.E.

    1994-01-01

    This report presents a standard method for deriving benchmarks for the purpose of "contaminant screening," performed by comparing measured ambient concentrations of chemicals with the benchmarks. The work was performed under Work Breakdown Structure 1.4.12.2.3.04.07.02 (Activity Data Sheet 8304). In addition, this report presents sets of data concerning the effects of chemicals in soil on invertebrates and soil microbial processes, benchmarks for chemicals potentially associated with United States Department of Energy sites, and literature describing the experiments from which data were drawn for benchmark derivation.

  9. Experimental benchmarking of quantum control in zero-field nuclear magnetic resonance.

    PubMed

    Jiang, Min; Wu, Teng; Blanchard, John W; Feng, Guanru; Peng, Xinhua; Budker, Dmitry

    2018-06-01

    Demonstration of coherent control and characterization of the control fidelity is important for the development of quantum architectures such as nuclear magnetic resonance (NMR). We introduce an experimental approach to realize universal quantum control, and benchmarking thereof, in zero-field NMR, an analog of conventional high-field NMR that features less-constrained spin dynamics. We design a composite pulse technique for both arbitrary one-spin rotations and a two-spin controlled-not (CNOT) gate in a heteronuclear two-spin system at zero field, which experimentally demonstrates universal quantum control in such a system. Moreover, using quantum information-inspired randomized benchmarking and partial quantum process tomography, we evaluate the quality of the control, achieving single-spin control for 13C with an average fidelity of 0.9960(2) and two-spin control via a CNOT gate with a fidelity of 0.9877(2). Our method can also be extended to more general multispin heteronuclear systems at zero field. The realization of universal quantum control in zero-field NMR is important for quantum state/coherence preparation, pulse sequence design, and is an essential step toward applications to materials science, chemical analysis, and fundamental physics.
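    In standard randomized benchmarking (the generic technique, not necessarily the authors' exact analysis), the survival probability after a sequence of m random gates decays as p(m) = A·f^m + B, and for a single spin (d = 2) the average gate fidelity is (1 + f)/2. The sketch below fits f from noiseless synthetic data; A and B are fixed by assumption to keep it short.

```python
# Minimal randomized-benchmarking fit: recover the depolarizing
# parameter f from p(m) = A * f**m + B via a log-linear least-squares
# fit, then convert to single-qubit average gate fidelity (1 + f) / 2.
import math

A, B = 0.5, 0.5  # assumed sequence-fit constants (state prep/measurement)

def depolarizing_f(seq_lengths, survival_probs):
    """Slope of log((p - B)/A) versus m equals log(f)."""
    xs = seq_lengths
    ys = [math.log((p - B) / A) for p in survival_probs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return math.exp(slope)

true_f = 0.992
ms = [1, 5, 10, 20, 50]
ps = [A * true_f ** m + B for m in ms]  # synthetic, noise-free data
f = depolarizing_f(ms, ps)
print(round((1 + f) / 2, 4))  # estimated average gate fidelity
```

    With experimental data the fit would be over many random sequences per length, with A and B estimated jointly rather than assumed.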

  10. Experimental benchmarking of quantum control in zero-field nuclear magnetic resonance

    PubMed Central

    Feng, Guanru

    2018-01-01

    Demonstration of coherent control and characterization of the control fidelity is important for the development of quantum architectures such as nuclear magnetic resonance (NMR). We introduce an experimental approach to realize universal quantum control, and benchmarking thereof, in zero-field NMR, an analog of conventional high-field NMR that features less-constrained spin dynamics. We design a composite pulse technique for both arbitrary one-spin rotations and a two-spin controlled-not (CNOT) gate in a heteronuclear two-spin system at zero field, which experimentally demonstrates universal quantum control in such a system. Moreover, using quantum information–inspired randomized benchmarking and partial quantum process tomography, we evaluate the quality of the control, achieving single-spin control for 13C with an average fidelity of 0.9960(2) and two-spin control via a CNOT gate with a fidelity of 0.9877(2). Our method can also be extended to more general multispin heteronuclear systems at zero field. The realization of universal quantum control in zero-field NMR is important for quantum state/coherence preparation, pulse sequence design, and is an essential step toward applications to materials science, chemical analysis, and fundamental physics. PMID:29922714

  11. Verification of cardiac mechanics software: benchmark problems and solutions for testing active and passive material behaviour.

    PubMed

    Land, Sander; Gurev, Viatcheslav; Arens, Sander; Augustin, Christoph M; Baron, Lukas; Blake, Robert; Bradley, Chris; Castro, Sebastian; Crozier, Andrew; Favino, Marco; Fastl, Thomas E; Fritz, Thomas; Gao, Hao; Gizzi, Alessio; Griffith, Boyce E; Hurtado, Daniel E; Krause, Rolf; Luo, Xiaoyu; Nash, Martyn P; Pezzuto, Simone; Plank, Gernot; Rossi, Simone; Ruprecht, Daniel; Seemann, Gunnar; Smith, Nicolas P; Sundnes, Joakim; Rice, J Jeremy; Trayanova, Natalia; Wang, Dafang; Jenny Wang, Zhinuo; Niederer, Steven A

    2015-12-08

    Models of cardiac mechanics are increasingly used to investigate cardiac physiology. These models are characterized by a high level of complexity, including the particular anisotropic material properties of biological tissue and the actively contracting material. A large number of independent simulation codes have been developed, but a consistent way of verifying the accuracy and replicability of simulations is lacking. To aid in the verification of current and future cardiac mechanics solvers, this study provides three benchmark problems for cardiac mechanics. These benchmark problems test the ability to accurately simulate pressure-type forces that depend on the deformed object's geometry, anisotropic and spatially varying material properties similar to those seen in the left ventricle, and active contractile forces. The benchmark was solved by 11 different groups to generate consensus solutions, with typical differences in higher-resolution solutions at approximately 0.5%, and consistent results between linear, quadratic and cubic finite elements as well as different approaches to simulating incompressible materials. Online tools and solutions are made available to allow these tests to be effectively used in verification of future cardiac mechanics software.

  12. A Gravimetric Geoid Model for Vertical Datum in Canada

    NASA Astrophysics Data System (ADS)

    Veronneau, M.; Huang, J.

    2004-05-01

    The need to realize a new vertical datum for Canada dates back to 1976, when a study group at the Geodetic Survey Division (GSD) investigated problems related to the existing vertical system (CGVD28) and recommended a redefinition of the vertical datum. The US National Geodetic Survey and GSD cooperated in the development of a new North American Vertical Datum (NAVD88). Although the USA adopted NAVD88 in 1993 as its datum, Canada declined to do so as a result of unexplained discrepancies of about 1.5 m between the east and west coasts (likely due to systematic errors). The high cost of maintaining the vertical datum by the traditional spirit-leveling technique, coupled with budgetary constraints, has forced GSD to modify its approach. A new project to modernize the vertical datum is currently in progress in Canada. Advances in space-based technologies (e.g., GPS, satellite radar altimetry, satellite gravimetry) and new developments in geoid modeling offer an alternative to spirit leveling. GSD is planning to implement, after stakeholder consultations, a geoid model as the new vertical datum for Canada, which will allow space-based technology users access to an accurate and uniform datum across the Canadian landmass and surrounding oceans. CGVD28 is only accessible through a limited number of benchmarks, primarily located in southern Canada. The new vertical datum would be less sensitive to geodynamic activities (post-glacial rebound and earthquakes), local uplift and subsidence, and deterioration of the benchmarks. The adoption of a geoid model as a vertical datum does not mean that GSD is neglecting the current benchmarks. New heights will be given to the benchmarks by a new adjustment of the leveling observations, which will be constrained to the geoid model at selected stations of the Active Control System (ACS) and Canadian Base Network (CBN).
This adjustment will not correct vertical motion at benchmarks, which has occurred since the last leveling observations. The presentation provides an overview of the "Height Modernization" project, and discusses the accuracy of the existing geoid models in Canada.

  13. Do Medicare Advantage Plans Minimize Costs? Investigating the Relationship Between Benchmarks, Costs, and Rebates.

    PubMed

    Zuckerman, Stephen; Skopec, Laura; Guterman, Stuart

    2017-12-01

    Medicare Advantage (MA), the program that allows people to receive their Medicare benefits through private health plans, uses a benchmark-and-bidding system to induce plans to provide benefits at lower costs. However, prior research suggests medical costs, profits, and other plan costs are not as low under this system as they might otherwise be. This study examines how well the current system encourages MA plans to bid their lowest cost by analyzing the relationship between costs, bonuses (rebates), and the benchmarks Medicare uses in determining plan payments, using regression analysis of 2015 data for HMO and local PPO plans. Costs and rebates are higher for MA plans in areas with higher benchmarks, and plan costs vary less than benchmarks do. A one-dollar increase in benchmarks is associated with 32-cent-higher plan costs and a 52-cent-higher rebate, even when controlling for market and plan factors that can affect costs. This suggests the current benchmark-and-bidding system allows plans to bid higher than local input prices and other market conditions would seem to warrant. To incentivize MA plans to maximize efficiency and minimize costs, Medicare could change the way benchmarks are set or used.
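    The core of such an analysis is a regression of plan costs on benchmarks, where the slope answers "how many cents do costs rise per benchmark dollar?" The sketch below uses synthetic data built with a slope of 0.32; it illustrates the mechanics only and is not the paper's model, which also controls for market and plan factors.

```python
# Simple OLS sketch: regress plan costs on county benchmarks.
# Data are synthetic ($ per member-month); the slope is an assumption
# baked into the fake data, not an empirical result.
def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
           / sum((xi - mx) ** 2 for xi in x)

benchmarks = [800.0, 850.0, 900.0, 950.0, 1000.0]
costs = [500.0 + 0.32 * b for b in benchmarks]  # synthetic plan costs
print(round(ols_slope(benchmarks, costs), 2))   # slope ~ 0.32
```

    A slope well below one dollar per benchmark dollar is what indicates that plan costs vary less than benchmarks, leaving room for higher bids where benchmarks are generous.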

  14. Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects

    PubMed Central

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after introducing readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. In particular, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed. PMID:22666140

  15. Benchmarking strategies for measuring the quality of healthcare: problems and prospects.

    PubMed

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after introducing readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. In particular, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed.

  16. Optimal type 2 diabetes mellitus management: the randomised controlled OPTIMISE benchmarking study: baseline results from six European countries.

    PubMed

    Hermans, Michel P; Brotons, Carlos; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank

    2013-12-01

    Micro- and macrovascular complications of type 2 diabetes have an adverse impact on survival, quality of life and healthcare costs. The OPTIMISE (OPtimal Type 2 dIabetes Management Including benchmarking and Standard trEatment) trial comparing physicians' individual performances with a peer group evaluates the hypothesis that benchmarking, using assessments of change in three critical quality indicators of vascular risk: glycated haemoglobin (HbA1c), low-density lipoprotein-cholesterol (LDL-C) and systolic blood pressure (SBP), may improve quality of care in type 2 diabetes in the primary care setting. This was a randomised, controlled study of 3980 patients with type 2 diabetes. Six European countries participated in the OPTIMISE study (NCT00681850). Quality of care was assessed by the percentage of patients achieving pre-set targets for the three critical quality indicators over 12 months. Physicians were randomly assigned to receive either benchmarked or non-benchmarked feedback. All physicians received feedback on six of their patients' modifiable outcome indicators (HbA1c, fasting glycaemia, total cholesterol, high-density lipoprotein-cholesterol (HDL-C), LDL-C and triglycerides). Physicians in the benchmarking group additionally received information on levels of control achieved for the three critical quality indicators compared with colleagues. At baseline, the percentage of evaluable patients (N = 3980) achieving pre-set targets was 51.2% (HbA1c; n = 2028/3964); 34.9% (LDL-C; n = 1350/3865); 27.3% (systolic blood pressure; n = 911/3337). OPTIMISE confirms that target achievement in the primary care setting is suboptimal for all three critical quality indicators. This represents an unmet but modifiable need to revisit the mechanisms and management of improving care in type 2 diabetes. OPTIMISE will help to assess whether benchmarking is a useful clinical tool for improving outcomes in type 2 diabetes.

  17. Microgravity Vibration Control and Civil Applications

    NASA Technical Reports Server (NTRS)

    Whorton, Mark Stephen; Alhorn, Dean Carl

    1998-01-01

    Controlling vibration of structures is essential for both space structures as well as terrestrial structures. Due to the ambient acceleration levels anticipated for the International Space Station, active vibration isolation is required to provide a quiescent acceleration environment for many science experiments. An overview is given of systems developed and flight tested in orbit for microgravity vibration isolation. Technology developed for vibration control of flexible space structures may also be applied to control of terrestrial structures such as buildings and bridges subject to wind loading or earthquake excitation. Recent developments in modern robust control for flexible space structures are shown to provide good structural vibration control while maintaining robustness to model uncertainties. Results of a mixed H-2/H-infinity control design are provided for a benchmark problem in structural control for earthquake resistant buildings.

  18. Benchmark Results Of Active Tracer Particles In The Open Source Code ASPECT For Modelling Convection In The Earth's Mantle

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Kaloti, A. P.; Levinson, H. R.; Nguyen, N.; Puckett, E. G.; Lokavarapu, H. V.

    2016-12-01

    We present the results of three standard benchmarks for the new active tracer particle algorithm in ASPECT. The three benchmarks are SolKz, SolCx, and SolVI (also known as the 'inclusion benchmark'), first proposed by Duretz, May, Gerya, and Tackley (G Cubed, 2011) and in subsequent work by Thielmann, May, and Kaus (Pure and Applied Geophysics, 2014). Each of the three benchmarks compares the accuracy of the numerical solution to a steady (time-independent) solution of the incompressible Stokes equations with a known exact solution. These benchmarks are specifically designed to test the accuracy and effectiveness of the numerical method when the viscosity varies by up to six orders of magnitude. ASPECT has been shown to converge to the exact solution of each of these benchmarks at the correct design rate when all of the flow variables, including the density and viscosity, are discretized on the underlying finite element grid (Kronbichler, Heister, and Bangerth, GJI, 2012). In our work we discretize the density and viscosity by placing their true values at the initial particle positions. At each time step, including the initialization step, the density and viscosity are interpolated from the particles onto the finite element grid. The resulting Stokes system is solved for the velocity and pressure, and the particle positions are advanced in time according to this new, numerical, velocity field. Note that this procedure effectively changes a steady solution of the Stokes equation (i.e., one that is independent of time) into a solution of the Stokes equations that is time dependent. Furthermore, the accuracy of the active tracer particle algorithm now also depends on the accuracy of the interpolation algorithm and of the numerical method one uses to advance the particle positions in time.
Finally, we will present new interpolation algorithms designed to increase the overall accuracy of the active tracer algorithms in ASPECT, and interpolation algorithms designed to conserve properties, such as mass density, that are carried by the particles.
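As a toy illustration of the particle-to-grid step described above, the sketch below interpolates a particle-carried density onto a one-dimensional grid by arithmetic cell averaging. This is one simple interpolation scheme of the kind discussed, not ASPECT's actual interpolator, and all names and values are illustrative:

```python
# Particle-to-grid interpolation by cell averaging (1-D sketch).
# Particles carry a density field rho(x) = 1 + x; each grid cell takes
# the arithmetic mean of the particles it contains, which is then
# compared against the exact field at the cell center.
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_particles = 8, 128
x = rng.uniform(0.0, 1.0, n_particles)        # particle positions in [0, 1)
rho = 1.0 + x                                  # carried property (exact field)

cells = np.minimum((x * n_cells).astype(int), n_cells - 1)
rho_grid = np.array([rho[cells == c].mean() for c in range(n_cells)])

centers = (np.arange(n_cells) + 0.5) / n_cells
err = np.abs(rho_grid - (1.0 + centers)).max()
print("max cell-average interpolation error:", round(err, 3))
```

The averaging error shrinks with more particles per cell; higher-order or conservative schemes, as mentioned in the abstract, trade this simplicity for better accuracy or exact conservation.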

  19. Benchmark Report on Key Outage Attributes: An Analysis of Outage Improvement Opportunities and Priorities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Germain, Shawn St.; Farris, Ronald

    2014-09-01

    The Advanced Outage Control Center (AOCC) is a multi-year pilot project targeted at Nuclear Power Plant (NPP) outage improvement. The purpose of this pilot project is to improve management of NPP outages through the development of an AOCC that is specifically designed to maximize the usefulness of communication and collaboration technologies for outage coordination and problem resolution activities. This report documents the results of a benchmarking effort to evaluate the transferability of technologies demonstrated at Idaho National Laboratory and the primary pilot project partner, Palo Verde Nuclear Generating Station. The initial assumption for this pilot project was that NPPs generally do not take advantage of advanced technology to support outage management activities. Several researchers involved in this pilot project have commercial NPP experience and believed that very little technology has been applied towards outage communication and collaboration. To verify that the technology options researched and demonstrated through this pilot project would in fact have broad application for the US commercial nuclear fleet, and to look for additional outage management best practices, LWRS program researchers visited several additional nuclear facilities.

  20. What are the assets and weaknesses of HFO detectors? A benchmark framework based on realistic simulations

    PubMed Central

    Pizzo, Francesca; Bartolomei, Fabrice; Wendling, Fabrice; Bénar, Christian-George

    2017-01-01

    High-frequency oscillations (HFO) have been suggested as biomarkers of epileptic tissues. While visual marking of these short and small oscillations is tedious and time-consuming, automatic HFO detectors have not yet met a large consensus. Even though detectors have been shown to perform well when validated against visual marking, the large number of false detections due to their lack of robustness hinders their clinical application. In this study, we developed a validation framework based on realistic and controlled simulations to quantify precisely the assets and weaknesses of current detectors. We constructed a dictionary of synthesized elements—HFOs and epileptic spikes—from different patients and brain areas by extracting these elements from the original data using discrete wavelet transform coefficients. These elements were then added to their corresponding simulated background activity (preserving patient- and region-specific spectra). We tested five existing detectors against this benchmark. Unlike other studies comparing detectors, we not only ranked them according to their performance but also investigated the reasons behind these results. Our simulations, thanks to their realism and their variability, enabled us to highlight unreported issues of current detectors: (1) the lack of robust estimation of the background activity, (2) the underestimated impact of the 1/f spectrum, and (3) the inadequate criteria defining an HFO. We believe that our benchmark framework could be a valuable tool to translate HFOs into a clinical environment. PMID:28406919
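Issue (1) above, estimation of the background activity, can be illustrated with a toy RMS-threshold detector on a synthetic signal. Real detectors band-pass filter first (e.g. in the 80–500 Hz range) and use more careful background statistics; every parameter below is illustrative only:

```python
# Toy RMS-threshold "HFO" detector on a synthetic trace: white-noise
# background plus one 100 ms, 120 Hz burst. The threshold is set from
# statistics of the sliding-window RMS, the step whose robustness the
# study above identifies as a weak point of real detectors.
import numpy as np

fs = 2000
rng = np.random.default_rng(2)
n = 10 * fs
sig = rng.normal(0.0, 1.0, n)                 # stand-in background activity

t0 = 5 * fs                                   # burst onset at t = 5 s
tb = np.arange(int(0.1 * fs)) / fs
sig[t0:t0 + tb.size] += 3.0 * np.sin(2 * np.pi * 120 * tb)

win = int(0.01 * fs)                          # 10 ms sliding RMS window
rms = np.sqrt(np.convolve(sig**2, np.ones(win) / win, mode="same"))
thresh = rms.mean() + 5 * rms.std()           # background-derived threshold
detected = np.where(rms > thresh)[0]
print("detections between", detected.min() / fs, "and", detected.max() / fs, "s")
```

On this clean background the burst is found; the study's point is that realistic 1/f backgrounds and spikes inflate `rms.std()` and the false-detection rate, which is exactly what threshold-style detectors underestimate.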

  1. Using a health promotion model to promote benchmarking.

    PubMed

    Welby, Jane

    2006-07-01

    The North East (England) Neonatal Benchmarking Group has been established for almost a decade and has researched and developed a substantial number of evidence-based benchmarks. With no firm evidence that these were being used or that there was any standardisation of neonatal care throughout the region, the group embarked on a programme to review the benchmarks and determine what evidence-based guidelines were needed to support standardisation. A health promotion planning model was used by one subgroup to structure the programme; it enabled all members of the subgroup to engage in the review process and provided the motivation and supporting documentation for implementation of changes in practice. The need for a regional guideline development group to complement the activity of the benchmarking group is being addressed.

  2. Benchmarking: A strategic overview of a key management tool

    Treesearch

    Chris Leclair

    1999-01-01

    Benchmarking is a continuous, systematic process for evaluating the products, services, and work processes of organizations in an effort to identify best practices for possible adoption in support of the objectives of enhanced service delivery and organizational effectiveness.

  3. Classical and modern control strategies for the deployment, reconfiguration, and station-keeping of the National Aeronautics and Space Administration (NASA) Benchmark Tetrahedron Constellation

    NASA Astrophysics Data System (ADS)

    Capo-Lugo, Pedro A.

    Formation flying consists of multiple spacecraft orbiting in a required configuration about a planet or through space. The National Aeronautics and Space Administration (NASA) Benchmark Tetrahedron Constellation is one of the proposed constellations to be launched in the year 2009 and provides the motivation for this investigation. The problem researched here consists of three stages. The first stage contains the deployment of the satellites; the second stage is the reconfiguration process to transfer the satellites through different specific sizes of the NASA benchmark problem; and the third stage is the station-keeping procedure for the tetrahedron constellation. Every stage contains different control schemes and transfer procedures to obtain/maintain the proposed tetrahedron constellation. In the first stage, the deployment procedure depends on a combination of two techniques in which impulsive maneuvers and a digital controller are used to deploy the satellites and to maintain the tetrahedron constellation at the following apogee point. The second stage, which corresponds to the reconfiguration procedure, uses a different control scheme in which intelligent control systems are implemented to perform this procedure. In this research work, intelligent systems eliminate the use of complex mathematical models and reduce the computational time needed to perform different maneuvers. Finally, the station-keeping process, which is the third stage of this research problem, is implemented with a two-level hierarchical control scheme to maintain the separation distance constraints of the NASA Benchmark Tetrahedron Constellation. For this station-keeping procedure, the system of equations defining the dynamics of a pair of satellites is transformed to take into account the perturbation due to the oblateness of the Earth and the disturbances due to solar pressure.
The control procedures used in this research are transformed from a continuous control system to a digital control system, which simplifies implementation on the computer onboard the satellite. In addition, this research includes an introductory chapter on attitude dynamics that can be used to maintain the orientation of the satellites, and an adaptive intelligent control scheme is proposed to maintain the desired orientation of the spacecraft. In conclusion, a solution for the dynamics of the NASA Benchmark Tetrahedron Constellation is presented in this research work. The main contribution of this work is the use of discrete control schemes, impulsive maneuvers, and intelligent control schemes that reduce computational time and can be easily implemented on the computer onboard the satellite. These contributions are explained through the deployment, reconfiguration, and station-keeping processes of the proposed NASA Benchmark Tetrahedron Constellation.

  4. Uncertainty and sensitivity analysis of control strategies using the benchmark simulation model No1 (BSM1).

    PubMed

    Flores-Alsina, Xavier; Rodriguez-Roda, Ignasi; Sin, Gürkan; Gernaey, Krist V

    2009-01-01

    The objective of this paper is to perform an uncertainty and sensitivity analysis of the predictions of the Benchmark Simulation Model (BSM) No. 1, when comparing four activated sludge control strategies. The Monte Carlo simulation technique is used to evaluate the uncertainty in the BSM1 predictions, considering the ASM1 bio-kinetic parameters and influent fractions as input uncertainties while the Effluent Quality Index (EQI) and the Operating Cost Index (OCI) are focused on as model outputs. The resulting Monte Carlo simulations are presented using descriptive statistics indicating the degree of uncertainty in the predicted EQI and OCI. Next, the Standardized Regression Coefficients (SRC) method is used for sensitivity analysis to identify which input parameters influence the uncertainty in the EQI predictions the most. The results show that control strategies including an ammonium (S(NH)) controller reduce uncertainty in both overall pollution removal and effluent total Kjeldahl nitrogen. Also, control strategies with an external carbon source reduce the effluent nitrate (S(NO)) uncertainty while increasing both their economic cost and its variability as a trade-off. Finally, the maximum specific autotrophic growth rate (mu(A)) causes most of the variance in the effluent for all the evaluated control strategies. The influence of denitrification-related parameters, e.g. eta(g) (anoxic growth rate correction factor) and eta(h) (anoxic hydrolysis rate correction factor), becomes less important when a S(NO) controller manipulating an external carbon source addition is implemented.
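The Monte Carlo / Standardized Regression Coefficient workflow used above can be sketched as follows, with a toy algebraic model standing in for BSM1; the input names echo the abstract but the ranges and model are illustrative, not the ASM1 values:

```python
# Monte Carlo sampling of uncertain inputs, followed by the SRC method:
# regress the standardized model output on the standardized inputs; the
# fitted coefficients rank each input's contribution to output variance.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
mu_A  = rng.uniform(0.5, 1.5, n)     # autotrophic growth rate (illustrative range)
eta_g = rng.uniform(0.6, 1.0, n)     # anoxic growth correction (illustrative range)

# Toy stand-in for the Effluent Quality Index response surface.
eqi = 100.0 - 30.0 * mu_A - 5.0 * eta_g + rng.normal(0.0, 1.0, n)

def standardize(v):
    return (v - v.mean()) / v.std()

X = np.column_stack([standardize(mu_A), standardize(eta_g)])
y = standardize(eqi)
src, *_ = np.linalg.lstsq(X, y, rcond=None)   # standardized regression coefficients
print("SRC(mu_A) =", round(src[0], 2), " SRC(eta_g) =", round(src[1], 2))
```

A large |SRC| flags a dominant input, mirroring the paper's finding that mu(A) drives most of the EQI variance; the squared SRCs approximately partition the explained output variance when the model is near-linear.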

  5. NASA/Navy Benchmarking Exchange (NNBE). Volume 1. Interim Report. Navy Submarine Program Safety Assurance

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The NASA/Navy Benchmarking Exchange (NNBE) was undertaken to identify practices and procedures and to share lessons learned in the Navy's submarine and NASA's human space flight programs. The NNBE focus is on safety and mission assurance policies, processes, accountability, and control measures. This report is an interim summary of activity conducted through October 2002, and it coincides with completion of the first phase of a two-phase fact-finding effort. In August 2002, a team was formed, co-chaired by senior representatives from the NASA Office of Safety and Mission Assurance and the NAVSEA 92Q Submarine Safety and Quality Assurance Division. The team closely examined the two elements of submarine safety (SUBSAFE) certification: (1) new design/construction (initial certification) and (2) maintenance and modernization (sustaining certification), with a focus on: (1) Management and Organization, (2) Safety Requirements (technical and administrative), (3) Implementation Processes, (4) Compliance Verification Processes, and (5) Certification Processes.

  6. Modeling Urban Scenarios & Experiments: Fort Indiantown Gap Data Collections Summary and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Daniel E.; Bandstra, Mark S.; Davidson, Gregory G.

    This report summarizes experimental radiation detector, contextual sensor, weather, and global positioning system (GPS) data collected to inform and validate a comprehensive, operational radiation transport modeling framework to evaluate radiation detector system and algorithm performance. This framework will be used to study the influence of systematic effects (such as geometry, background activity, background variability, environmental shielding, etc.) on detector responses and algorithm performance using synthetic time series data. This work consists of performing data collection campaigns at a canonical, controlled environment for complete radiological characterization to help construct and benchmark a high-fidelity model with quantified system geometries, detector response functions, and source terms for background and threat objects. This data also provides an archival, benchmark dataset that can be used by the radiation detection community. The data reported here spans four data collection campaigns conducted between May 2015 and September 2016.

  7. A comparison of two adaptive algorithms for the control of active engine mounts

    NASA Astrophysics Data System (ADS)

    Hillis, A. J.; Harrison, A. J. L.; Stoten, D. P.

    2005-08-01

    This paper describes work conducted in order to control automotive active engine mounts, consisting of a conventional passive mount and an internal electromagnetic actuator. Active engine mounts seek to cancel the oscillatory forces generated by the rotation of out-of-balance masses within the engine. The actuator generates a force dependent on a control signal from an algorithm implemented with a real-time DSP. The filtered-x least-mean-square (FXLMS) adaptive filter is used as a benchmark for comparison with a new implementation of the error-driven minimal controller synthesis (Er-MCSI) adaptive controller. Both algorithms are applied to an active mount fitted to a saloon car equipped with a four-cylinder turbo-diesel engine, and have no a priori knowledge of the system dynamics. The steady-state and transient performance of the two algorithms are compared and the relative merits of the two approaches are discussed. The Er-MCSI strategy offers significant computational advantages as it requires no cancellation path modelling. The Er-MCSI controller is found to perform in a fashion similar to the FXLMS filter—typically reducing chassis vibration by 50-90% under normal driving conditions.
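The FXLMS benchmark algorithm mentioned above can be sketched for a single-tone disturbance. The secondary path here is a short FIR filter assumed known (in practice it is identified offline), and all signals and parameters below are synthetic, not values from the engine-mount experiment:

```python
# Filtered-x LMS (FXLMS) sketch: an adaptive FIR filter drives an
# actuator so that, after passing through the secondary path s, its
# output cancels a tonal disturbance d at the error sensor.
import numpy as np

fs, f0, n_steps = 1000, 25, 20000              # sample rate, tone frequency
t = np.arange(n_steps) / fs
x = np.sin(2 * np.pi * f0 * t)                 # reference (e.g. engine order)
d = 0.8 * np.sin(2 * np.pi * f0 * t + 0.6)     # disturbance at the sensor

s = np.array([0.9, 0.3])                       # secondary-path FIR (assumed known)
L, mu = 8, 0.01                                # adaptive filter length, step size
w = np.zeros(L)
xbuf = np.zeros(L)                             # reference history
ybuf = np.zeros(s.size)                        # actuator-output history
xfbuf = np.zeros(L)                            # filtered-reference history
err = np.zeros(n_steps)

for k in range(n_steps):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[k]
    y = w @ xbuf                               # actuator command
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    err[k] = e = d[k] + s @ ybuf               # residual vibration at sensor
    xf = s @ xbuf[:s.size]                     # reference filtered by secondary path
    xfbuf = np.roll(xfbuf, 1); xfbuf[0] = xf
    w -= mu * e * xfbuf                        # FXLMS weight update

print("RMS before/after adaptation:",
      np.sqrt(np.mean(err[:200]**2)).round(3),
      np.sqrt(np.mean(err[-200:]**2)).round(3))
```

The defining FXLMS ingredient is that the weight update uses the reference filtered through the secondary-path model rather than the raw reference; this is the cancellation-path modelling step that the Er-MCSI controller in the paper avoids.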

  8. Benchmarking organic micropollutants in wastewater, recycled water and drinking water with in vitro bioassays.

    PubMed

    Escher, Beate I; Allinson, Mayumi; Altenburger, Rolf; Bain, Peter A; Balaguer, Patrick; Busch, Wibke; Crago, Jordan; Denslow, Nancy D; Dopp, Elke; Hilscherova, Klara; Humpage, Andrew R; Kumar, Anu; Grimaldi, Marina; Jayasinghe, B Sumith; Jarosova, Barbora; Jia, Ai; Makarov, Sergei; Maruya, Keith A; Medvedev, Alex; Mehinto, Alvine C; Mendez, Jamie E; Poulsen, Anita; Prochazka, Erik; Richard, Jessica; Schifferli, Andrea; Schlenk, Daniel; Scholz, Stefan; Shiraishi, Fujio; Snyder, Shane; Su, Guanyong; Tang, Janet Y M; van der Burg, Bart; van der Linden, Sander C; Werner, Inge; Westerheide, Sandy D; Wong, Chris K C; Yang, Min; Yeung, Bonnie H Y; Zhang, Xiaowei; Leusch, Frederic D L

    2014-01-01

    Thousands of organic micropollutants and their transformation products occur in water. Although often present at low concentrations, individual compounds contribute to mixture effects. Cell-based bioassays that target health-relevant biological endpoints may therefore complement chemical analysis for water quality assessment. The objective of this study was to evaluate cell-based bioassays for their suitability to benchmark water quality and to assess efficacy of water treatment processes. The selected bioassays cover relevant steps in the toxicity pathways including induction of xenobiotic metabolism, specific and reactive modes of toxic action, activation of adaptive stress response pathways and system responses. Twenty laboratories applied 103 unique in vitro bioassays to a common set of 10 water samples collected in Australia, including wastewater treatment plant effluent, two types of recycled water (reverse osmosis and ozonation/activated carbon filtration), stormwater, surface water, and drinking water. Sixty-five bioassays (63%) showed positive results in at least one sample, typically in wastewater treatment plant effluent, and only five (5%) were positive in the control (ultrapure water). Each water type had a characteristic bioanalytical profile with particular groups of toxicity pathways either consistently responsive or not responsive across test systems. The most responsive health-relevant endpoints were related to xenobiotic metabolism (pregnane X and aryl hydrocarbon receptors), hormone-mediated modes of action (mainly related to the estrogen, glucocorticoid, and antiandrogen activities), reactive modes of action (genotoxicity) and adaptive stress response pathway (oxidative stress response). This study has demonstrated that selected cell-based bioassays are suitable to benchmark water quality and it is recommended to use a purpose-tailored panel of bioassays for routine monitoring.

  9. Developing a dashboard for benchmarking the productivity of a medication therapy management program.

    PubMed

    Umbreit, Audrey; Holm, Emily; Gander, Kelsey; Davis, Kelsie; Dittrich, Kristina; Jandl, Vanda; Odell, Laura; Sweeten, Perry

    We describe a method for internal benchmarking of medication therapy management (MTM) pharmacist activities across multisite MTM pharmacist practices within an integrated health care system. MTM pharmacists are located within primary care clinics and provide medication management through collaborative practice. MTM pharmacist activity is grouped into 3 categories: direct patient care, nonvisit patient care, and professional activities. MTM pharmacist activities were tracked with the use of the computer-based application Pharmacist Ambulatory Resource Management System (PhARMS) over a 12-month period to measure growth during a time of expansion. A total of 81% of MTM pharmacist time was recorded. A total of 1655.1 hours (41%) was nonvisit patient care, 1185.2 hours (29%) was direct patient care, and 1190.4 hours (30%) was professional activities. The number of patient visits per month increased during the study period. There were 1496 direct patient care encounters documented. Of those, 1051 (70.2%) were face-to-face visits, 257 (17.2%) were by telephone, and 188 (12.6%) were chart reviews. Nonvisit patient care and professional activities also increased during the period. PhARMS reported MTM pharmacist activities and captured nonvisit patient care work not tracked elsewhere. Internal benchmarking data proved to be useful for justifying increases in MTM pharmacist personnel resources. Reviewing data helped to identify best practices from high-performing sites. Limitations include potential for self-reporting bias and lack of patient outcomes data. Implementing PhARMS facilitated internal benchmarking of patient care and nonpatient care activities in a regional MTM program. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  10. Polarization Control with Piezoelectric and LiNbO3 Transducers

    NASA Astrophysics Data System (ADS)

    Bradley, E.; Miles, E.; Loginov, B.; Vu, N.

    Several polarization control transducers have appeared on the market, and automated, endless polarization control systems using these transducers are now becoming available. Unfortunately, it is not entirely clear what benchmark performance tests a polarization control system must pass, and the polarization disturbances a system must handle are open to some debate. We present quantitative measurements of realistic polarization disturbances and two benchmark tests we have successfully used to evaluate the performance of an automated, endless polarization control system. We use these tests to compare the performance of a system using piezoelectric transducers to that of a system using LiNbO3 transducers.

  11. Memory-Intensive Benchmarks: IRAM vs. Cache-Based Machines

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Gaeke, Brian R.; Husbands, Parry; Li, Xiaoye S.; Oliker, Leonid; Yelick, Katherine A.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic control structures, and the ratio of computation to memory operation.

  12. Benchmark Problems for Spacecraft Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Burns, Richard D.; Folta, David C.

    2003-01-01

    To provide high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions.

  13. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    USGS Publications Warehouse

    Nowell, Lisa H.; Norman, Julia E.; Ingersoll, Christopher G.; Moran, Patrick W.

    2016-01-01

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. 
Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics of sediment, and uncertainty in TEB values. Additional evaluations of benchmarks in relation to sediment chemistry and toxicity are ongoing.
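    The mixture indicator described above is a simple sum of concentration-to-benchmark ratios. A minimal sketch follows; the compound names and benchmark values are illustrative placeholders, not values from the report:

    ```python
    # Hypothetical sketch: summing benchmark quotients for a pesticide mixture.
    # Benchmark values below are illustrative, not the report's TEBs.

    def summed_benchmark_quotient(concentrations, benchmarks):
        """Sum of concentration/benchmark ratios over all detected pesticides."""
        total = 0.0
        for compound, conc in concentrations.items():
            total += conc / benchmarks[compound]
        return total

    # Example: two detected pesticides compared against illustrative TEBs (ug/kg)
    tebs = {"bifenthrin": 0.5, "chlorpyrifos": 1.0}
    sample = {"bifenthrin": 0.3, "chlorpyrifos": 0.2}
    quotient = summed_benchmark_quotient(sample, tebs)
    # A summed quotient above 1 would flag potential toxicity for the mixture.
    ```

    The same function applies to LEB-based quotients by swapping the benchmark dictionary.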

  14. Developing integrated benchmarks for DOE performance measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which could then become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE that is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  15. What Is the Impact of Subject Benchmarking?

    ERIC Educational Resources Information Center

    Pidcock, Steve

    2006-01-01

    The introduction of subject benchmarking led to fears of increased external intervention in the activities of universities and a more restrictive view of institutional autonomy, accompanied by an undermining of the academic profession, particularly through the perceived threat of the introduction of a national curriculum for higher education. For…

  16. The Role of Institutional Research in Conducting Comparative Analysis of Peers

    ERIC Educational Resources Information Center

    Trainer, James F.

    2008-01-01

    In this age of accountability, transparency, and accreditation, colleges and universities increasingly conduct comparative analyses and engage in benchmarking activities. Meant to inform institutional planning and decision making, comparative analyses and benchmarking are employed to let stakeholders know how an institution stacks up against its…

  17. Robust control of seismically excited cable stayed bridges with MR dampers

    NASA Astrophysics Data System (ADS)

    YeganehFallah, Arash; Khajeh Ahamd Attari, Nader

    2017-03-01

    In recent decades, active and semi-active structural control have become attractive alternatives for enhancing the performance of civil infrastructure subjected to seismic and wind loads. However, reliable active and semi-active control requires that information about uncertainties be included in the design of the controller. In real-world civil structures, parameters such as loading locations, stiffness, mass, and damping are time-varying and uncertain. These uncertainties are in many cases modeled as parametric uncertainties. The motivation of this research is to design a robust controller that attenuates the vibrational responses of civil infrastructure while accounting for its dynamic uncertainties. Uncertainties in the structural dynamics parameters are modeled as affine uncertainties in the state-space model. These uncertainties are decoupled from the system through a Linear Fractional Transformation (LFT) and are treated as unknown but norm-bounded inputs to the system. A robust H∞ controller is designed for the decoupled system to regulate the evaluation outputs, and it is robust to the effects of uncertainties, disturbances, and sensor noise. The benchmark cable-stayed bridge equipped with MR dampers is considered for the numerical simulation. The simulation results show that the proposed robust controller can effectively mitigate the effects of undesired uncertainties on the system's response under seismic loading.

  18. Solution of the neutronics code dynamic benchmark by finite element method

    NASA Astrophysics Data System (ADS)

    Avvakumov, A. V.; Vabishchevich, P. N.; Vasilev, A. O.; Strizhov, V. F.

    2016-10-01

    The objective is to analyze the dynamic benchmark developed by Atomic Energy Research for the verification of best-estimate neutronics codes. The benchmark scenario includes asymmetrical ejection of a control rod in a water-type hexagonal reactor at hot zero power. A simple Doppler feedback mechanism assuming adiabatic fuel temperature heating is proposed. The finite element method on triangular calculation grids is used to solve the three-dimensional neutron kinetics problem. The software has been developed using the engineering and scientific calculation library FEniCS. The matrix spectral problem is solved using the scalable and flexible toolkit SLEPc. The solution accuracy of the dynamic benchmark is analyzed by refining the calculation grid and varying the degree of the finite elements.

  19. Benchmarking computational fluid dynamics models of lava flow simulation for hazard assessment, forecasting, and risk management

    USGS Publications Warehouse

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.

    2017-01-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.

  20. Benchmarking Controlled Trial--a novel concept covering all observational effectiveness studies.

    PubMed

    Malmivaara, Antti

    2015-06-01

    The Benchmarking Controlled Trial (BCT) is a novel concept which covers all observational studies aiming to assess effectiveness. BCTs provide evidence of the comparative effectiveness between health service providers, and of effectiveness due to particular features of the health and social care systems. BCTs complement randomized controlled trials (RCTs) as the sources of evidence on effectiveness. This paper presents a definition of the BCT; compares the position of BCTs in assessing effectiveness with that of RCTs; presents a checklist for assessing methodological validity of a BCT; and pilot-tests the checklist with BCTs published recently in the leading medical journals.

  1. Evaluating the Effect of Labeled Benchmarks on Children’s Number Line Estimation Performance and Strategy Use

    PubMed Central

    Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen

    2017-01-01

    Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children’s strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line positively affects third and fifth graders’ NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values would have a positive effect on younger children’s NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders’ NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. 
These findings imply that children’s benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children’s age and familiarity with the number range, these additional external benchmarks might need to be labeled. PMID:28713302

  2. Evaluating the Effect of Labeled Benchmarks on Children's Number Line Estimation Performance and Strategy Use.

    PubMed

    Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen

    2017-01-01

    Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children's strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line positively affects third and fifth graders' NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values would have a positive effect on younger children's NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders' NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. 
These findings imply that children's benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children's age and familiarity with the number range, these additional external benchmarks might need to be labeled.

  3. An overview of the ENEA activities in the field of coupled codes NPP simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parisi, C.; Negrenti, E.; Sepielli, M.

    2012-07-01

    In the framework of the nuclear research activities in the fields of safety, training, and education, ENEA (the Italian National Agency for New Technologies, Energy and Sustainable Development) is in charge of defining and pursuing all the necessary steps for the development of an NPP engineering simulator at the 'Casaccia' Research Center near Rome. A summary of the activities in the field of nuclear power plant simulation by coupled codes is presented here, together with the long-term strategy for the engineering simulator development. Specifically, results from participation in international benchmarking activities, such as the OECD/NEA 'Kalinin-3' benchmark and the 'AER-DYN-002' benchmark, are reported together with simulations of relevant events such as the Fukushima accident. The ultimate goal of such activities, performed using state-of-the-art technology, is the re-establishment of top-level competencies in the NPP simulation field in order to facilitate the development of Enhanced Engineering Simulators and to upgrade competencies for supporting national energy strategy decisions, the national nuclear safety authority, and the R and D activities on NPP designs. (authors)

  4. Benchmarking child and adolescent mental health organizations.

    PubMed

    Brann, Peter; Walter, Garry; Coombs, Tim

    2011-04-01

    This paper describes aspects of the child and adolescent benchmarking forums that were part of the National Mental Health Benchmarking Project (NMHBP). These forums enabled participating child and adolescent mental health organizations to benchmark themselves against each other, with a view to understanding variability in performance against a range of key performance indicators (KPIs). Six child and adolescent mental health organizations took part in the NMHBP. Representatives from these organizations attended eight benchmarking forums at which they documented their performance against relevant KPIs. They also undertook two special projects designed to help them understand the variation in performance on given KPIs. There was considerable inter-organization variability on many of the KPIs. Even within organizations, there was often substantial variability over time. The variability in indicator data raised many questions for participants. This challenged participants to better understand and describe their local processes, prompted them to collect additional data, and stimulated them to make organizational comparisons. These activities fed into a process of reflection about their performance. Benchmarking has the potential to illuminate intra- and inter-organizational performance in the child and adolescent context.

  5. Paradoxical ventilator associated pneumonia incidences among selective digestive decontamination studies versus other studies of mechanically ventilated patients: benchmarking the evidence base

    PubMed Central

    2011-01-01

    Introduction Selective digestive decontamination (SDD) appears to have a more compelling evidence base than non-antimicrobial methods for the prevention of ventilator associated pneumonia (VAP). However, the striking variability in ventilator associated pneumonia-incidence proportion (VAP-IP) among the SDD studies remains unexplained and a postulated contextual effect remains untested for. Methods Nine reviews were used to source 45 observational (benchmark) groups and 137 component (control and intervention) groups of studies of SDD and studies of three non-antimicrobial methods of VAP prevention. The logit VAP-IP data were summarized by meta-analysis using random effects methods and the associated heterogeneity (tau2) was measured. As group level predictors of logit VAP-IP, the mode of VAP diagnosis, proportion of trauma admissions, the proportion receiving prolonged ventilation and the intervention method under study were examined in meta-regression models containing the benchmark groups together with either the control (models 1 to 3) or intervention (models 4 to 6) groups of the prevention studies. Results The VAP-IP benchmark derived here is 22.1% (95% confidence interval; 95% CI; 19.2 to 25.5; tau2 0.34) whereas the mean VAP-IP of control groups from studies of SDD and of non-antimicrobial methods, is 35.7 (29.7 to 41.8; tau2 0.63) versus 20.4 (17.2 to 24.0; tau2 0.41), respectively (P < 0.001). The disparity between the benchmark groups and the control groups of the SDD studies, which was most apparent for the highest quality studies, could not be explained in the meta-regression models after adjusting for various group level factors. The mean VAP-IP (95% CI) of intervention groups is 16.0 (12.6 to 20.3; tau2 0.59) and 17.1 (14.2 to 20.3; tau2 0.35) for SDD studies versus studies of non-antimicrobial methods, respectively. 
Conclusions The VAP-IP among the intervention groups within the SDD evidence base is less variable and more similar to the benchmark than among the control groups. These paradoxical observations cannot readily be explained. The interpretation of the SDD evidence base cannot proceed without further consideration of this contextual effect. PMID:21214897
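    The Methods above summarize incidence proportions on the logit scale before pooling. A minimal sketch of the logit transform and a simple unweighted pooled back-transform, with illustrative proportions rather than the review's data; a full random-effects model would additionally weight each group by its within-study variance and the between-study variance (tau2):

    ```python
    # Illustrative sketch of logit-scale summarization of incidence proportions.
    # The three proportions below are made up, not taken from the review.
    import math

    def logit(p):
        """Map a proportion in (0, 1) to the real line."""
        return math.log(p / (1.0 - p))

    def inv_logit(x):
        """Back-transform a logit value to a proportion."""
        return 1.0 / (1.0 + math.exp(-x))

    groups = [0.22, 0.30, 0.18]          # hypothetical VAP-IP values
    logits = [logit(p) for p in groups]

    # Unweighted mean on the logit scale, back-transformed to a proportion.
    pooled = inv_logit(sum(logits) / len(logits))
    ```

    Working on the logit scale keeps pooled estimates inside (0, 1) and stabilizes the variance of proportions near 0 or 1, which is why meta-analyses of incidence proportions commonly use it.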

  6. Performance Against WELCOA's Worksite Health Promotion Benchmarks Across Years Among Selected US Organizations.

    PubMed

    Weaver, GracieLee M; Mendenhall, Brandon N; Hunnicutt, David; Picarella, Ryan; Leffelman, Brittanie; Perko, Michael; Bibeau, Daniel L

    2018-05-01

    The purpose of this study was to quantify the performance of organizations' worksite health promotion (WHP) activities against the benchmarking criteria included in the Well Workplace Checklist (WWC). The Wellness Council of America (WELCOA) developed a tool to assess WHP with its 100-item WWC, which represents WELCOA's 7 performance benchmarks. The study setting was US workplaces. This study includes a convenience sample of organizations that completed the checklist from 2008 to 2015. The sample size was 4643 entries from US organizations. The WWC includes demographic questions, general questions about WHP programs, and scales to measure performance against the WELCOA 7 benchmarks. Descriptive analyses of WWC items were completed separately for each year of the study period. The majority of the organizations represented each year were multisite, multishift, medium- to large-sized companies, mostly in the services industry. Despite yearly changes in participating organizations, results across the WELCOA 7 benchmark scores were consistent from year to year. Across all years, the benchmarks on which organizations performed lowest were senior-level support, data collection, and programming; wellness teams and supportive environments were the highest-scoring benchmarks. In an era marked by economic swings and health-care reform, it appears that organizations are staying consistent in their performance across these benchmarks. The WWC could be useful for organizations, practitioners, and researchers in assessing the quality of WHP programs.

  7. Benchmarking and Threshold Standards in Higher Education. Staff and Educational Development Series.

    ERIC Educational Resources Information Center

    Smith, Helen, Ed.; Armstrong, Michael, Ed.; Brown, Sally, Ed.

    This book explores the issues involved in developing standards in higher education, examining the practical issues involved in benchmarking and offering a critical analysis of the problems associated with this developmental tool. The book focuses primarily on experience in the United Kingdom (UK), but looks also at international activity in this…

  8. Improving Federal Education Programs through an Integrated Performance and Benchmarking System.

    ERIC Educational Resources Information Center

    Department of Education, Washington, DC. Office of the Under Secretary.

    This document highlights the problems with current federal education program data collection activities and lists several factors that make movement toward a possible solution, then discusses the vision for the Integrated Performance and Benchmarking System (IPBS), a vision of an Internet-based system for harvesting information from states about…

  9. Academic Achievement and Extracurricular School Activities of At-Risk High School Students

    ERIC Educational Resources Information Center

    Marchetti, Ryan; Wilson, Randal H.; Dunham, Mardis

    2016-01-01

    This study compared the employment, extracurricular participation, and family structure status of students from low socioeconomic families that achieved state-approved benchmarks on ACT reading and mathematics tests to those that did not achieve the benchmarks. Free and reduced lunch eligibility was used to determine SES. Participants included 211…

  10. Using Web-Based Peer Benchmarking to Manage the Client-Based Project

    ERIC Educational Resources Information Center

    Raska, David; Keller, Eileen Weisenbach; Shaw, Doris

    2013-01-01

    The complexities of integrating client-based projects into marketing courses provide challenges for the instructor but produce richness of context and active learning for the student. This paper explains the integration of Web-based peer benchmarking as a means of improving student performance on client-based projects within a single semester in…

  11. Benchmarking NNWSI flow and transport codes: COVE 1 results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayden, N.K.

    1985-06-01

    The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs.

  12. Benchmarking Controlled Trial—a novel concept covering all observational effectiveness studies

    PubMed Central

    Malmivaara, Antti

    2015-01-01

    The Benchmarking Controlled Trial (BCT) is a novel concept which covers all observational studies aiming to assess effectiveness. BCTs provide evidence of the comparative effectiveness between health service providers, and of effectiveness due to particular features of the health and social care systems. BCTs complement randomized controlled trials (RCTs) as the sources of evidence on effectiveness. This paper presents a definition of the BCT; compares the position of BCTs in assessing effectiveness with that of RCTs; presents a checklist for assessing methodological validity of a BCT; and pilot-tests the checklist with BCTs published recently in the leading medical journals. PMID:25965700

  13. Fundamental constraints on the performance of broadband ultrasonic matching structures and absorbers.

    PubMed

    Acher, O; Bernard, J M L; Maréchal, P; Bardaine, A; Levassort, F

    2009-04-01

    Recent fundamental results concerning the ultimate performance of electromagnetic absorbers were adapted and extrapolated to the field of sound waves. It was possible to deduce some appropriate figures of merit indicating whether a particular structure was close to the best possible matching properties. These figures of merit had simple expressions and were easy to compute in practical cases. Numerical examples illustrated that conventional state-of-the-art matching structures had an overall efficiency of approximately 50% of the fundamental limit. However, if the bandwidth at -6 dB was retained as a benchmark, the achieved bandwidth would be, at most, 12% of the fundamental limit associated with the same mass for the matching structure. Consequently, both encouragement for future improvements and accurate estimates of the surface mass required to obtain certain desired broadband properties could be provided. The results presented here can be used to investigate the broadband sound absorption and to benchmark passive and active noise control systems.

  14. State Education Agency Communications Process: Benchmark and Best Practices Project. Benchmark and Best Practices Project. Issue No. 01

    ERIC Educational Resources Information Center

    Zavadsky, Heather

    2014-01-01

    The role of state education agencies (SEAs) has shifted significantly from low-profile, compliance activities like managing federal grants to engaging in more complex and politically charged tasks like setting curriculum standards, developing accountability systems, and creating new teacher evaluation systems. The move from compliance-monitoring…

  15. ICSBEP Benchmarks For Nuclear Data Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briggs, J. Blair

    2005-05-24

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) -- Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled ''International Handbook of Evaluated Criticality Safety Benchmark Experiments.'' The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  16. Access to a simulator is not enough: the benefits of virtual reality training based on peer-group-derived benchmarks--a randomized controlled trial.

    PubMed

    von Websky, Martin W; Raptis, Dimitri A; Vitz, Martina; Rosenthal, Rachel; Clavien, P A; Hahnloser, Dieter

    2013-11-01

    Virtual reality (VR) simulators are widely used to familiarize surgical novices with laparoscopy, but VR training methods differ in efficacy. In the present trial, self-controlled basic VR training (SC-training) was tested against training based on peer-group-derived benchmarks (PGD-training). First, novice laparoscopic residents were randomized into a SC group (n = 34), and a group using PGD-benchmarks (n = 34) for basic laparoscopic training. After completing basic training, both groups performed 60 VR laparoscopic cholecystectomies for performance analysis. Primary endpoints were simulator metrics; secondary endpoints were program adherence, trainee motivation, and training efficacy. Altogether, 66 residents completed basic training, and 3,837 of 3,960 (96.8 %) cholecystectomies were available for analysis. Course adherence was good, with only two dropouts, both in the SC-group. The PGD-group spent more time and repetitions in basic training until the benchmarks were reached and subsequently showed better performance in the readout cholecystectomies: Median time (gallbladder extraction) showed significant differences of 520 s (IQR 354-738 s) in SC-training versus 390 s (IQR 278-536 s) in the PGD-group (p < 0.001) and 215 s (IQR 175-276 s) in experts, respectively. Path length of the right instrument also showed significant differences, again with the PGD-training group being more efficient. Basic VR laparoscopic training based on PGD benchmarks with external assessment is superior to SC training, resulting in higher trainee motivation and better performance in simulated laparoscopic cholecystectomies. We recommend such a basic course based on PGD benchmarks before advancing to more elaborate VR training.

  17. 75 FR 43554 - Notice of Lodging of Consent Decree Under the Federal Water Pollution Control Act (“Clean Water...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-26

    ... Benchmark Engineering Corp., Civil Action No. 10-40131 was lodged with the United States District Court for... requires Defendants to pay a civil penalty of $150,000, perform a Supplemental Environmental Project, and.... Fafard Real Estate and Development Corp., FRE Building Co. Inc., and Benchmark Engineering Corp., D.J...

  18. PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan

    2015-03-10

    This paper investigates the effect of tuning the control parameters of the Lozi chaotic map, employed as a chaotic pseudo-random number generator for the particle swarm optimization (PSO) algorithm. Three different benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
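    A minimal sketch of how a Lozi map can serve as a chaotic pseudo-random number generator; the parameter values and the rescaling to [0, 1) below are illustrative assumptions (the classic chaotic regime a = 1.7, b = 0.5), not the tuned settings from the paper:

    ```python
    # Illustrative sketch: Lozi map iterated as a chaotic PRNG.
    # Parameters a=1.7, b=0.5 give the classic chaotic attractor; the
    # rescaling assumes the state stays roughly within (-2, 2).

    class LoziGenerator:
        def __init__(self, a=1.7, b=0.5, x0=0.1, y0=0.1):
            self.a, self.b = a, b
            self.x, self.y = x0, y0

        def next(self):
            """One iteration of x' = 1 - a|x| + b*y, y' = x, rescaled to [0, 1)."""
            x_new = 1.0 - self.a * abs(self.x) + self.b * self.y
            self.x, self.y = x_new, self.x
            return (self.x + 2.0) / 4.0

    gen = LoziGenerator()
    sample = [gen.next() for _ in range(5)]
    ```

    In a chaos-driven PSO, each draw from such a generator would replace a call to a uniform PRNG when updating particle velocities, which is the design choice the tuning experiment evaluates.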

  19. Hand washing frequencies and procedures used in retail food services.

    PubMed

    Strohbehn, Catherine; Sneed, Jeannie; Paez, Paola; Meyer, Janell

    2008-08-01

    Transmission of viruses, bacteria, and parasites to food by way of improperly washed hands is a major contributing factor in the spread of foodborne illnesses. Field observers have assessed compliance with hand washing regulations, yet few studies have included consideration of frequency and methods used by sectors of the food service industry or have included benchmarks for hand washing. Five 3-h observation periods of employee (n = 80) hand washing behaviors during menu production, service, and cleaning were conducted in 16 food service operations for a total of 240 h of direct observation. Four operations from each of four sectors of the retail food service industry participated in the study: assisted living for the elderly, childcare, restaurants, and schools. A validated observation form, based on 2005 Food Code guidelines, was used by two trained researchers. Researchers noted when hands should have been washed, when hands were washed, and how hands were washed. Overall compliance with Food Code recommendations for frequency during production, service, and cleaning phases ranged from 5% in restaurants to 33% in assisted living facilities. Procedural compliance rates also were low. Proposed benchmarks for the number of times hand washing should occur by each employee for each sector of food service during each phase of operation are seven times per hour for assisted living, nine times per hour for childcare, 29 times per hour for restaurants, and 11 times per hour for schools. These benchmarks are high, especially for restaurant employees. Implementation would mean lost productivity and potential for dermatitis; thus, active managerial control over work assignments is needed. These benchmarks can be used for training and to guide employee hand washing behaviors.

  20. Developing a benchmark for emotional analysis of music

    PubMed Central

    Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has expanded rapidly in the last decade. Many new methods and audio features have been developed to improve the performance of MER algorithms. However, it is difficult to compare the performance of new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, at 2 Hz time resolution). Using DEAM, we organized the ‘Emotion in Music’ task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams. We analyze the results of the benchmark, including the winning algorithms and feature sets, and describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent-neural-network-based approaches combined with large feature sets work best for dynamic MER. PMID:28282400
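
    Dynamic annotations at 2 Hz mean a prediction is scored per half-second frame against the ground-truth valence/arousal curve. The sketch below shows two per-song scores commonly used for such continuous predictions; the abstract does not name the exact metrics, so RMSE and Pearson correlation here are assumptions, and the data is made up.

```python
import math

def rmse(pred, truth):
    """Root-mean-square error between two equal-length frame sequences."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth))

def pearson(pred, truth):
    """Pearson correlation between predicted and annotated curves."""
    n = len(truth)
    mp, mt = sum(pred) / n, sum(truth) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, truth))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in truth))
    return cov / (sp * st)

# A 10 s excerpt at 2 Hz = 20 frames of arousal values (illustrative data):
truth = [0.1 * math.sin(0.5 * k) for k in range(20)]
pred = [t + 0.02 for t in truth]  # hypothetical predictor with a constant bias
```

    Note the complementary behaviour: a constant bias inflates RMSE but leaves the correlation at 1.0, which is why benchmarks of this kind usually report both.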

  1. Space Weather Action Plan Ionizing Radiation Benchmarks: Phase 1 update and plans for Phase 2

    NASA Astrophysics Data System (ADS)

    Talaat, E. R.; Kozyra, J.; Onsager, T. G.; Posner, A.; Allen, J. E., Jr.; Black, C.; Christian, E. R.; Copeland, K.; Fry, D. J.; Johnston, W. R.; Kanekal, S. G.; Mertens, C. J.; Minow, J. I.; Pierson, J.; Rutledge, R.; Semones, E.; Sibeck, D. G.; St Cyr, O. C.; Xapsos, M.

    2017-12-01

    Changes in the near-Earth radiation environment can affect satellite operations, astronauts in space, commercial space activities, and the radiation environment on aircraft at relevant latitudes and altitudes. Understanding the diverse effects of increased radiation is challenging, but producing ionizing radiation benchmarks will help address these effects. The following areas have been considered in addressing the near-Earth radiation environment: the Earth's trapped radiation belts, the galactic cosmic ray background, and solar energetic-particle events. The radiation benchmarks attempt to account for any change in the near-Earth radiation environment which, under extreme cases, could present a significant risk to critical infrastructure operations or human health. These ionizing radiation benchmarks and their associated confidence levels are intended to define, at a minimum, the radiation intensity as a function of time, particle type, and energy, both for an occurrence frequency of 1 in 100 years and for an intensity level at the theoretical maximum for the event. In this paper, we present the benchmarks that address radiation levels at all applicable altitudes and latitudes in the near-Earth environment, the assumptions made and the associated uncertainties, and the next steps planned for updating the benchmarks.

  2. Developing a benchmark for emotional analysis of music.

    PubMed

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has expanded rapidly in the last decade. Many new methods and audio features have been developed to improve the performance of MER algorithms. However, it is difficult to compare the performance of new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, at 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams. We analyze the results of the benchmark, including the winning algorithms and feature sets, and describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent-neural-network-based approaches combined with large feature sets work best for dynamic MER.

  3. Benchmark duration of work hours for development of fatigue symptoms in Japanese workers with adjustment for job-related stress.

    PubMed

    Suwazono, Yasushi; Dochi, Mirei; Kobayashi, Etsuko; Oishi, Mitsuhiro; Okubo, Yasushi; Tanaka, Kumihiko; Sakata, Kouichi

    2008-12-01

    The objective of this study was to calculate benchmark durations and lower 95% confidence limits for benchmark durations of working hours associated with subjective fatigue symptoms by applying the benchmark dose approach while adjusting for job-related stress using multiple logistic regression analyses. A self-administered questionnaire was completed by 3,069 male and 412 female daytime workers (age 18-67 years) in a Japanese steel company. The eight dependent variables in the Cumulative Fatigue Symptoms Index were decreased vitality, general fatigue, physical disorders, irritability, decreased willingness to work, anxiety, depressive feelings, and chronic tiredness. Independent variables were daily working hours, four subscales (job demand, job control, interpersonal relationship, and job suitability) of the Brief Job Stress Questionnaire, and other potential covariates. Using significant parameters for working hours and those for other covariates, the benchmark durations of working hours were calculated for the corresponding Index property. Benchmark response was set at 5% or 10%. Assuming a condition of worst job stress, the benchmark duration/lower 95% confidence limit for benchmark duration of working hours per day with a benchmark response of 5% or 10% were 10.0/9.4 or 11.7/10.7 (irritability) and 9.2/8.9 or 10.4/9.8 (chronic tiredness) in men and 8.9/8.4 or 9.8/8.9 (chronic tiredness) in women. The threshold amounts of working hours for fatigue symptoms under the worst job-related stress were very close to the standard daily working hours in Japan. The results strongly suggest that special attention should be paid to employees whose working hours exceed threshold amounts based on individual levels of job-related stress.
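
    The benchmark dose approach applied here to working hours can be sketched numerically: fit a logistic model of symptom probability versus daily hours, then solve for the duration at which the extra risk over background equals the benchmark response (5% or 10%). The logistic coefficients below are hypothetical, not values from the study.

```python
import math

def extra_risk(beta0, beta1, d):
    """Extra risk at duration d for a logistic model logit(p) = beta0 + beta1*d:
    (P(d) - P(0)) / (1 - P(0)), i.e. added risk relative to background."""
    p0 = 1.0 / (1.0 + math.exp(-beta0))
    pd = 1.0 / (1.0 + math.exp(-(beta0 + beta1 * d)))
    return (pd - p0) / (1.0 - p0)

def benchmark_duration(beta0, beta1, bmr=0.05, lo=0.0, hi=24.0, tol=1e-9):
    """Bisect for the working-hours value whose extra risk equals the
    benchmark response (extra_risk is monotone in d for beta1 > 0)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if extra_risk(beta0, beta1, mid) < bmr:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    The study's lower 95% confidence limit on the benchmark duration would come from the uncertainty of the fitted coefficients (e.g. profile likelihood), which is outside this sketch.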

  4. Estimation of hand hygiene opportunities on an adult medical ward using 24-hour camera surveillance: validation of the HOW2 Benchmark Study.

    PubMed

    Diller, Thomas; Kelly, J William; Blackhurst, Dawn; Steed, Connie; Boeker, Sue; McElveen, Danielle C

    2014-06-01

    We previously published a formula to estimate the number of hand hygiene opportunities (HHOs) per patient-day using the World Health Organization's "Five Moments for Hand Hygiene" methodology (HOW2 Benchmark Study). HHOs can be used as a denominator for calculating hand hygiene compliance rates when product utilization data are available. This study validates the previously derived HHO estimate using 24-hour video surveillance of health care worker hand hygiene activity. The validation study utilized 24-hour video surveillance recordings of 26 patients' hospital stays to measure the actual number of HHOs per patient-day on a medicine ward in a large teaching hospital. Statistical methods were used to compare these results with those obtained by episodic observation of patient activity in the original derivation study. Total hours of data collection were 81.3 and 1,510.8, yielding 1,740 and 4,522 HHOs in the derivation and validation studies, respectively. Mean HHOs per 24-hour period did not differ significantly between the two studies: 71.6 (95% confidence interval: 64.9-78.3) and 73.9 (95% confidence interval: 69.1-84.1), respectively. This study validates the HOW2 Benchmark Study and confirms that expected numbers of HHOs can be estimated from the unit's patient census and patient-to-nurse ratio. These data can be used as denominators in calculations of hand hygiene compliance rates from electronic monitoring using the "Five Moments for Hand Hygiene" methodology. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  5. PID controller tuning using metaheuristic optimization algorithms for benchmark problems

    NASA Astrophysics Data System (ADS)

    Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.

    2017-11-01

    This paper addresses finding optimal PID controller parameters using particle swarm optimization (PSO), a genetic algorithm (GA), and simulated annealing (SA). The algorithms were developed through simulation of a chemical process and an electrical system, and the PID controller was tuned. Two fitness functions, integral time absolute error (ITAE) and time-domain specifications, were applied with PSO, GA, and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled-tank system and a DC motor. Finally, a comparative study of the algorithms is presented based on best cost, number of iterations, and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
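
    The ITAE fitness function used in studies like this one can be sketched as a simulation-based cost: integrate t·|e(t)| over a unit-step response of the closed loop, and let the metaheuristic minimize it. The plant below is a hypothetical first-order lag, not the paper's coupled-tank or DC-motor models, and the time constant and step sizes are illustrative.

```python
def itae_cost(kp, ki, kd, dt=0.01, t_end=10.0):
    """ITAE = integral of t * |e(t)| for a unit-step setpoint, simulated by
    Euler integration of a hypothetical first-order plant dy/dt = (-y + u)/tau."""
    tau = 1.0                 # illustrative plant time constant
    y, integ, prev_e = 0.0, 0.0, 1.0
    cost, t = 0.0, 0.0
    while t < t_end:
        e = 1.0 - y           # error against the unit-step setpoint
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv   # PID control law
        y += dt * (-y + u) / tau               # Euler step of the plant
        prev_e = e
        t += dt
        cost += t * abs(e) * dt                # accumulate the ITAE integral
    return cost
```

    A PSO, GA, or SA tuner would then search (kp, ki, kd) space for the minimum of `itae_cost`; with an integral term the steady-state error vanishes, so well-chosen gains score far below a proportional-only controller.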

  6. Gatemon Benchmarking and Two-Qubit Operation

    NASA Astrophysics Data System (ADS)

    Casparis, Lucas; Larsen, Thorvald; Olsen, Michael; Petersson, Karl; Kuemmeth, Ferdinand; Krogstrup, Peter; Nygard, Jesper; Marcus, Charles

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize the field-effect tunability unique to semiconductors to allow complete qubit control using gate voltages, a potential technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors of ~0.5% for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent SWAP operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of ~91%, demonstrating the potential of gatemon qubits for building scalable quantum processors. We acknowledge financial support from Microsoft Project Q and the Danish National Research Foundation.

  7. Design and Implementation of a Web-Based Reporting and Benchmarking Center for Inpatient Glucometrics

    PubMed Central

    Schnipper, Jeffrey Lawrence; Messler, Jordan; Ramos, Pedro; Kulasa, Kristen; Nolan, Ann; Rogers, Kendall

    2014-01-01

    Background: Insulin is a top source of adverse drug events in the hospital, and glycemic control is a focus of improvement efforts across the country. Yet, the majority of hospitals have no data to gauge their performance on glycemic control, hypoglycemia rates, or hypoglycemic management. Current tools to outsource glucometrics reports are limited in availability or function. Methods: Society of Hospital Medicine (SHM) faculty designed and implemented a web-based data and reporting center that calculates glucometrics on blood glucose data files securely uploaded by users. Unit labels, care type (critical care, non–critical care), and unit type (eg, medical, surgical, mixed, pediatrics) are defined on upload allowing for robust, flexible reporting. Reports for any date range, care type, unit type, or any combination of units are available on demand for review or downloading into a variety of file formats. Four reports with supporting graphics depict glycemic control, hypoglycemia, and hypoglycemia management by patient day or patient stay. Benchmarking and performance ranking reports are generated periodically for all hospitals in the database. Results: In all, 76 hospitals have uploaded at least 12 months of data for non–critical care areas and 67 sites have uploaded critical care data. Critical care benchmarking reveals wide variability in performance. Some hospitals achieve top quartile performance in both glycemic control and hypoglycemia parameters. Conclusions: This new web-based glucometrics data and reporting tool allows hospitals to track their performance with a flexible reporting system, and provides them with external benchmarking. Tools like this help to establish standardized glucometrics and performance standards. PMID:24876426

  8. Design and implementation of a web-based reporting and benchmarking center for inpatient glucometrics.

    PubMed

    Maynard, Greg; Schnipper, Jeffrey Lawrence; Messler, Jordan; Ramos, Pedro; Kulasa, Kristen; Nolan, Ann; Rogers, Kendall

    2014-07-01

    Insulin is a top source of adverse drug events in the hospital, and glycemic control is a focus of improvement efforts across the country. Yet, the majority of hospitals have no data to gauge their performance on glycemic control, hypoglycemia rates, or hypoglycemic management. Current tools to outsource glucometrics reports are limited in availability or function. Society of Hospital Medicine (SHM) faculty designed and implemented a web-based data and reporting center that calculates glucometrics on blood glucose data files securely uploaded by users. Unit labels, care type (critical care, non-critical care), and unit type (eg, medical, surgical, mixed, pediatrics) are defined on upload allowing for robust, flexible reporting. Reports for any date range, care type, unit type, or any combination of units are available on demand for review or downloading into a variety of file formats. Four reports with supporting graphics depict glycemic control, hypoglycemia, and hypoglycemia management by patient day or patient stay. Benchmarking and performance ranking reports are generated periodically for all hospitals in the database. In all, 76 hospitals have uploaded at least 12 months of data for non-critical care areas and 67 sites have uploaded critical care data. Critical care benchmarking reveals wide variability in performance. Some hospitals achieve top quartile performance in both glycemic control and hypoglycemia parameters. This new web-based glucometrics data and reporting tool allows hospitals to track their performance with a flexible reporting system, and provides them with external benchmarking. Tools like this help to establish standardized glucometrics and performance standards. © 2014 Diabetes Technology Society.

  9. Benchmarking nitrogen removal suspended-carrier biofilm systems using dynamic simulation.

    PubMed

    Vanhooren, H; Yuan, Z; Vanrolleghem, P A

    2002-01-01

    We are witnessing enormous growth in biological nitrogen removal from wastewater. It presents specific challenges beyond traditional COD (carbon) removal. One possibility for optimised process design is the use of biomass-supporting media. In this paper, attached growth processes (AGP) are evaluated using dynamic simulations. The advantages of these systems, which were qualitatively described elsewhere, are validated quantitatively based on a simulation benchmark for activated sludge treatment systems. This simulation benchmark is extended with a biofilm model that allows fast and accurate simulation of the conversion of different substrates in a biofilm. The economic feasibility of the system is evaluated using the data generated with the benchmark simulations. Capital savings due to volume reduction and reduced sludge production are weighed against increased aeration costs. In this evaluation, effluent quality is integrated as well.

  10. NACA0012 benchmark model experimental flutter results with unsteady pressure distributions

    NASA Technical Reports Server (NTRS)

    Rivera, Jose A., Jr.; Dansberry, Bryan E.; Bennett, Robert M.; Durham, Michael H.; Silva, Walter A.

    1992-01-01

    The Structural Dynamics Division at NASA Langley Research Center has started a wind tunnel activity referred to as the Benchmark Models Program. The primary objective of this program is to acquire measured dynamic instability and corresponding pressure data that will be useful for developing and evaluating aeroelastic type computational fluid dynamics codes currently in use or under development. The program is a multi-year activity that will involve testing of several different models to investigate various aeroelastic phenomena. This paper describes results obtained from a second wind tunnel test of the first model in the Benchmark Models Program. This first model consisted of a rigid semispan wing having a rectangular planform and a NACA 0012 airfoil shape which was mounted on a flexible two degree of freedom mount system. Experimental flutter boundaries and corresponding unsteady pressure distribution data acquired over two model chords located at the 60 and 95 percent span stations are presented.

  11. Teaching Medical Students at a Distance: Using Distance Learning Benchmarks to Plan and Evaluate a Web-Enhanced Medical Student Curriculum

    ERIC Educational Resources Information Center

    Olney, Cynthia A.; Chumley, Heidi; Parra, Juan M.

    2004-01-01

    A team designing a Web-enhanced third-year medical education didactic curriculum based their course planning and evaluation activities on the Institute for Higher Education Policy's (2000) 24 benchmarks for online distance learning. The authors present the team's blueprint for planning and evaluating the Web-enhanced curriculum, which incorporates…

  12. Benchmarking of DFLAW Solid Secondary Wastes and Processes with UK/Europe Counterparts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Elvie E.; Swanberg, David J.; Surman, J.

    This report provides information and background on UK solid wastes and waste processes that are similar to those that will be generated by the Direct-Feed Low Activity Waste (DFLAW) facilities at Hanford. The aim is to further improve the design case for stabilizing and immobilizing solid secondary wastes, to establish international benchmarking, and to review possibilities for innovation.

  13. Successful implementation of diabetes audits in Australia: the Australian National Diabetes Information Audit and Benchmarking (ANDIAB) initiative.

    PubMed

    Lee, A S; Colagiuri, S; Flack, J R

    2018-04-06

    We developed and implemented a national audit and benchmarking programme to describe the clinical status of people with diabetes attending specialist diabetes services in Australia. The Australian National Diabetes Information Audit and Benchmarking (ANDIAB) initiative was established as a quality audit activity. De-identified data on demographic, clinical, biochemical and outcome items were collected from specialist diabetes services across Australia to provide cross-sectional data on people with diabetes attending specialist centres at least biennially during the years 1998 to 2011. In total, 38 155 sets of data were collected over the eight ANDIAB audits. Each ANDIAB audit achieved its primary objective to collect, collate, analyse, audit and report clinical diabetes data in Australia. Each audit resulted in the production of a pooled data report, as well as individual site reports allowing comparison and benchmarking against other participating sites. The ANDIAB initiative resulted in the largest cross-sectional national de-identified dataset describing the clinical status of people with diabetes attending specialist diabetes services in Australia. ANDIAB showed that people treated by specialist services had a high burden of diabetes complications. This quality audit activity provided a framework to guide planning of healthcare services. © 2018 Diabetes UK.

  14. Benchmarking: a method for continuous quality improvement in health.

    PubMed

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  15. On the efficiency of FES cycling: a framework and systematic review.

    PubMed

    Hunt, K J; Fang, J; Saengsuwan, J; Grob, M; Laubacher, M

    2012-01-01

    Research and development in the art of cycling using functional electrical stimulation (FES) of the paralysed leg muscles has been going on for around thirty years. A range of physiological benefits has been observed in clinical studies but an outstanding problem with FES-cycling is that efficiency and power output are very low. The present work had the following aims: (i) to provide a tutorial introduction to a novel framework and methods of estimation of metabolic efficiency using example data sets, and to propose benchmark measures for evaluating FES-cycling performance; (ii) to systematically review the literature pertaining specifically to the metabolic efficiency of FES-cycling, to analyse the observations and possible explanations for the low efficiency, and to pose hypotheses for future studies which aim to improve performance. We recommend the following as benchmark measures for assessment of the performance of FES-cycling: (i) total work efficiency, delta efficiency and stimulation cost; (ii) we recommend, further, that these benchmark measures be complemented by mechanical measures of maximum power output, sustainable steady-state power output and endurance. Performance assessments should be carried out at a well-defined operating point, i.e. under conditions of well controlled work rate and cadence, because these variables have a strong effect on energy expenditure. Future work should focus on the two main factors which affect FES-cycling performance, namely: (i) unfavourable biomechanics, i.e. crude recruitment of muscle groups, non-optimal timing of muscle activation, and lack of synergistic and antagonistic joint control; (ii) non-physiological recruitment of muscle fibres, i.e. mixed recruitment of fibres of different type and deterministic constant-frequency stimulation. 
We hypothesise that the following areas may bring better FES-cycling performance: (i) study of alternative stimulation strategies for muscle activation including irregular stimulation patterns (e.g. doublets, triplets, stochastic patterns) and variable frequency stimulation trains, where it appears that increasing frequency over time may be profitable; (ii) study of better timing parameters for the stimulated muscle groups, and addition of more muscle groups: this path may be approached using EMG studies and constrained numerical optimisation employing dynamic models; (iii) development of optimal stimulation protocols for muscle reconditioning and FES-cycle training.
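
    The benchmark measures recommended in this record reduce to simple ratios, sketched below. The formulas follow the usual exercise-physiology definitions (work over metabolic energy; increments of work rate over increments of energy rate), which match the terms in the abstract but are stated here as assumptions; the numbers are illustrative, not data from the review.

```python
def total_work_efficiency(work_j, energy_j):
    """Mechanical work done divided by metabolic energy expended (a fraction)."""
    return work_j / energy_j

def delta_efficiency(d_work_j, d_energy_j):
    """Increment in work divided by the corresponding increment in
    metabolic energy between two work rates."""
    return d_work_j / d_energy_j

def stimulation_cost(charge_c, work_j):
    """Electrical stimulation charge delivered per joule of external work
    (one plausible reading of 'stimulation cost'; the exact definition
    in the review may differ)."""
    return charge_c / work_j

# Example: 10 W external power sustained for 300 s at a 400 W metabolic rate
work = 10.0 * 300.0      # 3,000 J of external work
energy = 400.0 * 300.0   # 120,000 J of metabolic energy
eff = total_work_efficiency(work, energy)  # 0.025, i.e. 2.5%
```

    Efficiencies of a few percent, as in this example, are the order of magnitude that motivates the review's search for better stimulation strategies.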

  16. Modified Newton-Raphson GRAPE methods for optimal control of spin systems

    NASA Astrophysics Data System (ADS)

    Goodwin, D. L.; Kuprov, Ilya

    2016-05-01

    Quadratic convergence throughout the active space is achieved for the gradient ascent pulse engineering (GRAPE) family of quantum optimal control algorithms. We demonstrate in this communication that the Hessian of the GRAPE fidelity functional is unusually cheap, having the same asymptotic complexity scaling as the functional itself. This leads to the possibility of using very efficient numerical optimization techniques. In particular, the Newton-Raphson method with a rational function optimization (RFO) regularized Hessian is shown in this work to require fewer system trajectory evaluations than any other algorithm in the GRAPE family. This communication describes algebraic and numerical implementation aspects (matrix exponential recycling, Hessian regularization, etc.) for the RFO Newton-Raphson version of GRAPE and reports benchmarks for common spin state control problems in magnetic resonance spectroscopy.
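
    The update the abstract refers to can be sketched in standard optimal-control notation (the symbols below — pulse amplitudes c, fidelity J, gradient g, Hessian H — are generic, not taken from the paper):

```latex
% Hedged sketch of a Newton--Raphson GRAPE step with RFO regularization.
\begin{align}
  \mathbf{c}_{k+1} &= \mathbf{c}_k + \mathbf{H}_{\mathrm{RFO}}^{-1}\,\mathbf{g},
  \qquad
  \mathbf{g} = \nabla J(\mathbf{c}_k),\quad
  \mathbf{H} = \nabla^{2} J(\mathbf{c}_k),\\[4pt]
  \mathbf{H}_{\mathrm{aug}} &=
  \begin{pmatrix}
    \mathbf{H} & \mathbf{g}\\
    \mathbf{g}^{\mathsf T} & 0
  \end{pmatrix},
\end{align}
```

    where rational function optimization takes the step from the extreme eigenvector of the augmented matrix rather than inverting a possibly indefinite H directly, giving the well-conditioned ascent step that makes quadratic convergence usable in practice.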

  17. The Isolated Synthetic Jet in Crossflow: A Benchmark for Flow Control Simulation

    NASA Technical Reports Server (NTRS)

    Schaeffler, Norman W.; Jenkins, Luther N.

    2006-01-01

    An overview of the data acquisition, reduction, and uncertainty of experimental measurements made of the flowfield created by the interaction of an isolated synthetic jet and a turbulent boundary layer is presented. The experimental measurements were undertaken to serve as the second of three computational fluid dynamics validation databases for Active Flow Control. The validation databases were presented at the NASA Langley Research Center Workshop on CFD Validation of Synthetic Jets and Turbulent Separation Control in March, 2004. Detailed measurements were made to document the boundary conditions for the flow and also for the phase-averaged flowfield itself. Three component Laser-Doppler Velocimetry, 2-D Particle Image Velocimetry, and Stereo Particle Image Velocimetry were utilized to document the phase-averaged velocity field and the turbulent stresses.

  18. The Isolated Synthetic Jet in Crossflow: A Benchmark for Flow Control Simulation

    NASA Technical Reports Server (NTRS)

    Schaeffler, Norman W.; Jenkins, Luther N.

    2004-01-01

    An overview of the data acquisition, reduction, and uncertainty of experimental measurements of the flowfield created by the interaction of an isolated synthetic jet and a turbulent boundary layer is presented. The experimental measurements were undertaken to serve as the second of three computational fluid dynamics validation databases for Active Flow Control. The validation databases were presented at the NASA Langley Research Center Workshop on CFD Validation of Synthetic Jets and Turbulent Separation Control in March, 2004. Detailed measurements were made to document the boundary conditions for the flow and also for the phase-averaged flowfield itself. Three component Laser-Doppler Velocimetry, 2-D Particle Image Velocimetry, and Stereo Particle Image Velocimetry were utilized to document the phase-averaged velocity field and the turbulent stresses.

  19. Posture Control-Human-Inspired Approaches for Humanoid Robot Benchmarking: Conceptualizing Tests, Protocols and Analyses.

    PubMed

    Mergner, Thomas; Lippi, Vittorio

    2018-01-01

    Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspirations from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, we focus in our tests on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field and contact forces) may affect the balancing in any given degree of freedom (DoF). Targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with "reactive" balancing of external disturbances and "proactive" balancing of self-produced disturbances from the voluntary movements. Our proposals therefore target both types of disturbances and their superposition. (3) Relevant for both versatility and robustness of the control, linkages between the posture control mechanisms across DoFs provide their functional cooperation and coordination at will and on functional demands. The suggested tests therefore include ankle-hip coordination. 
Suggested benchmarking criteria build on the evoked sway magnitude, normalized to robot weight and center-of-mass (COM) height, in relation to reference ranges that remain to be established. The references may include human-likeness features. The proposed benchmarking concept may in principle also be applied to wearable robots, where a human user may command movements but may not be aware of the additionally required postural control, which then needs to be implemented in the robot.

  20. Identifying future competitive business strategies for the U.S. furniture industry: Benchmarking and paradigm shifts

    Treesearch

    Albert Schuler; Urs Buehlmann

    2003-01-01

    This paper describes benchmarking activities undertaken to provide a basis for comparing the U.S. wood furniture industry with other nations that have a globally competitive furniture manufacturing industry. The second part of this paper outlines and discusses strategies that have the potential to help the U.S. furniture industry survive and thrive in a global business...

  1. Evaluation and optimization of virtual screening workflows with DEKOIS 2.0--a public library of challenging docking benchmark sets.

    PubMed

    Bauer, Matthias R; Ibrahim, Tamer M; Vogel, Simon M; Boeckler, Frank M

    2013-06-24

    The application of molecular benchmarking sets helps to assess the actual performance of virtual screening (VS) workflows. To improve the efficiency of structure-based VS approaches, the selection and optimization of various parameters can be guided by benchmarking. With the DEKOIS 2.0 library, we aim to further extend and complement the collection of publicly available decoy sets. Based on BindingDB bioactivity data, we provide 81 new and structurally diverse benchmark sets for a wide variety of different target classes. To ensure a meaningful selection of ligands, we address several issues that can be found in bioactivity data. We have improved our previously introduced DEKOIS methodology with enhanced physicochemical matching, now including the consideration of molecular charges, as well as a more sophisticated elimination of latent actives in the decoy set (LADS). We evaluate the docking performance of Glide, GOLD, and AutoDock Vina with our data sets and highlight existing challenges for VS tools. All DEKOIS 2.0 benchmark sets will be made accessible at http://www.dekois.com.
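The decoy-selection idea behind physicochemical matching can be sketched as nearest-neighbor assignment in a scaled descriptor space, with charge included as one of the matched properties. The snippet below is a toy illustration only; the molecule names, descriptor values, scale factors, and greedy assignment are assumptions, not the DEKOIS 2.0 procedure.

```python
import math

# Toy, precomputed physicochemical descriptors (all values invented for illustration);
# DEKOIS-style matching pairs each active with property-matched decoys, including charge.
actives = [
    {"name": "active_1", "mw": 320.0, "logp": 2.1, "hbd": 2, "hba": 5, "charge": 0},
    {"name": "active_2", "mw": 410.0, "logp": 3.8, "hbd": 1, "hba": 6, "charge": -1},
]
candidates = [
    {"name": "decoy_a", "mw": 318.0, "logp": 2.0, "hbd": 2, "hba": 5, "charge": 0},
    {"name": "decoy_b", "mw": 500.0, "logp": 5.5, "hbd": 0, "hba": 9, "charge": 1},
    {"name": "decoy_c", "mw": 405.0, "logp": 3.6, "hbd": 1, "hba": 6, "charge": -1},
]

# Scale factors put descriptors of different magnitudes on a comparable footing
SCALES = {"mw": 100.0, "logp": 1.0, "hbd": 1.0, "hba": 1.0, "charge": 1.0}

def property_distance(a, b):
    """Euclidean distance between two molecules in scaled descriptor space."""
    return math.sqrt(sum(((a[k] - b[k]) / s) ** 2 for k, s in SCALES.items()))

def match_decoys(actives, candidates):
    """Greedily assign each active the closest unused candidate decoy."""
    unused = list(candidates)
    matches = {}
    for act in actives:
        best = min(unused, key=lambda c: property_distance(act, c))
        matches[act["name"]] = best["name"]
        unused.remove(best)
    return matches
```

In a real workflow the candidate pool is large and the matching also filters out latent actives (LADS); the sketch only shows the distance-based pairing step.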

  2. Robust fuzzy output feedback controller for affine nonlinear systems via T-S fuzzy bilinear model: CSTR benchmark.

    PubMed

    Hamdy, M; Hamdan, I

    2015-07-01

    In this paper, a robust H∞ fuzzy output feedback controller is designed for a class of affine nonlinear systems with disturbance via a Takagi-Sugeno (T-S) fuzzy bilinear model. The parallel distributed compensation (PDC) technique is utilized to design the fuzzy controller. The stability conditions of the overall closed-loop T-S fuzzy bilinear model are formulated in terms of a Lyapunov function via linear matrix inequalities (LMIs). The control law is robustified in the H∞ sense to attenuate external disturbance. Moreover, the desired controller gains can be obtained by solving a set of LMIs. A continuous stirred tank reactor (CSTR), which is a benchmark problem in nonlinear process control, is discussed in detail to verify the effectiveness of the proposed approach with a comparative study. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
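As background for the LMI formulation, the sketch below shows the quadratic-stability certificate that such conditions build on for a plain linear system (not the paper's T-S fuzzy bilinear conditions; the plant matrix is a made-up stable example): solve the Lyapunov equation A^T P + P A = -Q and verify that P is positive definite.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_certificate(A, Q=None):
    """Solve A^T P + P A = -Q; a positive-definite P certifies that dx/dt = A x
    is stable. This equality is the feasibility core of LMI-based designs."""
    n = A.shape[0]
    if Q is None:
        Q = np.eye(n)
    P = solve_continuous_lyapunov(A.T, -Q)
    # symmetrize before the eigenvalue check to suppress numerical asymmetry
    eigvals = np.linalg.eigvalsh((P + P.T) / 2)
    return P, bool(eigvals.min() > 0)

# Hypothetical stable plant matrix (eigenvalues -1 and -3, both in the left half-plane)
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
P, is_stable = lyapunov_certificate(A)
```

Full LMI synthesis additionally treats the controller gains as decision variables and would use a semidefinite-programming solver rather than a direct Lyapunov solve.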

  3. Human Health Benchmarks for Pesticides

    EPA Pesticide Factsheets

    Advanced testing methods now allow pesticides to be detected in water at very low levels. These small amounts of pesticides detected in drinking water or source water for drinking water do not necessarily indicate a health risk. The EPA has developed human health benchmarks for 363 pesticides to enable our partners to better determine whether the detection of a pesticide in drinking water or source waters for drinking water may indicate a potential health risk and to help them prioritize monitoring efforts. The table below includes benchmarks for acute (one-day) and chronic (lifetime) exposures for the most sensitive populations from exposure to pesticides that may be found in surface or ground water sources of drinking water. The table also includes benchmarks for 40 pesticides in drinking water that have the potential for cancer risk. The HHBP table includes pesticide active ingredients for which Health Advisories or enforceable National Primary Drinking Water Regulations (e.g., maximum contaminant levels) have not been developed.

  4. Electrochemical Characterization Laboratory | Energy Systems Integration

    Science.gov Websites

    Photo of an NREL researcher evaluating catalyst activity in proton exchange membrane fuel cells. Capabilities include the determination and benchmarking of novel electrocatalyst activity.

  5. Making Waves.

    ERIC Educational Resources Information Center

    DeClark, Tom

    2000-01-01

    Presents an activity on waves that addresses the state standards and benchmarks of Michigan. Demonstrates waves and studies their medium, motion, and frequency. The activity is designed to address different learning styles. (YDS)

  6. Space Operations Training Concepts Benchmark Study (Training in a Continuous Operations Environment)

    NASA Technical Reports Server (NTRS)

    Johnston, Alan E.; Gilchrist, Michael; Underwood, Debrah (Technical Monitor)

    2002-01-01

    The NASA/USAF Benchmark Space Operations Training Concepts Study will perform a comparative analysis of the space operations training programs utilized by the United States Air Force Space Command with those utilized by the National Aeronautics and Space Administration. The study will concentrate on Ground Controller/Flight Controller Training for the International Space Station Payload Program. The duration of the study is expected to be five months, with report completion by 30 June 2002. The U.S. Air Force Space Command was chosen as the most likely candidate for this benchmark study because their experience in payload operations controller training and user interfaces compares favorably with the Payload Operations Integration Center's training and user interfaces. These similarities can be seen in the dynamics of missions/payloads, controller on-console requirements, and currency/proficiency challenges, to name a few. It is expected that the report will look at the respective programs and investigate the goals of each training program, unique training challenges posed by space operations ground controller environments, processes of setting up controller training programs, phases of controller training, methods of controller training, techniques to evaluate adequacy of controller knowledge and the training received, and approaches to training administration. The report will provide recommendations to the respective agencies based on the findings. Attached is a preliminary outline of the study. Following selection of participants and an approval to proceed, initial contact will be made with the U.S. Air Force Space Command Directorate of Training to discuss steps to accomplish the study.

  7. Benchmarking: A Method for Continuous Quality Improvement in Health

    PubMed Central

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-01-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical–social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted. PMID:23634166

  8. Posture Control—Human-Inspired Approaches for Humanoid Robot Benchmarking: Conceptualizing Tests, Protocols and Analyses

    PubMed Central

    Mergner, Thomas; Lippi, Vittorio

    2018-01-01

    Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspiration from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, we focus in our tests on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field and contact forces) may affect the balancing in any given degree of freedom (DoF). Targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with “reactive” balancing of external disturbances and “proactive” balancing of self-produced disturbances from the voluntary movements. Our proposals therefore target both types of disturbances and their superposition. (3) Relevant for both versatility and robustness of the control, linkages between the posture control mechanisms across DoFs provide their functional cooperation and coordination at will and on functional demands. The suggested tests therefore include ankle-hip coordination.
Suggested benchmarking criteria build on the evoked sway magnitude, normalized to robot weight and center of mass (COM) height, in relation to reference ranges that remain to be established. The references may include human likeness features. The proposed benchmarking concept may in principle also be applied to wearable robots, where a human user may command movements, but may not be aware of the additionally required postural control, which then needs to be implemented into the robot. PMID:29867428

  9. Experimental flutter boundaries with unsteady pressure distributions for the NACA 0012 Benchmark Model

    NASA Technical Reports Server (NTRS)

    Rivera, Jose A., Jr.; Dansberry, Bryan E.; Farmer, Moses G.; Eckstrom, Clinton V.; Seidel, David A.; Bennett, Robert M.

    1991-01-01

    The Structural Dynamics Division at NASA Langley has started a wind tunnel activity referred to as the Benchmark Models Program. The objective is to acquire test data that will be useful for developing and evaluating aeroelastic computational fluid dynamics codes currently in use or under development. This report describes the progress achieved in testing the first model in the Benchmark Models Program. Experimental flutter boundaries are presented for a rigid semispan model (NACA 0012 airfoil section) mounted on a flexible mount system. Also, steady and unsteady pressure measurements taken at the flutter condition are presented. The pressure data were acquired over the entire model chord located at the 60 percent span station.

  10. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Javier Ortensi; Sonat Sen

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to simulate accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes.
A final OECD/NEA comparison report will compare the Phase I and III results of all other international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.

  11. Arithmetic Data Cube as a Data Intensive Benchmark

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Shabano, Leonid

    2003-01-01

    Data movement across computational grids and across the memory hierarchy of individual grid machines is known to be a limiting factor for applications involving large data sets. In this paper we introduce the Data Cube Operator on an Arithmetic Data Set, which we call the Arithmetic Data Cube (ADC). We propose to use the ADC to benchmark grid capabilities to handle large distributed data sets. The ADC stresses all levels of grid memory by producing 2^d views of an Arithmetic Data Set of d-tuples described by a small number of parameters. We control the data intensity of the ADC by controlling the sizes of the views through the choice of the tuple parameters.
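As a sketch of the data cube operation the ADC builds on, the snippet below enumerates all 2^d group-by views of a small set of d-tuples. It is a toy illustration, not the NAS benchmark code; the record values and dimension names are invented, and the real ADC generates its tuples arithmetically from a compact parameter set.

```python
from itertools import combinations
from collections import defaultdict

def data_cube_views(records, dims):
    """Compute all 2^d group-by views of a measure over d dimensions,
    the core operation of a data cube over d-tuples."""
    views = {}
    for r in range(len(dims) + 1):
        for subset in combinations(dims, r):
            agg = defaultdict(int)
            for rec in records:
                key = tuple(rec[d] for d in subset)  # group by this attribute subset
                agg[key] += rec["measure"]
            views[subset] = dict(agg)
    return views

# Toy data set of 3-tuples with an integer measure
records = [
    {"a": 0, "b": 1, "c": 0, "measure": 5},
    {"a": 0, "b": 1, "c": 1, "measure": 3},
    {"a": 1, "b": 0, "c": 1, "measure": 2},
]
views = data_cube_views(records, ("a", "b", "c"))
```

For d = 3 this yields 8 views, from the single grand total (empty attribute subset) up to the full ("a", "b", "c") breakdown; the benchmark stresses memory by controlling the sizes of these views.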

  12. Groundwater flow with energy transport and water-ice phase change: Numerical simulations, benchmarks, and application to freezing in peat bogs

    USGS Publications Warehouse

    McKenzie, J.M.; Voss, C.I.; Siegel, D.I.

    2007-01-01

    In northern peatlands, subsurface ice formation is an important process that can control heat transport, groundwater flow, and biological activity. Temperature was measured over one and a half years in a vertical profile in the Red Lake Bog, Minnesota. To successfully simulate the transport of heat within the peat profile, the U.S. Geological Survey's SUTRA computer code was modified. The modified code simulates fully saturated, coupled porewater-energy transport, with freezing and melting porewater, and includes proportional heat capacity and thermal conductivity of water and ice, decreasing matrix permeability due to ice formation, and latent heat. The model is verified by correctly simulating the Lunardini analytical solution for ice formation in a porous medium with a mixed ice-water zone. The modified SUTRA model correctly simulates the temperature and ice distributions in the peat bog. Two possible benchmark problems for groundwater and energy transport with ice formation and melting are proposed that may be used by other researchers for code comparison. ?? 2006 Elsevier Ltd. All rights reserved.
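The coupled porewater-energy balance with freezing is often handled with an "apparent heat capacity" that folds the latent heat of fusion into the mixed ice-water zone. The sketch below assumes a linear freezing function and illustrative material constants; it is not the modified SUTRA formulation, and the porosity and freezing interval are made-up values.

```python
import numpy as np

L_FUSION = 334e3     # latent heat of fusion of water, J/kg
RHO_W = 1000.0       # water density, kg/m^3
C_WATER = 4.182e6    # volumetric heat capacity of liquid water, J/(m^3 K)
C_ICE = 1.94e6       # volumetric heat capacity of ice, J/(m^3 K)
T_FREEZE, T_SOLID = 0.0, -1.0   # assumed mixed ice-water zone boundaries, degC

def ice_saturation(T):
    """Fraction of pore water frozen: 0 above T_FREEZE, 1 below T_SOLID,
    linear in between (an assumed freezing function)."""
    return float(np.clip((T_FREEZE - T) / (T_FREEZE - T_SOLID), 0.0, 1.0))

def apparent_heat_capacity(T, porosity=0.8):
    """Volumetric heat capacity of the saturated medium's pore space,
    including the latent-heat release of forming ice."""
    s_i = ice_saturation(T)
    c_mix = porosity * ((1 - s_i) * C_WATER + s_i * C_ICE)
    # latent-heat term L * rho_w * porosity * dS_ice/dT, nonzero only in the mixed zone
    dsi_dT = 1.0 / (T_FREEZE - T_SOLID) if T_SOLID < T < T_FREEZE else 0.0
    return c_mix + porosity * RHO_W * L_FUSION * dsi_dT
```

The latent-heat term dominates inside the mixed zone, which is why freezing strongly buffers the temperature profile observed in the bog.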

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodwin, D. L.; Kuprov, Ilya, E-mail: i.kuprov@soton.ac.uk

    Quadratic convergence throughout the active space is achieved for the gradient ascent pulse engineering (GRAPE) family of quantum optimal control algorithms. We demonstrate in this communication that the Hessian of the GRAPE fidelity functional is unusually cheap, having the same asymptotic complexity scaling as the functional itself. This leads to the possibility of using very efficient numerical optimization techniques. In particular, the Newton-Raphson method with a rational function optimization (RFO) regularized Hessian is shown in this work to require fewer system trajectory evaluations than any other algorithm in the GRAPE family. This communication describes algebraic and numerical implementation aspects (matrix exponential recycling, Hessian regularization, etc.) for the RFO Newton-Raphson version of GRAPE and reports benchmarks for common spin state control problems in magnetic resonance spectroscopy.
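The RFO regularization mentioned here can be sketched generically: augment the Hessian with the gradient, take the eigenvector of the lowest eigenvalue, and rescale it by its last component. This is the textbook RFO construction for a minimization step, not the GRAPE-specific implementation.

```python
import numpy as np

def rfo_step(gradient, hessian):
    """Rational function optimization (RFO) step for minimization:
    diagonalize the augmented matrix [[H, g], [g^T, 0]] and recover the step
    from the lowest eigenvector. The implied shift keeps (H - lambda*I)
    positive definite, regularizing Newton-Raphson when H is indefinite."""
    n = gradient.size
    aug = np.zeros((n + 1, n + 1))
    aug[:n, :n] = hessian
    aug[:n, n] = gradient
    aug[n, :n] = gradient
    vals, vecs = np.linalg.eigh(aug)   # eigenvalues in ascending order
    v = vecs[:, 0]                     # eigenvector of the lowest eigenvalue
    return v[:n] / v[n]                # homogeneous coordinate gives the step
```

Near a minimum the RFO step reduces to the plain Newton step -H^{-1} g, while far from it the eigenvalue shift damps the step and enforces a descent direction.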

  14. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thrower, A.W.; Patric, J.; Keister, M.

    2008-07-01

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations.
The prospective companies represent large scale shippers and have vast experience in safely and efficiently shipping spent nuclear fuel and other radioactive materials. Additional business processes may be examined in this phase. The findings of these benchmarking efforts will help determine the organizational structure and requirements of the national transportation system. (authors)

  15. Back-Arc Opening in the Western End of the Okinawa Trough Revealed From GNSS/Acoustic Measurements

    NASA Astrophysics Data System (ADS)

    Chen, Horng-Yue; Ikuta, Ryoya; Lin, Cheng-Horng; Hsu, Ya-Ju; Kohmi, Takeru; Wang, Chau-Chang; Yu, Shui-Beih; Tu, Yoko; Tsujii, Toshiaki; Ando, Masataka

    2018-01-01

    We measured seafloor movement using a Global Navigation Satellite Systems (GNSS)/Acoustic technique south of the rifting valley in the western end of the Okinawa Trough back-arc basin, 60 km east of the northeastern corner of Taiwan. The horizontal position of the seafloor benchmark, measured eight times between July 2012 and May 2016, showed a southeastward movement suggesting a back-arc opening of the Okinawa Trough. The average velocity of the seafloor benchmark shows a block motion together with Yonaguni Island. The westernmost part of the Ryukyu Arc rotates clockwise and is pulled apart from Taiwan Island, which should cause the expansion of the Yilan Plain, Taiwan. Comparing the motion of the seafloor benchmark with adjacent seismicity, we suggest a gentle episodic opening of the rifting valley accompanied by moderate seismic activation, which differs from the case in the segment north off Yonaguni Island, where a rapid dyke intrusion occurs with significant seismic activity.

  16. The adenosine triphosphate test is a rapid and reliable audit tool to assess manual cleaning adequacy of flexible endoscope channels.

    PubMed

    Alfa, Michelle J; Fatima, Iram; Olson, Nancy

    2013-03-01

    The study objective was to verify that the adenosine triphosphate (ATP) benchmark of <200 relative light units (RLUs) was achievable in a busy endoscopy clinic that followed the manufacturer's manual cleaning instructions. All channels from patient-used colonoscopes (20) and duodenoscopes (20) in a tertiary care hospital endoscopy clinic were sampled after manual cleaning and tested for residual ATP. The ATP test benchmark for adequate manual cleaning was set at <200 RLUs. The benchmark for protein was <6.4 μg/cm(2), and, for bioburden, it was <4-log10 colony-forming units/cm(2). Our data demonstrated that 96% (115/120) of channels from 20 colonoscopes and 20 duodenoscopes evaluated met the ATP benchmark of <200 RLUs. The 5 channels that exceeded 200 RLUs were all elevator guide-wire channels. All 120 of the manually cleaned endoscopes tested had protein and bioburden levels that were compliant with accepted benchmarks for manual cleaning for suction-biopsy, air-water, and auxiliary water channels. Our data confirmed that, by following the endoscope manufacturer's manual cleaning recommendations, 96% of channels in gastrointestinal endoscopes would have <200 RLUs for the ATP test kit evaluated and would meet the accepted clean benchmarks for protein and bioburden. Copyright © 2013 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
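The audit logic amounts to comparing each channel's reading against the <200 RLU benchmark and tallying the pass rate. The sketch below is illustrative only: the channel names and RLU readings are invented, though the failing channel type mirrors the study's finding that only elevator guide-wire channels exceeded the benchmark.

```python
# Benchmarks from the study: ATP < 200 RLU (protein < 6.4 ug/cm^2 and
# bioburden < 4 log10 CFU/cm^2 would be checked the same way).
ATP_BENCHMARK_RLU = 200

def audit_channels(readings):
    """Classify manually cleaned endoscope channels against the ATP benchmark.
    Returns (pass_fraction, list of failing channel names)."""
    failed = [name for name, rlu in readings.items() if rlu >= ATP_BENCHMARK_RLU]
    return 1 - len(failed) / len(readings), failed

# Hypothetical post-cleaning readings for one duodenoscope
readings = {
    "suction_biopsy": 38,
    "air_water": 12,
    "auxiliary_water": 55,
    "elevator_guidewire": 412,  # the study's only failures were this channel type
}
pass_fraction, failed = audit_channels(readings)
```

Applied over the study's 120 channels, the same tally gives the reported 115/120 (96%) compliance.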

  17. Social marketing approaches to nutrition and physical activity interventions in early care and education centres: a systematic review.

    PubMed

    Luecking, C T; Hennink-Kaminski, H; Ihekweazu, C; Vaughn, A; Mazzucca, S; Ward, D S

    2017-12-01

    Social marketing is a promising planning approach for influencing voluntary lifestyle behaviours, but its application to nutrition and physical activity interventions in the early care and education setting remains unknown. PubMed, ISI Web of Science, PsycInfo and the Cumulative Index of Nursing and Allied Health were systematically searched to identify interventions targeting nutrition and/or physical activity behaviours of children enrolled in early care centres between 1994 and 2016. Content analysis methods were used to capture information reflecting eight social marketing benchmark criteria. The review included 135 articles representing 77 interventions. Two interventions incorporated all eight benchmark criteria, but the majority included fewer than four. Each intervention included behaviour and methods mix criteria, and more than half identified audience segments. Only one-third of interventions incorporated customer orientation, theory, exchange and insight. Only six interventions addressed competing behaviours. We did not find statistical significance for the effectiveness of interventions on child-level diet, physical activity or anthropometric outcomes based on the number of benchmark criteria used. This review highlights opportunities to apply social marketing to obesity prevention interventions in early care centres. Social marketing could be an important strategy for early childhood obesity prevention efforts, and future research investigations into its effects are warranted. © 2017 World Obesity Federation.

  18. Use of benchmarking and public reporting for infection control in four high-income countries.

    PubMed

    Haustein, Thomas; Gastmeier, Petra; Holmes, Alison; Lucet, Jean-Christophe; Shannon, Richard P; Pittet, Didier; Harbarth, Stephan

    2011-06-01

    Benchmarking of surveillance data for health-care-associated infection (HCAI) has been used for more than three decades to inform prevention strategies and improve patients' safety. In recent years, public reporting of HCAI indicators has been mandated in several countries because of an increasing demand for transparency, although many methodological issues surrounding benchmarking remain unresolved and are highly debated. In this Review, we describe developments in benchmarking and public reporting of HCAI indicators in England, France, Germany, and the USA. Although benchmarking networks in these countries are derived from a common model and use similar methods, approaches to public reporting have been more diverse. The USA and England have predominantly focused on reporting of infection rates, whereas France has put emphasis on process and structure indicators. In Germany, HCAI indicators of individual institutions are treated confidentially and are not disseminated publicly. Although evidence for a direct effect of public reporting of indicators alone on incidence of HCAIs is weak at present, it has been associated with substantial organisational change. An opportunity now exists to learn from the different strategies that have been adopted. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Evaluation of Neutron Radiography Reactor LEU-Core Start-Up Measurements

    DOE PAGES

    Bess, John D.; Maddock, Thomas L.; Smolinski, Andrew T.; ...

    2014-11-04

    Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. Experiments include criticality, control-rod worth measurements, shutdown margin, and excess reactivity for four core loadings with 56, 60, 62, and 64 fuel elements. The worth of four graphite reflector block assemblies and an empty dry tube used for experiment irradiations were also measured and evaluated for the 60-fuel-element core configuration. Dominant uncertainties in the experimental k-eff come from uncertainties in the manganese content and impurities in the stainless steel fuel cladding as well as the 236U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 neutron nuclear data are approximately 1.4% (9σ) greater than the benchmark model eigenvalues, which is commonly seen in Monte Carlo simulations of other TRIGA reactors. Simulations of the worth measurements are within the 2σ uncertainty for most of the benchmark experiment worth values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
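The quoted "approximately 1.4% (9σ)" combines a relative deviation from the benchmark eigenvalue with the benchmark's uncertainty. A minimal sketch of that bookkeeping follows; the eigenvalue and uncertainty below are illustrative numbers, not the NRAD values.

```python
def keff_deviation(calculated, benchmark, benchmark_sigma):
    """Express a calculated eigenvalue's deviation from the benchmark value
    both as a percentage and in multiples of the benchmark uncertainty."""
    diff = calculated - benchmark
    return 100.0 * diff / benchmark, diff / benchmark_sigma

# Illustrative only: a 1.4% overprediction against a 0.0016 (1-sigma)
# benchmark uncertainty comes out near 9 sigma, matching the reported scale.
pct, n_sigma = keff_deviation(calculated=1.0140, benchmark=1.0000,
                              benchmark_sigma=0.0016)
```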

  20. Evaluation of Neutron Radiography Reactor LEU-Core Start-Up Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Maddock, Thomas L.; Smolinski, Andrew T.

    Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. Experiments include criticality, control-rod worth measurements, shutdown margin, and excess reactivity for four core loadings with 56, 60, 62, and 64 fuel elements. The worth of four graphite reflector block assemblies and an empty dry tube used for experiment irradiations were also measured and evaluated for the 60-fuel-element core configuration. Dominant uncertainties in the experimental k-eff come from uncertainties in the manganese content and impurities in the stainless steel fuel cladding as well as the 236U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 neutron nuclear data are approximately 1.4% (9σ) greater than the benchmark model eigenvalues, which is commonly seen in Monte Carlo simulations of other TRIGA reactors. Simulations of the worth measurements are within the 2σ uncertainty for most of the benchmark experiment worth values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  1. An approach to radiation safety department benchmarking in academic and medical facilities.

    PubMed

    Harvey, Richard P

    2015-02-01

    Based on anecdotal evidence and networking with colleagues at other facilities, it has become evident that some radiation safety departments are not adequately staffed and radiation safety professionals need to increase their staffing levels. Discussions with management regarding radiation safety department staffing often lead to similar conclusions. Management acknowledges the Radiation Safety Officer (RSO) or Director of Radiation Safety's concern but asks the RSO to provide benchmarking and justification for additional full-time equivalents (FTEs). The RSO must determine a method to benchmark and justify additional staffing needs while struggling to maintain a safe and compliant radiation safety program. Benchmarking and justification are extremely important tools that are commonly used to demonstrate the need for increased staffing in other disciplines and are tools that can be used by radiation safety professionals. Parameters that most RSOs would expect to be positive predictors of radiation safety staff size generally are and can be emphasized in benchmarking and justification report summaries. Facilities with large radiation safety departments tend to have large numbers of authorized users, be broad-scope programs, be subject to increased controls regulations, have large clinical operations, have significant numbers of academic radiation-producing machines, and have laser safety responsibilities.

  2. Availability of Neutronics Benchmarks in the ICSBEP and IRPhEP Handbooks for Computational Tools Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Briggs, J. Blair; Ivanova, Tatiana

    2017-02-01

    In the past several decades, numerous experiments have been performed worldwide to support reactor operations, measurements, design, and nuclear safety. Those experiments represent an extensive international investment in infrastructure, expertise, and cost, representing significantly valuable resources of data supporting past, current, and future research activities. Those valuable assets represent the basis for recording, development, and validation of our nuclear methods and integral nuclear data [1]. The loss of these experimental data, which has occurred all too often in recent years, is tragic. The high cost to repeat many of these measurements can be prohibitive, if not impossible, to surmount. Two international projects were developed, and are under the direction of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA), to address the challenges of not just data preservation, but evaluation of the data to determine its merit for modern and future use. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was established to identify and verify comprehensive critical benchmark data sets; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data [2]. Similarly, the International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate or special effects data for nuclear energy and technology applications [3]. Annually, contributors from around the world continue to collaborate in the evaluation and review of select benchmark experiments for preservation and dissemination.
The extensively peer-reviewed integral benchmark data can then be utilized by nuclear design and safety analysts to validate the analytical tools, methods, and data needed for next-generation reactor design, safety analysis requirements, and all other front- and back-end activities contributing to the overall nuclear fuel cycle where quality neutronics calculations are paramount.

  3. Characterization of addressability by simultaneous randomized benchmarking.

    PubMed

    Gambetta, Jay M; Córcoles, A D; Merkel, S T; Johnson, B R; Smolin, John A; Chow, Jerry M; Ryan, Colm A; Rigetti, Chad; Poletto, S; Ohki, Thomas A; Ketchen, Mark B; Steffen, M

    2012-12-14

    The control and handling of errors arising from cross talk and unwanted interactions in multiqubit systems is an important issue in quantum information processing architectures. We introduce a benchmarking protocol that provides information about the amount of addressability present in the system and implement it on coupled superconducting qubits. The protocol consists of randomized benchmarking experiments run both individually and simultaneously on pairs of qubits. A relevant figure of merit for the addressability is then related to the differences in the measured average gate fidelities in the two experiments. We present results from two similar samples with differing cross talk and unwanted qubit-qubit interactions. The results agree with predictions based on simple models of the classical cross talk and Stark shifts.
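
The figure of merit described above can be sketched numerically: fit the exponential RB decay for each qubit when driven alone and when driven simultaneously with its neighbour, convert each decay parameter to an average gate fidelity, and take the difference. This is an illustrative sketch under assumed values; the decay parameters, sequence lengths, and fixed 0.5 asymptote are invented, not taken from the paper.

```python
import numpy as np

def rb_fidelity(p, d=2):
    # Average gate fidelity from an RB decay parameter p (single qubit: d = 2).
    return 1 - (1 - p) * (d - 1) / d

def fit_decay(lengths, survival, asymptote=0.5):
    # Log-linear fit of survival = A * p**m + asymptote for the decay parameter p.
    y = np.asarray(survival) - asymptote
    slope, _ = np.polyfit(lengths, np.log(y), 1)
    return np.exp(slope)

# Synthetic, noiseless data: the qubit decays faster when its neighbour is
# driven at the same time (hypothetical decay parameters 0.995 vs 0.990).
m = np.arange(1, 101, 5)
surv_alone  = 0.5 * 0.995**m + 0.5
surv_simult = 0.5 * 0.990**m + 0.5

F_alone  = rb_fidelity(fit_decay(m, surv_alone))
F_simult = rb_fidelity(fit_decay(m, surv_simult))
addressability_loss = F_alone - F_simult  # fidelity lost to cross talk
print(round(addressability_loss, 6))
```

With these assumed decay parameters the individual and simultaneous fidelities are 0.9975 and 0.995, so the metric evaluates to 0.0025.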

  4. Report from the First CERT-RMM Users Group Workshop Series

    DTIC Science & Technology

    2012-04-01

    deploy processes to support our programs – Benchmark our programs to determine current gaps – Complements current work in CMMI® and ISO 27001 19...benchmarking program performance through process analytics and Lean/Six Sigma activities to ensure Performance Excellence. • Provides ISO Standards...Office www.cmu.edu/ iso 29 Carnegie Mellon University • Est 1967 in Pittsburgh, PA • Global, private research university • Ranked 22nd • 15,000

  5. Development and Applications of Benchmark Examples for Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2013-01-01

    The development and application of benchmark examples for the assessment of quasi-static delamination propagation capabilities was demonstrated for ANSYS® and Abaqus/Standard®. The examples selected were based on finite element models of Double Cantilever Beam (DCB) and Mixed-Mode Bending (MMB) specimens. First, quasi-static benchmark results were created based on an approach developed previously. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedures implemented in ANSYS® and Abaqus/Standard®. Input control parameters were varied to study their effect on the computed delamination propagation. Overall, the benchmarking procedure proved valuable by highlighting the issues associated with choosing the appropriate input parameters for the VCCT implementations in ANSYS® and Abaqus/Standard®. However, further assessment for mixed-mode delamination fatigue onset and growth is required. Additionally, future studies should include the assessment of the propagation capabilities in more complex specimens and on a structural level.

  6. Aluminum-Mediated Formation of Cyclic Carbonates: Benchmarking Catalytic Performance Metrics.

    PubMed

    Rintjema, Jeroen; Kleij, Arjan W

    2017-03-22

    We report a comparative study on the activity of a series of fifteen binary catalysts derived from various reported aluminum-based complexes. A benchmarking of their initial rates in the coupling of various terminal and internal epoxides in the presence of three different nucleophilic additives was carried out, providing for the first time a useful comparison of activity metrics in the area of cyclic organic carbonate formation. These investigations provide a useful framework for how to realistically valorize relative reactivities and which features are important when considering the ideal operational window of each binary catalyst system. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
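
Initial rates of the kind benchmarked here are commonly estimated from the slope of the early, quasi-linear portion of a conversion-time profile. The sketch below illustrates that generic procedure; the time points and conversion values are invented, not data from the study.

```python
import numpy as np

# Hypothetical early-time conversion data (first ~10% conversion),
# where the profile is still approximately linear.
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])             # time, min
conv = np.array([0.0, 0.02, 0.041, 0.059, 0.081, 0.10])   # fraction converted

# Initial rate = slope of a linear fit through the early points.
rate0, intercept = np.polyfit(t, conv, 1)
print(round(rate0, 4))  # fraction converted per minute
```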

  7. Evaluation of the influence of the definition of an isolated hip fracture as an exclusion criterion for trauma system benchmarking: a multicenter cohort study.

    PubMed

    Tiao, J; Moore, L; Porgo, T V; Belcaid, A

    2016-06-01

    To assess whether the definition of an isolated hip fracture (IHF) used as an exclusion criterion influences the results of trauma center benchmarking. We conducted a multicenter retrospective cohort study with data from an integrated Canadian trauma system. The study population included all patients admitted between 1999 and 2010 to any of the 57 adult trauma centers. Seven definitions of IHF based on diagnostic codes, age, mechanism of injury, and secondary injuries, identified in a systematic review, were used. Trauma centers were benchmarked using risk-adjusted mortality estimates generated using the Trauma Risk Adjustment Model. The agreement between benchmarking results generated under different IHF definitions was evaluated with correlation coefficients on adjusted mortality estimates. Correlation coefficients >0.95 were considered to convey acceptable agreement. The study population consisted of 172,872 patients before exclusion of IHF and between 128,094 and 139,588 patients after exclusion. Correlation coefficients between risk-adjusted mortality estimates generated in populations including and excluding IHF varied between 0.86 and 0.90. Correlation coefficients of estimates generated under different definitions of IHF varied between 0.97 and 0.99, even when analyses were restricted to patients aged ≥65 years. Although the exclusion of patients with IHF has an influence on the results of trauma center benchmarking based on mortality, the definition of IHF in terms of diagnostic codes, age, mechanism of injury and secondary injury has no significant impact on benchmarking results. Results suggest that there is no need to obtain formal consensus on the definition of IHF for benchmarking activities.
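
The agreement criterion used in this study reduces to a simple computation: correlate the per-center risk-adjusted mortality estimates obtained under two exclusion definitions and compare the coefficient against the 0.95 threshold. The sketch below uses simulated estimates; the baseline values and the size of the definitional perturbation are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical risk-adjusted mortality estimates, one per trauma center
# (57 centers, as in the study), under two IHF exclusion definitions.
base = rng.uniform(0.02, 0.12, size=57)
est_def_a = base
est_def_b = base + rng.normal(0.0, 0.002, size=57)  # small definitional effect

# Pearson correlation between the two sets of estimates.
r = np.corrcoef(est_def_a, est_def_b)[0, 1]
acceptable = bool(r > 0.95)  # agreement threshold used in the paper
print(round(r, 3), acceptable)
```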

  8. Benchmarking hardware architecture candidates for the NFIRAOS real-time controller

    NASA Astrophysics Data System (ADS)

    Smith, Malcolm; Kerley, Dan; Herriot, Glen; Véran, Jean-Pierre

    2014-07-01

    As part of the trade study for the Narrow Field Infrared Adaptive Optics System (NFIRAOS), the adaptive optics system for the Thirty Meter Telescope, we investigated the feasibility of performing the real-time control computation using a Linux operating system and Intel Xeon E5 CPUs. We also investigated a Xeon Phi based architecture, which allows higher levels of parallelism. This paper summarizes both the CPU-based real-time controller (RTC) architecture and the Xeon Phi based RTC. The Intel Xeon E5 CPU solution meets the requirements and performs the computation for one AO cycle in an average of 767 microseconds. The Xeon Phi solution did not meet the 1200-microsecond time requirement and also suffered from unpredictable execution times. More detailed benchmark results are reported for both architectures.
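
The pass/fail logic of an RTC benchmark like this can be sketched in a few lines: time many iterations of a compute kernel and compare both the average and the worst case against the cycle-time requirement, since unpredictable execution times matter as much as the mean. The matrix-vector kernel and sizes below are placeholder stand-ins, not the NFIRAOS reconstruction pipeline.

```python
import time
import numpy as np

# Placeholder stand-in for one AO cycle: a reconstructor matrix-vector
# multiply. The dimensions are illustrative, not NFIRAOS values.
recon = np.random.rand(1024, 2048).astype(np.float32)
slopes = np.random.rand(2048).astype(np.float32)

REQUIREMENT_US = 1200.0  # per-cycle time budget, microseconds
times_us = []
for _ in range(50):
    t0 = time.perf_counter()
    _ = recon @ slopes
    times_us.append((time.perf_counter() - t0) * 1e6)

avg, worst = float(np.mean(times_us)), float(np.max(times_us))
meets = worst < REQUIREMENT_US  # judge on worst case, not just the average
print(f"avg {avg:.0f} us, worst {worst:.0f} us, meets requirement: {meets}")
```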

  9. Statistical process control as a tool for controlling operating room performance: retrospective analysis and benchmarking.

    PubMed

    Chen, Tsung-Tai; Chang, Yun-Jau; Ku, Shei-Ling; Chung, Kuo-Piao

    2010-10-01

    There is much research using statistical process control (SPC) to monitor surgical performance, including comparisons among groups to detect small process shifts, but few of these studies have included a stabilization process. This study aimed to analyse the performance of surgeons in the operating room (OR) and to set a benchmark by SPC after stabilizing the process. The OR profiles of 499 patients who underwent laparoscopic cholecystectomy performed by 16 surgeons at a tertiary hospital in Taiwan during 2005 and 2006 were recorded. SPC was applied to analyse operative and non-operative times using the following five steps: first, the times were divided into two segments; second, they were normalized; third, they were evaluated as individual processes; fourth, the ARL(0) was calculated; and fifth, the different groups (surgeons) were compared. Outliers were excluded to ensure stability for each group and to facilitate inter-group comparison. The results showed that in the stabilized process, only one surgeon exhibited a significantly shorter total process time (including operative and non-operative time). In this study, we used five steps to demonstrate how to control surgical and non-surgical time in phase I. Some measures can be taken to prevent skew and instability in the process. Also, using SPC, one surgeon could be shown to be a real benchmark. © 2010 Blackwell Publishing Ltd.
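
The stabilization idea described above (exclude out-of-control points, recompute limits, repeat until all points fall inside) can be sketched with a standard individuals control chart; the constant 2.66 is the usual Shewhart factor 3/d2 with d2 = 1.128 for moving ranges of two. The operative times below are invented for illustration, not data from the study.

```python
import numpy as np

def individuals_limits(x):
    # Individuals (I) chart limits: centre line +/- 2.66 * mean moving range.
    x = np.asarray(x, dtype=float)
    mr_bar = np.mean(np.abs(np.diff(x)))
    centre = x.mean()
    half_width = 2.66 * mr_bar
    return centre - half_width, centre, centre + half_width

def stabilize(x, max_iter=10):
    # Phase-I stabilization: drop out-of-limit points and recompute limits
    # until every remaining point is inside the control limits.
    x = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        lcl, _, ucl = individuals_limits(x)
        inside = (x >= lcl) & (x <= ucl)
        if inside.all():
            break
        x = x[inside]
    return x, individuals_limits(x)

# Hypothetical operative times (minutes) with one gross outlier.
times = [62, 58, 65, 60, 59, 61, 63, 300, 57, 64, 60, 62]
stable, (lcl, centre, ucl) = stabilize(times)
print(len(stable), round(centre, 1))  # the 300-minute case is excluded
```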

  10. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.D.; Kornreich, D.E.

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation as modified in the second year renewal application includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) single medium searchlight problem (SLP) and (b) two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) point source in arbitrary geometry, (b) single medium SLP, and (c) two-adjacent half-space SLP. Task 3 on code verification includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.

  11. [Does implementation of benchmarking in quality circles improve the quality of care of patients with asthma and reduce drug interaction?].

    PubMed

    Kaufmann-Kolle, Petra; Szecsenyi, Joachim; Broge, Björn; Haefeli, Walter Emil; Schneider, Antonius

    2011-01-01

    The purpose of this cluster-randomised controlled trial was to evaluate the efficacy of quality circles (QCs) working either with general data-based feedback or with an open benchmark within the field of asthma care and drug-drug interactions. Twelve QCs, involving 96 general practitioners from 85 practices, were randomised. Six QCs worked with traditional anonymous feedback and six with an open benchmark. Two QC meetings supported with feedback reports were held covering the topics "drug-drug interactions" and "asthma"; in both cases discussions were guided by a trained moderator. Outcome measures included health-related quality of life and patient satisfaction with treatment, asthma severity and number of potentially inappropriate drug combinations as well as the general practitioners' satisfaction in relation to the performance of the QC. A significant improvement in the treatment of asthma was observed in both trial arms. However, there was only a slight improvement regarding inappropriate drug combinations. There were no relevant differences between the group with open benchmark (B-QC) and traditional quality circles (T-QC). The physicians' satisfaction with the QC performance was significantly higher in the T-QCs. General practitioners seem to take a critical perspective about open benchmarking in quality circles. Caution should be used when implementing benchmarking in a quality circle as it did not improve healthcare when compared to the traditional procedure with anonymised comparisons. Copyright © 2011. Published by Elsevier GmbH.

  12. Comparing Hospital Processes and Outcomes in California Medicare Beneficiaries: Simulation Prompts Reconsideration.

    PubMed

    Escobar, Gabriel J; Baker, Jennifer M; Turk, Benjamin J; Draper, David; Liu, Vincent; Kipnis, Patricia

    2017-01-01

    This article is not a traditional research report. It describes how conducting a specific set of benchmarking analyses led us to broader reflections on hospital benchmarking. We reexamined an issue that has received far less attention from researchers than in the past: how variations in the hospital admission threshold might affect hospital rankings. Considering this threshold made us reconsider what benchmarking is and what future benchmarking studies might be like. Although we recognize that some of our assertions are speculative, they are based on our reading of the literature and previous and ongoing data analyses being conducted in our research unit. We describe the benchmarking analyses that led to these reflections. The Centers for Medicare and Medicaid Services' Hospital Compare Web site includes data on fee-for-service Medicare beneficiaries but does not control for severity of illness, which requires physiologic data now available in most electronic medical records. To address this limitation, we compared hospital processes and outcomes among Kaiser Permanente Northern California's (KPNC) Medicare Advantage beneficiaries and non-KPNC California Medicare beneficiaries between 2009 and 2010. We assigned a simulated severity of illness measure to each record and explored the effect of having the additional information on outcomes. We found that if the admission severity of illness in non-KPNC hospitals increased, KPNC hospitals' mortality performance would appear worse; conversely, if admission severity at non-KPNC hospitals decreased, KPNC hospitals' performance would appear better. Future hospital benchmarking should consider the impact of variation in admission thresholds.

  13. Design of a self-tuning regulator for temperature control of a polymerization reactor.

    PubMed

    Vasanthi, D; Pranavamoorthy, B; Pappa, N

    2012-01-01

    The temperature control of a polymerization reactor described by Chylla and Haase, a control engineering benchmark problem, is used to illustrate the potential of adaptive control design employing a self-tuning regulator concept. In the benchmark scenario, the operation of the reactor must be guaranteed under various disturbing influences, e.g., changing ambient temperatures or impurity of the monomer. The conventional cascade control provides robust operation but often falls short in control performance with respect to the required strict temperature tolerances. The self-tuning control concept presented in this contribution solves this problem. The design calculates a trajectory for the cooling jacket temperature in order to follow a predefined trajectory of the reactor temperature. The reaction heat and the heat transfer coefficient in the energy balance are estimated online using an unscented Kalman filter (UKF). Two simple, physically motivated relations are employed, which allow the non-delayed estimation of both quantities. Simulation results under model uncertainties show the effectiveness of the self-tuning control concept. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Active transportation measurement and benchmarking development : New Orleans state of active transportation report 2010.

    DOT National Transportation Integrated Search

    2012-01-01

    Over the last decade, there has been a surge in bicycle and pedestrian use in communities that have invested in active transportation infrastruc-ture and programming. While these increases show potentially promising trends, many of the cities that ha...

  15. A Benchmark of Tractor Trailer Operator Training Between the United States Army’s 37th Transportation Command and a Selected Civilian Industry Leader

    DTIC Science & Technology

    1993-09-01

    against Japanese competitors (Camp, 1989:6; Geber , 1990:38). Due to their incredible success in controlling costs, Xerox adopted the technique company...important first step to benchmarking outside the organization ( Geber , 1990:40). Due to availability of information and cooperation of partners, this is...science ( Geber , 1990:42). Several avenues can be pursued to find best-in-class companies. Search business publications for companies frequently

  16. Vector disparity sensor with vergence control for active vision systems.

    PubMed

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after the image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach is the best trade-off choice for integration with the active vision system.

  17. Vector Disparity Sensor with Vergence Control for Active Vision Systems

    PubMed Central

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P.; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after the image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach is the best trade-off choice for integration with the active vision system. PMID:22438737

  18. Role of the standard deviation in the estimation of benchmark doses with continuous data.

    PubMed

    Gaylor, David W; Slikker, William

    2004-12-01

    For continuous data, risk is defined here as the proportion of animals with values above a large percentile (e.g., the 99th) or below a small percentile (e.g., the 1st) of the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
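
The bias described above is easy to reproduce under a normal model: define risk as the probability of exceeding the control 99th percentile, then compute it once with the among-animal standard deviation s(a) alone and once with the inflated overall standard deviation that includes measurement error. The SD values and the dose-induced mean shift below are arbitrary illustrations, not numbers from the article.

```python
from math import sqrt
from statistics import NormalDist

z99 = NormalDist().inv_cdf(0.99)  # standard normal 99th percentile

def excess_risk(shift, s):
    # P(value above the control 99th percentile), taking the control mean
    # as 0 and assuming a normal response with standard deviation s.
    cutoff = z99 * s
    return 1 - NormalDist(mu=shift, sigma=s).cdf(cutoff)

s_a, s_m = 1.0, 0.6                 # among-animal and measurement-error SDs
s_total = sqrt(s_a**2 + s_m**2)     # overall SD, inflated by measurement error
shift = 1.0                         # hypothetical dose-induced mean shift

true_risk = excess_risk(shift, s_a)       # correct: among-animal SD only
wrong_risk = excess_risk(shift, s_total)  # inflated SD -> risk underestimated
print(round(true_risk, 4), round(wrong_risk, 4), wrong_risk < true_risk)
```

Because the cutoff scales with the SD while the shift does not, any inflation of the SD pushes the computed risk down, which is exactly the underestimation the article warns about.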

  19. Groundwater quality data in 15 GAMA study units: results from the 2006–10 Initial sampling and the 2009–13 resampling of wells, California GAMA Priority Basin Project

    USGS Publications Warehouse

    Kent, Robert

    2015-08-31

    Most constituents that were detected in groundwater samples from the trend wells were found at concentrations less than drinking-water benchmarks. Two volatile organic compounds (VOCs)—tetrachloroethene and trichloroethene—were detected in samples from one or more wells at concentrations greater than their health-based benchmarks, and three VOCs—chloroform, tetrachloroethene, and trichloroethene—were detected in at least 10 percent of the trend-well samples from the initial sampling period and the later trend sampling period. No pesticides were detected at concentrations near or greater than their health-based benchmarks. Three pesticide constituents—atrazine, deethylatrazine, and simazine—were detected in more than 10 percent of the trend-well samples in both sampling periods. Perchlorate, a constituent of special interest, was detected at a concentration greater than its health-based benchmark in samples from one trend well in the initial sampling and trend sampling periods, and in an additional trend well sample only in the trend sampling period. Most detections of nutrients, major and minor ions, and trace elements in samples from trend wells were less than health-based benchmarks in both sampling periods. Exceptions included nitrate, fluoride, arsenic, boron, molybdenum, strontium, and uranium; these were all detected at concentrations greater than their health-based benchmarks in at least one well sample in both sampling periods. Lead and vanadium were detected above their health-based benchmarks in one sample each collected in the initial sampling period only. The isotopic ratios of oxygen and hydrogen in water and the activities of tritium and carbon-14 generally changed little between sampling periods.
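
Benchmark screening of the kind summarized above is conceptually simple: compare each measured concentration with its health-based benchmark and flag exceedances. The sketch below uses invented concentrations, and the benchmark values are plausible placeholders only, not the GAMA project's thresholds.

```python
# Hypothetical screen of measured concentrations against health-based
# benchmarks. Units differ per constituent and the values are placeholders.
benchmarks = {
    "tetrachloroethene": 5.0,   # ug/L (placeholder)
    "trichloroethene": 5.0,     # ug/L (placeholder)
    "nitrate": 10.0,            # mg/L as N (placeholder)
}
samples = {
    "tetrachloroethene": 7.2,
    "trichloroethene": 0.4,
    "nitrate": 12.5,
}

flagged = sorted(c for c, v in samples.items() if v > benchmarks[c])
print(flagged)
```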

  20. RNA-seq mixology: designing realistic control experiments to compare protocols and analysis methods

    PubMed Central

    Holik, Aliaksei Z.; Law, Charity W.; Liu, Ruijie; Wang, Zeya; Wang, Wenyi; Ahn, Jaeil; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.

    2017-01-01

    Carefully designed control experiments provide a gold standard for benchmarking different genomics research tools. A shortcoming of many gene expression control studies is that replication involves profiling the same reference RNA sample multiple times. This leads to low, purely technical noise that is atypical of regular studies. To achieve a more realistic noise structure, we generated an RNA-sequencing mixture experiment using two cell lines of the same cancer type. Variability was added by extracting RNA from independent cell cultures and degrading particular samples. The systematic gene expression changes induced by this design allowed benchmarking of different library preparation kits (standard poly-A versus total RNA with Ribozero depletion) and analysis pipelines. Data generated using the total RNA kit had more signal for introns and various RNA classes (ncRNA, snRNA, snoRNA) and less variability after degradation. For differential expression analysis, voom with quality weights marginally outperformed other popular methods, while for differential splicing, DEXSeq was simultaneously the most sensitive and the most inconsistent method. For sample deconvolution analysis, DeMix outperformed IsoPure convincingly. Our RNA-sequencing data set provides a valuable resource for benchmarking different protocols and data pre-processing workflows. The extra noise mimics routine lab experiments more closely, ensuring any conclusions are widely applicable. PMID:27899618

  1. Imidazole derivatives as angiotensin II AT1 receptor blockers: Benchmarks, drug-like calculations and quantitative structure-activity relationships modeling

    NASA Astrophysics Data System (ADS)

    Alloui, Mebarka; Belaidi, Salah; Othmani, Hasna; Jaidane, Nejm-Eddine; Hochlaf, Majdi

    2018-03-01

    We performed benchmark studies on the molecular geometry, electronic properties and vibrational analysis of imidazole using semi-empirical, density functional theory and post-Hartree-Fock methods. These studies validated the use of AM1 for the treatment of larger systems. We then treated the structural, physical and chemical relationships for a series of imidazole derivatives acting as angiotensin II AT1 receptor blockers using AM1. QSAR studies were performed for these imidazole derivatives using a combination of various physicochemical descriptors. A multiple linear regression procedure was used to model the relationships between the molecular descriptors and the activity of the imidazole derivatives. The results validate the derived QSAR model.

  2. ZPR-6 Assembly 7 High-²⁴⁰Pu Core: A Cylindrical Assembly with Mixed (Pu,U)-Oxide Fuel and a Central High-²⁴⁰Pu Zone.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lell, R. M.; Schaefer, R. W.; McKnight, R. D.

    Over a period of 30 years, more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited to form the basis for criticality safety benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was ²³⁵U or ²³⁹Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. The term 'benchmark' in a ZPR program connotes a particularly simple loading aimed at gaining basic reactor physics insight, as opposed to studying a reactor design. In fact, the ZPR-6/7 Benchmark Assembly (Reference 1) had a very simple core unit cell assembled from plates of depleted uranium, sodium, iron oxide, U₃O₈, and plutonium. The ZPR-6/7 core cell-average composition is typical of the interior region of liquid-metal fast breeder reactors (LMFBRs) of the era. It was one part of the Demonstration Reactor Benchmark Program, which provided integral experiments characterizing the important features of demonstration-size LMFBRs. As a benchmark, ZPR-6/7 was devoid of many 'real' reactor features, such as simulated control rods and multiple enrichment zones, in its reference form. Those kinds of features were investigated experimentally in variants of the reference ZPR-6/7 or in other critical assemblies in the Demonstration Reactor Benchmark Program.

  3. Interaction between control and design of a SHARON reactor: economic considerations in a plant-wide (BSM2) context.

    PubMed

    Volcke, E I P; van Loosdrecht, M C M; Vanrolleghem, P A

    2007-01-01

    The combined SHARON-Anammox process is a promising technique for nitrogen removal from wastewater streams with high ammonium concentrations. It is typically applied to sludge digestion reject water in order to relieve the activated sludge tanks to which this stream is usually recycled. This contribution assesses the impact of the control strategy applied in the SHARON reactor, both on the effluent quality of the subsequent Anammox reactor and at the plant-wide level by means of an operating cost index. Moreover, it is investigated to what extent the usefulness of a certain control strategy depends on the reactor design (volume). A simulation study is carried out using the plant-wide Benchmark Simulation Model no. 2 (BSM2), extended with the SHARON and Anammox processes. The results reveal a discrepancy between optimizing the reject water treatment performance and minimizing plant-wide operating costs.

  4. A benchmark for vehicle detection on wide area motion imagery

    NASA Astrophysics Data System (ADS)

    Catrambone, Joseph; Amzovski, Ismail; Liang, Pengpeng; Blasch, Erik; Sheaff, Carolyn; Wang, Zhonghai; Chen, Genshe; Ling, Haibin

    2015-05-01

    Wide area motion imagery (WAMI) has been attracting an increasing amount of research attention due to its large spatial and temporal coverage. An important application is moving target analysis, where vehicle detection is often one of the first steps before advanced activity analysis. While many vehicle detection algorithms exist, a thorough evaluation of them on WAMI data still remains a challenge, mainly due to the lack of an appropriate benchmark data set. In this paper, we address this need by presenting a new benchmark for vehicle detection in wide area motion imagery. The WAMI benchmark is based on the recently available Wright-Patterson Air Force Base (WPAFB09) dataset and the associated Temple Resolved Uncertainty Target History (TRUTH) target annotation. Trajectory annotations were provided in the original release of the WPAFB09 dataset, but detailed vehicle annotations were not. In addition, static vehicles, e.g., in parking lots, are not annotated in the original release. Addressing these issues, we re-annotated the whole dataset with detailed information for each vehicle, including not only a target's location but also its pose and size. The annotated WAMI data set should give the community a common benchmark with which to compare WAMI detection, tracking, and identification methods.

  5. Benchmarking the Integration of WAVEWATCH III Results into HAZUS-MH: Preliminary Results

    NASA Technical Reports Server (NTRS)

    Berglund, Judith; Holland, Donald; McKellip, Rodney; Sciaudone, Jeff; Vickery, Peter; Wang, Zhanxian; Ying, Ken

    2005-01-01

    The report summarizes the results from the preliminary benchmarking activities associated with the use of WAVEWATCH III (WW3) results in the HAZUS-MH MR1 flood module. Project partner Applied Research Associates (ARA) is integrating the WW3 model into HAZUS. The current version of HAZUS-MH predicts loss estimates from hurricane-related coastal flooding by using values of surge only. Using WW3, wave setup can be included with surge. Loss estimates resulting from the use of surge-only and surge-plus-wave-setup were compared. This benchmarking study is preliminary because the HAZUS-MH MR1 flood module was under development at the time of the study. In addition, WW3 is not scheduled to be fully integrated with HAZUS-MH and available for public release until 2008.

  6. Marshall Space Flight Center CFD overview

    NASA Technical Reports Server (NTRS)

    Schutzenhofer, Luke A.

    1989-01-01

    Computational Fluid Dynamics (CFD) activities at Marshall Space Flight Center (MSFC) have been focused on hardware specific and research applications with strong emphasis upon benchmark validation. The purpose here is to provide insight into the MSFC CFD related goals, objectives, current hardware related CFD activities, propulsion CFD research efforts and validation program, future near-term CFD hardware related programs, and CFD expectations. The current hardware programs where CFD has been successfully applied are the Space Shuttle Main Engines (SSME), Alternate Turbopump Development (ATD), and Aeroassist Flight Experiment (AFE). For the future near-term CFD hardware related activities, plans are being developed that address the implementation of CFD into the early design stages of the Space Transportation Main Engine (STME), Space Transportation Booster Engine (STBE), and the Environmental Control and Life Support System (ECLSS) for the Space Station. Finally, CFD expectations in the design environment will be delineated.

  7. Optimization of a solid-state electron spin qubit using Gate Set Tomography

    DOE PAGES

    Dehollain, Juan P.; Muhonen, Juha T.; Blume-Kohout, Robin J.; ...

    2016-10-13

    Here, state-of-the-art qubit systems are reaching the gate fidelities required for scalable quantum computation architectures. Further improvement in the fidelity of quantum gates demands characterization and benchmarking protocols that are efficient, reliable, and extremely accurate. Ideally, a benchmarking protocol should also provide information on how to rectify residual errors. Gate Set Tomography (GST) is one such protocol, designed to give a detailed characterization of as-built qubits. We implemented GST on a high-fidelity electron-spin qubit confined by a single 31P atom in 28Si. The results reveal systematic errors that a randomized benchmarking analysis could measure but not identify, whereas GST indicated the need for improved calibration of the length of the control pulses. After introducing this modification, we measured a new benchmark average gate fidelity of 99.942(8)%, an improvement on the previous value of 99.90(2)%. Furthermore, GST revealed high levels of non-Markovian noise in the system, which will need to be understood and addressed when the qubit is used within a fault-tolerant quantum computation scheme.

  8. Linking log files with dosimetric accuracy--A multi-institutional study on quality assurance of volumetric modulated arc therapy.

    PubMed

    Pasler, Marlies; Kaas, Jochem; Perik, Thijs; Geuze, Job; Dreindl, Ralf; Künzler, Thomas; Wittkamper, Frits; Georg, Dietmar

    2015-12-01

    The aim was to systematically evaluate machine-specific quality assurance (QA) for volumetric modulated arc therapy (VMAT) based on log files, using a dynamic benchmark plan. A VMAT benchmark plan was created and tested on 18 Elekta linacs (13 MLCi or MLCi2, 5 Agility) at 4 different institutions. Linac log files were analyzed and a delivery robustness index was introduced. For dosimetric measurements an ionization chamber array was used. Relative dose deviations were assessed by mean gamma for each control point and compared to the log file evaluation. Fourteen linacs delivered the VMAT benchmark plan, while 4 linacs failed by consistently terminating the delivery. The mean leaf error (±1 SD) was 0.3±0.2 mm for all linacs. Maximum MLC errors of up to 6.5 mm were observed at reversal positions. The delivery robustness index accounting for MLC position correction (0.8-1.0) correlated with delivery time (80-128 s) and depended on dose rate performance. Dosimetric evaluation indicated generally accurate plan reproducibility, with γ(mean)(±1 SD)=0.4±0.2 for 1 mm/1%. However, single control point analysis revealed larger deviations, which agreed well with the log file analysis. The designed benchmark plan helped identify linac-related malfunctions in dynamic mode for VMAT. Log files serve as an important additional QA measure to understand and visualize dynamic linac parameters. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
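    As a rough illustration of the per-control-point dose comparison described above, a minimal 1D global gamma computation (the Low et al. formulation) with the abstract's 1 mm/1% criteria can be sketched as follows; the Gaussian toy profiles and the 0.5 mm shift are illustrative assumptions, not the paper's data:

```python
import numpy as np

def gamma_index_1d(ref_pos, ref_dose, eval_pos, eval_dose, dta_mm=1.0, dd_frac=0.01):
    """Global 1D gamma index (Low et al.): for each reference point, take the
    minimum combined dose-difference / distance-to-agreement metric over all
    evaluated points. Dose difference is normalized to the global maximum."""
    d_max = ref_dose.max()
    gammas = np.empty_like(ref_dose)
    for i in range(len(ref_pos)):
        dist_term = ((eval_pos - ref_pos[i]) / dta_mm) ** 2
        dose_term = ((eval_dose - ref_dose[i]) / (dd_frac * d_max)) ** 2
        gammas[i] = np.sqrt(np.min(dist_term + dose_term))
    return gammas

# Toy Gaussian profiles: "delivered" dose shifted 0.5 mm from the reference
x = np.linspace(0.0, 20.0, 201)            # positions in mm
ref = np.exp(-((x - 10.0) / 4.0) ** 2)     # reference (planned) profile
ev = np.exp(-((x - 10.5) / 4.0) ** 2)      # evaluated (delivered) profile
g = gamma_index_1d(x, ref, x, ev)
print(f"pass rate (gamma <= 1): {(g <= 1.0).mean():.0%}, max gamma: {g.max():.2f}")
```

    A sub-millimetre shift passes the 1 mm/1% criterion everywhere; a gross MLC error of several millimetres, as reported at reversal positions, would push gamma well above 1 at steep dose gradients.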

  9. Flight program language requirements. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The activities and results of a study for the definition of flight program language requirements are described. A set of detailed requirements is presented for a language capable of supporting onboard application programming for the Marshall Space Flight Center's anticipated activities in the decade 1975-85. These requirements are based, in part, on an evaluation of existing flight programming language designs to determine their applicability to the anticipated flight programming activities. The coding of benchmark problems in the selected programming languages is discussed. These benchmarks are in the form of program kernels selected from existing flight programs. This approach was taken to ensure that the results of the study would reflect state-of-the-art language capabilities, as well as to determine whether an existing language design should be selected for adaptation.

  10. Assessing student understanding of sound waves and trigonometric reasoning in a technology-rich, project-enhanced environment

    NASA Astrophysics Data System (ADS)

    Wilhelm, Jennifer Anne

    This case study examined what student content understanding could occur in an inner-city Industrial Electronics classroom at Tree High School, where project-based instruction enhanced with technology was implemented for the first time. Students participated in a project implementation unit involving sound waves and trigonometric reasoning. The unit was designed to foster common content learning (via benchmark lessons) by all students in the class, and to help students gain a deeper conceptual understanding of a subset of the larger content unit (via group project research). The goal of the design unit was to have students gain conceptual understanding of sound waves, such as what actually waves in a wave, how waves interfere with one another, and what affects the speed of a wave. The unit was also intended to develop students' trigonometric reasoning associated with sinusoidal curves and the superposition of sinusoidal waves. Project criteria within this design included implementation features such as the need for each student to have a driving research question and focus, the need for benchmark lessons to foster and scaffold content knowledge and understanding, and the need for project milestones throughout the implementation unit to allow students time for feedback and revision. The Industrial Electronics class at Tree High School consisted of nine students who met daily during double class periods, giving 100 minutes of class time per day. The class teacher had been teaching for 18 years (mathematics, physics, and computer science). He had a background in engineering and experience teaching at the college level. Benchmark activities during implementation were used to scaffold fundamental ideas and terminology needed to investigate characteristics of sound and waves.
Students participating in benchmark activities analyzed motion and musical waveforms using probeware, and explored wave phenomena using wave simulation software. Benchmark activities were also used to bridge the ideas of triangle trigonometric ratios to the graphs of sinusoidal curves, which could lead to understanding the concepts of frequency, period, amplitude, and wavelength. (Abstract shortened by UMI.)

  11. 7 CFR 1485.15 - Activity plan.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... participant shall develop a specific activity plan(s) based on its strategic plan and the allocation approval... any changes in strategy from the strategic plan; (iii) A budget for each proposed activity, identifying the source of funds; (iv) Specific goals and benchmarks to be used to measure the effectiveness of...

  12. Growing Together with the Treetures. Activity Guide. Series 1.

    ERIC Educational Resources Information Center

    Schnell, Bobbi; Blau, Judith H.; Hinrichs, Jennifer Judd

    This activity guide is designed to be used with the Growing Together program. Tree-related activities are correlated to the Benchmarks for Scientific Literacy, the recommended standards for mathematics, science, and technology suggested by the American Association for the Advancement of Science (AAAS). The Treature Educational Program is dedicated…

  13. Multichannel feedforward control schemes with coupling compensation for active sound profiling

    NASA Astrophysics Data System (ADS)

    Mosquera-Sánchez, Jaime A.; Desmet, Wim; de Oliveira, Leopoldo P. R.

    2017-05-01

    Active sound profiling comprises a number of control techniques that enable the equalization, rather than the mere reduction, of acoustic noise. Challenges may arise when trying to achieve distinct targeted sound profiles simultaneously at multiple locations, e.g., within a vehicle cabin. This paper introduces distributed multichannel control schemes for independently tailoring structure-borne sound reaching a number of locations within a cavity. The proposed techniques address the cross interactions amongst feedforward active sound profiling units, which compensate for interferences of the primary sound at each location of interest by exchanging run-time data amongst the control units while attaining the desired control targets. Computational complexity, convergence, and stability of the proposed multichannel schemes are examined in light of the physical system on which they are implemented. The tuning performance of the proposed algorithms is benchmarked against centralized and pure-decentralized control schemes through computer simulations on a simplified numerical model, which was also subjected to plant magnitude variations. Provided that the representation of the plant is accurate enough, the proposed multichannel control schemes are shown to be the only ones that properly deliver targeted active sound profiling at each error sensor location. Experimental results in a 1:3-scaled vehicle mock-up further demonstrate that the proposed schemes can attain reductions of more than 60 dB of periodic disturbances at a number of positions while resolving cross-channel interferences. Moreover, when the sensor/actuator placement is defective at a given frequency, the inclusion of a regularization parameter in the cost function does not hinder the proper operation of the proposed compensation schemes, while assuring their stability at the expense of some control performance.
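    The profiling idea of driving the response toward a nonzero target, rather than toward silence, can be illustrated with a minimal single-channel, filtered-x LMS-style loop; the tonal disturbance, scalar secondary path, and target profile below are illustrative assumptions, not the paper's multichannel setup:

```python
import numpy as np

# Assumed toy setup: a 50 Hz tonal primary disturbance, a scalar (known)
# secondary-path gain, and a nonzero target profile at the error sensor.
fs, n = 1000, 4000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 50.0 * t)     # reference signal correlated with the source
primary = 1.0 * x                    # primary sound reaching the error sensor
sec_gain = 0.8                       # secondary path modeled as a pure gain
target = 0.3 * x                     # desired residual: equalize, don't silence

w, mu = 0.0, 0.01                    # single adaptive weight and step size
err = np.empty(n)
for k in range(n):
    y = w * x[k]                                  # control actuator output
    e = primary[k] + sec_gain * y - target[k]     # deviation from the target profile
    err[k] = e
    w -= mu * sec_gain * x[k] * e                 # filtered-x LMS-style update

# The residual converges toward the target profile rather than toward zero.
print(f"final weight: {w:.3f} (ideal: {-0.7 / sec_gain:.3f})")
```

    In the paper's distributed setting, each such unit would additionally exchange run-time data with its neighbours to cancel cross-channel interference; that coupling compensation is omitted here for brevity.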

  14. Comparing Hospital Processes and Outcomes in California Medicare Beneficiaries: Simulation Prompts Reconsideration

    PubMed Central

    Escobar, Gabriel J; Baker, Jennifer M; Turk, Benjamin J; Draper, David; Liu, Vincent; Kipnis, Patricia

    2017-01-01

    Introduction This article is not a traditional research report. It describes how conducting a specific set of benchmarking analyses led us to broader reflections on hospital benchmarking. We reexamined an issue that has received far less attention from researchers than it once did: how variations in the hospital admission threshold might affect hospital rankings. Considering this threshold made us reconsider what benchmarking is and what future benchmarking studies might be like. Although we recognize that some of our assertions are speculative, they are based on our reading of the literature and on previous and ongoing data analyses conducted in our research unit. We describe the benchmarking analyses that led to these reflections. Objectives The Centers for Medicare and Medicaid Services’ Hospital Compare Web site includes data on fee-for-service Medicare beneficiaries but does not control for severity of illness, which requires physiologic data now available in most electronic medical records. To address this limitation, we compared hospital processes and outcomes between Kaiser Permanente Northern California’s (KPNC) Medicare Advantage beneficiaries and non-KPNC California Medicare beneficiaries between 2009 and 2010. Methods We assigned a simulated severity of illness measure to each record and explored the effect of having the additional information on outcomes. Results We found that if the admission severity of illness in non-KPNC hospitals increased, KPNC hospitals’ mortality performance would appear worse; conversely, if admission severity at non-KPNC hospitals decreased, KPNC hospitals’ performance would appear better. Conclusion Future hospital benchmarking should consider the impact of variation in admission thresholds. PMID:29035176

  15. Benchmark of four popular virtual screening programs: construction of the active/decoy dataset remains a major determinant of measured performance.

    PubMed

    Chaput, Ludovic; Martinez-Sanz, Juan; Saettel, Nicolas; Mouawad, Liliane

    2016-01-01

    In structure-based virtual screening, the choice of the docking program is essential for the success of hit identification. Benchmarks are meant to help guide this choice, especially when undertaken on a large variety of protein targets. Here, the performance of four popular virtual screening programs, Gold, Glide, Surflex and FlexX, is compared using the Directory of Useful Decoys-Enhanced database (DUD-E), which includes 102 targets with an average of 224 ligands per target and 50 decoys per ligand, generated to avoid biases in the benchmarking. Then, the relationship between program performance and the properties of the targets or the small molecules was investigated. The comparison was based on two metrics, with three different parameters each. The BEDROC scores with α = 80.5 indicated that, on the overall database, Glide succeeded (score > 0.5) for 30 targets, Gold for 27, FlexX for 14 and Surflex for 11. The performance depended neither on the hydrophobicity nor on the openness of the protein cavities, nor on the families to which the proteins belong. However, despite the care taken in the construction of the DUD-E database, the small differences that remain between the actives and the decoys likely explain the successes of Gold, Surflex and FlexX. Moreover, the similarity between the actives of a target and its crystal structure ligand seems to be at the basis of the good performance of Glide. When all targets with significant biases are removed from the benchmarking, a subset of 47 targets remains, for which Glide succeeded for only 5 targets, Gold for 4, and FlexX and Surflex for 2. The dramatic performance drop of all four programs when the biases are removed shows that we should beware of virtual screening benchmarks, because good performances may be due to wrong reasons.
Therefore, benchmarking can hardly provide guidelines for virtual screening experiments, despite the tendency that is maintained, i.e., Glide and Gold display better performance than FlexX and Surflex. We recommend always using several programs and combining their results. Graphical Abstract: Summary of the results obtained by virtual screening with the four programs, Glide, Gold, Surflex and FlexX, on the 102 targets of the DUD-E database. The percentage of targets with successful results, i.e., with BEDROC(α = 80.5) > 0.5, is shown in blue when the entire database is considered, and in red when targets with biased chemical libraries are removed.
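    The BEDROC early-recognition metric used above follows Truchon & Bayly's formulation (implemented, e.g., in RDKit's scoring module); a minimal sketch, with a toy ranking rather than DUD-E data:

```python
import math

def bedroc(active_ranks, n_total, alpha=80.5):
    """BEDROC early-recognition score (Truchon & Bayly 2007) from the
    1-based ranks of the actives within a score-sorted list of n_total
    molecules. Returns a value in (0, 1]; higher means earlier recognition."""
    n_act = len(active_ranks)
    ra = n_act / n_total
    # Robust Initial Enhancement (RIE): exponentially weighted rank sum,
    # normalized by its expectation under a uniformly random ranking
    s = sum(math.exp(-alpha * r / n_total) for r in active_ranks)
    rie = s / (n_act / n_total * (1.0 - math.exp(-alpha)) / (math.exp(alpha / n_total) - 1.0))
    # Map RIE onto [0, 1] so that 1 means all actives ranked first
    factor = ra * math.sinh(alpha / 2.0) / (math.cosh(alpha / 2.0) - math.cosh(alpha / 2.0 - alpha * ra))
    return rie * factor + 1.0 / (1.0 - math.exp(alpha * (1.0 - ra)))

# 10 actives ranked at the very top vs. at the very bottom of 1000 compounds
early = bedroc(list(range(1, 11)), 1000)
late = bedroc(list(range(991, 1001)), 1000)
print(f"early: {early:.3f}, late: {late:.3f}")
```

    With α = 80.5 the exponential weight concentrates on roughly the first 2% of the ranked list, which is why the abstract's success threshold of 0.5 rewards genuine early enrichment.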

  16. Neutron Reference Benchmark Field Specification: ACRR Free-Field Environment (ACRR-FF-CC-32-CL).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity free-field reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  17. Employing Nested OpenMP for the Parallelization of Multi-Zone Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Jost, Gabriele

    2004-01-01

    In this paper we describe the parallelization of the multi-zone versions of the NAS Parallel Benchmarks employing multi-level OpenMP parallelism. For our study we use the NanosCompiler, which supports nesting of OpenMP directives and provides clauses to control the grouping of threads, load balancing, and synchronization. We report the benchmark results, compare the timings with those of different hybrid parallelization paradigms, and discuss OpenMP implementation issues that affect the performance of multi-level parallel applications.

  18. Benchmark Problems for Space Mission Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Folta, David C.; Burns, Richard

    2003-01-01

    To provide a high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested for space mission formation flying. The problems cover formation flying in low altitude, near-circular Earth orbit, high altitude, highly elliptical Earth orbits, and large amplitude lissajous trajectories about co-linear libration points of the Sun-Earth/Moon system. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions that are of interest to various agencies.

  19. Better Medicare Cost Report data are needed to help hospitals benchmark costs and performance.

    PubMed

    Magnus, S A; Smith, D G

    2000-01-01

    To evaluate costs and achieve cost control in the face of new technology and demands for efficiency from both managed care and governmental payers, hospitals need to benchmark their costs against those of other comparable hospitals. Since they typically use Medicare Cost Report (MCR) data for this purpose, a variety of cost accounting problems with the MCR may hamper hospitals' understanding of their relative costs and performance. Managers and researchers alike need to investigate the validity, accuracy, and timeliness of the MCR's cost accounting data.

  20. Benchmark tests of JENDL-3.2 for thermal and fast reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takano, Hideki; Akie, Hiroshi; Kikuchi, Yasuyuki

    1994-12-31

    Benchmark calculations for a variety of thermal and fast reactors have been performed using the newly evaluated JENDL-3 Version 2 (JENDL-3.2) file. In the thermal reactor calculations for the uranium- and plutonium-fueled cores of TRX and TCA, k-eff and the lattice parameters were well predicted. The fast reactor calculations for the ZPPR-9 and FCA assemblies showed that k-eff, the reactivity worths of Doppler, sodium void, and control rod, and the reaction rate distribution were in very good agreement with the experiments.

  1. Statistics based sampling for controller and estimator design

    NASA Astrophysics Data System (ADS)

    Tenne, Dirk

    The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation is threefold, addressing the aforementioned topics of nonlinear estimation, target tracking, and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher-order accuracy. The so-called unscented transformation has been extended to capture higher-order moments. Furthermore, higher-order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. Furthermore, the unscented transformation has been utilized to develop a three-dimensional, geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three-dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant velocity filter by utilizing Covariance Intersection. The combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight-line maneuvers. The third part of this dissertation addresses the design of controllers that incorporate knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points calculated by the unscented transformation. This set of points is used to design robust controllers that minimize a statistical performance measure of the plant, a combination of the mean and variance, over the domain of uncertainty. The proposed technique is illustrated on three benchmark problems.
The first relates to the design of prefilters for linear and nonlinear spring-mass-dashpot systems, and the second applies a feedback controller to a hovering helicopter. Lastly, the statistical robust controller design is applied to a concurrent feed-forward/feedback controller structure for a high-speed, low-tension tape drive.
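    The sigma-point propagation at the core of the unscented transformation can be sketched as follows; the Julier-Uhlmann weighting and the polar-to-Cartesian test case are standard textbook choices, not taken from the dissertation:

```python
import numpy as np

def unscented_transform(f, mean, cov, kappa=1.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f using
    the standard 2n+1 sigma points of Julier & Uhlmann."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)   # columns are sigma-point offsets
    sigmas = [mean] + [mean + L[:, i] for i in range(n)] + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(s) for s in sigmas])       # push each sigma point through f
    y_mean = w @ ys
    d = ys - y_mean
    y_cov = (w[:, None] * d).T @ d
    return y_mean, y_cov

# Classic test case: polar-to-Cartesian conversion of a range/bearing estimate
f = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
m = np.array([1.0, 0.0])          # range 1, bearing 0
P = np.diag([0.01, 0.04])         # range and bearing variances
ym, yc = unscented_transform(f, m, P)
print(f"transformed mean: {np.round(ym, 3)}")   # x-mean pulled below 1 by curvature
```

    The dissertation's extension captures moments beyond mean and covariance; this sketch shows only the second-order baseline that is being extended.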

  2. Closed-loop separation control over a sharp edge ramp using genetic programming

    NASA Astrophysics Data System (ADS)

    Debien, Antoine; von Krbek, Kai A. F. F.; Mazellier, Nicolas; Duriez, Thomas; Cordier, Laurent; Noack, Bernd R.; Abel, Markus W.; Kourta, Azeddine

    2016-03-01

    We experimentally perform open- and closed-loop control of a separating turbulent boundary layer downstream of a sharp edge ramp. The turbulent boundary layer just above the separation point has a Reynolds number Re_{θ} ≈ 3500 based on momentum thickness. The goal of the control is to mitigate separation and promote early re-attachment. The forcing employs a spanwise array of active vortex generators. The flow state is monitored with skin-friction sensors downstream of the actuators. The feedback control law is obtained using model-free genetic programming control (GPC) (Gautier et al. in J Fluid Mech 770:442-457, 2015). The resulting flow is assessed using the momentum coefficient, pressure distribution, skin friction over the ramp, and stereo PIV. The PIV yields vector field statistics, e.g. shear layer growth, the back-flow area and the vortex region. GPC is benchmarked against the best periodic forcing. While open-loop control achieves separation reduction by locking on to the shedding mode, GPC gives rise to similar benefits by accelerating the shear layer growth. Moreover, GPC uses less actuation energy.

  3. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pecchia, M.; D'Auria, F.; Mazzantini, O.

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of an obliquely inserted control rod on the neutron flux, in order to validate the RELAP5-3D©/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the FSAR of Atucha-2. (authors)

  4. Summary of comparison and analysis of results from exercises 1 and 2 of the OECD PBMR coupled neutronics/thermal hydraulics transient benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkhabela, P.; Han, J.; Tyobeka, B.

    2006-07-01

    The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of its official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis, with a specific focus on transient events, through a set of multi-dimensional computational test problems. The benchmark includes three steady state exercises and six transient exercises. This paper describes the first two steady state exercises, their objectives, and the international participation in terms of organization, country and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help to focus development efforts on the most critical areas. The first two exercises also allow for the removal of user-related modeling errors and prepare the core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark. (authors)

  5. Divide and Conquer-Based 1D CNN Human Activity Recognition Using Test Data Sharpening †

    PubMed Central

    Yoon, Sang Min

    2018-01-01

    Human Activity Recognition (HAR) aims to identify the actions performed by humans using signals collected from various sensors embedded in mobile devices. In recent years, deep learning techniques have further improved HAR performance on several benchmark datasets. In this paper, we propose a one-dimensional Convolutional Neural Network (1D CNN) for HAR that employs divide and conquer-based classifier learning coupled with test data sharpening. Our approach leverages two-stage learning of multiple 1D CNN models; we first build a binary classifier for recognizing abstract activities, and then build two multi-class 1D CNN models for recognizing individual activities. We then introduce test data sharpening during the prediction phase to further improve the activity recognition accuracy. While there have been numerous studies exploring the benefits of activity signal denoising for HAR, few have examined the effect of test data sharpening for HAR. We evaluate the effectiveness of our approach on two popular HAR benchmark datasets, and show that our approach outperforms both the two-stage 1D CNN-only method and other state-of-the-art approaches. PMID:29614767

  6. Divide and Conquer-Based 1D CNN Human Activity Recognition Using Test Data Sharpening.

    PubMed

    Cho, Heeryon; Yoon, Sang Min

    2018-04-01

    Human Activity Recognition (HAR) aims to identify the actions performed by humans using signals collected from various sensors embedded in mobile devices. In recent years, deep learning techniques have further improved HAR performance on several benchmark datasets. In this paper, we propose a one-dimensional Convolutional Neural Network (1D CNN) for HAR that employs divide and conquer-based classifier learning coupled with test data sharpening. Our approach leverages two-stage learning of multiple 1D CNN models; we first build a binary classifier for recognizing abstract activities, and then build two multi-class 1D CNN models for recognizing individual activities. We then introduce test data sharpening during the prediction phase to further improve the activity recognition accuracy. While there have been numerous studies exploring the benefits of activity signal denoising for HAR, few have examined the effect of test data sharpening for HAR. We evaluate the effectiveness of our approach on two popular HAR benchmark datasets, and show that our approach outperforms both the two-stage 1D CNN-only method and other state-of-the-art approaches.
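    The "test data sharpening" idea can be illustrated, in spirit, by unsharp masking on a 1D sensor trace: amplify the difference between the signal and a smoothed copy of itself before it is fed to the classifier. The moving-average kernel, gain, and synthetic trace below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def sharpen(x, alpha=0.5, k=5):
    """Unsharp-masking style sharpening of a 1D signal: amplify the
    difference between the signal and a moving-average blur of it."""
    kernel = np.ones(k) / k
    blurred = np.convolve(x, kernel, mode="same")
    return x + alpha * (x - blurred)

t = np.linspace(0.0, 1.0, 200)
sig = np.sin(2 * np.pi * 3.0 * t)     # smooth, accelerometer-like trace
stepped = sig.copy()
stepped[100:] += 0.5                  # abrupt change, e.g. a posture switch
out = sharpen(stepped)

# Sharpening accentuates the abrupt transition relative to the raw input
print(np.abs(np.diff(out)).max() > np.abs(np.diff(stepped)).max())
```

    Accentuating such transitions is the opposite of the denoising more commonly studied for HAR, which is the contrast the abstract draws.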

  7. Benchmarking reference services: step by step.

    PubMed

    Buchanan, H S; Marshall, J G

    1996-01-01

    This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.

  8. 40 CFR 51.1000 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...

  9. 40 CFR 51.1000 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...

  10. 40 CFR 51.1000 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...

  11. 40 CFR 51.1000 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...

  12. 40 CFR 51.1000 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...

  13. Develop applications based on android: Teacher Engagement Control of Health (TECH)

    NASA Astrophysics Data System (ADS)

    Sasmoko; Manalu, S. R.; Widhoyoko, S. A.; Indrianti, Y.; Suparto

    2018-03-01

    The physical and psychological condition of teachers is very important because it helps determine a positive and productive school climate, enabling teachers to practice their profession optimally. This research extends earlier work on the design of the ITEI application, which profiles teacher engagement in Indonesia; optimizing teachers' condition requires an application that can detect their physical and psychological health. The research method used is the neuroresearch method combined with the development of the IT system design for TECH, which includes the server design, the database, and the Android TECH application display. The study yielded 1) mental health benchmarks, 2) physical health benchmarks, and 3) the design of an Android application for Teacher Engagement Control of Health (TECH).

  14. Global Positioning System (GPS) survey of Augustine Volcano, Alaska, August 3-8, 2000: data processing, geodetic coordinates and comparison with prior geodetic surveys

    USGS Publications Warehouse

    Pauk, Benjamin A.; Power, John A.; Lisowski, Mike; Dzurisin, Daniel; Iwatsubo, Eugene Y.; Melbourne, Tim

    2001-01-01

    Between August 3 and 8, 2000, the Alaska Volcano Observatory completed a Global Positioning System (GPS) survey at Augustine Volcano, Alaska. Augustine is a frequently active calcalkaline volcano located in the lower portion of Cook Inlet (fig. 1), with reported eruptions in 1812, 1882, 1909?, 1935, 1964, 1976, and 1986 (Miller et al., 1998). Geodetic measurements using electronic and optical surveying techniques (EDM and theodolite) were begun at Augustine Volcano in 1986. In 1988 and 1989, an island-wide trilateration network comprising 19 benchmarks was completed and measured in its entirety (Power and Iwatsubo, 1998). Partial GPS surveys of the Augustine Island geodetic network were completed in 1992 and 1995; however, neither of these surveys included all marks on the island. Additional GPS measurements of benchmarks A5 and A15 (fig. 2) were made during the summers of 1992, 1993, 1994, and 1996. The goals of the 2000 GPS survey were to: 1) re-measure all existing benchmarks on Augustine Island using a homogeneous set of GPS equipment operated in a consistent manner, 2) add measurements at benchmarks on the western shore of Cook Inlet at distances of 15 to 25 km, 3) add measurements at an existing benchmark (BURR) on Augustine Island that was not previously surveyed, and 4) add additional marks in areas of the island thought to be actively deforming. The entire survey resulted in the collection of GPS data at a total of 24 sites (figs. 1 and 2). In this report we describe the methods of GPS data collection and processing used at Augustine during the 2000 survey. We use these data to calculate coordinates and elevations for all 24 sites surveyed. Data from the 2000 survey are then compared to electronic and optical measurements made in 1988 and 1989. This report also contains a general description of all marks surveyed in 2000 and photographs of all new marks established during the 2000 survey (Appendix A).

  15. An Integrated Development Environment for Adiabatic Quantum Programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; McCaskey, Alex; Bennink, Ryan S

    2014-01-01

    Adiabatic quantum computing is a promising route to the computational power afforded by quantum information processing. The recent availability of adiabatic hardware raises the question of how well quantum programs perform. Benchmarking behavior is challenging since the multiple steps to synthesize an adiabatic quantum program are highly tunable. We present an adiabatic quantum programming environment called JADE that provides control over all the steps taken during program development. JADE captures the workflow needed to rigorously benchmark performance while also allowing a variety of problem types, programming techniques, and processor configurations. We have also integrated JADE with a quantum simulation engine that enables program profiling using numerical calculation. The computational engine supports plug-ins for simulation methodologies tailored to various metrics and computing resources. We present the design, integration, and deployment of JADE and discuss its use for benchmarking adiabatic quantum programs.

  16. Benchmarking Memory Performance with the Data Cube Operator

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Shabanov, Leonid V.

    2004-01-01

    Data movement across a computer memory hierarchy and across computational grids is known to be a limiting factor for applications processing large data sets. We use the Data Cube Operator on an Arithmetic Data Set, called ADC, to benchmark the capabilities of computers and of computational grids to handle large distributed data sets. We present a prototype implementation of a parallel algorithm for computation of the operator. The algorithm follows a known approach for computing views from the smallest parent. The ADC stresses all levels of grid memory and storage by producing some of the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of integers. We control the data intensity of the ADC by selecting the tuple parameters, the sizes of the views, and the number of realized views. Benchmarking results of the memory performance of a number of computer architectures and of a small computational grid are presented.
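The view computation described above can be sketched in a few lines. This toy version counts tuples per group and, for clarity, computes every one of the 2^d views directly from the base data rather than from its smallest already-computed parent; the names and the count aggregate are illustrative assumptions:

```python
from itertools import combinations

def data_cube_views(tuples, d):
    """Compute all 2^d aggregate views of a set of d-tuples.
    Each view groups the tuples by one subset of the d attributes
    and counts the tuples in each group (a simple aggregate)."""
    views = {}
    for k in range(d + 1):
        for dims in combinations(range(d), k):   # choose grouping attributes
            groups = {}
            for t in tuples:
                key = tuple(t[i] for i in dims)  # project onto chosen dims
                groups[key] = groups.get(key, 0) + 1
            views[dims] = groups
    return views

data = [(1, 'a'), (1, 'b'), (2, 'a')]
views = data_cube_views(data, d=2)
print(len(views))    # 4 views for d = 2
print(views[(0,)])   # {(1,): 2, (2,): 1}
```

The smallest-parent optimization the abstract refers to would instead derive each view by re-aggregating the smallest previously computed view whose grouping attributes are a superset of the target's, cutting the data movement that the benchmark is designed to stress.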

  17. Active transportation measurement and benchmarking development : New Orleans pedestrian and bicycle count report, 2010-2011.

    DOT National Transportation Integrated Search

    2012-01-01

    Over the last decade, there has been a surge in bicycle and pedestrian use in communities that have invested in active transportation infrastructure and programming. While these increases show potentially promising trends, many of the cities that hav...

  18. Benchmarking Ligand-Based Virtual High-Throughput Screening with the PubChem Database

    PubMed Central

    Butkiewicz, Mariusz; Lowe, Edward W.; Mueller, Ralf; Mendenhall, Jeffrey L.; Teixeira, Pedro L.; Weaver, C. David; Meiler, Jens

    2013-01-01

    With the rapidly increasing availability of High-Throughput Screening (HTS) data in the public domain, such as the PubChem database, methods for ligand-based computer-aided drug discovery (LB-CADD) have the potential to accelerate and reduce the cost of probe development and drug discovery efforts in academia. We assemble nine data sets from realistic HTS campaigns representing major families of drug target proteins for benchmarking LB-CADD methods. Each data set is public domain through PubChem and carefully collated through confirmation screens validating active compounds. These data sets provide the foundation for benchmarking a new cheminformatics framework BCL::ChemInfo, which is freely available for non-commercial use. Quantitative structure activity relationship (QSAR) models are built using Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), Decision Trees (DTs), and Kohonen networks (KNs). Problem-specific descriptor optimization protocols are assessed including Sequential Feature Forward Selection (SFFS) and various information content measures. Measures of predictive power and confidence are evaluated through cross-validation, and a consensus prediction scheme is tested that combines orthogonal machine learning algorithms into a single predictor. Enrichments ranging from 15 to 101 for a TPR cutoff of 25% are observed. PMID:23299552

  19. The Paucity Problem: Where Have All the Space Reactor Experiments Gone?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Marshall, Margaret A.

    2016-10-01

    The Handbooks of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) together contain a plethora of documented and evaluated experiments essential in the validation of nuclear data, neutronics codes, and modeling of various nuclear systems. Unfortunately, only a minute selection of handbook data (twelve evaluations) is of actual experimental facilities and mockups designed specifically for space nuclear research. There is a paucity problem, such that the multitude of space nuclear experimental activities performed in the past several decades have yet to be recovered and made available in such detail that the international community could benefit from these valuable historical research efforts. Those experiments represent extensive investments in infrastructure, expertise, and cost, as well as constitute significantly valuable resources of data supporting past, present, and future research activities. The ICSBEP and IRPhEP were established to identify and verify comprehensive sets of benchmark data; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data. See full abstract in attached document.

  20. Nuclear power plant digital system PRA pilot study with the dynamic flow-graph methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yau, M.; Motamed, M.; Guarro, S.

    2006-07-01

    Current Probabilistic Risk Assessment (PRA) methodology is well established in analyzing hardware and some of the key human interactions. However, processes for analyzing the software functions of digital systems within a plant PRA framework, and accounting for the digital system contribution to the overall risk, are not generally available, nor are they well understood and established. A recent study reviewed a number of methodologies that have potential applicability to modeling and analyzing digital systems within a PRA framework. This study identified the Dynamic Flow-graph Methodology (DFM) and the Markov Methodology as the most promising tools. As a result of this study, a task was defined under the framework of a collaborative agreement between the U.S. Nuclear Regulatory Commission (NRC) and the Ohio State Univ. (OSU). The objective of this task is to set up benchmark systems representative of digital systems used in nuclear power plants and to evaluate DFM and the Markov methodology with these benchmark systems. The first benchmark system is a typical Pressurized Water Reactor (PWR) Steam Generator (SG) Feedwater System (FWS) level control system based on an earlier ASCA work with the U.S. NRC [2], upgraded with modern control laws. ASCA, Inc. is currently under contract to OSU to apply DFM to this benchmark system. The goal is to investigate the feasibility of using DFM to analyze and quantify digital system risk, and to integrate the DFM analytical results back into the plant event tree/fault tree PRA model. (authors)

  1. Enrichment assessment of multiple virtual screening strategies for Toll-like receptor 8 agonists based on a maximal unbiased benchmarking data set.

    PubMed

    Pei, Fen; Jin, Hongwei; Zhou, Xin; Xia, Jie; Sun, Lidan; Liu, Zhenming; Zhang, Liangren

    2015-11-01

    Toll-like receptor 8 agonists, which activate adaptive immune responses by inducing robust production of T-helper 1-polarizing cytokines, are promising candidates for vaccine adjuvants. As the binding site of toll-like receptor 8 is large and highly flexible, virtual screening by any individual method has inevitable limitations; thus, a comprehensive comparison of different methods may provide insights into seeking an effective strategy for the discovery of novel toll-like receptor 8 agonists. In this study, the performance of knowledge-based pharmacophore, shape-based 3D screening, and combined strategies was assessed against a maximal unbiased benchmarking data set containing 13 actives and 1302 decoys specialized for toll-like receptor 8 agonists. Prior structure-activity relationship knowledge was involved in knowledge-based pharmacophore generation, and a set of antagonists was innovatively used to verify the selectivity of the selected knowledge-based pharmacophore. The benchmarking data set was generated from our recently developed 'mubd-decoymaker' protocol. The enrichment assessment demonstrated a considerable performance through our selected three-layer virtual screening strategy: knowledge-based pharmacophore (Phar1) screening, shape-based 3D similarity search (Q4_combo), and then a Gold docking screening. This virtual screening strategy could be further employed to perform large-scale database screening and to discover novel toll-like receptor 8 agonists. © 2015 John Wiley & Sons A/S.

  2. Comparing, optimizing, and benchmarking quantum-control algorithms in a unifying programming framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Machnes, S.; Institute for Theoretical Physics, University of Ulm, D-89069 Ulm; Sander, U.

    2011-08-15

    For paving the way to novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. It is a typical task amenable to numerical optimal control to turn the time course of pulses, i.e., piecewise constant control amplitudes, iteratively into an optimized shape. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods, which update all controls concurrently, and Krotov-type methods, which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient MATLAB-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods, as well as subspace choices. Open-source code including examples is made available at http://qlib.info.

  3. Control strategies for nitrous oxide emissions reduction on wastewater treatment plants operation.

    PubMed

    Santín, I; Barbu, M; Pedret, C; Vilanova, R

    2017-11-15

    The present paper focuses on reducing greenhouse gas emissions in wastewater treatment plant operation by the application of suitable control strategies. Specifically, the objective is to reduce nitrous oxide emissions during the nitrification process. Incomplete nitrification in the aerobic tanks can lead to an accumulation of nitrite that triggers nitrous oxide emissions. In order to avoid peaks of nitrous oxide emissions, this paper proposes a cascade control configuration that manipulates the dissolved oxygen set-points in the aerobic tanks. This control strategy is combined with the ammonia cascade control already applied in the literature, in order to also take into account effluent pollutants and operational costs. In addition, other sources of greenhouse gas emissions are also evaluated. Results have been obtained by simulation, using a modified version of Benchmark Simulation Model no. 2 that takes into account greenhouse gas emissions, called Benchmark Simulation Model no. 2 Gas. The results show that the proposed control strategies are able to reduce nitrous oxide emissions by 29.86% compared to the default control strategy, while maintaining a satisfactory trade-off between water quality and costs. Copyright © 2017 Elsevier Ltd. All rights reserved.
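The cascade idea, an outer ammonia loop that computes the dissolved oxygen (DO) set-point for an inner aeration loop, can be sketched with two discrete PI controllers. The gains, limits, and sampling time below are illustrative assumptions, not the tuned values used with the benchmark model:

```python
class PI:
    """Discrete PI controller (positional form) with output saturation."""
    def __init__(self, kp, ki, dt, out_min, out_max):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        return min(max(u, self.out_min), self.out_max)  # saturate actuator

# Outer loop: ammonia concentration -> DO set-point (mg/L).
# Negative gains make it reverse-acting: high ammonia raises the DO set-point.
ammonia_ctrl = PI(kp=-0.5, ki=-0.05, dt=0.01, out_min=0.5, out_max=3.0)
# Inner loop: DO concentration -> oxygen transfer coefficient KLa (1/d)
do_ctrl = PI(kp=25.0, ki=5.0, dt=0.01, out_min=0.0, out_max=240.0)

do_setpoint = ammonia_ctrl.step(setpoint=1.0, measurement=2.4)  # ammonia too high
kla = do_ctrl.step(setpoint=do_setpoint, measurement=1.2)       # DO above target
print(do_setpoint, kla)
```

In the paper's setting these loops would run inside the Benchmark Simulation Model no. 2 Gas plant model; the sketch only shows the signal flow: ammonia error shapes the DO set-point, and the inner loop tracks that set-point by adjusting aeration.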

  4. Recycled Art: Create Puppets Using Recycled Objects.

    ERIC Educational Resources Information Center

    Clearing, 2003

    2003-01-01

    Presents an activity from "Healthy Foods from Healthy Soils" for making puppets using recycled food packaging materials. Includes background information, materials, instructions, literature links, resources, and benchmarks. (NB)

  5. Yoga for military service personnel with PTSD: A single arm study.

    PubMed

    Johnston, Jennifer M; Minami, Takuya; Greenwald, Deborah; Li, Chieh; Reinhardt, Kristen; Khalsa, Sat Bir S

    2015-11-01

    This study evaluated the effects of yoga on posttraumatic stress disorder (PTSD) symptoms, resilience, and mindfulness in military personnel. Participants completing the yoga intervention were 12 current or former military personnel who met the Diagnostic and Statistical Manual for Mental Disorders-Fourth Edition-Text Revision (DSM-IV-TR) diagnostic criteria for PTSD. Results were also benchmarked against other military intervention studies of PTSD using the Clinician Administered PTSD Scale (CAPS; Blake et al., 2000) as an outcome measure. Results of within-subject analyses supported the study's primary hypothesis that yoga would reduce PTSD symptoms (d = 0.768; t = 2.822; p = .009) but did not support the hypothesis that yoga would significantly increase mindfulness (d = 0.392; t = -0.9500; p = .181) and resilience (d = 0.270; t = -1.220; p = .124) in this population. Benchmarking results indicated that, as compared with the aggregated treatment benchmark (d = 1.074) obtained from published clinical trials, the current study's treatment effect (d = 0.768) was visibly lower, and compared with the waitlist control benchmark (d = 0.156), the treatment effect in the current study was visibly higher. (c) 2015 APA, all rights reserved.

  6. Sustainability of the Communities That Care prevention system by coalitions participating in the Community Youth Development Study.

    PubMed

    Gloppen, Kari M; Arthur, Michael W; Hawkins, J David; Shapiro, Valerie B

    2012-09-01

    Community prevention coalitions are a common strategy to mobilize stakeholders to implement tested and effective prevention programs to promote adolescent health and well-being. This article examines the sustainability of Communities That Care (CTC) coalitions approximately 20 months after study support for the intervention ended. The Community Youth Development Study is a community-randomized trial of the CTC prevention system. Using data from 2007 and 2009 coalition leader interviews, this study reports changes in coalition activities from a period of study support for CTC (2007) to 20 months following the end of study support for CTC (2009), measured by the extent to which coalitions continued to meet specific benchmarks. Twenty months after study support for CTC implementation ended, 11 of 12 CTC coalitions in the Community Youth Development Study still existed. The 11 remaining coalitions continued to report significantly higher scores on the benchmarks of phases 2 through 5 of the CTC system than did prevention coalitions in the control communities. At the 20-month follow-up, two-thirds of the CTC coalitions reported having a paid staff person. This study found that the CTC coalitions maintained a relatively high level of implementation fidelity to the CTC system 20 months after the study support for the intervention ended. However, the downward trend in some of the measured benchmarks indicates that continued high-quality training and technical assistance may be important to ensure that CTC coalitions maintain a science-based approach to prevention, and continue to achieve public health impacts on adolescent health and behavior outcomes. Copyright © 2012 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  7. Groundwater-quality data in the Santa Barbara study unit, 2011: results from the California GAMA Program

    USGS Publications Warehouse

    Davis, Tracy A.; Kulongoski, Justin T.; Belitz, Kenneth

    2013-01-01

    Groundwater quality in the 48-square-mile Santa Barbara study unit was investigated by the U.S. Geological Survey (USGS) from January to February 2011, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program’s Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The Santa Barbara study unit was the thirty-fourth study unit to be sampled as part of the GAMA-PBP. The GAMA Santa Barbara study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer system, and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The primary aquifer system is defined as those parts of the aquifers corresponding to the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the Santa Barbara study unit. Groundwater quality in the primary aquifer system may differ from the quality in the shallower or deeper water-bearing zones; shallow groundwater may be more vulnerable to surficial contamination. In the Santa Barbara study unit located in Santa Barbara and Ventura Counties, groundwater samples were collected from 24 wells. Eighteen of the wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells), and six wells were selected to aid in evaluation of water-quality issues (understanding wells). 
The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOCs], pesticides and pesticide degradates, and pharmaceutical compounds); constituents of special interest (perchlorate and N-nitrosodimethylamine [NDMA]); naturally occurring inorganic constituents (trace elements, nutrients, major and minor ions, silica, total dissolved solids [TDS], alkalinity, and arsenic, chromium, and iron species); and radioactive constituents (radon-222 and gross alpha and gross beta radioactivity). Naturally occurring isotopes (stable isotopes of hydrogen and oxygen in water, stable isotopes of inorganic carbon and boron dissolved in water, isotope ratios of dissolved strontium, tritium activities, and carbon-14 abundances) and dissolved noble gases also were measured to help identify the sources and ages of the sampled groundwater. In total, 281 constituents and water-quality indicators were measured. Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at up to 12 percent of the wells in the Santa Barbara study unit, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples generally were within the limits of acceptable analytical reproducibility. Matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 82 percent of the compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, untreated groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is served to the consumer, not to untreated groundwater. 
However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH and to non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. All organic constituents and most inorganic constituents that were detected in groundwater samples from the 18 grid wells in the Santa Barbara study unit were detected at concentrations less than drinking-water benchmarks. Of the 220 organic and special-interest constituents sampled for at the 18 grid wells, 13 were detected in groundwater samples; concentrations of all detected constituents were less than regulatory and non-regulatory health-based benchmarks. In total, VOCs were detected in 61 percent of the 18 grid wells sampled, pesticides and pesticide degradates were detected in 11 percent, and perchlorate was detected in 67 percent. Polar pesticides and their degradates, pharmaceutical compounds, and NDMA were not detected in any of the grid wells sampled in the Santa Barbara study unit. Eighteen grid wells were sampled for trace elements, major and minor ions, nutrients, and radioactive constituents; most detected concentrations were less than health-based benchmarks. Exceptions are one detection of boron greater than the CDPH notification level (NL-CA) of 1,000 micrograms per liter (μg/L) and one detection of fluoride greater than the CDPH maximum contaminant level (MCL-CA) of 2 milligrams per liter (mg/L). 
Results for constituents with non-regulatory benchmarks set for aesthetic concerns from the grid wells showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in three grid wells. Manganese concentrations greater than the SMCL-CA of 50 μg/L were detected in seven grid wells. Chloride was detected at a concentration greater than the SMCL-CA recommended benchmark of 250 mg/L in four grid wells. Sulfate concentrations greater than the SMCL-CA recommended benchmark of 250 mg/L were measured in eight grid wells, and the concentration in one of these wells was also greater than the SMCL-CA upper benchmark of 500 mg/L. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 17 grid wells, and concentrations in six of these wells were also greater than the SMCL-CA upper benchmark of 1,000 mg/L.

  8. Proceedings from the 1998 Occupational Health Conference: Benchmarking for Excellence

    NASA Technical Reports Server (NTRS)

    Hoffler, G. Wyckliffe (Editor); O'Donnell, Michele D. (Editor)

    1999-01-01

    The theme of the 1998 NASA Occupational Health Conference was "Benchmarking for Excellence." Conference participants included NASA and contractor Occupational Health professionals, as well as speakers from NASA, other Federal agencies and private companies. Addressing the Conference theme, speakers described new concepts and techniques for corporate benchmarking. They also identified practices used by NASA, other Federal agencies, and by award winning programs in private industry. A two-part Professional Development Course on workplace toxicology and indoor air quality was conducted a day before the Conference. A program manager with the International Space Station Office provided an update on station activities and an expert delivered practical advice on both oral and written communications. A keynote address on the medical aspects of space walking by a retired NASA astronaut highlighted the Conference. Discipline breakout sessions, poster presentations, and a KSC tour complemented the Conference agenda.

  9. Space network scheduling benchmark: A proof-of-concept process for technology transfer

    NASA Technical Reports Server (NTRS)

    Moe, Karen; Happell, Nadine; Hayden, B. J.; Barclay, Cathy

    1993-01-01

    This paper describes a detailed proof-of-concept activity to evaluate flexible scheduling technology as implemented in the Request Oriented Scheduling Engine (ROSE) and applied to Space Network (SN) scheduling. The criteria developed for an operational evaluation of a reusable scheduling system is addressed including a methodology to prove that the proposed system performs at least as well as the current system in function and performance. The improvement of the new technology must be demonstrated and evaluated against the cost of making changes. Finally, there is a need to show significant improvement in SN operational procedures. Successful completion of a proof-of-concept would eventually lead to an operational concept and implementation transition plan, which is outside the scope of this paper. However, a high-fidelity benchmark using actual SN scheduling requests has been designed to test the ROSE scheduling tool. The benchmark evaluation methodology, scheduling data, and preliminary results are described.

  10. A Bayesian approach to traffic light detection and mapping

    NASA Astrophysics Data System (ADS)

    Hosseinyalamdary, Siavash; Yilmaz, Alper

    2017-03-01

    Automatic traffic light detection and mapping is an open research problem. Traffic lights vary in color, shape, geolocation, activation pattern, and installation, which complicates their automated detection. In addition, the image of a traffic light may be noisy, overexposed, underexposed, or occluded. In order to address this problem, we propose a Bayesian inference framework to detect and map traffic lights. In addition to the spatio-temporal consistency constraint, traffic light characteristics such as color, shape, and height are shown to further improve the accuracy of the proposed approach. The proposed approach has been evaluated on two benchmark datasets and has been shown to outperform earlier studies. The results show that the precision and recall rates are 95.78% and 92.95%, respectively, for the KITTI benchmark, and 98.66% and 94.65%, respectively, for the LARA benchmark.
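The cue-fusion part of such a Bayesian framework can be illustrated with a naive-Bayes odds update over independent cues; the prior and likelihood ratios below are made-up numbers for illustration, not values from the paper:

```python
def posterior(prior, likelihood_ratios):
    """Fuse independent cues (e.g. color, shape, height) into a posterior
    probability that a detection is a traffic light, assuming conditional
    independence of the cues (naive Bayes in odds form)."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:  # each cue's P(cue | light) / P(cue | not light)
        odds *= lr
    return odds / (1.0 + odds)

# Weak prior, three cues each favoring "traffic light"
p = posterior(0.1, [3.0, 2.0, 1.5])
print(round(p, 3))  # 0.5
```

A spatio-temporal consistency constraint like the one in the abstract would enter the same update as an additional likelihood ratio computed from how well the detection re-projects across frames.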

  11. Final Report of the NASA Office of Safety and Mission Assurance Agile Benchmarking Team

    NASA Technical Reports Server (NTRS)

    Wetherholt, Martha

    2016-01-01

    To ensure that the NASA Safety and Mission Assurance (SMA) community remains in a position to perform reliable Software Assurance (SA) on NASA's critical software (SW) systems with the software industry rapidly transitioning from waterfall to Agile processes, Terry Wilcutt, Chief, Safety and Mission Assurance, Office of Safety and Mission Assurance (OSMA), established the Agile Benchmarking Team (ABT). The Team's tasks were: 1. Research background literature on current Agile processes, 2. Perform benchmark activities with other organizations that are involved in software Agile processes to determine best practices, 3. Collect information on Agile-developed systems to enable improvements to the current NASA standards and processes to enhance their ability to perform reliable software assurance on NASA Agile-developed systems, 4. Suggest additional guidance and recommendations for updates to those standards and processes, as needed. The ABT's findings and recommendations for software management, engineering and software assurance are addressed herein.

  12. Integrated manufacturing approach to attain benchmark team performance

    NASA Astrophysics Data System (ADS)

    Chen, Shau-Ron; Nguyen, Andrew; Naguib, Hussein

    1994-09-01

    A Self-Directed Work Team (SDWT) was developed to transfer a polyimide process module from the research laboratory to our wafer fab facility for applications in IC specialty devices. The SDWT implemented processes and tools based on the integration of five manufacturing strategies for continuous improvement. These were: Leadership Through Quality (LTQ), Total Productive Maintenance (TPM), Cycle Time Management (CTM), Activity-Based Costing (ABC), and Total Employee Involvement (TEI). Utilizing these management techniques simultaneously, the team achieved six sigma control of all critical parameters, increased Overall Equipment Effectiveness (OEE) from 20% to 90%, reduced cycle time by 95%, cut polyimide manufacturing cost by 70%, and improved its overall team member skill level by 33%.

  13. Development of Unsteady Aerodynamic and Aeroelastic Reduced-Order Models Using the FUN3D Code

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Vatsa, Veer N.; Biedron, Robert T.

    2009-01-01

    Recent significant improvements to the development of CFD-based unsteady aerodynamic reduced-order models (ROMs) are implemented into the FUN3D unstructured flow solver. These improvements include the simultaneous excitation of the structural modes of the CFD-based unsteady aerodynamic system via a single CFD solution, minimization of the error between the full CFD and the ROM unsteady aerodynamic solution, and computation of a root locus plot of the aeroelastic ROM. Results are presented for a viscous version of the two-dimensional Benchmark Active Controls Technology (BACT) model and an inviscid version of the AGARD 445.6 aeroelastic wing using the FUN3D code.

  14. Forest resource information system

    NASA Technical Reports Server (NTRS)

    Mroczynski, R. P. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. A benchmark classification evaluation framework was implemented. The FRIS preprocessing activities were refined. Potential geo-based referencing systems were identified as components of FRIS.

  15. Neutron Reference Benchmark Field Specification: ACRR 44 Inch Lead-Boron (LB44) Bucket Environment (ACRR-LB44-CC-32-CL).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the 44 inch Lead-Boron (LB44) bucket reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  16. Neutron Reference Benchmark Field Specifications: ACRR Polyethylene-Lead-Graphite (PLG) Bucket Environment (ACRR-PLG-CC-32-CL).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the Polyethylene-Lead-Graphite (PLG) bucket reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 37 integral dosimetry measurements in the neutron field are reported.

  17. Data Packages for the Hanford Immobilized Low Activity Tank Waste Performance Assessment 2001 Version [SEC 1 THRU 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MANN, F.M.

    Data package supporting the 2001 Immobilized Low-Activity Waste Performance Assessment. Geology, hydrology, geochemistry, facility, waste form, and dosimetry data based on recent investigations are provided. Verification and benchmarking packages for selected software codes are also provided.

  18. 15 CFR 801.12 - Rules and regulations for the BE-180, Benchmark Survey of Financial Services Transactions between...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...; other financial investment activities (including miscellaneous intermediation, portfolio management, investment advice, and all other financial investment activities); insurance carriers; insurance agencies... 52-Finance and Insurance, and holding companies that own or influence, and are principally engaged in...

  19. [Benchmarking in patient identification: An opportunity to learn].

    PubMed

    Salazar-de-la-Guerra, R M; Santotomás-Pajarrón, A; González-Prieto, V; Menéndez-Fraga, M D; Rocha Hurtado, C

    To perform benchmarking on the safe identification of hospital patients in the hospitals of the "Club de las tres C" (Calidez, Calidad y Cuidados; Warmth, Quality and Care), in order to prepare a common procedure for this process. A descriptive study was conducted on the patient identification process in palliative care and stroke units in 5 medium-stay hospitals. The following steps were carried out: data collection from each hospital; organisation and analysis of the data; and preparation of a common procedure for this process. The rates of safe identification of all stroke patients were: hospital 1 (93%), hospital 2 (93.1%), hospital 3 (100%), and hospital 5 (93.4%); and for the palliative care process: hospital 1 (93%), hospital 2 (92.3%), hospital 3 (92%), hospital 4 (98.3%), and hospital 5 (85.2%). The aim of the study was accomplished successfully: benchmarking activities were developed and knowledge on the patient identification process was shared. All hospitals had good results, and hospital 3 performed best in the stroke identification process. Benchmarking identification practices is difficult, but a useful common procedure collecting the best practices of the 5 hospitals has been prepared. Copyright © 2017 SECA. Published by Elsevier España, S.L.U. All rights reserved.

  20. Trusted Data Communication and Security Issues in Gnss Network of Turkey

    NASA Astrophysics Data System (ADS)

    Bakici, S.; Erkek, B.; Manti, V.; Altekin, A.

    2017-11-01

    There are three main activities of the General Directorate of Land Registry and Cadastre: Mapping, Land Registry, and Cadastre. The Geomatic Department is responsible for mapping activities. The most important projects, such as TUSAGA-Aktif (CORS-Tr), the Metadata Geoportal, orthophoto production and orthophoto web services, and preparation of the Turkish NSDI Feasibility Report, have been conducted and completed by this department's specialists since 2005. The TUSAGA-Aktif (CORS-Tr) system serves location information at cm-level accuracy in Turkey and Northern Cyprus within a few seconds, wherever adequate numbers of GNSS satellites are observed and communication possibilities are present. No ground control points or benchmarks are necessary. There are 146 permanent GNSS stations within the CORS-Tr system. Station data are transferred online to the main control center located in the Mapping Department of the General Directorate of Land Registry and Cadastre and to the control center located in the General Command of Mapping. Currently CORS-Tr has more than 9000 users, most of them private companies working for governmental organizations. Providing data communication between the control center and both the GNSS stations and the users over a trusted and sound infrastructure is important, as is protection of the system and data against cyber attacks from domestic and foreign sources. This paper focuses on the data communication and security issues of the GNSS network named TUSAGA-Aktif.

  1. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  2. Residual activity evaluation: a benchmark between ANITA, FISPACT, FLUKA and PHITS codes

    NASA Astrophysics Data System (ADS)

    Firpo, Gabriele; Viberti, Carlo Maria; Ferrari, Anna; Frisoni, Manuela

    2017-09-01

    The activity of residual nuclides dictates the radiation fields in periodic inspections/repairs (maintenance periods) and dismantling operations (decommissioning phase) of accelerator facilities (e.g., medical, industrial, research) and nuclear reactors. Therefore, the correct prediction of material activation allows for a more accurate planning of the activities, in line with the ALARA (As Low As Reasonably Achievable) principles. The scope of the present work is to show the results of a comparison of residual total specific activity over a set of cooling time instants (from zero up to 10 years after irradiation) as obtained by two analytical (FISPACT and ANITA) and two Monte Carlo (FLUKA and PHITS) codes, making use of their default nuclear data libraries. A set of 40 irradiating scenarios is considered, i.e. neutron and proton particles of different energies, ranging from zero to many hundreds of MeV, impinging on pure elements or materials of standard composition typically used in industrial applications (namely, AISI SS316 and Portland concrete). In some cases, experimental results were also available for a more thorough benchmark.
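    For a single nuclide, the residual activity that such codes sum over all activation products follows simple exponential decay with cooling time; a minimal sketch (the nuclide and numbers below are illustrative, not from the benchmark):

```python
import math

# Minimal sketch of single-nuclide residual activity decay with cooling time:
# A(t) = A0 * exp(-ln(2) * t / T_half). Activation codes such as FISPACT or
# ANITA sum terms like this over every nuclide produced during irradiation.
def residual_activity(a0, t_half, t):
    """a0: activity at shutdown; t_half: half-life; t: cooling time (same units)."""
    return a0 * math.exp(-math.log(2.0) * t / t_half)

# After one half-life the activity halves; a Co-60-like half-life of 5.27 years.
a = residual_activity(a0=100.0, t_half=5.27, t=5.27)  # -> 50.0
```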

  3. Benchmarking and Hardware-In-The-Loop Operation of a ...

    EPA Pesticide Factsheets

    Engine performance evaluation in support of the LD MTE. EPA used elements of its ALPHA model to apply hardware-in-the-loop (HIL) controls to the SKYACTIV engine test setup to better understand how the engine would operate in a chassis test when combined with future leading-edge technologies: an advanced high-efficiency transmission, reduced mass, and reduced roadload. The goal is to predict future vehicle performance with the Atkinson engine. As part of its technology assessment for the upcoming midterm evaluation of the 2017-2025 LD vehicle GHG emissions regulation, EPA has been benchmarking engines and transmissions to generate inputs for use in its ALPHA model.

  4. Systematic Expansion of Active Spaces beyond the CASSCF Limit: A GASSCF/SplitGAS Benchmark Study.

    PubMed

    Vogiatzis, Konstantinos D; Li Manni, Giovanni; Stoneburner, Samuel J; Ma, Dongxia; Gagliardi, Laura

    2015-07-14

    The applicability and accuracy of the generalized active space self-consistent field (GASSCF) and SplitGAS methods are presented. The GASSCF method enables the exploration of larger active spaces than the conventional complete active space SCF (CASSCF) by fragmentation of a large space into subspaces and by controlling the interspace excitations. In the SplitGAS method, the GAS configuration interaction (CI) expansion is further partitioned into two parts: the principal, which includes the most important configuration state functions, and an extended part, containing less relevant but not negligible ones. An effective Hamiltonian is then generated, with the extended part acting as a perturbation to the principal space. Excitation energies of ozone, furan, pyrrole, nickel dioxide, and the copper tetrachloride dianion are reported. Various partitioning schemes of the GASSCF and SplitGAS CI expansions are considered and compared with the complete active space followed by second-order perturbation theory (CASPT2), the multireference CI method (MRCI), or available experimental data. General guidelines for the optimum applicability of these methods are discussed together with their current limitations.

  5. Baseline Evaluations to Support Control Room Modernization at Nuclear Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald L.; Joe, Jeffrey C.

    2015-02-01

    For any major control room modernization activity at a commercial nuclear power plant (NPP) in the U.S., a utility should carefully follow the four phases prescribed by the U.S. Nuclear Regulatory Commission in NUREG-0711, Human Factors Engineering Program Review Model. These four phases include Planning and Analysis, Design, Verification and Validation, and Implementation and Operation. While NUREG-0711 is a useful guideline, it is written primarily from the perspective of regulatory review, and it therefore does not provide a nuanced account of many of the steps the utility might undertake as part of control room modernization. The guideline is largely summative (intended to catalog final products) rather than formative (intended to guide the overall modernization process). In this paper, we highlight two crucial formative sub-elements of the Planning and Analysis phase specific to control room modernization that are not covered in NUREG-0711. These two sub-elements are the usability and ergonomics baseline evaluations. A baseline evaluation entails evaluating the system as-built and currently in use. The usability baseline evaluation provides key insights into operator performance using the control system currently in place. The ergonomics baseline evaluation identifies possible deficiencies in the physical configuration of the control system. Both baseline evaluations feed into the design of the replacement system and subsequent summative benchmarking activities that help ensure that control room modernization represents a successful evolution of the control system.

  6. Research, Publication, and Service Patterns of Florida Academic Librarians

    ERIC Educational Resources Information Center

    Henry, Deborah B.; Neville, Tina M.

    2004-01-01

    In an effort to establish benchmarks for comparison to national trends, a web-based survey explored the research, publication, and service activities of Florida academic librarians. Participants ranked the importance of professional activities to the tenure/promotion process. Findings suggest that perceived tenure and promotion demands do…

  7. Hydrogen and Fuel Cells | Chemistry and Nanoscience Research | NREL

    Science.gov Websites

    [Webpage excerpt; only truncated publication titles are recoverable:] "…Oxygen Reduction Reaction for Ultrathin Uniform Pt/C Catalyst Layers without Influence from Nafion…"; "Benchmarking the Oxygen Reduction Reaction Activity of Pt-Based Catalysts Using Standardized…"; B.S. Pivovar, S.S. Kocha, "Suppression of Oxygen Reduction Reaction Activity on Pt-Based…"

  8. Multi-Attribute Task Battery - Applications in pilot workload and strategic behavior research

    NASA Technical Reports Server (NTRS)

    Arnegard, Ruth J.; Comstock, J. R., Jr.

    1991-01-01

    The Multi-Attribute Task (MAT) Battery provides a benchmark set of tasks for use in a wide range of lab studies of operator performance and workload. The battery incorporates tasks analogous to activities that aircraft crewmembers perform in flight, while providing a high degree of experimenter control, performance data on each subtask, and freedom to use nonpilot test subjects. Features not found in existing computer based tasks include an auditory communication task (to simulate Air Traffic Control communication), a resource management task permitting many avenues or strategies of maintaining target performance, a scheduling window which gives the operator information about future task demands, and the option of manual or automated control of tasks. Performance data are generated for each subtask. In addition, the task battery may be paused and onscreen workload rating scales presented to the subject. The MAT Battery requires a desktop computer with color graphics. The communication task requires a serial link to a second desktop computer with a voice synthesizer or digitizer card.

  9. The multi-attribute task battery for human operator workload and strategic behavior research

    NASA Technical Reports Server (NTRS)

    Comstock, J. Raymond, Jr.; Arnegard, Ruth J.

    1992-01-01

    The Multi-Attribute Task (MAT) Battery provides a benchmark set of tasks for use in a wide range of lab studies of operator performance and workload. The battery incorporates tasks analogous to activities that aircraft crewmembers perform in flight, while providing a high degree of experimenter control, performance data on each subtask, and freedom to use nonpilot test subjects. Features not found in existing computer based tasks include an auditory communication task (to simulate Air Traffic Control communication), a resource management task permitting many avenues or strategies of maintaining target performance, a scheduling window which gives the operator information about future task demands, and the option of manual or automated control of tasks. Performance data are generated for each subtask. In addition, the task battery may be paused and onscreen workload rating scales presented to the subject. The MAT Battery requires a desktop computer with color graphics. The communication task requires a serial link to a second desktop computer with a voice synthesizer or digitizer card.

  10. Development of a sensor coordinated kinematic model for neural network controller training

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1990-01-01

    A robotic benchmark problem useful for evaluating alternative neural network controllers is presented. Specifically, it derives two camera models and the kinematic equations of a multiple-degree-of-freedom manipulator whose end effector is under observation. The mappings developed include forward and inverse translations from binocular images to 3-D target position and the inverse kinematics of mapping point positions into manipulator commands in joint space. Implementation is detailed for a three-degree-of-freedom manipulator with one revolute joint at the base and two prismatic joints on the arms. The example is restricted to operate within a unit cube, with arm links of 0.6 and 0.4 units respectively. The development is presented in the context of more complex simulations, and a logical path for extension of the benchmark to higher-degree-of-freedom manipulators is presented.

  11. Platinum adlayered ruthenium nanoparticles, method for preparing, and uses thereof

    DOEpatents

    Tong, YuYe; Du, Bingchen

    2015-08-11

    A superior, industrially scalable one-pot ethylene glycol-based wet chemistry method to prepare platinum-adlayered ruthenium nanoparticles has been developed that offers exquisite control of the platinum packing density of the adlayers and effectively prevents sintering of the nanoparticles during the deposition process. The wet chemistry based method for the controlled deposition of submonolayer platinum is advantageous in terms of processing and maximizing the use of platinum and can, in principle, be scaled up straightforwardly to an industrial level. The reactivity of the Pt(31)-Ru sample was about 150% higher than that of the industrial benchmark PtRu (1:1) alloy sample, but with 3.5 times less platinum loading. Using the Pt(31)-Ru nanoparticles would lower the electrode material cost compared to using the industrial benchmark alloy nanoparticles for direct methanol fuel cell applications.

  12. Assessing validity of observational intervention studies - the Benchmarking Controlled Trials.

    PubMed

    Malmivaara, Antti

    2016-09-01

    Benchmarking Controlled Trial (BCT) is a concept which covers all observational studies aiming to assess the impact of interventions or health care system features on patients and populations. To create and pilot test a checklist for appraising the methodological validity of a BCT. The checklist was created by extracting the most essential elements from the comprehensive set of criteria in the previous paper on BCTs. Checklists and scientific papers on observational studies and respective systematic reviews were also utilized. Ten BCTs published in the Lancet and in the New England Journal of Medicine were used to assess the feasibility of the created checklist. The appraised studies seem to have several methodological limitations, some of which could be avoided in the planning, conducting, and reporting phases of the studies. The checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies. However, the piloted checklist should be validated in further studies. Key messages: Benchmarking Controlled Trial (BCT) is a concept which covers all observational studies aiming to assess the impact of interventions or health care system features on patients and populations. This paper presents a checklist for appraising the methodological validity of BCTs and pilot-tests the checklist with ten BCTs published in leading medical journals. The appraised studies seem to have several methodological limitations, some of which could be avoided in the planning, conducting, and reporting phases of the studies. The checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies.

  13. Benchmarking specialty hospitals, a scoping review on theory and practice.

    PubMed

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments, and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.

  14. Assessing Student Understanding of the "New Biology": Development and Evaluation of a Criterion-Referenced Genomics and Bioinformatics Assessment

    NASA Astrophysics Data System (ADS)

    Campbell, Chad Edward

    Over the past decade, hundreds of studies have introduced genomics and bioinformatics (GB) curricula and laboratory activities at the undergraduate level. While these publications have facilitated the teaching and learning of cutting-edge content, there has yet to be an evaluation of these assessment tools to determine if they are meeting the quality control benchmarks set forth by the educational research community. An analysis of these assessment tools indicated that <10% referenced any quality control criteria and that none of the assessments met more than one of the quality control benchmarks. In the absence of evidence that these benchmarks had been met, it is unclear whether these assessment tools are capable of generating valid and reliable inferences about student learning. To remedy this situation the development of a robust GB assessment aligned with the quality control benchmarks was undertaken in order to ensure evidence-based evaluation of student learning outcomes. Content validity is a central piece of construct validity, and it must be used to guide instrument and item development. This study reports on: (1) the correspondence of content validity evidence gathered from independent sources; (2) the process of item development using this evidence; (3) the results from a pilot administration of the assessment; (4) the subsequent modification of the assessment based on the pilot administration results and; (5) the results from the second administration of the assessment. Twenty-nine different subtopics within GB (Appendix B: Genomics and Bioinformatics Expert Survey) were developed based on preliminary GB textbook analyses. These subtopics were analyzed using two methods designed to gather content validity evidence: (1) a survey of GB experts (n=61) and (2) a detailed content analyses of GB textbooks (n=6). 
By including only the subtopics that were shown to have robust support across these sources, 22 GB subtopics were established for inclusion in the assessment. An expert panel subsequently developed, evaluated, and revised two multiple-choice items to align with each of the 22 subtopics, producing a final item pool of 44 items. These items were piloted with student samples of varying content exposure levels. Both Classical Test Theory (CTT) and Item Response Theory (IRT) methodologies were used to evaluate the assessment's validity and reliability inferences, and its ability to differentiate students with different magnitudes of content exposure. A total of 18 items were subsequently modified and reevaluated by an expert panel. The 26 original and 18 modified items were once again piloted with student samples of varying content exposure levels. Both CTT and IRT methodologies were once again used to evaluate student responses in order to evaluate the assessment's validity and reliability inferences as well as its ability to differentiate students with different magnitudes of content exposure. Interviews with students from different content exposure levels were also performed in order to gather convergent validity evidence (external validity evidence) as well as substantive validity evidence. Also included are the limitations of the assessment and a set of guidelines on how the assessment can best be used.
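The CTT side of such an item analysis reduces to a few summary statistics; a hedged sketch with invented response data (difficulty as proportion correct, discrimination as an upper-lower index):

```python
# Hypothetical sketch of two Classical Test Theory item statistics commonly
# computed when piloting an assessment; the response data below are invented.
def item_difficulty(item_scores):
    """Proportion of examinees answering the item correctly (0/1 scores)."""
    return sum(item_scores) / len(item_scores)

def discrimination_index(item_scores, total_scores, frac=0.27):
    """Upper-lower index: item pass rate in the top-scoring group minus the
    bottom-scoring group (groups are the top/bottom `frac` of examinees)."""
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    k = max(1, int(frac * len(total_scores)))
    low = sum(item_scores[i] for i in order[:k]) / k
    high = sum(item_scores[i] for i in order[-k:]) / k
    return high - low

item = [1, 1, 0, 1, 0, 0]      # one item's 0/1 scores for six examinees
totals = [10, 9, 3, 8, 2, 1]   # the same examinees' total test scores
diff = item_difficulty(item)               # 0.5
disc = discrimination_index(item, totals)  # 1.0 (high scorers pass, low fail)
```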

  15. The Effect of Thermophoresis on Unsteady Oldroyd-B Nanofluid Flow over Stretching Surface

    PubMed Central

    Awad, Faiz G.; Ahamed, Sami M. S.; Sibanda, Precious; Khumalo, Melusi

    2015-01-01

    There are currently only a few theoretical studies on convective heat transfer in polymer nanocomposites. In this paper, the unsteady incompressible flow of a polymer nanocomposite represented by an Oldroyd-B nanofluid along a stretching sheet is investigated. Recent studies have assumed that the nanoparticle fraction can be actively controlled on the boundary, similar to the temperature. However, in practice, such control presents significant challenges, and in this study the nanoparticle flux at the boundary surface is assumed to be zero. We have used a relatively novel numerical scheme, the spectral relaxation method, to solve the momentum, heat, and mass transport equations. The accuracy of the solutions has been determined by benchmarking the results against the quasilinearisation method. We have conducted a parametric study to determine the influence of the fluid parameters on the heat and mass transfer coefficients. PMID:26312754

  16. Non-parametric identification of multivariable systems: A local rational modeling approach with application to a vibration isolation benchmark

    NASA Astrophysics Data System (ADS)

    Voorhoeve, Robbert; van der Maas, Annemiek; Oomen, Tom

    2018-05-01

    Frequency response function (FRF) identification is often used as a basis for control systems design and as a starting point for subsequent parametric system identification. The aim of this paper is to develop a multiple-input multiple-output (MIMO) local parametric modeling approach for FRF identification of lightly damped mechanical systems with improved speed and accuracy. The proposed method is based on local rational models, which can efficiently handle the lightly damped resonant dynamics. A key aspect herein is the freedom in the multivariable rational model parametrizations. Several choices for such multivariable rational model parametrizations are proposed and investigated. For systems with many inputs and outputs, the required number of model parameters can rapidly increase, adversely affecting the performance of the local modeling approach. Therefore, low-order model structures are investigated. The structure of these low-order parametrizations leads to an undesired directionality in the identification problem. To address this, an iterative local rational modeling algorithm is proposed. As a special case, recently developed SISO algorithms are recovered. The proposed approach is successfully demonstrated on simulations and on an active vibration isolation system benchmark, confirming good performance of the method using significantly fewer parameters compared with alternative approaches.
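    At its core, fitting a rational model to local FRF samples can be linearized into an ordinary least-squares problem (a Levy-type formulation). The sketch below shows only that basic linearization on an invented first-order example; it is not the paper's full MIMO/iterative algorithm:

```python
import numpy as np

# Hedged sketch: linearized (Levy-type) least-squares fit of a rational model
# G(s) ~= n0 / (1 + d1*s) to FRF samples G_k at s_k = j*w_k. The example
# system and frequency grid are invented for illustration.
w = np.array([0.5, 1.0, 2.0, 3.0])   # local frequency grid (rad/s)
s = 1j * w
G = 1.0 / (s + 2.0)                  # "measured" FRF of G(s) = 1/(s + 2)

# Multiply through by the denominator: n0 - G_k*d1*s_k = G_k for each sample,
# which is linear in theta = [n0, d1].
A = np.column_stack([np.ones_like(s), -G * s])
theta, *_ = np.linalg.lstsq(A, G, rcond=None)
n0, d1 = theta.real                  # exact data -> real coefficients

# Since 1/(s + 2) = 0.5/(1 + 0.5*s), the fit recovers n0 ~= 0.5, d1 ~= 0.5.
```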

  17. Multisensor benchmark data for riot control

    NASA Astrophysics Data System (ADS)

    Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter

    2008-10-01

    Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to a de-escalating situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data with well-known quality is needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, after algorithm development has finished and marketing aspects emerge, compliance with specifications must be proven. This paper describes a multisensor benchmark which exactly serves this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup, and finally we give some insight into quality measures of multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.

  18. Monitoring land subsidence in Sacramento Valley, California, using GPS

    USGS Publications Warehouse

    Blodgett, J.C.; Ikehara, M.E.; Williams, Gary E.

    1990-01-01

    Land subsidence measurement is usually based on a comparison of bench-mark elevations surveyed at different times. These bench marks, established for mapping or the national vertical control network, are not necessarily suitable for measuring land subsidence. Also, many bench marks have been destroyed or are unstable. Conventional releveling of the study area would be costly and would require several years to complete. Differences of as much as 3.9 ft between recent leveling and published bench-mark elevations have been documented at seven locations in the Sacramento Valley. Estimates of land subsidence less than about 0.3 ft are questionable because elevation data are based on leveling and adjustment procedures that occurred over many years. A new vertical control network based on the Global Positioning System (GPS) provides highly accurate vertical control data at relatively low cost, and the survey points can be placed where needed to obtain adequate areal coverage of the area affected by land subsidence.

  19. A Comparison of Coverage Restrictions for Biopharmaceuticals and Medical Procedures.

    PubMed

    Chambers, James; Pope, Elle; Bungay, Kathy; Cohen, Joshua; Ciarametaro, Michael; Dubois, Robert; Neumann, Peter J

    2018-04-01

    Differences in payer evaluation and coverage of pharmaceuticals and medical procedures suggest that coverage may differ for medications and procedures independent of their clinical benefit. We hypothesized that coverage for medications is more restricted than corresponding coverage for nonmedication interventions. We included top-selling medications and highly utilized procedures. For each intervention-indication pair, we classified value in terms of cost-effectiveness (incremental cost per quality-adjusted life-year), as reported by the Tufts Medical Center Cost-Effectiveness Analysis Registry. For each intervention-indication pair and for each of 10 large payers, we classified coverage, when available, as either "more restrictive" or as "not more restrictive," compared with a benchmark. The benchmark reflected the US Food and Drug Administration label information, when available, or pertinent clinical guidelines. We compared coverage policies and the benchmark in terms of step edits and clinical restrictions. Finally, we regressed coverage restrictiveness against intervention type (medication or nonmedication), controlling for value (cost-effectiveness more or less favorable than a designated threshold). We identified 392 medication and 185 procedure coverage decisions. A total of 26.3% of the medication coverage and 38.4% of the procedure coverage decisions were more restrictive than their corresponding benchmarks. After controlling for value, the odds of being more restrictive were 42% lower for medications than for procedures. Including unfavorable tier placement in the definition of "more restrictive" greatly increased the proportion of medication coverage decisions classified as "more restrictive" and reversed our findings. Therapy access depends on factors other than cost and clinical benefit, suggesting potential health care system inefficiency. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). 
Published by Elsevier Inc. All rights reserved.
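The reported regression result, odds of being more restrictive 42% lower for medications, follows the standard logistic-regression odds-ratio algebra. The coefficient below is back-solved from the reported figure and the crude check uses the reported raw rates; neither is taken from the study's actual model:

```python
import math

# Sketch of the odds-ratio interpretation: a logistic-regression
# coefficient beta for intervention type (medication = 1) maps to an
# odds ratio exp(beta); "42% lower odds" corresponds to exp(beta) = 0.58.
def odds_ratio(beta):
    return math.exp(beta)

def percent_change_in_odds(beta):
    return (odds_ratio(beta) - 1.0) * 100.0

beta_medication = math.log(0.58)  # assumed value reproducing the 42% figure
change = percent_change_in_odds(beta_medication)  # about -42%

# Crude (unadjusted) odds ratio from the reported rates of "more
# restrictive" decisions: 26.3% for medications vs. 38.4% for procedures.
unadjusted = (0.263 / 0.737) / (0.384 / 0.616)
```

The unadjusted odds ratio from the raw proportions comes out near 0.57, broadly consistent with the adjusted 42%-lower-odds finding.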

  20. An optimized proportional-derivative controller for the human upper extremity with gravity.

    PubMed

    Jagodnik, Kathleen M; Blana, Dimitra; van den Bogert, Antonie J; Kirsch, Robert F

    2015-10-15

    When Functional Electrical Stimulation (FES) is used to restore movement in subjects with spinal cord injury (SCI), muscle stimulation patterns should be selected to generate accurate and efficient movements. Ideally, the controller for such a neuroprosthesis will have the simplest architecture possible, to facilitate translation into a clinical setting. In this study, we used the simulated annealing algorithm to optimize two proportional-derivative (PD) feedback controller gain sets for a 3-dimensional arm model that includes musculoskeletal dynamics and has 5 degrees of freedom and 22 muscles, performing goal-oriented reaching movements. Controller gains were optimized by minimizing a weighted sum of position errors, orientation errors, and muscle activations. After optimization, the two optimized gain sets, along with three benchmark gain sets not optimized for our system, were evaluated for accuracy and efficiency on a large set of dynamic reaching movements for which the controllers had not been optimized, to test their ability to generalize. Robustness in the presence of weakened muscles was also tested. The two optimized gain sets were found to have very similar performance to each other on all metrics and to exhibit significantly better accuracy than the three standard gain sets. All gain sets investigated used physiologically acceptable amounts of muscular activation. It was concluded that optimization can yield significant improvements in controller performance while still maintaining muscular efficiency, and that optimization should be considered as a strategy for future neuroprosthesis controller design. Published by Elsevier Ltd.
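The optimization loop described in this abstract can be sketched with a toy 1-DOF point mass standing in for the authors' 5-degree-of-freedom musculoskeletal model; the dynamics, cost weights, and annealing schedule below are illustrative assumptions, not the study's values:

```python
import math
import random

# Toy sketch: simulated annealing over PD gains (kp, kd) for a unit point
# mass reaching a target, minimizing a weighted sum of final position
# error and control effort (loosely mirroring the abstract's cost terms).
def simulate(kp, kd, target=1.0, dt=0.01, steps=300):
    x, v, effort = 0.0, 0.0, 0.0
    for _ in range(steps):
        u = kp * (target - x) - kd * v  # PD control law
        effort += abs(u) * dt
        v += u * dt                      # unit-mass dynamics, Euler step
        x += v * dt
    return abs(target - x) + 0.01 * effort  # weighted cost

def anneal(cost, start=(1.0, 1.0), temp=1.0, cooling=0.95, iters=200, seed=0):
    rng = random.Random(seed)
    current, c_cur = start, cost(*start)
    best, c_best = current, c_cur
    for _ in range(iters):
        cand = tuple(max(0.0, g + rng.gauss(0, 0.2)) for g in current)
        c = cost(*cand)
        # Metropolis rule: accept improvements, sometimes accept worse moves
        if c < c_cur or rng.random() < math.exp((c_cur - c) / max(temp, 1e-9)):
            current, c_cur = cand, c
            if c < c_best:
                best, c_best = cand, c
        temp *= cooling
    return best, c_best

(kp_opt, kd_opt), cost_opt = anneal(simulate)
```

The annealed gains can do no worse than the starting gains on this cost, illustrating why the optimized gain sets outperformed unoptimized benchmarks.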

  1. 76 FR 19257 - National Cancer Control Month, 2011

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-06

    ... Department of Health and Human Services, is tasked with outlining national objectives and benchmarks to... family member or friend, and too many of us understand the terrible toll of this disease. In memory of...

  2. Our Profession Is Changing--Whether We Like It or Not.

    ERIC Educational Resources Information Center

    Eddison, Betty

    1997-01-01

    Discusses outsourcing in the information industry. Highlights include information technology; special libraries; outsourcing in Australia; public librarians; benchmarking and quality control; online searching; and outsourcing as a threat to information professionals. (LRW)

  3. Benchmarking the MCNP code for Monte Carlo modelling of an in vivo neutron activation analysis system.

    PubMed

    Natto, S A; Lewis, D G; Ryde, S J

    1998-01-01

    The Monte Carlo computer code MCNP (version 4A) has been used to develop a personal computer-based model of the Swansea in vivo neutron activation analysis (IVNAA) system. The model included specification of the neutron source (252Cf), collimators, reflectors and shielding. The MCNP model was 'benchmarked' against fast neutron and thermal neutron fluence data obtained experimentally from the IVNAA system. The Swansea system allows two irradiation geometries using 'short' and 'long' collimators, which provide alternative dose rates for IVNAA. The data presented here relate to the short collimator, although results of similar accuracy were obtained using the long collimator. The fast neutron fluence was measured in air at a series of depths inside the collimator. The measurements agreed with the MCNP simulation within the statistical uncertainty (5-10%) of the calculations. The thermal neutron fluence was measured and calculated inside the cuboidal water phantom. The depth of maximum thermal fluence was 3.2 cm (measured) and 3.0 cm (calculated). The width of the 50% thermal fluence level across the phantom at its mid-depth was found to be the same by both MCNP and experiment. This benchmarking exercise has given us a high degree of confidence in MCNP as a tool for the design of IVNAA systems.

  4. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts, a foundation and infrastructure have been built that allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  5. Optimal orientation in flows: providing a benchmark for animal movement strategies.

    PubMed

    McLaren, James D; Shamoun-Baranes, Judy; Dokter, Adriaan M; Klaassen, Raymond H G; Bouten, Willem

    2014-10-06

    Animal movements in air and water can be strongly affected by experienced flow. While various flow-orientation strategies have been proposed and observed, their performance in variable flow conditions remains unclear. We apply control theory to establish a benchmark for time-minimizing (optimal) orientation. We then define optimal orientation for movement in steady flow patterns and, using dynamic wind data, for short-distance mass movements of thrushes (Turdus sp.) and 6000 km non-stop migratory flights by great snipes, Gallinago media. Relative to the optimal benchmark, we assess the efficiency (travel speed) and reliability (success rate) of three generic orientation strategies: full compensation for lateral drift, vector orientation (single-heading movement) and goal orientation (continually heading towards the goal). Optimal orientation is characterized by detours to regions of high flow support, especially when flow speeds approach and exceed the animal's self-propelled speed. In strong predictable flow (short distance thrush flights), vector orientation adjusted to flow on departure is nearly optimal, whereas for unpredictable flow (inter-continental snipe flights), only goal orientation was near-optimally reliable and efficient. Optimal orientation provides a benchmark for assessing efficiency of responses to complex flow conditions, thereby offering insight into adaptive flow-orientation across taxa in the light of flow strength, predictability and navigation capacity.
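Two of the generic strategies named in this abstract, goal orientation (continually heading toward the goal) and full drift compensation (offsetting the heading so the track points straight at the goal), can be contrasted in a deliberately simplified constant-crosswind setting. This is a toy stand-in for the paper's dynamic wind fields; speeds, distances, and step size are made up:

```python
import math

# Toy comparison of flow-orientation strategies in a uniform crosswind.
def step(pos, heading, airspeed, wind, dt):
    x, y = pos
    return (x + (airspeed * math.cos(heading) + wind[0]) * dt,
            y + (airspeed * math.sin(heading) + wind[1]) * dt)

def fly(strategy, goal=(100.0, 0.0), airspeed=10.0, wind=(0.0, 4.0),
        dt=0.1, max_t=100.0):
    pos, t = (0.0, 0.0), 0.0
    while math.dist(pos, goal) > 1.0 and t < max_t:
        pos = step(pos, strategy(pos, goal, airspeed, wind), airspeed, wind, dt)
        t += dt
    return t, math.dist(pos, goal) <= 1.0

def goal_orientation(pos, goal, airspeed, wind):
    # Always head straight at the goal; the flow drifts the track.
    return math.atan2(goal[1] - pos[1], goal[0] - pos[0])

def full_compensation(pos, goal, airspeed, wind):
    bearing = math.atan2(goal[1] - pos[1], goal[0] - pos[0])
    # Offset the heading so the cross-track wind component is cancelled.
    cross = -wind[0] * math.sin(bearing) + wind[1] * math.cos(bearing)
    return bearing - math.asin(max(-1.0, min(1.0, cross / airspeed)))

t_goal, ok_goal = fly(goal_orientation)
t_comp, ok_comp = fly(full_compensation)
```

In uniform flow both strategies reach the goal, but full compensation flies the straight (here time-minimal) track, while goal orientation traces a longer pursuit-like curve; in variable flow, as the abstract notes, the ranking can change.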

  6. Optimal orientation in flows: providing a benchmark for animal movement strategies

    PubMed Central

    McLaren, James D.; Shamoun-Baranes, Judy; Dokter, Adriaan M.; Klaassen, Raymond H. G.; Bouten, Willem

    2014-01-01

    Animal movements in air and water can be strongly affected by experienced flow. While various flow-orientation strategies have been proposed and observed, their performance in variable flow conditions remains unclear. We apply control theory to establish a benchmark for time-minimizing (optimal) orientation. We then define optimal orientation for movement in steady flow patterns and, using dynamic wind data, for short-distance mass movements of thrushes (Turdus sp.) and 6000 km non-stop migratory flights by great snipes, Gallinago media. Relative to the optimal benchmark, we assess the efficiency (travel speed) and reliability (success rate) of three generic orientation strategies: full compensation for lateral drift, vector orientation (single-heading movement) and goal orientation (continually heading towards the goal). Optimal orientation is characterized by detours to regions of high flow support, especially when flow speeds approach and exceed the animal's self-propelled speed. In strong predictable flow (short distance thrush flights), vector orientation adjusted to flow on departure is nearly optimal, whereas for unpredictable flow (inter-continental snipe flights), only goal orientation was near-optimally reliable and efficient. Optimal orientation provides a benchmark for assessing efficiency of responses to complex flow conditions, thereby offering insight into adaptive flow-orientation across taxa in the light of flow strength, predictability and navigation capacity. PMID:25056213

  7. Development of an accelerometer-linked online intervention system to promote physical activity in adolescents.

    PubMed

    Guthrie, Nicole; Bradlyn, Andrew; Thompson, Sharon K; Yen, Sophia; Haritatos, Jana; Dillon, Fred; Cole, Steve W

    2015-01-01

    Most adolescents do not achieve the recommended levels of moderate-to-vigorous physical activity (MVPA), placing them at increased risk for a diverse array of chronic diseases in adulthood. There is a great need for scalable and effective interventions that can increase MVPA in adolescents. Here we report the results of a measurement validation study and a preliminary proof-of-concept experiment testing the impact of Zamzee, an accelerometer-linked online intervention system that combines proximal performance feedback and incentive motivation features to promote MVPA. In a calibration study that parametrically varied levels of physical activity in 31 12-14 year-old children, the Zamzee activity meter was shown to provide a valid measure of MVPA (sensitivity in detecting MVPA = 85.9%, specificity = 97.5%, and r = .94 correspondence with the benchmark RT3 accelerometer system; all p < .0001). In a subsequent randomized controlled multi-site experiment involving 182 middle school-aged children assessed for MVPA over 6 wks, intent-to-treat analyses found that those who received access to the Zamzee intervention had average MVPA levels 54% greater than those of a passive control group (p < 0.0001) and 68% greater than those of an active control group that received access to a commercially available active videogame (p < .0001). Zamzee's effects on MVPA did not diminish significantly over the course of the 6-wk study period, and were statistically significant in both females and males, and in normal- vs. high-BMI subgroups. These results provide promising initial indications that combining the Zamzee activity meter with online proximal performance feedback and incentive motivation features can positively impact MVPA levels in adolescents.
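The sensitivity and specificity figures of merit used in the calibration study come from epoch-by-epoch agreement between the device under test and a reference accelerometer. The epoch labels below are illustrative, not study data:

```python
# Hypothetical sketch: per-epoch MVPA classifications from a device under
# test compared against a reference device, yielding sensitivity
# (true-positive rate) and specificity (true-negative rate).
def sensitivity_specificity(device_mvpa, reference_mvpa):
    pairs = list(zip(device_mvpa, reference_mvpa))
    tp = sum(1 for d, r in pairs if d and r)
    tn = sum(1 for d, r in pairs if not d and not r)
    fp = sum(1 for d, r in pairs if d and not r)
    fn = sum(1 for d, r in pairs if not d and r)
    return tp / (tp + fn), tn / (tn + fp)

reference = [1, 1, 1, 1, 0, 0, 0, 0]  # reference accelerometer epochs
device    = [1, 1, 1, 0, 0, 0, 0, 1]  # device-under-test epochs
sens, spec = sensitivity_specificity(device, reference)  # 0.75, 0.75
```

With the illustrative epochs above, one missed MVPA epoch and one false alarm each cost 25%, analogous to how the study's 85.9% sensitivity and 97.5% specificity summarize agreement with the RT3 benchmark.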

  8. Development of an Accelerometer-Linked Online Intervention System to Promote Physical Activity in Adolescents

    PubMed Central

    Guthrie, Nicole; Bradlyn, Andrew; Thompson, Sharon K.; Yen, Sophia; Haritatos, Jana; Dillon, Fred; Cole, Steve W.

    2015-01-01

    Most adolescents do not achieve the recommended levels of moderate-to-vigorous physical activity (MVPA), placing them at increased risk for a diverse array of chronic diseases in adulthood. There is a great need for scalable and effective interventions that can increase MVPA in adolescents. Here we report the results of a measurement validation study and a preliminary proof-of-concept experiment testing the impact of Zamzee, an accelerometer-linked online intervention system that combines proximal performance feedback and incentive motivation features to promote MVPA. In a calibration study that parametrically varied levels of physical activity in 31 12-14 year-old children, the Zamzee activity meter was shown to provide a valid measure of MVPA (sensitivity in detecting MVPA = 85.9%, specificity = 97.5%, and r = .94 correspondence with the benchmark RT3 accelerometer system; all p < .0001). In a subsequent randomized controlled multi-site experiment involving 182 middle school-aged children assessed for MVPA over 6 wks, intent-to-treat analyses found that those who received access to the Zamzee intervention had average MVPA levels 54% greater than those of a passive control group (p < 0.0001) and 68% greater than those of an active control group that received access to a commercially available active videogame (p < .0001). Zamzee’s effects on MVPA did not diminish significantly over the course of the 6-wk study period, and were statistically significant in both females and males, and in normal- vs. high-BMI subgroups. These results provide promising initial indications that combining the Zamzee activity meter with online proximal performance feedback and incentive motivation features can positively impact MVPA levels in adolescents. PMID:26010359

  9. Benchmarking of TALE- and CRISPR/dCas9-Based Transcriptional Regulators in Mammalian Cells for the Construction of Synthetic Genetic Circuits.

    PubMed

    Lebar, Tina; Jerala, Roman

    2016-10-21

    Transcriptional activator-like effector (TALE)- and CRISPR/Cas9-based designable recognition domains represent a technological breakthrough not only for genome editing but also for building designed genetic circuits. Both platforms are able to target rarely occurring DNA segments, even within complex genomes. TALE and dCas9 domains, genetically fused to transcriptional regulatory domains, can be used for the construction of engineered logic circuits. Here we benchmarked the performance of the two platforms, targeting the same DNA sequences, to compare their advantages for the construction of designed circuits in mammalian cells. Optimal targeting strands for repression and activation of dCas9-based designed transcription factors were identified; both platforms exhibited good orthogonality and were used to construct functionally complete NOR gates. Although the CRISPR/dCas9 system is clearly easier to construct, TALE-based activators were significantly stronger, and the TALE-based platform performed better, especially for the construction of layered circuits.

  10. Preparing Students for Education, Work, and Community: Activity Theory in Task-Based Curriculum Design

    ERIC Educational Resources Information Center

    Campbell, Chris; MacPherson, Seonaigh; Sawkins, Tanis

    2014-01-01

    This case study describes how sociocultural and activity theory were applied in the design of a publicly funded, Canadian Language Benchmark (CLB)-based English as a Second Language (ESL) credential program and curriculum for immigrant and international students in postsecondary institutions in British Columbia, Canada. The ESL Pathways Project…

  11. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1996 revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Tsao, C.L.

    1996-06-01

    This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. This report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This revision also updates benchmark values where appropriate, adds new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.

  12. Benchmarking of dynamic simulation predictions in two software platforms using an upper limb musculoskeletal model

    PubMed Central

    Saul, Katherine R.; Hu, Xiao; Goehler, Craig M.; Vidt, Meghan E.; Daly, Melissa; Velisar, Anca; Murray, Wendy M.

    2014-01-01

    Several open-source or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated to what extent differences in technical implementations in popular simulation software environments result in differences in kinematic predictions for single and multijoint movements using EMG- and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms. PMID:24995410

  13. Benchmarking of dynamic simulation predictions in two software platforms using an upper limb musculoskeletal model.

    PubMed

    Saul, Katherine R; Hu, Xiao; Goehler, Craig M; Vidt, Meghan E; Daly, Melissa; Velisar, Anca; Murray, Wendy M

    2015-01-01

    Several open-source or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated to what extent differences in technical implementations in popular simulation software environments result in differences in kinematic predictions for single and multijoint movements using EMG- and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms.

  14. Benchmarking Outcomes in the Critically Injured Burn Patient

    PubMed Central

    Klein, Matthew B.; Goverman, Jeremy; Hayden, Douglas L.; Fagan, Shawn P.; McDonald-Smith, Grace P.; Alexander, Andrew K.; Gamelli, Richard L.; Gibran, Nicole S.; Finnerty, Celeste C.; Jeschke, Marc G.; Arnoldo, Brett; Wispelwey, Bram; Mindrinos, Michael N.; Xiao, Wenzhong; Honari, Shari E.; Mason, Philip H.; Schoenfeld, David A.; Herndon, David N.; Tompkins, Ronald G.

    2014-01-01

    Objective To determine and compare outcomes with accepted benchmarks in burn care at six academic burn centers. Background Since the 1960s, U.S. morbidity and mortality rates have declined tremendously for burn patients, likely related to improvements in surgical and critical care treatment. We describe the baseline patient characteristics and well-defined outcomes for major burn injuries. Methods We followed 300 adults and 241 children from 2003–2009 through hospitalization using standard operating procedures developed at study onset. We created an extensive database on patient and injury characteristics, anatomic and physiological derangement, clinical treatment, and outcomes. These data were compared with existing benchmarks in burn care. Results Study patients were critically injured as demonstrated by mean %TBSA (41.2±18.3 for adults and 57.8±18.2 for children) and presence of inhalation injury in 38% of the adults and 54.8% of the children. Mortality in adults was 14.1% for those less than 55 years old and 38.5% for those age ≥55 years. Mortality in patients less than 17 years old was 7.9%. Overall, the multiple organ failure rate was 27%. When controlling for age and %TBSA, presence of inhalation injury was not significant. Conclusions This study provides the current benchmark for major burn patients. Mortality rates, notwithstanding significant % TBSA and presence of inhalation injury, have significantly declined compared to previous benchmarks. Modern day surgical and medically intensive management has markedly improved to the point where we can expect patients less than 55 years old with severe burn injuries and inhalation injury to survive these devastating conditions. PMID:24722222

  15. Stratification of unresponsive patients by an independently validated index of brain complexity

    PubMed Central

    Casarotto, Silvia; Comanducci, Angela; Rosanova, Mario; Sarasso, Simone; Fecchio, Matteo; Napolitani, Martino; Pigorini, Andrea; Casali, Adenauer G.; Trimarchi, Pietro D.; Boly, Melanie; Gosseries, Olivia; Bodart, Olivier; Curto, Francesco; Landi, Cristina; Mariotti, Maurizio; Devalle, Guya; Laureys, Steven; Tononi, Giulio

    2016-01-01

    Objective Validating objective, brain‐based indices of consciousness in behaviorally unresponsive patients represents a challenge due to the impossibility of obtaining independent evidence through subjective reports. Here we address this problem by first validating a promising metric of consciousness—the Perturbational Complexity Index (PCI)—in a benchmark population who could confirm the presence or absence of consciousness through subjective reports, and then applying the same index to patients with disorders of consciousness (DOCs). Methods The benchmark population encompassed 150 healthy controls and communicative brain‐injured subjects in various states of conscious wakefulness, disconnected consciousness, and unconsciousness. Receiver operating characteristic curve analysis was performed to define an optimal cutoff for discriminating between the conscious and unconscious conditions. This cutoff was then applied to a cohort of noncommunicative DOC patients (38 in a minimally conscious state [MCS] and 43 in a vegetative state [VS]). Results We found an empirical cutoff that discriminated with 100% sensitivity and specificity between the conscious and the unconscious conditions in the benchmark population. This cutoff resulted in a sensitivity of 94.7% in detecting MCS and allowed the identification of a number of unresponsive VS patients (9 of 43) with high values of PCI, overlapping with the distribution of the benchmark conscious condition. Interpretation Given its high sensitivity and specificity in the benchmark and MCS population, PCI offers a reliable, independently validated stratification of unresponsive patients that has important physiopathological and therapeutic implications. In particular, the high‐PCI subgroup of VS patients may retain a capacity for consciousness that is not expressed in behavior. Ann Neurol 2016;80:718–729 PMID:27717082
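The empirical-cutoff step can be sketched with the usual ROC criterion of maximizing Youden's J (sensitivity + specificity − 1) over a labeled benchmark sample. The abstract does not state which criterion was used, and the PCI values below are invented for illustration:

```python
# Illustrative sketch: choose the cutoff on a benchmark population that
# maximizes Youden's J, a common ROC-based optimal-threshold criterion.
# Values above the cutoff are classified as "conscious".
def best_cutoff(values_conscious, values_unconscious):
    candidates = sorted(set(values_conscious) | set(values_unconscious))
    def youden_j(cut):
        sens = sum(v > cut for v in values_conscious) / len(values_conscious)
        spec = sum(v <= cut for v in values_unconscious) / len(values_unconscious)
        return sens + spec - 1.0
    return max(candidates, key=youden_j)

conscious   = [0.45, 0.52, 0.61, 0.58]  # hypothetical PCI values
unconscious = [0.12, 0.20, 0.25, 0.31]
cut = best_cutoff(conscious, unconscious)  # perfectly separable here
```

When the two groups separate perfectly, as in this toy sample and in the study's benchmark population, the cutoff achieves 100% sensitivity and specificity; the same fixed cutoff can then be applied to unlabeled DOC patients.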

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Sterbentz, James W.; Snoj, Luka

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  17. Benchmarking in emergency health systems.

    PubMed

    Kennedy, Marcus P; Allen, Jacqueline; Allen, Greg

    2002-12-01

    This paper discusses the role of benchmarking as a component of quality management. It describes the historical background of benchmarking, its competitive origin and the requirement in today's health environment for a more collaborative approach. The classical 'functional and generic' types of benchmarking are discussed with a suggestion to adopt a different terminology that describes the purpose and practicalities of benchmarking. Benchmarking is not without risks. The consequence of inappropriate focus and the need for a balanced overview of process is explored. The competition that is intrinsic to benchmarking is questioned and the negative impact it may have on improvement strategies in poorly performing organizations is recognized. The difficulty in achieving cross-organizational validity in benchmarking is emphasized, as is the need to scrutinize benchmarking measures. The cost effectiveness of benchmarking projects is questioned and the concept of 'best value, best practice' in an environment of fixed resources is examined.

  18. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  19. Metric Evaluation Pipeline for 3d Modeling of Urban Scenes

    NASA Astrophysics Data System (ADS)

    Bosch, M.; Leichtman, A.; Chilcott, D.; Goldberg, H.; Brown, M.

    2017-05-01

    Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state-of-the-art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high-resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger-scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline, developed as publicly available open-source software, to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open-source multi-view stereo methods. Data and software are made publicly available to enable further research and planned benchmarking activities.
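Completeness and correctness for point clouds are commonly defined via a distance tolerance: completeness is the fraction of ground-truth points with a model point within the tolerance, and correctness is the symmetric fraction for model points. A brute-force sketch follows; the pipeline's exact definitions and tolerances may differ, and real implementations use spatial indexing rather than the pairwise search shown here:

```python
import math

# Simplified completeness/correctness metrics for a model point cloud
# versus a ground-truth cloud, using a fixed distance tolerance.
def nearest_dist(p, cloud):
    return min(math.dist(p, q) for q in cloud)

def completeness_correctness(model, truth, tol=0.5):
    completeness = sum(nearest_dist(t, model) <= tol for t in truth) / len(truth)
    correctness = sum(nearest_dist(m, truth) <= tol for m in model) / len(model)
    return completeness, correctness

truth = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
model = [(0.1, 0.0, 0.0), (1.1, 0.0, 0.0), (5.0, 0.0, 0.0)]
comp, corr = completeness_correctness(model, truth)
```

In this toy example one ground-truth point is unmatched (lowering completeness) and one model point is spurious (lowering correctness), the two failure modes these metrics separate.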

  20. County business patterns, 1996 : Kansas

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  1. County business patterns, 1997 : Texas

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  2. County business patterns, 1997 : Connecticut

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  3. County business patterns, 1997 : Georgia

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  4. County business patterns, 1997 : Ohio

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  5. County business patterns, 1997 : Indiana

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  6. County business patterns, 1997 : Nevada

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  7. County business patterns, 1997 : Louisiana

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  8. County business patterns, 1997 : Michigan

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  9. County business patterns, 1997 : Iowa

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  10. County business patterns, 1997 : Florida

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  11. County business patterns, 1997 : Arizona

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  12. County business patterns, 1997 : New York

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  13. County business patterns, 1997 : Illinois

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  14. County business patterns, 1997 : Virginia

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  15. County business patterns, 1997 : North Carolina

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  16. County business patterns, 1997 : Pennsylvania

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  17. County business patterns, 1997 : Minnesota

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  18. County business patterns, 1997 : Alabama

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  19. County business patterns, 1997 : Delaware

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  20. County business patterns, 1997 : Hawaii

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  1. County business patterns, 1997 : Vermont

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  2. County business patterns, 1996 : Indiana

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  3. County business patterns, 1997 : Oregon

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  4. County business patterns, 1997 : New Mexico

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  5. County business patterns, 1996 : Texas

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  6. County business patterns, 1996 : Arizona

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  7. County business patterns, 1997 : Kentucky

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  8. County business patterns, 1996 : North Carolina

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  9. County business patterns, 1997 : Tennessee

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  10. County business patterns, 1996 : New York

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  11. County business patterns, 1996 : California

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  12. County business patterns, 1997 : Puerto Rico

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  13. County business patterns, 1997 : Mississippi

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  14. County business patterns, 1996 : Vermont

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  15. County business patterns, 1996 : Oklahoma

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  16. County business patterns, 1997 : Colorado

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  17. County business patterns, 1996 : Maryland

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  18. County business patterns, 1996 : Wyoming

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  19. County business patterns, 1996 : Missouri

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  20. County business patterns, 1996 : Nevada

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  1. County business patterns, 1997 : Missouri

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  2. County business patterns, 1996 : Rhode Island

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  3. County business patterns, 1996 : Michigan

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  4. County business patterns, 1996 : New Jersey

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  5. County business patterns, 1996 : Arkansas

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  6. County business patterns, 1996 : Nebraska

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  7. County business patterns, 1997 : Utah

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  8. County business patterns, 1997 : Wyoming

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  9. County business patterns, 1997 : Rhode Island

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  10. County business patterns, 1996 : Massachusetts

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  11. County business patterns, 1996 : Iowa

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  12. County business patterns, 1996 : Alabama

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  13. County business patterns, 1997 : West Virginia

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  14. County business patterns, 1997 : Washington

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  15. County business patterns, 1996 : South Dakota

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  16. County business patterns, 1996 : Pennsylvania

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  17. County business patterns, 1996 : Maine

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  18. County business patterns, 1996 : Delaware

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  19. County business patterns, 1997 : Maine

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  20. County business patterns, 1997 : Oklahoma

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  1. County business patterns, 1997 : Wisconsin

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  2. County business patterns, 1997 : Kansas

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  3. County business patterns, 1996 : Hawaii

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  4. County business patterns, 1996 : Alaska

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  5. County business patterns, 1996 : Louisiana

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  6. County business patterns, 1996 : Ohio

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  7. County business patterns, 1996 : Montana

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  8. County business patterns, 1996 : North Dakota

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  9. County business patterns, 1996 : Georgia

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  10. County business patterns, 1996 : New Mexico

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  11. County business patterns, 1996 : Mississippi

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  12. County business patterns, 1997 : Montana

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  13. County business patterns, 1997 : South Dakota

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  14. County business patterns, 1997 : New Jersey

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  15. County business patterns, 1996 : Wisconsin

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  16. County business patterns, 1997 : Nebraska

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  17. County business patterns, 1996 : Florida

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  18. County business patterns, 1996 : Utah

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  19. County business patterns, 1996 : Virginia

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  20. County business patterns, 1996 : Connecticut

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  1. County business patterns, 1996 : Puerto Rico

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  2. County business patterns, 1997 : South Carolina

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  3. County business patterns, 1996 : Idaho

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  4. County business patterns, 1996 : New Hampshire

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  5. County business patterns, 1996 : West Virginia

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  6. County business patterns, 1997 : New Hampshire

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  7. County business patterns, 1996 : Tennessee

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  8. County business patterns, 1997 : Maryland

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  9. County business patterns, 1997 : Massachusetts

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  10. County business patterns, 1997 : Idaho

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  11. County business patterns, 1996 : Colorado

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  12. County business patterns, 1997 : Arkansas

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  13. County business patterns, 1996 : Kentucky

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  14. County business patterns, 1996 : Illinois

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  15. County business patterns, 1996 : Oregon

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  16. County business patterns, 1996 : South Carolina

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  17. County business patterns, 1996 : Minnesota

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  18. County business patterns, 1997 : Alaska

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  19. County business patterns, 1997 : North Dakota

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  20. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  1. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so that they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL, and we describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  2. Quantifying the quantum gate fidelity of single-atom spin qubits in silicon by randomized benchmarking.

    PubMed

    Muhonen, J T; Laucht, A; Simmons, S; Dehollain, J P; Kalra, R; Hudson, F E; Freer, S; Itoh, K M; Jamieson, D N; McCallum, J C; Dzurak, A S; Morello, A

    2015-04-22

    Building upon the demonstration of coherent control and single-shot readout of the electron and nuclear spins of individual (31)P atoms in silicon, we present here a systematic experimental estimate of quantum gate fidelities using randomized benchmarking of 1-qubit gates in the Clifford group. We apply this analysis to the electron and the ionized (31)P nucleus of a single P donor in isotopically purified (28)Si. We find average gate fidelities of 99.95% for the electron and 99.99% for the nuclear spin. These values are above certain error correction thresholds and demonstrate the potential of donor-based quantum computing in silicon. By studying the influence of the shape and power of the control pulses, we find evidence that the present limitation to the gate fidelity is mostly related to the external hardware and not the intrinsic behaviour of the qubit.
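
    The fidelities quoted in such experiments are typically extracted by fitting the measured sequence fidelity to the standard randomized-benchmarking decay F(m) = A·p^m + B over sequence length m. A minimal sketch of that analysis on synthetic, noiseless data (the A, B, and p values are illustrative, not taken from the paper):

```python
import math

def rb_decay(m, A, B, p):
    """Standard RB model: sequence fidelity after m Clifford gates."""
    return A * p**m + B

def fit_depolarizing_parameter(lengths, fidelities, A=0.5, B=0.5):
    """Recover p from F(m) = A*p**m + B by a log-linear least-squares fit
    (assumes A and B are known, e.g. from separate SPAM calibration)."""
    xs = list(lengths)
    ys = [math.log((f - B) / A) for f in fidelities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return math.exp(slope)

def average_gate_fidelity(p, d=2):
    """Average fidelity per Clifford for a d-level system (d = 2 for a qubit)."""
    return 1.0 - (1.0 - p) * (d - 1) / d

lengths = range(1, 200, 10)
data = [rb_decay(m, 0.5, 0.5, 0.999) for m in lengths]  # noiseless synthetic data
p_est = fit_depolarizing_parameter(lengths, data)
print(round(average_gate_fidelity(p_est), 6))  # prints 0.9995
```

    On real data the fit is done with uncertainty weighting; the noiseless case above simply recovers the depolarizing parameter exactly.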

  3. Bench-marking effects in the blaming of professionals for incidents of aggression and assault.

    PubMed

    Carifio, J; Lanza, M

    1994-01-01

    This study compared all possible orders of responding to three vignettes describing incidents between a male patient and a female nurse in which the nurse is mildly assaulted, severely assaulted, or verbally abused by the patient (the control condition). Subjects were 32 female senior-year nursing students and 28 practicing nurses. It was found that response levels to a given vignette could predict a respondent's response to the other vignettes. Also, a significant "bench-marking" effect was found: if a subject responded to the mild assault vignette first, the subject's overall response pattern best fit the general nonlinear assignment-of-blame pattern observed, but if the subject responded to the severe assault or control vignette first, this vignette set a bench mark for responding from which the subject's subsequent responses did not deviate greatly, which slightly distorted the subject's V-shaped nonlinear response pattern.

  4. Modification and benchmarking of SKYSHINE-III for use with ISFSI cask arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertel, N.E.; Napolitano, D.G.

    1997-12-01

    Dry cask storage arrays are becoming more and more common at nuclear power plants in the United States. Title 10 of the Code of Federal Regulations, Part 72, limits doses at the controlled area boundary of these independent spent-fuel storage installations (ISFSI) to 0.25 mSv (25 mrem)/yr. The minimum controlled area boundaries of such a facility are determined by cask array dose calculations, which include direct radiation and radiation scattered by the atmosphere, also known as skyshine. NAC International (NAC) uses SKYSHINE-III to calculate the gamma-ray and neutron dose rates as a function of distance from ISFSI arrays. In this paper, we present modifications to SKYSHINE-III that more explicitly model cask arrays. In addition, we have benchmarked the radiation transport methods used in SKYSHINE-III against (60)Co gamma-ray experiments and MCNP neutron calculations.

  5. Educating Next Generation Nuclear Criticality Safety Engineers at the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. D. Bess; J. B. Briggs; A. S. Garcia

    2011-09-01

    One of the challenges in educating our next generation of nuclear safety engineers is the limited opportunity to receive significant experience or hands-on training prior to graduation. Such training is generally restricted to on-the-job training before this new engineering workforce can adequately assess nuclear systems and establish safety guidelines. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) can provide students and young professionals the opportunity to gain experience and enhance critical engineering skills. The ICSBEP and IRPhEP publish annual handbooks that contain evaluations of experiments along with summarized experimental data and peer-reviewed benchmark specifications to support the validation of neutronics codes, nuclear cross-section data, and reactor designs. Participation in the benchmark process not only benefits those who use these handbooks within the international community, but also provides the individual with opportunities for professional development, networking with an international community of experts, and valuable experience for future employment. Traditionally, students have participated in benchmarking activities via internships at national laboratories, universities, or companies involved with the ICSBEP and IRPhEP programs. Additional programs have been developed to facilitate the nuclear education of students while they participate in the benchmark projects. These programs include coordination with the Center for Space Nuclear Research (CSNR) Next Degree Program, collaboration with the Department of Energy Idaho Operations Office to train nuclear and criticality safety engineers, and student evaluations that serve as the basis for Master's theses in nuclear engineering.

  6. Toxicological Benchmarks for Screening of Potential Contaminants of Concern for Effects on Aquatic Biota on the Oak Ridge Reservation, Oak Ridge, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W., II

    1993-01-01

    One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
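
    The two-threshold screening decision rule described in this record reduces to a simple comparison. A minimal sketch (the thresholds and concentration below are invented for illustration, not values from the report):

```python
def screen(concentration, lower, upper):
    """Classify a detected chemical against lower/upper screening benchmarks."""
    if concentration >= upper:
        return "contaminant of concern"          # remedial action likely needed
    if concentration >= lower:
        return "of concern unless data unreliable"
    return "not of concern"

# Hypothetical chemical: ambient 12.0 ug/L vs lower 1.0 and upper 10.0 ug/L.
print(screen(12.0, 1.0, 10.0))  # prints: contaminant of concern
```

    In the report's scheme, several alternative benchmarks would be checked this way and the number of exceedances weighed against each benchmark's conservatism.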

  7. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  8. Benchmarking of surgical complications in gynaecological oncology: prospective multicentre study.

    PubMed

    Burnell, M; Iyer, R; Gentry-Maharaj, A; Nordin, A; Liston, R; Manchanda, R; Das, N; Gornall, R; Beardmore-Gray, A; Hillaby, K; Leeson, S; Linder, A; Lopes, A; Meechan, D; Mould, T; Nevin, J; Olaitan, A; Rufford, B; Shanbhag, S; Thackeray, A; Wood, N; Reynolds, K; Ryan, A; Menon, U

    2016-12-01

    To explore the impact of risk-adjustment on surgical complication rates (CRs) for benchmarking gynaecological oncology centres. Prospective cohort study. Ten UK accredited gynaecological oncology centres. Women undergoing major surgery on a gynaecological oncology operating list. Patient co-morbidity, surgical procedures and intra-operative (IntraOp) complications were recorded contemporaneously by surgeons for 2948 major surgical procedures. Postoperative (PostOp) complications were collected from hospitals and patients. Risk-prediction models for IntraOp and PostOp complications were created using penalised (lasso) logistic regression using over 30 potential patient/surgical risk factors. Observed and risk-adjusted IntraOp and PostOp CRs for individual hospitals were calculated. Benchmarking using colour-coded funnel plots and observed-to-expected ratios was undertaken. Overall, IntraOp CR was 4.7% (95% CI 4.0-5.6) and PostOp CR was 25.7% (95% CI 23.7-28.2). The observed CRs for all hospitals were under the upper 95% control limit for both IntraOp and PostOp funnel plots. Risk-adjustment and use of observed-to-expected ratio resulted in one hospital moving to the >95-98% CI (red) band for IntraOp CRs. Use of only hospital-reported data for PostOp CRs would have resulted in one hospital being unfairly allocated to the red band. There was little concordance between IntraOp and PostOp CRs. The funnel plots and overall IntraOp (≈5%) and PostOp (≈26%) CRs could be used for benchmarking gynaecological oncology centres. Hospital benchmarking using risk-adjusted CRs allows fairer institutional comparison. IntraOp and PostOp CRs are best assessed separately. As hospital under-reporting is common for postoperative complications, use of patient-reported outcomes is important. Risk-adjusted benchmarking of surgical complications for ten UK gynaecological oncology centres allows fairer comparison. © 2016 Royal College of Obstetricians and Gynaecologists.
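
    The funnel-plot benchmarking described above flags a centre when its complication rate falls outside control limits that narrow with surgical volume, and risk adjustment replaces raw rates with observed-to-expected ratios. A hedged sketch of that arithmetic, using a normal approximation (exact binomial limits are typically used in practice) and an invented centre with 95 complications in 300 procedures:

```python
import math

def funnel_limits(p_overall, n, z=1.96):
    """Approximate 95% control limits for a proportion at volume n
    (normal approximation to the binomial)."""
    se = math.sqrt(p_overall * (1 - p_overall) / n)
    return p_overall - z * se, p_overall + z * se

def oe_ratio(observed, expected):
    """Risk-adjusted observed-to-expected complication ratio."""
    return observed / expected

p_overall = 0.257            # overall PostOp complication rate from the study (~26%)
n, observed = 300, 95        # one hypothetical centre's volume and complications
rate = observed / n
lo, hi = funnel_limits(p_overall, n)
flagged = rate > hi          # above the upper 95% control limit
print(round(rate, 3), round(hi, 3), flagged)  # prints: 0.317 0.306 True
```

    Risk adjustment would compute `expected` per centre from the patient-level lasso models before forming the O/E ratio, which is why a centre's band can change after adjustment.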

  9. A novel discrete PSO algorithm for solving job shop scheduling problem to minimize makespan

    NASA Astrophysics Data System (ADS)

    Rameshkumar, K.; Rajendran, C.

    2018-02-01

    In this work, a discrete version of the PSO algorithm is proposed to minimize the makespan of a job shop. A novel schedule builder is utilized to generate active schedules. The discrete PSO is tested on well-known benchmark problems from the literature. The solutions produced by the proposed algorithm are compared with the best-known solutions published in the literature, as well as with a hybrid particle swarm algorithm and a variable neighborhood search PSO algorithm. The solution construction methodology adopted in this study is found to be effective in producing good-quality solutions for the various benchmark job-shop scheduling problems.
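
    The core of any such schedule builder is decoding a discrete particle position into a schedule and evaluating its makespan. A minimal sketch using operation-based permutation decoding into a semi-active schedule; the 3-job, 2-machine instance is invented for illustration and is not one of the cited benchmarks:

```python
def makespan(jobs, sequence):
    """jobs[j] = list of (machine, duration) in technological order.
    `sequence` is a permutation with repetition: each occurrence of job j
    schedules that job's next unscheduled operation as early as the job
    and the machine allow (a semi-active schedule)."""
    job_ready = [0] * len(jobs)   # earliest start time of each job's next op
    mach_ready = {}               # earliest free time of each machine
    next_op = [0] * len(jobs)
    for j in sequence:
        machine, dur = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(machine, 0))
        job_ready[j] = mach_ready[machine] = start + dur
        next_op[j] += 1
    return max(job_ready)

jobs = [[(0, 3), (1, 2)],         # job 0: M0 for 3, then M1 for 2
        [(1, 4), (0, 1)],         # job 1: M1 for 4, then M0 for 1
        [(0, 2), (1, 3)]]         # job 2: M0 for 2, then M1 for 3
print(makespan(jobs, [0, 1, 2, 0, 1, 2]))  # prints 9
```

    A discrete PSO would search over such sequences, using the makespan as the fitness to minimize; generating active (rather than merely semi-active) schedules requires an additional gap-filling step in the builder.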

  10. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.

    1991-01-01

    A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification: all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  11. Assessing validity of observational intervention studies – the Benchmarking Controlled Trials

    PubMed Central

    Malmivaara, Antti

    2016-01-01

    Background: Benchmarking Controlled Trial (BCT) is a concept which covers all observational studies aiming to assess the impact of interventions or health care system features on patients and populations. Aims: To create and pilot-test a checklist for appraising the methodological validity of a BCT. Methods: The checklist was created by extracting the most essential elements from the comprehensive set of criteria in the previous paper on BCTs. Checklists and scientific papers on observational studies and respective systematic reviews were also utilized. Ten BCTs published in the Lancet and in the New England Journal of Medicine were used to assess the feasibility of the created checklist. Results: The appraised studies seem to have several methodological limitations, some of which could be avoided in the planning, conducting and reporting phases of the studies. Conclusions: The checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies. However, the piloted checklist should be validated in further studies. Key messages: Benchmarking Controlled Trial (BCT) is a concept which covers all observational studies aiming to assess the impact of interventions or health care system features on patients and populations. This paper presents a checklist for appraising the methodological validity of BCTs and pilot-tests the checklist with ten BCTs published in leading medical journals. The appraised studies seem to have several methodological limitations, some of which could be avoided in the planning, conducting and reporting phases of the studies. The checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies. PMID:27238631

  12. The British Columbia Nephrologists' Access Study (BCNAS) - a prospective, health services interventional study to develop waiting time benchmarks and reduce wait times for out-patient nephrology consultations.

    PubMed

    Schachter, Michael E; Romann, Alexandra; Djurdev, Ognjenka; Levin, Adeera; Beaulieu, Monica

    2013-08-29

    Early referral and management of high-risk chronic kidney disease may prevent or delay the need for dialysis. Automatic eGFR reporting has increased demand for out-patient nephrology consultations and, in some cases, prolonged queues. In Canada, a national task force suggested the development of waiting time targets, which has not been done for nephrology. We sought to describe waiting times for outpatient nephrology consultations in British Columbia (BC). Data collection occurred in 2 phases: 1) Baseline Description (Jan 18-28, 2010) and 2) Post Waiting Time Benchmark-Introduction (Jan 16-27, 2012). Waiting time was defined as the interval from receipt of the referral letter to assessment. Using a modified Delphi process, nephrologists and family physicians (FPs) developed waiting time targets for commonly referred conditions through meetings and surveys. Rules were developed to weigh in nephrologists', FPs', and patients' perspectives in order to generate waiting time benchmarks. Targets consider comorbidities, eGFR, BP and albuminuria. Referred conditions were assigned a priority score between 1 and 4. BC nephrologists were encouraged to centrally triage referrals to see the first available nephrologist. Waiting time benchmarks were simultaneously introduced to guide patient scheduling. A post-intervention waiting time evaluation was then repeated. In 2010 and 2012, 43/52 (83%) and 46/57 (81%) of BC nephrologists participated. Waiting time decreased from 98 (IQR 44, 157) to 64 (IQR 21, 120) days from 2010 to 2012 (p < .001), despite no change in referral eGFR, demographics, or number of office hrs/wk. Waiting time improved most for high-priority patients. An integrated, provincial initiative to measure wait times, develop waiting benchmarks, and engage physicians in active waiting time management was associated with improved access to nephrologists in BC. Improvements in waiting time were most marked for the highest-priority patients, which suggests that the benchmarks had an influence on triaging behavior. Further research is needed to determine whether this effect is sustainable.

  13. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...

  14. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...

  15. County business patterns, 1997 : U.S. summary

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  16. Lessons Learned over Four Benchmark Exercises from the Community Structure-Activity Resource

    PubMed Central

    Carlson, Heather A.

    2016-01-01

    Preparing datasets and analyzing the results is difficult and time-consuming, and I hope the points raised here will help other scientists avoid some of the thorny issues we wrestled with. PMID:27345761

  17. County business patterns, 1997 : District of Columbia

    DOT National Transportation Integrated Search

    1999-09-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  18. County business patterns, 1996 : District of Columbia

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  19. 77 FR 57090 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-17

    ... bonus payments to three-star plans and eliminating the cap on blended county benchmarks that would... supplement what can be learned from the analyses of administrative and financial data for MAOs, and from an...

  20. County business patterns, 1996 : U.S. summary

    DOT National Transportation Integrated Search

    1998-11-01

    County Business Patterns is an annual series that provides subnational economic data by industry. The series is useful for studying the economic activity of small areas; analyzing economic changes over time; and as a benchmark for statistical...

  1. Hybrid and plug-in hybrid electric vehicle performance testing by the US Department of Energy Advanced Vehicle Testing Activity

    NASA Astrophysics Data System (ADS)

    Karner, Donald; Francfort, James

    The Advanced Vehicle Testing Activity (AVTA), part of the U.S. Department of Energy's FreedomCAR and Vehicle Technologies Program, has conducted testing of advanced technology vehicles since August 1995 in support of the AVTA goal to provide benchmark data for technology modeling and vehicle development programs. The AVTA has tested full-size electric vehicles, urban electric vehicles, neighborhood electric vehicles, and hydrogen internal combustion engine powered vehicles. Currently, the AVTA is conducting baseline performance, battery benchmark, and fleet tests of hybrid electric vehicles (HEV) and plug-in hybrid electric vehicles (PHEV). Testing has included all HEVs produced by major automotive manufacturers and spans over 2.5 million test miles. Testing is currently incorporating PHEVs from four different vehicle converters. The results of all testing are posted on the AVTA web page maintained by the Idaho National Laboratory.

  2. 45 CFR 156.20 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... adjustments made pursuant to the benchmark standards described in § 156.110 of this subchapter. Benefit design... this subchapter. Enrollee satisfaction survey vendor means an organization that has relevant survey administration experience (for example, CAHPS® surveys), organizational survey capacity, and quality control...

  3. Groundwater-quality data in the Western San Joaquin Valley study unit, 2010 - Results from the California GAMA Program

    USGS Publications Warehouse

    Mathany, Timothy M.; Landon, Matthew K.; Shelton, Jennifer L.; Belitz, Kenneth

    2013-01-01

    Groundwater quality in the approximately 2,170-square-mile Western San Joaquin Valley (WSJV) study unit was investigated by the U.S. Geological Survey (USGS) from March to July 2010, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program's Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The WSJV study unit was the twenty-ninth study unit to be sampled as part of the GAMA-PBP. The GAMA Western San Joaquin Valley study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer system, and to facilitate statistically consistent comparisons of untreated groundwater quality throughout California. The primary aquifer system is defined as parts of aquifers corresponding to the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the WSJV study unit. Groundwater quality in the primary aquifer system may differ from the quality in the shallower or deeper water-bearing zones; shallow groundwater may be more vulnerable to surficial contamination. In the WSJV study unit, groundwater samples were collected from 58 wells in 2 study areas (Delta-Mendota subbasin and Westside subbasin) in Stanislaus, Merced, Madera, Fresno, and Kings Counties. Thirty-nine of the wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells), and 19 wells were selected to aid in the understanding of aquifer-system flow and related groundwater-quality issues (understanding wells). 
The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOCs], low-level fumigants, and pesticides and pesticide degradates), constituents of special interest (perchlorate, N-nitrosodimethylamine [NDMA], and 1,2,3-trichloropropane [1,2,3-TCP]), and naturally occurring inorganic constituents (trace elements, nutrients, dissolved organic carbon [DOC], major and minor ions, silica, total dissolved solids [TDS], alkalinity, total arsenic and iron [unfiltered] and arsenic, chromium, and iron species [filtered]). Isotopic tracers (stable isotopes of hydrogen, oxygen, and boron in water, stable isotopes of nitrogen and oxygen in dissolved nitrate, stable isotopes of sulfur in dissolved sulfate, isotopic ratios of strontium in water, stable isotopes of carbon in dissolved inorganic carbon, activities of tritium, and carbon-14 abundance), dissolved standard gases (methane, carbon dioxide, nitrogen, oxygen, and argon), and dissolved noble gases (argon, helium-4, krypton, neon, and xenon) were measured to help identify sources and ages of sampled groundwater. In total, 245 constituents and 8 water-quality indicators were measured. Quality-control samples (blanks, replicates, or matrix spikes) were collected at 16 percent of the wells in the WSJV study unit, and the results for these samples were used to evaluate the quality of the data from the groundwater samples. Blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples all were within acceptable limits of variability. Matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 87 percent of the compounds. This study did not evaluate the quality of water delivered to consumers. 
After withdrawal, groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is delivered to the consumer, not to untreated groundwater. However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH, and to non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. Most inorganic constituents detected in groundwater samples from the 39 grid wells were detected at concentrations less than health-based benchmarks. Detections of organic and special-interest constituents from grid wells sampled in the WSJV study unit also were less than health-based benchmarks. In total, VOCs were detected in 12 of the 39 grid wells sampled (approximately 31 percent), pesticides and pesticide degradates were detected in 9 grid wells (approximately 23 percent), and perchlorate was detected in 15 grid wells (approximately 38 percent). Trace elements, major and minor ions, and nutrients were sampled for at 39 grid wells; most concentrations were less than health-based benchmarks. Exceptions include two detections of arsenic greater than the USEPA maximum contaminant level (MCL-US) of 10 micrograms per liter (μg/L), 20 detections of boron greater than the CDPH notification level (NL-CA) of 1,000 μg/L, 2 detections of molybdenum greater than the USEPA lifetime health advisory level (HAL-US) of 40 μg/L, 1 detection of selenium greater than the MCL-US of 50 μg/L, 2 detections of strontium greater than the HAL-US of 4,000 μg/L, and 3 detections of nitrate greater than the MCL-US of 10 milligrams per liter (as nitrogen). 
Results for inorganic constituents with non-health-based benchmarks (iron, manganese, chloride, sulfate, and TDS) showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in five grid wells. Manganese concentrations greater than the SMCL-CA of 50 μg/L were detected in 16 grid wells. Chloride concentrations greater than the recommended SMCL-CA benchmark of 250 milligrams per liter (mg/L) were detected in 14 grid wells, and concentrations in 5 of these wells also were greater than the upper SMCL-CA benchmark of 500 mg/L. Sulfate concentrations greater than the recommended SMCL-CA benchmark of 250 mg/L were measured in 21 grid wells, and concentrations in 13 of these wells also were greater than the SMCL-CA upper benchmark of 500 mg/L. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 36 grid wells, and concentrations in 20 of these wells also were greater than the SMCL-CA upper benchmark of 1,000 mg/L.
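The exceedance tallies above reduce to checking each well's concentration against a recommended and an upper benchmark. A minimal sketch of that tally for TDS, using the SMCL-CA thresholds quoted above but entirely hypothetical well concentrations (not the study's data):

```python
# Tally wells exceeding the recommended and upper SMCL-CA benchmarks for TDS.
# Thresholds are the SMCL-CA values from the summary; well concentrations
# below are hypothetical illustrations, not measured data.
RECOMMENDED_TDS = 500   # mg/L, SMCL-CA recommended benchmark
UPPER_TDS = 1000        # mg/L, SMCL-CA upper benchmark

tds_by_well = {"well-1": 320, "well-2": 640, "well-3": 1150, "well-4": 980}

over_recommended = [w for w, c in tds_by_well.items() if c > RECOMMENDED_TDS]
over_upper = [w for w, c in tds_by_well.items() if c > UPPER_TDS]

print(len(over_recommended), len(over_upper))  # wells over each benchmark
```

Wells over the upper benchmark are by construction a subset of those over the recommended benchmark, which is why the report gives the upper-benchmark counts as "of these wells".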

  4. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...

  5. Active control of bright electron beams with RF optics for femtosecond microscopy

    DOE PAGES

    Williams, J.; Zhou, F.; Sun, T.; ...

    2017-08-01

A frontier challenge in implementing femtosecond electron microscopy is to gain precise optical control of intense beams to mitigate collective space charge effects for significantly improving the throughput. In this paper, we explore the flexible uses of an RF cavity as a longitudinal lens in a high-intensity beam column for condensing the electron beams both temporally and spectrally, relevant to the design of ultrafast electron microscopy. Through the introduction of a novel atomic grating approach for characterization of electron bunch phase space and control optics, we elucidate the principles for predicting and controlling the phase space dynamics to reach optimal compressions at various electron densities and generating conditions. We provide strategies to identify high-brightness modes, achieving ~100 fs and ~1 eV resolutions with 10⁶ electrons per bunch, and establish the scaling of performance for different bunch charges. These results benchmark the sensitivity and resolution from the fundamental beam brightness perspective and also validate the adaptive optics concept to enable delicate control of the density-dependent phase space structures to optimize the performance, including delivering ultrashort, monochromatic, high-dose, or coherent electron bunches.

  6. Active control of bright electron beams with RF optics for femtosecond microscopy

    PubMed Central

    Williams, J.; Zhou, F.; Sun, T.; Tao, Z.; Chang, K.; Makino, K.; Berz, M.; Duxbury, P. M.; Ruan, C.-Y.

    2017-01-01

A frontier challenge in implementing femtosecond electron microscopy is to gain precise optical control of intense beams to mitigate collective space charge effects for significantly improving the throughput. Here, we explore the flexible uses of an RF cavity as a longitudinal lens in a high-intensity beam column for condensing the electron beams both temporally and spectrally, relevant to the design of ultrafast electron microscopy. Through the introduction of a novel atomic grating approach for characterization of electron bunch phase space and control optics, we elucidate the principles for predicting and controlling the phase space dynamics to reach optimal compressions at various electron densities and generating conditions. We provide strategies to identify high-brightness modes, achieving ∼100 fs and ∼1 eV resolutions with 10⁶ electrons per bunch, and establish the scaling of performance for different bunch charges. These results benchmark the sensitivity and resolution from the fundamental beam brightness perspective and also validate the adaptive optics concept to enable delicate control of the density-dependent phase space structures to optimize the performance, including delivering ultrashort, monochromatic, high-dose, or coherent electron bunches. PMID:28868325

  7. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1994 Revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Mabrey, J.B.

    1994-07-01

This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility.

  8. Benchmarking Model Variants in Development of a Hardware-in-the-Loop Simulation System

    NASA Technical Reports Server (NTRS)

    Aretskin-Hariton, Eliot D.; Zinnecker, Alicia M.; Kratz, Jonathan L.; Culley, Dennis E.; Thomas, George L.

    2016-01-01

    Distributed engine control architecture presents a significant increase in complexity over traditional implementations when viewed from the perspective of system simulation and hardware design and test. Even if the overall function of the control scheme remains the same, the hardware implementation can have a significant effect on the overall system performance due to differences in the creation and flow of data between control elements. A Hardware-in-the-Loop (HIL) simulation system is under development at NASA Glenn Research Center that enables the exploration of these hardware dependent issues. The system is based on, but not limited to, the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k). This paper describes the step-by-step conversion from the self-contained baseline model to the hardware in the loop model, and the validation of each step. As the control model hardware fidelity was improved during HIL system development, benchmarking simulations were performed to verify that engine system performance characteristics remained the same. The results demonstrate the goal of the effort; the new HIL configurations have similar functionality and performance compared to the baseline C-MAPSS40k system.

  9. Raising Quality and Achievement. A College Guide to Benchmarking.

    ERIC Educational Resources Information Center

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  10. Benchmarking in Education: Tech Prep, a Case in Point. IEE Brief Number 8.

    ERIC Educational Resources Information Center

    Inger, Morton

    Benchmarking is a process by which organizations compare their practices, processes, and outcomes to standards of excellence in a systematic way. The benchmarking process entails the following essential steps: determining what to benchmark and establishing internal baseline data; identifying the benchmark; determining how that standard has been…

  11. Benchmarks: The Development of a New Approach to Student Evaluation.

    ERIC Educational Resources Information Center

    Larter, Sylvia

    The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…

  12. [Controlling instruments in radiology].

    PubMed

    Maurer, M

    2013-10-01

    Due to the rising costs and competitive pressures radiological clinics and practices are now facing, controlling instruments are gaining importance in the optimization of structures and processes of the various diagnostic examinations and interventional procedures. It will be shown how the use of selected controlling instruments can secure and improve the performance of radiological facilities. A definition of the concept of controlling will be provided. It will be shown which controlling instruments can be applied in radiological departments and practices. As an example, two of the controlling instruments, material cost analysis and benchmarking, will be illustrated.

  13. Identifying key genes in glaucoma based on a benchmarked dataset and the gene regulatory network.

    PubMed

    Chen, Xi; Wang, Qiao-Ling; Zhang, Meng-Hui

    2017-10-01

The current study aimed to identify key genes in glaucoma based on a benchmarked dataset and gene regulatory network (GRN). Local and global noise was added to the gene expression dataset to produce a benchmarked dataset. Differentially-expressed genes (DEGs) between patients with glaucoma and normal controls were identified utilizing the Linear Models for Microarray Data (Limma) package based on the benchmarked dataset. A total of 5 GRN inference methods, including Zscore, GeneNet, the context likelihood of relatedness (CLR) algorithm, Partial Correlation coefficient with Information Theory (PCIT) and GEne Network Inference with Ensemble of Trees (Genie3), were evaluated using receiver operating characteristic (ROC) and precision and recall (PR) curves. The inference method with the best performance was selected to construct the GRN. Subsequently, topological centrality analysis (degree, closeness and betweenness) was conducted to identify key genes in the GRN of glaucoma. Finally, the key genes were validated by performing reverse transcription-quantitative polymerase chain reaction (RT-qPCR). A total of 176 DEGs were detected from the benchmarked dataset. The ROC and PR curves of the 5 methods were analyzed and it was determined that Genie3 had a clear advantage over the other methods; thus, Genie3 was used to construct the GRN. Following topological centrality analysis, 14 key genes for glaucoma were identified, including IL6, EPHA2 and GSTT1, and 5 of these 14 key genes were validated by RT-qPCR. Therefore, the current study identified 14 key genes in glaucoma, which may be potential biomarkers for use in the diagnosis of glaucoma and may aid in identifying the molecular mechanism of this disease.
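As a rough illustration of the topological-centrality step, the sketch below computes degree and closeness centrality on a tiny hand-made undirected network using only the standard library. The edges are invented for illustration and do not reflect the study's inferred GRN:

```python
from collections import deque

# Toy undirected gene network; edges are illustrative only.
edges = [("IL6", "EPHA2"), ("IL6", "GSTT1"), ("IL6", "G4"),
         ("EPHA2", "G4"), ("GSTT1", "G5")]
graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def degree(g):
    """Number of neighbors of each node."""
    return {n: len(nbrs) for n, nbrs in g.items()}

def closeness(g, node):
    """Inverse of the mean shortest-path distance from `node` (BFS)."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        cur = queue.popleft()
        for nb in g[cur]:
            if nb not in dist:
                dist[nb] = dist[cur] + 1
                queue.append(nb)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

deg = degree(graph)
print(max(deg, key=deg.get))  # the hub gene by degree centrality
```

Ranking nodes by such centrality scores (the study also used betweenness) surfaces the most topologically central genes as key-gene candidates.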

  14. HTR-PROTEUS pebble bed experimental program cores 9 & 10: columnar hexagonal point-on-point packing with a 1:1 moderator-to-fuel pebble ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.

    2014-03-01

PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  15. HTR-PROTEUS PEBBLE BED EXPERIMENTAL PROGRAM CORES 5, 6, 7, & 8: COLUMNAR HEXAGONAL POINT-ON-POINT PACKING WITH A 1:2 MODERATOR-TO-FUEL PEBBLE RATIO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess

    2013-03-01

PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 5, 6, 7, and 8 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 5, 6, 7, and 8 were evaluated and determined to be acceptable benchmark experiments.

  16. HTR-PROTEUS PEBBLE BED EXPERIMENTAL PROGRAM CORES 9 & 10: COLUMNAR HEXAGONAL POINT-ON-POINT PACKING WITH A 1:1 MODERATOR-TO-FUEL PEBBLE RATIO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess

    2013-03-01

PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  17. Benchmarking by HbA1c in a national diabetes quality register--does measurement bias matter?

    PubMed

    Carlsen, Siri; Thue, Geir; Cooper, John Graham; Røraas, Thomas; Gøransson, Lasse Gunnar; Løvaas, Karianne; Sandberg, Sverre

    2015-08-01

Bias in HbA1c measurement could give a wrong impression of the standard of care when benchmarking diabetes care. The aim of this study was to evaluate how measurement bias in HbA1c results may influence the benchmarking process performed by a national diabetes register. Using data from 2012 from the Norwegian Diabetes Register for Adults, we included HbA1c results from 3584 patients with type 1 diabetes attending 13 hospital clinics, and 1366 patients with type 2 diabetes attending 18 GP offices. Correction factors for HbA1c were obtained by comparing the results of the hospital laboratories'/GP offices' external quality assurance scheme with the target value from a reference method. Compared with the uncorrected yearly median HbA1c values for hospital clinics and GP offices, EQA-corrected HbA1c values were within ±0.2% (2 mmol/mol) for all but one hospital clinic, whose value was reduced by 0.4% (4 mmol/mol). Three hospital clinics reduced the proportion of patients with poor glycemic control, one by 9% and two by 4%. For most participants in our study, correcting for measurement bias had little effect on the yearly median HbA1c value or the percentage of patients achieving glycemic goals. However, at three hospital clinics, correcting for measurement bias had an important effect on HbA1c benchmarking results, especially with regard to the percentages of patients achieving glycemic targets. The analytical quality of HbA1c should be taken into account when comparing benchmarking results.
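The abstract does not state whether the EQA-derived correction is additive or multiplicative. Assuming a simple multiplicative factor (reference-method target divided by the laboratory's EQA result), the correction and its effect on a poor-control tally might look like the sketch below; all numbers are hypothetical, not the register's data:

```python
# Illustrative HbA1c bias correction for benchmarking.
# Assumption (not stated in the abstract): a multiplicative factor
# derived from one EQA specimen, factor = reference target / lab result.
eqa_reference = 8.0   # % HbA1c, reference-method target for the EQA sample
eqa_measured = 8.3    # % HbA1c, what this clinic's method reported (biased high)

factor = eqa_reference / eqa_measured

patients = [7.1, 8.4, 9.2, 10.0, 6.8]   # hypothetical clinic results, % HbA1c
corrected = [round(h * factor, 2) for h in patients]

POOR_CONTROL = 9.0  # illustrative threshold for "poor glycemic control"
before = sum(h > POOR_CONTROL for h in patients)
after = sum(h > POOR_CONTROL for h in corrected)
print(before, after)  # → 2 1
```

Even a modest bias (here ~0.3% HbA1c) can move patients across a fixed glycemic target, which is why the benchmarked proportions changed at some clinics while medians barely moved.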

  18. 15 CFR 801.12 - Rules and regulations for the BE-180, Benchmark Survey of Financial Services Transactions between...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 52-Finance and Insurance, and holding companies that own or influence, and are principally engaged in..., brokerages, and other insurance related activities; insurance and employee benefit funds (including pension...

  19. Structural Benchmark Creep Testing for the Advanced Stirling Convertor Heater Head

    NASA Technical Reports Server (NTRS)

    Krause, David L.; Kalluri, Sreeramesh; Bowman, Randy R.; Shah, Ashwin R.

    2008-01-01

The National Aeronautics and Space Administration (NASA) has identified the high efficiency Advanced Stirling Radioisotope Generator (ASRG) as a candidate power source for use on long duration science missions such as lunar applications, Mars rovers, and deep space missions. For the inherent long life times required, a structurally significant design limit for the heater head component of the ASRG Advanced Stirling Convertor (ASC) is creep deformation induced at low stress levels and high temperatures. Demonstrating proof of adequate margins on creep deformation and rupture for the operating conditions and the MarM-247 material of construction is a challenge that the NASA Glenn Research Center is addressing. The combined analytical and experimental program ensures integrity and high reliability of the heater head for its 17-year design life. The life assessment approach starts with an extensive series of uniaxial creep tests on thin MarM-247 specimens that comprise the same chemistry, microstructure, and heat treatment processing as the heater head itself. This effort addresses a scarcity of openly available creep properties for the material, as well as the virtual absence of understanding of the effect on creep properties of very thin walls, fine grains, low stress levels, and high-temperature fabrication steps. The approach continues with a considerable analytical effort, both deterministic, to evaluate the median creep life using nonlinear finite element analysis, and probabilistic, to calculate the heater head's reliability to a higher degree. Finally, the approach includes a substantial structural benchmark creep testing activity to calibrate and validate the analytical work. 
This last element provides high fidelity testing of prototypical heater head test articles; the testing includes the relevant material issues and the essential multiaxial stress state, and applies prototypical and accelerated temperature profiles for timely results in a highly controlled laboratory environment. This paper focuses on the last element and presents a preliminary methodology for creep rate prediction, the experimental methods, test challenges, and results from benchmark testing of a trial MarM-247 heater head test article. The results compare favorably with the analytical strain predictions. A description of other test findings is provided, and recommendations for future test procedures are suggested. The manuscript concludes with describing the potential impact of the heater head creep life assessment and benchmark testing effort on the ASC program.

  20. A national standard for psychosocial safety climate (PSC): PSC 41 as the benchmark for low risk of job strain and depressive symptoms.

    PubMed

    Bailey, Tessa S; Dollard, Maureen F; Richards, Penny A M

    2015-01-01

    Despite decades of research from around the world now permeating occupational health and safety (OHS) legislation and guidelines, there remains a lack of tools to guide practice. Our main goal was to establish benchmark levels of psychosocial safety climate (PSC) that would signify risk of job strain (jobs with high demands and low control) and depression in organizations. First, to justify our focus on PSC, using interview data from Australian employees matched at 2 time points 12 months apart (n = 1081), we verified PSC as a significant leading predictor of job strain and in turn depression. Next, using 2 additional data sets (n = 2097 and n = 1043) we determined benchmarks of organizational PSC (range 12-60) for low-risk (PSC at 41 or above) and high-risk (PSC at 37 or below) of employee job strain and depressive symptoms. Finally, using the newly created benchmarks we estimated the population attributable risk (PAR) and found that improving PSC in organizations to above 37 could reduce 14% of job strain and 16% of depressive symptoms in the working population. The results provide national standards that organizations and regulatory agencies can utilize to promote safer working environments and lower the risk of harm to employee mental health. PsycINFO Database Record (c) 2014 APA, all rights reserved.
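The population attributable risk (PAR) figures quoted above are conventionally computed with Levin's formula, PAR = Pe(RR − 1) / (1 + Pe(RR − 1)), where Pe is the prevalence of exposure and RR the relative risk. A sketch with illustrative inputs (the abstract does not report the underlying prevalence or relative risk, so these values are invented):

```python
def population_attributable_risk(prevalence_exposed, relative_risk):
    """Levin's formula: PAR = Pe*(RR - 1) / (1 + Pe*(RR - 1))."""
    x = prevalence_exposed * (relative_risk - 1.0)
    return x / (1.0 + x)

# Hypothetical inputs: 30% of workers in organisations below the low-risk
# PSC benchmark, with a relative risk of 1.8 for the outcome.
par = population_attributable_risk(0.30, 1.8)
print(round(par * 100, 1))  # percent of cases attributable to low PSC
```

Interpreted this way, the study's 14% and 16% estimates say what fraction of job strain and depressive symptoms in the working population could in principle be removed by lifting organisational PSC above the high-risk benchmark.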

  1. Performance Monitoring of Distributed Data Processing Systems

    NASA Technical Reports Server (NTRS)

    Ojha, Anand K.

    2000-01-01

Test and checkout systems are essential components in ensuring safety and reliability of aircraft and related systems for space missions. A variety of systems, developed over several years, are in use at the NASA/KSC. Many of these systems are configured as distributed data processing systems with the functionality spread over several multiprocessor nodes interconnected through networks. To be cost-effective, a system should take the least amount of resources and perform a given testing task in the least amount of time. There are two aspects of performance evaluation: monitoring and benchmarking. While monitoring is valuable to system administrators in operating and maintaining a system, benchmarking is important in designing and upgrading computer-based systems. These two aspects of performance evaluation are the foci of this project. This paper first discusses various issues related to software, hardware, and hybrid performance monitoring as applicable to distributed systems, and specifically to the TCMS (Test Control and Monitoring System). Next, a comparison of several probing instructions is made to show that the hybrid monitoring technique developed by NIST (the National Institute of Standards and Technology) is the least intrusive and takes only one-fourth of the time taken by software monitoring probes. In the rest of the paper, issues related to benchmarking a distributed system are discussed, and finally a prescription for developing a micro-benchmark for the TCMS is provided.

  2. Sub-Saharan Africa Report, No. 2830

    DTIC Science & Technology

    1983-08-12

proceeds abroad and earn income. This scheme would require sufficient forex reserves. It would provide a counter revenue which could be set...also assisted our credit rating. Forex controls Your Money: Is there a benchmark gold price for the lifting of foreign exchange controls? Dc...first and test it before taking the next step. Your Money: The lifting of forex controls could lead to a volatile exchange rate. De Loor

  3. Active disturbance rejection control based robust output feedback autopilot design for airbreathing hypersonic vehicles.

    PubMed

    Tian, Jiayi; Zhang, Shifeng; Zhang, Yinhui; Li, Tong

    2018-03-01

Since the motion control plant (y^(n) = f(·) + d) was repeatedly used to exemplify how active disturbance rejection control (ADRC) works when it was proposed, the integral chain system subject to matched disturbances is often regarded as a canonical form and even misconstrued as the only form to which ADRC is applicable. In this paper, a systematic approach is first presented for applying ADRC to a generic nonlinear uncertain system with mismatched disturbances, and a robust output feedback autopilot for an airbreathing hypersonic vehicle (AHV) is devised on that basis. The key idea is to employ feedback linearization (FL) and the equivalent input disturbance (EID) technique to decouple the nonlinear uncertain system into several subsystems in canonical form, so that classical or improved linear/nonlinear ADRC controllers can be designed directly for each subsystem. Notably, all disturbances are taken into account when implementing FL, rather than simply being omitted as in previous research, which greatly enhances the controllers' robustness against external disturbances. For autopilot design, the ADRC strategy enables precise tracking of velocity and altitude reference commands in the presence of severe parametric perturbations and atmospheric disturbances using only measurable output information. Bounded-input bounded-output (BIBO) stability of the closed-loop system is analyzed. To illustrate the feasibility and superiority of this novel design, a series of comparative simulations with some prominent and representative methods are carried out on a benchmark longitudinal AHV model. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
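The core of ADRC is an extended state observer (ESO) that estimates the lumped "total disturbance" and cancels it in the control law. A minimal first-order, linear-ESO sketch is given below; the plant, gains, and constant disturbance are illustrative choices, not the paper's AHV model:

```python
# Minimal ADRC sketch: a linear extended state observer (ESO) for the
# first-order plant y' = f + b*u, where f is the unknown total disturbance.
# All numerical choices here are illustrative, not from the paper.
dt, steps = 0.001, 2000
b = 1.0            # known input gain
f = 1.0            # unknown (to the controller) constant total disturbance
r = 1.0            # setpoint
kp = 5.0           # proportional gain acting on the estimated state
omega = 50.0       # observer bandwidth; gains from double pole at -omega
beta1, beta2 = 2 * omega, omega ** 2

y = 0.0            # true plant state
z1, z2 = 0.0, 0.0  # ESO estimates of y and of the total disturbance f

for _ in range(steps):
    u = (kp * (r - z1) - z2) / b      # cancel estimated disturbance, then PD
    e = y - z1                        # observer innovation
    z1 += dt * (z2 + b * u + beta1 * e)
    z2 += dt * (beta2 * e)
    y += dt * (f + b * u)             # true plant dynamics (Euler step)

print(round(y, 2), round(z2, 2))      # y tracks r; z2 converges to f
```

With the disturbance estimate z2 subtracted in the control law, the closed loop behaves approximately like the disturbance-free plant y' = kp(r − y), which is the property the paper exploits after decoupling the AHV dynamics into canonical subsystems.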

  4. Neutron Activation and Thermoluminescent Detector Responses to a Bare Pulse of the CEA Valduc SILENE Critical Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Thomas Martin; Celik, Cihangir; McMahan, Kimberly L.

This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 11, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  5. Open-source platform to benchmark fingerprints for ligand-based virtual screening

    PubMed Central

    2013-01-01

    Similarity-search methods using molecular fingerprints are an important tool for ligand-based virtual screening. A huge variety of fingerprints exist, and their performance, usually assessed in retrospective benchmarking studies using data sets with known actives and known or assumed inactives, depends largely on the validation data sets and the similarity measure used. Comparing new methods to existing ones in any systematic way is rather difficult due to the lack of standard data sets and evaluation procedures. Here, we present a standard platform for the benchmarking of 2D fingerprints. The open-source platform contains all source code, structural data for the actives and inactives used (drawn from three publicly available collections of data sets), and lists of randomly selected query molecules to be used for statistically valid comparisons of methods. This allows the exact reproduction and comparison of results in future studies. The results for 12 standard fingerprints, together with two simple baseline fingerprints, assessed by seven evaluation methods are shown together with the correlations between methods. High correlations were found between the 12 fingerprints, and a careful statistical analysis showed that only the two baseline fingerprints were different from the others in a statistically significant way. High correlations were also found between six of the seven evaluation methods, indicating that despite their seeming differences, many of these methods are similar to each other. PMID:23721588
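
    The retrospective benchmarking loop described above can be sketched in a few lines: rank actives and inactives by Tanimoto similarity to a query fingerprint and score the ranking with one common evaluation method, ROC AUC. The bit-set "fingerprints" here are synthetic toys, not real molecular fingerprints.

```python
# Toy version of a ligand-based virtual-screening benchmark:
# rank actives vs. inactives by Tanimoto similarity to a query
# fingerprint, then score the ranking with ROC AUC.
# Fingerprints here are synthetic bit sets, not real molecules.

def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity of two fingerprint bit sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def roc_auc(scores_pos, scores_neg):
    """AUC = probability a random active outranks a random inactive."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

query = {1, 2, 3, 4, 5}
actives = [{1, 2, 3, 4, 6}, {2, 3, 4, 5, 7}]        # similar to query
inactives = [{10, 11, 12}, {1, 9, 13}, {8, 14, 15}]  # mostly dissimilar

pos = [tanimoto(query, fp) for fp in actives]
neg = [tanimoto(query, fp) for fp in inactives]
print(round(roc_auc(pos, neg), 2))  # → 1.0 (perfect separation here)
```

    A real study would repeat this for many randomly selected queries per data set, which is exactly what the platform's query lists standardize.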

  6. Performance effects of irregular communications patterns on massively parallel multiprocessors

    NASA Technical Reports Server (NTRS)

    Saltz, Joel; Petiton, Serge; Berryman, Harry; Rifkin, Adam

    1991-01-01

    A detailed study of the performance effects of irregular communications patterns on the CM-2 was conducted. The communications capabilities of the CM-2 were characterized under a variety of controlled conditions. In the process of carrying out the performance evaluation, extensive use was made of a parameterized synthetic mesh. In addition, timings with unstructured meshes generated for aerodynamic codes and with a set of sparse matrices with banded patterns of non-zeroes were performed. This benchmarking suite stresses the communications capabilities of the CM-2 in a range of different ways. Benchmark results demonstrate that it is possible to make effective use of much of the massive concurrency available in the communications network.

  7. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as its operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  8. Dynamics of Active Separation Control at High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Pack, LaTunia G.; Seifert, Avi

    2000-01-01

    A series of active flow control experiments was recently conducted at high Reynolds numbers on a generic separated configuration. The model simulates the upper surface of a 20% thick Glauert-Goldschmied type airfoil at zero angle of attack. The flow is fully turbulent since the tunnel sidewall boundary layer flows over the model. The main motivation for the experiments is to generate a comprehensive database for validation of unsteady numerical simulation as a first step in the development of a CFD design tool, without which it would not be possible to effectively utilize the great potential of unsteady flow control. This paper focuses on the dynamics of several key features of the baseline as well as the controlled flow. It was found that the thickness of the upstream boundary layer has a negligible effect on the flow dynamics. It is speculated that separation is caused mainly by the highly convex surface, while viscous effects are less important. The two-dimensional separated flow contains unsteady waves centered on a reduced frequency of 0.9, while in the three-dimensional separated flow, frequencies around reduced frequencies of 0.3 and 1 are active. Several scenarios of resonant wave interaction take place at the separated shear layer and in the pressure recovery region. The unstable reduced-frequency bands for periodic excitation are centered on 1.5 and 5, but these reduced frequencies are based on the length of the baseline bubble, which shortens due to the excitation. Conventional analysis works well for the coherent wave features. Reproduction of these dynamic effects by a numerical simulation would provide benchmark validation.

  9. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    PubMed

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.

  10. The General Concept of Benchmarking and Its Application in Higher Education in Europe

    ERIC Educational Resources Information Center

    Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna

    2009-01-01

    The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…

  11. Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback

    PubMed Central

    Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie

    2017-01-01

    An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistance sensors or infrared sensors, the imaging sensor can realize a finer perception of the environmental light and thus guide more precise lighting control. Before the system is deployed, a large amount of typical imaging lighting data for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed on these datasets. Then the cluster benchmarks of the objective LEEMs can be obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. When the system operates, it first captures the lighting image using a wearable camera. It then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single-LEEM-based or multiple-LEEMs-based control can be applied to obtain an optimal lighting effect. Extensive experimental results show that the proposed system can tune the LED lamp automatically according to environmental luminance changes. PMID:28208781
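
    A single-LEEM-based control step of the kind described can be sketched as follows; the luminance metric, cluster benchmarks, and proportional gain are all illustrative stand-ins, not values from the paper.

```python
# Sketch of single-LEEM feedback control: compute a mean-luminance
# metric from a captured image, compare it with the nearest cluster
# benchmark, and nudge the LED duty cycle toward that target.
# The benchmark values, gain, and image data are illustrative.

def mean_luminance(pixels):
    """Objective LEEM stand-in: average gray level of the image."""
    return sum(pixels) / len(pixels)

def nearest_benchmark(value, benchmarks):
    """Pick the cluster benchmark closest to the measured metric."""
    return min(benchmarks, key=lambda b: abs(b - value))

def adjust_duty(duty, measured, target, gain=0.002):
    """Proportional correction, clamped to a valid PWM duty cycle."""
    return max(0.0, min(1.0, duty + gain * (target - measured)))

benchmarks = [80.0, 120.0, 160.0]      # cluster centers of the LEEM
image = [95, 100, 110, 105, 115]       # toy gray-level samples
m = mean_luminance(image)              # 105.0
target = nearest_benchmark(m, benchmarks)
duty = adjust_duty(0.5, m, target)
print(target, round(duty, 3))  # → 120.0 0.53
```

    The multiple-LEEMs variant would combine several such metrics before choosing the correction, but the capture/compare/adjust loop is the same.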

  12. Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback.

    PubMed

    Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie

    2017-02-09

    An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistance sensors or infrared sensors, the imaging sensor can realize a finer perception of the environmental light and thus guide more precise lighting control. Before the system is deployed, a large amount of typical imaging lighting data for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed on these datasets. Then the cluster benchmarks of the objective LEEMs can be obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. When the system operates, it first captures the lighting image using a wearable camera. It then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single-LEEM-based or multiple-LEEMs-based control can be applied to obtain an optimal lighting effect. Extensive experimental results show that the proposed system can tune the LED lamp automatically according to environmental luminance changes.

  13. Benchmarking reference services: an introduction.

    PubMed

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  14. An Examination of Faculty and Student Online Activity: Predictive Relationships of Student Academic Success in a Learning Management System (LMS)

    ERIC Educational Resources Information Center

    Stamm, Randy Lee

    2013-01-01

    The purpose of this mixed method research study was to examine relationships in student and instructor activity logs and student performance benchmarks specific to enabling early intervention by the instructor in a Learning Management System (LMS). Instructor feedback was collected through a survey instrument to demonstrate perceived importance of…

  15. Invitations to Life's Diversity. Teacher-Friendly Science Activities with Reproducible Handouts in English and Spanish. Grades 3-5. Living Things Science Series.

    ERIC Educational Resources Information Center

    Camp, Carole Ann, Ed.

    This booklet, one of six in the Living Things Science series, presents activities about diversity and classification of living things which address basic "Benchmarks" suggested by the American Association for the Advancement of Science for the Living Environment for grades 3-5. Contents include background information, vocabulary (in…

  16. Invitations to Evolving. Teacher-Friendly Science Activities with Reproducible Handouts in English and Spanish. Grades 3-5. Living Things Science Series.

    ERIC Educational Resources Information Center

    Camp, Carole Ann, Ed.

    This booklet, one of six in the Living Things Science series, presents activities about evolution which address basic "Benchmarks" suggested by the American Association for the Advancement of Science for the Living Environment for grades 3-5. Contents include background information, vocabulary (in English and Spanish), materials,…

  17. Optimizing Estimated Loss Reduction for Active Sampling in Rank Learning

    DTIC Science & Technology

    2008-01-01

    We propose an active learning framework for SVM-based and boosting-based rank learning. Our approach suggests sampling based on maximizing the estimated loss differential over unlabeled data. Experimental results on two benchmark corpora show that the proposed model substantially reduces the labeling effort and rapidly achieves superior performance, with as much as 30% relative improvement over margin-based sampling.

  18. Thrombolysis ImPlementation in Stroke (TIPS): evaluating the effectiveness of a strategy to increase the adoption of best evidence practice – protocol for a cluster randomised controlled trial in acute stroke care

    PubMed Central

    2014-01-01

    Background Stroke is a leading cause of death and disability internationally. One of the three effective interventions in the acute phase of stroke care is thrombolytic therapy with tissue plasminogen activator (tPA), if given within 4.5 hours of onset to appropriate cases of ischaemic stroke. Objectives To test the effectiveness of a multi-component multidisciplinary collaborative approach compared to usual care as a strategy for increasing thrombolysis rates for all stroke patients at intervention hospitals, while maintaining accepted benchmarks for low rates of intracranial haemorrhage and high rates of functional outcomes for both groups at three months. Methods and design A cluster randomised controlled trial of 20 hospitals across 3 Australian states with 2 groups: multi-component multidisciplinary collaborative intervention as the experimental group and usual care as the control group. The intervention is based on behavioural theory and analysis of the steps, roles and barriers relating to rapid assessment for thrombolysis eligibility; it involves a comprehensive range of strategies addressing individual-level and system-level change at each site. The primary outcome is the difference in tPA rates between the two groups post-intervention. The secondary outcome is the proportion of tPA-treated patients in both groups with good functional outcomes (modified Rankin Score (mRS) <2) and the proportion with intracranial haemorrhage (mRS ≥2), compared to international benchmarks. Discussion TIPS will trial a comprehensive, multi-component and multidisciplinary collaborative approach to improving thrombolysis rates at multiple sites. The trial has the potential to identify methods for optimal care which can be implemented for stroke patients during the acute phase. Study findings will include barriers and solutions to effective thrombolysis implementation, and trial outcomes will be published whether significant or not.
Trial registration Australian New Zealand Clinical Trials Registry: ACTRN12613000939796 PMID:24666591

  19. EPA Presentation Regarding the Advanced Light-Duty Powertrain and Hybrid Analysis (ALPHA) Tool

    EPA Pesticide Factsheets

    This page contains a selection of the presentations that EPA has publicly presented about our work on the Midterm Evaluation (MTE). It highlights EPA's benchmarking and modeling activities relating to light duty greenhouse gas (GHG) emissions.

  20. 78 FR 12757 - Agency Information Collection Activities: Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-25

    ... the cap on blended county benchmarks that would otherwise limit QBPs. Through this demonstration, CMS... (MAOs) and up to 10 case studies with MAOs in order to supplement what can be learned from the analyses...

  1. Taking the Battle Upstream: Towards a Benchmarking Role for NATO

    DTIC Science & Technology

    2012-09-01

    Figure 8. World Bank Benchmarking Work on Quality of Governance. "In Search of a Benchmarking Theory for the Public Sector." ... the Ministries of Defense in the countries in which it works. Another interesting innovation is that, for comparison purposes, McKinsey categorized ...

  2. Benchmarks--Standards Comparisons. Math Competencies: EFF Benchmarks Comparison [and] Reading Competencies: EFF Benchmarks Comparison [and] Writing Competencies: EFF Benchmarks Comparison.

    ERIC Educational Resources Information Center

    Kent State Univ., OH. Ohio Literacy Resource Center.

    This document is intended to show the relationship between Ohio's Standards and Competencies, Equipped for the Future's (EFF's) Standards and Components of Performance, and Ohio's Revised Benchmarks. The document is divided into three parts, with Part 1 covering mathematics instruction, Part 2 covering reading instruction, and Part 3 covering…

  3. BMDExpress Data Viewer: A Visualization Tool to Analyze BMDExpress Datasets(SoTC)

    EPA Science Inventory

    Background: Benchmark Dose (BMD) modelling is a mathematical approach used to determine where a dose-response change begins to take place relative to controls following chemical exposure. BMDs are being increasingly applied in regulatory toxicology to estimate acceptable exposure...

  4. Sudden jump in VAP spurs QI to cut rate to near zero.

    PubMed

    2005-07-01

    New device allows staff to brush ventilated patients' teeth three times a day. Just-in-time training for teams includes reorientation to cause/effect diagrams, control charts. Benchmarking data helps earn trust, business support of high-volume payer.

  5. BMDExpress Data Viewer: A Visualization Tool to Analyze BMDExpress Datasets (STC symposium)

    EPA Science Inventory

    Background: Benchmark Dose (BMD) modelling is a mathematical approach used to determine where a dose-response change begins to take place relative to controls following chemical exposure. BMDs are being increasingly applied in regulatory toxicology to estimate acceptable exposure...

  6. How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction

    NASA Astrophysics Data System (ADS)

    Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.

    2015-03-01

    The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment, but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can best be used to evaluate their probabilistic forecasts. In this study, it is shown that the calculated forecast skill can vary depending on the benchmark selected and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy, the benchmark found to have the most utility for EFAS, avoiding the most naïve skill across different hydrological situations, is meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long-term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system, and their use produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. Recommendations for EFAS are to move to routine use of meteorological persistency, an advanced meteorological benchmark and a simple meteorological benchmark in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used in the evaluation of skill in probabilistic hydrological forecasts and which benchmarks are most useful for skill discrimination and avoidance of naïve skill in a large-scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to evaluate which benchmarks to employ, so that forecasters can have trust in their skill evaluation and confidence that their forecasts are indeed better.
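
    The benchmark-relative skill computation at the heart of this study can be sketched with the common ensemble CRPS estimator and a skill score CRPSS = 1 − CRPS_forecast / CRPS_benchmark; the ensembles and observation below are illustrative numbers, not EFAS data.

```python
# Sketch of benchmark-relative skill: compute the CRPS of an ensemble
# forecast and of a benchmark forecast against the observation, then
# CRPSS = 1 - CRPS_forecast / CRPS_benchmark (positive = beats benchmark).
# All values below are illustrative.

def crps(ensemble, obs):
    """Ensemble CRPS estimator: E|X - y| - 0.5 * E|X - X'|."""
    n = len(ensemble)
    term1 = sum(abs(x - obs) for x in ensemble) / n
    term2 = sum(abs(a - b) for a in ensemble for b in ensemble) / (n * n)
    return term1 - 0.5 * term2

obs = 10.0                        # observed discharge (proxy)
forecast = [9.0, 10.0, 11.0]      # HEPS ensemble, centered on obs
benchmark = [14.0, 15.0, 16.0]    # e.g. a climatology-like benchmark

skill = 1.0 - crps(forecast, obs) / crps(benchmark, obs)
print(round(skill, 2))  # → 0.95
```

    The study's point is visible here: swapping in a tougher benchmark (one with a lower CRPS) shrinks the computed skill, so the benchmark choice directly shapes the reported skill.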

  7. Benchmarking and audit of breast units improves quality of care

    PubMed Central

    van Dam, P.A.; Verkinderen, L.; Hauspy, J.; Vermeulen, P.; Dirix, L.; Huizing, M.; Altintas, S.; Papadimitriou, K.; Peeters, M.; Tjalma, W.

    2013-01-01

    Quality Indicators (QIs) are measures of health care quality that make use of readily available hospital inpatient administrative data. Assessment of quality of care can be performed on different levels: national, regional, on a hospital basis or on an individual basis, and it can be a mandatory or voluntary system. In all cases, development of an adequate database for data extraction and feedback of the findings is of paramount importance. In the present paper we performed a Medline search on “QIs and breast cancer” and “benchmarking and breast cancer care”, and we have added some data from personal experience. The current data clearly show that the use of QIs for breast cancer care, regular internal and external audit of performance of breast units, and benchmarking are effective in improving quality of care. Adherence to guidelines improves markedly (particularly regarding adjuvant treatment), and emerging data show that this results in a better outcome. As quality assurance benefits patients, it will be a challenge for the medical and hospital community to develop affordable quality control systems that do not lead to excessive workload. PMID:24753926

  8. A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.

    PubMed

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability and statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters are obtained from the statistical information of the best individuals by a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy is used to maintain convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on higher-dimensional problems, where FEGEDA exhibits better performance than several other algorithms and EDAs. Finally, FEGEDA is used for PID controller optimization of a PMSM and compared with classical PID and GA approaches.
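
    The Gaussian-EDA-with-elitism loop described above (fit a normal distribution to the best individuals, resample, carry the elite over) can be sketched on a one-dimensional benchmark; the population size, elite fraction, and sigma floor are illustrative choices, not the paper's FEGEDA settings.

```python
import random

# Sketch of a Gaussian estimation-of-distribution algorithm with
# elitism: each generation, fit a normal distribution to the best
# individuals, resample the population from it, and keep the best
# individual. Minimizes the 1-D benchmark f(x) = x^2.
# All parameters are illustrative.

def gaussian_eda(pop_size=50, elite_frac=0.3, gens=60, seed=1):
    rng = random.Random(seed)
    f = lambda x: x * x                       # benchmark to minimize
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[: max(2, int(elite_frac * pop_size))]
        mu = sum(elite) / len(elite)          # model: mean of the best
        var = sum((x - mu) ** 2 for x in elite) / len(elite)
        sigma = max(var ** 0.5, 1e-6)         # floor keeps sampling alive
        # elitism: keep the current best, resample the rest
        pop = [pop[0]] + [rng.gauss(mu, sigma) for _ in range(pop_size - 1)]
    return min(pop, key=f)

best = gaussian_eda()
print(abs(best) < 1e-3)  # → True (converges close to the optimum at 0)
```

    A multidimensional version would fit a mean vector and covariance (or per-dimension variances) the same way; the "fast learning rule" of the paper concerns how these parameters are updated between generations.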

  9. A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization

    PubMed Central

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability and statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters are obtained from the statistical information of the best individuals by a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy is used to maintain convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on higher-dimensional problems, where FEGEDA exhibits better performance than several other algorithms and EDAs. Finally, FEGEDA is used for PID controller optimization of a PMSM and compared with classical PID and GA approaches. PMID:24892059

  10. The Surge Capacity for People in Emergencies (SCOPE) study in Australasian hospitals.

    PubMed

    Traub, Matthias; Bradt, David A; Joseph, Anthony P

    2007-04-16

    To measure physical assets in Australasian hospitals required for the management of mass casualties as a result of terrorism or natural disasters. A cross-sectional survey of Australian and New Zealand hospitals. All emergency department directors of Australasian College for Emergency Medicine (ACEM)-accredited hospitals, as well as private and non-ACEM accredited emergency departments staffed by ACEM Fellows in metropolitan Sydney. Numbers of operating theatres, intensive care unit (ICU) beds and x-ray machines; state of preparedness using benchmarks defined by the Centers for Disease Control and Prevention in the United States. We found that 61%-82% of critically injured patients would not have immediate access to operative care, 34%-70% would have delayed access to an ICU bed, and 42% of the less critically injured would have delayed access to x-ray facilities. Our study demonstrates that physical assets in Australasian public hospitals do not meet US hospital preparedness benchmarks for mass casualty incidents. We recommend national agreement on disaster preparedness benchmarks and periodic publication of hospital performance indicators to enhance disaster preparedness.

  11. Evolutionary Optimization of a Geometrically Refined Truss

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem, and predominantly traditional optimization theory has been applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has previously been applied to compliant mechanism design. This technique combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation, genetic algorithms and differential evolution, to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, a multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
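
    Of the two evolutionary-computation components named above, differential evolution is easy to sketch in isolation: mutate with scaled difference vectors, cross over, and keep the trial point only if it improves the objective. The sphere function stands in for the truss mass/stress objective, and all parameters are illustrative.

```python
import random

# Sketch of classic DE/rand/1/bin differential evolution, minimizing
# a stand-in objective (the sphere function). In the TP's setting the
# objective would come from a finite element analysis of the truss.
# All parameters are illustrative.

def differential_evolution(dim=3, np_=20, f_weight=0.5, cr=0.9,
                           gens=200, seed=7):
    rng = random.Random(seed)
    cost = lambda v: sum(x * x for x in v)    # stand-in objective
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(np_)]
    for _ in range(gens):
        for i in range(np_):
            # three distinct donors, excluding the target vector
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)       # guarantee one mutated gene
            trial = [a[k] + f_weight * (b[k] - c[k])
                     if (rng.random() < cr or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            if cost(trial) <= cost(pop[i]):   # greedy selection
                pop[i] = trial
    return min(pop, key=cost)

best = differential_evolution()
print(sum(x * x for x in best) < 1e-3)  # → True
```

    Because selection only ever replaces a vector with a better trial, the population cost is monotonically non-increasing, which is what makes DE attractive as a robust black-box optimizer for structural objectives.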

  12. Groundwater-quality data in the Monterey–Salinas shallow aquifer study unit, 2013: Results from the California GAMA Program

    USGS Publications Warehouse

    Goldrath, Dara A.; Kulongoski, Justin T.; Davis, Tracy A.

    2016-09-01

    Groundwater quality in the 3,016-square-mile Monterey–Salinas Shallow Aquifer study unit was investigated by the U.S. Geological Survey (USGS) from October 2012 to May 2013 as part of the California State Water Resources Control Board Groundwater Ambient Monitoring and Assessment (GAMA) Program’s Priority Basin Project. The GAMA Monterey–Salinas Shallow Aquifer study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the shallow-aquifer systems in parts of Monterey and San Luis Obispo Counties and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The shallow-aquifer system in the Monterey–Salinas Shallow Aquifer study unit was defined as those parts of the aquifer system shallower than the perforated depth intervals of public-supply wells, which generally corresponds to the part of the aquifer system used by domestic wells. Groundwater quality in the shallow aquifers can differ from the quality in the deeper water-bearing zones; shallow groundwater can be more vulnerable to surficial contamination. Samples were collected from 170 sites that were selected by using a spatially distributed, randomized grid-based method. The study unit was divided into 4 study areas, each study area was divided into grid cells, and 1 well was sampled in each of the 100 grid cells (grid wells). The grid wells were domestic wells or wells with screen depths similar to those in nearby domestic wells.
    A greater spatial density of data was achieved in 2 of the study areas by dividing grid cells in those study areas into subcells, and in 70 subcells, samples were collected from exterior faucets at sites where there were domestic wells or wells with screen depths similar to those in nearby domestic wells (shallow-well tap sites). Field water-quality indicators (dissolved oxygen, water temperature, pH, and specific conductance) were measured, and samples for analysis of inorganic constituents (trace elements, nutrients, major and minor ions, silica, total dissolved solids, and alkalinity) were collected at all 170 sites. In addition to these constituents, the samples from grid wells were analyzed for organic constituents (volatile organic compounds, pesticides and pesticide degradates), constituents of special interest (perchlorate and N-nitrosodimethylamine, or NDMA), radioactive constituents (radon-222 and gross-alpha and gross-beta radioactivity), and geochemical and age-dating tracers (stable isotopes of carbon in dissolved inorganic carbon, carbon-14 abundances, stable isotopes of hydrogen and oxygen in water, and tritium activities). Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at up to 11 percent of the wells in the Monterey–Salinas Shallow Aquifer study unit, and the results for these samples were used to evaluate the quality of the data from the groundwater samples. With the exception of trace elements, blanks rarely contained detectable concentrations of any constituent, indicating that contamination from sample-collection procedures was not a significant source of bias in the data for the groundwater samples. Low concentrations of some trace elements were detected in blanks; therefore, the data were re-censored at higher reporting levels. Replicate samples generally were within the limits of acceptable analytical reproducibility.
The median values of matrix-spike recoveries were within the acceptable range (70 to 130 percent) for the volatile organic compounds (VOCs) and N-nitrosodimethylamine (NDMA), but were only approximately 64 percent for pesticides and pesticide degradates.

The sample-collection protocols used in this study were designed to obtain representative samples of groundwater. The quality of groundwater can differ from the quality of drinking water because water chemistry can change as a result of contact with plumbing systems or the atmosphere; because of treatment, disinfection, or blending with water from other sources; or because of some combination of these. Water quality in domestic wells is not regulated in California; however, to provide context for the water-quality data presented in this report, results were compared to benchmarks established for drinking-water quality. The primary comparison benchmarks were maximum contaminant levels established by the U.S. Environmental Protection Agency and the State of California (MCL-US and MCL-CA, respectively). Non-regulatory benchmarks were used for constituents without maximum contaminant levels (MCLs), including Health Based Screening Levels (HBSLs) developed by the USGS and State of California secondary maximum contaminant levels (SMCL-CA) and notification levels. Most constituents detected in samples from the Monterey–Salinas Shallow Aquifer study unit had concentrations less than their respective benchmarks.

Of the 148 organic constituents analyzed in the 100 grid-well samples, 38 were detected, and all concentrations were less than the benchmarks. Volatile organic compounds were detected in 26 of the grid wells, and pesticides and pesticide degradates were detected in 28 grid wells.
The special-interest constituent NDMA was detected above the HBSL in three samples, one of which also had a perchlorate concentration greater than the MCL-CA.

Of the inorganic constituents, 6 were detected at concentrations above their respective MCL benchmarks in grid-well samples: arsenic (5 grid wells above the MCL of 10 micrograms per liter, μg/L), selenium (3 grid wells, MCL of 50 μg/L), uranium (4 grid wells, MCL of 30 μg/L), nitrate (16 grid wells, MCL of 10 milligrams per liter, mg/L), adjusted gross alpha particle activity (10 grid wells, MCL of 15 picocuries per liter, pCi/L), and gross beta particle activity (1 grid well, MCL of 50 pCi/L). An additional 4 inorganic constituents were detected at concentrations above their respective HBSL benchmarks in grid-well samples: boron (1 grid well above the HBSL of 6,000 μg/L), manganese (8 grid wells, HBSL of 300 μg/L), molybdenum (6 grid wells, HBSL of 40 μg/L), and strontium (6 grid wells, HBSL of 4,000 μg/L). Of the inorganic constituents, 4 were detected at concentrations above their non-health-based SMCL benchmarks in grid-well samples: iron (9 grid wells above the SMCL of 300 μg/L), chloride (7 grid wells, SMCL of 500 mg/L), sulfate (14 grid wells, SMCL of 500 mg/L), and total dissolved solids (27 grid wells, SMCL of 1,000 mg/L).

Of the inorganic constituents analyzed at the 70 shallow-well tap sites, 10 were detected at concentrations above the benchmarks. Of these, 3 were detected at concentrations above their respective MCL benchmarks: arsenic (2 shallow-well tap sites above the MCL of 10 μg/L), uranium (2 shallow-well tap sites, MCL of 30 μg/L), and nitrate (24 shallow-well tap sites, MCL of 10 mg/L).
An additional 3 inorganic constituents were detected above their respective HBSL benchmarks in shallow-well tap sites: manganese (4 shallow-well tap sites above the HBSL of 300 μg/L), molybdenum (4 shallow-well tap sites, HBSL of 40 μg/L), and zinc (2 shallow-well tap sites, HBSL of 2,000 μg/L). Of the inorganic constituents, 4 were detected at concentrations above their non-health based SMCL benchmarks in shallow-well tap sites: iron (6 shallow-well tap sites above the SMCL of 300 μg/L), chloride (1 shallow-well tap site, SMCL of 500 mg/L), sulfate (9 shallow-well tap sites, SMCL of 500 mg/L), and total dissolved solids (15 shallow-well tap sites, SMCL of 1,000 mg/L).
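Tallies like those above reduce to comparing each sampled concentration against its benchmark and counting the exceedances. A minimal sketch, using made-up concentrations rather than the study's data:

```python
def count_exceedances(concentrations, benchmark):
    """Return how many sample concentrations exceed a benchmark value."""
    return sum(1 for c in concentrations if c > benchmark)

# Hypothetical arsenic results (ug/L) from five wells, compared to the
# MCL-US of 10 ug/L; values are illustrative only.
arsenic = [2.1, 14.0, 0.5, 11.3, 9.9]
wells_above_mcl = count_exceedances(arsenic, 10.0)
print(wells_above_mcl)  # 2 wells above the MCL
```

The same comparison is repeated per constituent and per benchmark type (MCL, HBSL, SMCL) to build the summary counts reported in the abstract.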

  13. A benchmarking method to measure dietary absorption efficiency of chemicals by fish.

    PubMed

    Xiao, Ruiyang; Adolfsson-Erici, Margaretha; Åkerman, Gun; McLachlan, Michael S; MacLeod, Matthew

    2013-12-01

    Understanding the dietary absorption efficiency of chemicals in the gastrointestinal tract of fish is important from both a scientific and a regulatory point of view. However, reported fish absorption efficiencies for well-studied chemicals are highly variable. In the present study, the authors developed and exploited an internal chemical benchmarking method that has the potential to reduce uncertainty and variability and, thus, to improve the precision of measurements of fish absorption efficiency. The authors applied the benchmarking method to measure the gross absorption efficiency for 15 chemicals with a wide range of physicochemical properties and structures. They selected 2,2',5,6'-tetrachlorobiphenyl (PCB53) and decabromodiphenyl ethane as absorbable and nonabsorbable benchmarks, respectively. Quantities of chemicals determined in fish were benchmarked to the fraction of PCB53 recovered in fish, and quantities of chemicals determined in feces were benchmarked to the fraction of decabromodiphenyl ethane recovered in feces. The performance of the benchmarking procedure was evaluated based on the recovery of the test chemicals and precision of absorption efficiency from repeated tests. Benchmarking did not improve the precision of the measurements; after benchmarking, however, the median recovery for 15 chemicals was 106%, and variability of recoveries was reduced compared with before benchmarking, suggesting that benchmarking could account for incomplete extraction of chemical in fish and incomplete collection of feces from different tests. © 2013 SETAC.
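The benchmarking correction described above can be illustrated with a small sketch. The function below is a simplified, hypothetical rendering (not the paper's exact equations): amounts measured in fish and feces are divided by the fractional recoveries of the absorbable benchmark (PCB53, in fish) and the nonabsorbable benchmark (decabromodiphenyl ethane, in feces) before a gross absorption efficiency is computed.

```python
def benchmarked_absorption(fish_amt, feces_amt, f_absorbable, f_nonabsorbable):
    """Correct measured chemical amounts by benchmark recoveries, then
    compute gross absorption efficiency as absorbed / (absorbed + excreted).
    A hypothetical simplification of internal chemical benchmarking."""
    corrected_fish = fish_amt / f_absorbable        # e.g. PCB53 recovery in fish
    corrected_feces = feces_amt / f_nonabsorbable   # e.g. DBDPE recovery in feces
    return corrected_fish / (corrected_fish + corrected_feces)

# Illustrative numbers: 6 units recovered in fish, 4 in feces, with
# benchmark recoveries of 80% (fish) and 90% (feces).
eff = benchmarked_absorption(fish_amt=6.0, feces_amt=4.0,
                             f_absorbable=0.8, f_nonabsorbable=0.9)
print(round(eff, 3))
```

Dividing by the benchmark recoveries is what lets the method compensate for incomplete extraction from fish and incomplete feces collection across repeated tests.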

  14. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    DOE PAGES

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9 % and 2.7 % greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  15. A health risk benchmark for the neurologic effects of styrene: comparison with NOAEL/LOAEL approach.

    PubMed

    Rabovsky, J; Fowles, J; Hill, M D; Lewis, D C

    2001-02-01

    Benchmark dose (BMD) analysis was used to estimate an inhalation benchmark concentration for styrene neurotoxicity. Quantal data on neuropsychologic test results from styrene-exposed workers [Mutti et al. (1984). American Journal of Industrial Medicine, 5, 275-286] were used to quantify neurotoxicity, defined as the percent of tested workers who responded abnormally to ≥1, ≥2, or ≥3 out of a battery of eight tests. Exposure was based on previously published results on mean urinary mandelic- and phenylglyoxylic-acid levels in the workers, converted to air styrene levels (15, 44, 74, or 115 ppm). Nonstyrene-exposed workers from the same region served as a control group. Maximum-likelihood estimates (MLEs) and BMDs at 5 and 10% response levels of the exposed population were obtained from log-normal analysis of the quantal data. The highest MLE was 9 ppm (BMD = 4 ppm) styrene and represents abnormal responses to ≥3 tests by 10% of the exposed population. The most health-protective MLE was 2 ppm styrene (BMD = 0.3 ppm) and represents abnormal responses to ≥1 test by 5% of the exposed population. A no-observed-adverse-effect-level/lowest-observed-adverse-effect-level (NOAEL/LOAEL) analysis of the same quantal data showed that workers in all styrene exposure groups responded abnormally to ≥1, ≥2, or ≥3 tests, compared to controls, and the LOAEL was 15 ppm. A comparison of the BMD and NOAEL/LOAEL analyses suggests that at air styrene levels below the LOAEL, a segment of the worker population may be adversely affected. The benchmark approach will be useful for styrene noncancer risk assessment by providing a more accurate estimate of potential risk that should, in turn, help to reduce the uncertainty that is a common problem in setting exposure levels.
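The study's BMDs come from a maximum-likelihood log-normal fit. As a rough illustration of the underlying idea of a benchmark concentration, the sketch below instead linearly interpolates between hypothetical dose groups to find the exposure at which the abnormal-response fraction crosses a chosen benchmark response; the doses and response fractions are invented, not Mutti et al.'s data.

```python
def interpolated_bmd(doses, responses, bmr):
    """Find the dose where the observed response fraction crosses the
    benchmark response (bmr) by linear interpolation between dose groups.
    A crude stand-in for the paper's log-normal maximum-likelihood fit."""
    pairs = list(zip(doses, responses))
    for (d0, r0), (d1, r1) in zip(pairs, pairs[1:]):
        if r0 <= bmr <= r1:
            return d0 + (bmr - r0) * (d1 - d0) / (r1 - r0)
    raise ValueError("benchmark response outside observed range")

# Hypothetical abnormal-response fractions at four exposure levels (ppm)
doses = [0, 15, 44, 74]
resp = [0.02, 0.08, 0.20, 0.35]
bmd10 = interpolated_bmd(doses, resp, 0.10)  # dose at 10% response
print(round(bmd10, 1))
```

A proper BMD analysis would fit a dose-response model to all groups at once and also report a lower confidence bound (BMDL); interpolation only conveys the "dose at a fixed response level" concept.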

  16. How to achieve and prove performance improvement - 15 years of experience in German wastewater benchmarking.

    PubMed

    Bertzbach, F; Franz, T; Möller, K

    2012-01-01

    This paper shows the results of performance improvement achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A large number of changes in operational practice, and also in achieved annual savings, can be shown, induced in particular by benchmarking at process level. Investigation of this question produces some general findings for the inclusion of performance improvement in a benchmarking project and for the communication of its results. Thus, we elaborate on the concept of benchmarking at both utility and process level, which is still a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking it should be made quite clear that this outcome depends, on the one hand, on a well-conducted benchmarking programme and, on the other, on the individual situation within each participating utility.

  17. More allopurinol is needed to get gout patients < 0.36 mmol/l: a gout audit in the form of a before-after trial.

    PubMed

    Arroll, Bruce; Bennett, Merran; Dalbeth, Nicola; Hettiarachchi, Dilanka; Ben, Cribben; Shelling, Ginnie

    2009-12-01

    To establish a benchmark for gout control using the proportion of patients with serum uric acid (SUA) < 0.36 mmol/L, to assess patients' understanding of their preventive medication, and to trial a mail-and-phone intervention to improve gout control. Patients clinically diagnosed with gout, with baseline SUAs, were identified in two South Auckland practices. A mail-and-phone intervention aimed at improving the control of gout was introduced. Intervention #1 took place in one practice over three months. Intervention #2 occurred in the other practice four to 16 months following baseline. There was no significant change in SUA from intervention #1 after three months. The second intervention by mail and phone resulted in improved SUA levels, with a greater proportion of patients with SUA < 0.36 mmol/L; the difference in means was statistically significant (p = 0.039, two-tailed paired t-test). The benchmark for usual care was established at 38-43% of patients with SUA < 0.36 mmol/L; it was possible to increase this from 38% to 50%. Issues identified included lack of understanding of the need for long-term allopurinol, and diagnosis and management for patients for whom English is not their first language. 1. Community workers who speak Pacific languages may assist GPs in communicating with non-English-speaking patients. 2. Alternative diagnoses should be considered in symptomatic patients with prolonged normouricaemia. 3. GPs should gradually introduce allopurinol after acute gout attacks, emphasising the importance of prophylaxis. 4. A campaign to inform patients about the benefits of allopurinol should be considered. 5. A simple one-keystroke audit is needed for gout audit and benchmarking. 6. GP guidelines for gout diagnosis and management should be available.
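The before-after comparison above rests on a two-tailed paired t-test. A minimal sketch with hypothetical SUA values (not the audit's data):

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic and degrees of freedom for before/after values."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    # t = mean difference divided by its standard error
    return mean(diffs) / (stdev(diffs) / math.sqrt(n)), n - 1

# Hypothetical SUA levels (mmol/L) before and after a mail/phone intervention
before = [0.45, 0.52, 0.40, 0.48, 0.55, 0.42]
after = [0.38, 0.47, 0.36, 0.41, 0.50, 0.40]
t, df = paired_t(before, after)
print(round(t, 2), df)
```

The p-value would then be read from the t distribution with `df` degrees of freedom (e.g. via `scipy.stats.t.sf`); the stdlib alone does not provide it.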

  18. Groundwater-quality data in the Borrego Valley, Central Desert, and Low-Use Basins of the Mojave and Sonoran Deserts study unit, 2008-2010--Results from the California GAMA Program

    USGS Publications Warehouse

    Mathany, Timothy M.; Wright, Michael T.; Beuttel, Brandon S.; Belitz, Kenneth

    2012-01-01

    Groundwater quality in the 12,103-square-mile Borrego Valley, Central Desert, and Low-Use Basins of the Mojave and Sonoran Deserts (CLUB) study unit was investigated by the U.S. Geological Survey (USGS) from December 2008 to March 2010, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program's Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The CLUB study unit was the twenty-eighth study unit to be sampled as part of the GAMA-PBP. The GAMA CLUB study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer systems, and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The primary aquifer systems (hereinafter referred to as primary aquifers) are defined as parts of aquifers corresponding to the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the CLUB study unit. The quality of groundwater in shallow or deep water-bearing zones may differ from the quality of groundwater in the primary aquifers; shallow groundwater may be more vulnerable to surficial contamination. In the CLUB study unit, groundwater samples were collected from 52 wells in 3 study areas (Borrego Valley, Central Desert, and Low-Use Basins of the Mojave and Sonoran Deserts) in San Bernardino, Riverside, Kern, San Diego, and Imperial Counties. Forty-nine of the wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells), and three wells were selected to aid in evaluation of water-quality issues (understanding wells). 
The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOCs], pesticides and pesticide degradates, and pharmaceutical compounds), constituents of special interest (perchlorate and N-nitrosodimethylamine [NDMA]), naturally-occurring inorganic constituents (trace elements, nutrients, major and minor ions, silica, total dissolved solids [TDS], alkalinity, and species of inorganic chromium), and radioactive constituents (radon-222, radium isotopes, and gross alpha and gross beta radioactivity). Naturally-occurring isotopes (stable isotopes of hydrogen, oxygen, boron, and strontium in water, stable isotopes of carbon in dissolved inorganic carbon, activities of tritium, and carbon-14 abundance) and dissolved noble gases also were measured to help identify the sources and ages of sampled groundwater. In total, 223 constituents and 12 water-quality indicators were investigated. Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at up to 10 percent of the wells in the CLUB study unit, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples generally were within the limits of acceptable analytical reproducibility. Median matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 85 percent of the compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, untreated groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is delivered to the consumer, not to untreated groundwater. 
However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH, and to non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. Most inorganic constituents detected in groundwater samples from the 49 grid wells were detected at concentrations less than drinking-water benchmarks. In addition, all detections of organic constituents from the CLUB study-unit grid-well samples were less than health-based benchmarks. In total, VOCs were detected in 17 of the 49 grid wells sampled (approximately 35 percent), pesticides and pesticide degradates were detected in 5 of the 47 grid wells sampled (approximately 11 percent), and perchlorate was detected in 41 of 49 grid wells sampled (approximately 84 percent). Trace elements, major and minor ions, and nutrients were sampled for at 39 grid wells, and radioactive constituents were sampled for at 23 grid wells; most detected concentrations were less than health-based benchmarks. 
Exceptions in the grid-well samples include seven detections of arsenic greater than the USEPA maximum contaminant level (MCL-US) of 10 micrograms per liter (μg/L); four detections of boron greater than the CDPH notification level (NL-CA) of 1,000 μg/L; six detections of molybdenum greater than the USEPA lifetime health advisory level (HAL-US) of 40 μg/L; two detections of uranium greater than the MCL-US of 30 μg/L; nine detections of fluoride greater than the CDPH maximum contaminant level (MCL-CA) of 2 milligrams per liter (mg/L); one detection of nitrite plus nitrate (NO2− + NO3−), as nitrogen, greater than the MCL-US of 10 mg/L; and four detections of gross alpha radioactivity (72-hour count), and one detection of gross alpha radioactivity (30-day count), greater than the MCL-US of 15 picocuries per liter. Results for constituents with non-regulatory benchmarks set for aesthetic concerns showed that a manganese concentration greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 50 μg/L was detected in one grid well. Chloride concentrations greater than the recommended SMCL-CA benchmark of 250 mg/L were detected in three grid wells, and one of these wells also had a concentration that was greater than the upper SMCL-CA benchmark of 500 mg/L. Sulfate concentrations greater than the recommended SMCL-CA benchmark of 250 mg/L were measured in six grid wells. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 20 grid wells, and concentrations in 2 of these wells also were greater than the SMCL-CA upper benchmark of 1,000 mg/L.

  19. Benchmarking clinical photography services in the NHS.

    PubMed

    Arbon, Giles

    2015-01-01

    Benchmarking is used by services across the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have a program in place and have had to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlighted valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  20. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. 
More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone. Using a ROV to place and remove sensors on the benchmarks will significantly reduce the number of sensors required by the community to monitor offshore strain in subduction zones.

  1. Best Practices and Testing Protocols for Benchmarking ORR Activities of Fuel Cell Electrocatalysts Using Rotating Disk Electrode

    DOE PAGES

    Kocha, Shyam S.; Shinozaki, Kazuma; Zack, Jason W.; ...

    2017-05-02

    Thin-film-rotating disk electrodes (TF-RDEs) are the half-cell electrochemical system of choice for rapid screening of oxygen reduction reaction (ORR) activity of novel Pt supported on carbon black supports (Pt/C) electrocatalysts. It has been shown that the magnitude of the measured ORR activity and reproducibility are highly dependent on the system cleanliness, evaluation protocols, and operating conditions as well as ink formulation, composition, film drying, and the resultant film thickness and uniformity. Accurate benchmarks of baseline Pt/C catalysts evaluated using standardized protocols and best practices are necessary to expedite ultra-low-platinum group metal (PGM) catalyst development that is crucial for the imminent commercialization of fuel cell vehicles. We report results of evaluation in three independent laboratories of Pt/C electrocatalysts provided by commercial fuel cell catalyst manufacturers (Johnson Matthey, Umicore, Tanaka Kikinzoku Kogyo - TKK). The studies were conducted using identical evaluation protocols/ink formulation/film fabrication albeit employing unique electrochemical cell designs specific to each laboratory. Furthermore, the ORR activities reported in this work provide a baseline and criteria for selection and scale-up of novel high activity ORR electrocatalysts for implementation in proton exchange membrane fuel cells (PEMFCs).

  2. International Collaborations on Engineered Barrier Systems: Brief Overview of SKB-EBS Activities.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jove-Colon, Carlos F.

    2015-10-01

    Research collaborations with international partners on the behavior and performance of engineered barrier systems (EBS) are an important aspect of the DOE-NE Used Fuel Disposition Campaign strategy in the evaluation of disposal design concepts. These international partnerships are a cost-effective way of engaging in key R&D activities with common goals, resulting in effective scientific knowledge exchanges and thus enhancing existing and future research programs in the USA. This report provides a brief description of the activities covered by the Swedish Nuclear Fuel and Waste Management Company (SKB) EBS Task Force (TF) (referred to hereafter as the SKB EBS TF) and potential future directions for engagement of the DOE-NE UFDC program in relevant R&D activities. Emphasis is given to SKB EBS TF activities that are still ongoing and aligned with the UFDC R&D program. These include utilization of data collected in the bentonite rock interaction experiment (BRIE) and data sets from benchmark experiments produced by the chemistry or “C” part of the SKB EBS TF. Potential applications of information generated by this program include comparisons/tests between model and data (e.g., reactive diffusion), development and implementation of coupled-process models (e.g., HM), and code/model benchmarking.

  3. Best Practices and Testing Protocols for Benchmarking ORR Activities of Fuel Cell Electrocatalysts Using Rotating Disk Electrode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kocha, Shyam S.; Shinozaki, Kazuma; Zack, Jason W.

    Thin-film-rotating disk electrodes (TF-RDEs) are the half-cell electrochemical system of choice for rapid screening of oxygen reduction reaction (ORR) activity of novel Pt supported on carbon black supports (Pt/C) electrocatalysts. It has been shown that the magnitude of the measured ORR activity and reproducibility are highly dependent on the system cleanliness, evaluation protocols, and operating conditions as well as ink formulation, composition, film drying, and the resultant film thickness and uniformity. Accurate benchmarks of baseline Pt/C catalysts evaluated using standardized protocols and best practices are necessary to expedite ultra-low-platinum group metal (PGM) catalyst development that is crucial for the imminent commercialization of fuel cell vehicles. We report results of evaluation in three independent laboratories of Pt/C electrocatalysts provided by commercial fuel cell catalyst manufacturers (Johnson Matthey, Umicore, Tanaka Kikinzoku Kogyo - TKK). The studies were conducted using identical evaluation protocols/ink formulation/film fabrication albeit employing unique electrochemical cell designs specific to each laboratory. The ORR activities reported in this work provide a baseline and criteria for selection and scale-up of novel high activity ORR electrocatalysts for implementation in proton exchange membrane fuel cells (PEMFCs).

  4. Best Practices and Testing Protocols for Benchmarking ORR Activities of Fuel Cell Electrocatalysts Using Rotating Disk Electrode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kocha, Shyam S.; Shinozaki, Kazuma; Zack, Jason W.

    Thin-film-rotating disk electrodes (TF-RDEs) are the half-cell electrochemical system of choice for rapid screening of oxygen reduction reaction (ORR) activity of novel Pt supported on carbon black supports (Pt/C) electrocatalysts. It has been shown that the magnitude of the measured ORR activity and reproducibility are highly dependent on the system cleanliness, evaluation protocols, and operating conditions as well as ink formulation, composition, film drying, and the resultant film thickness and uniformity. Accurate benchmarks of baseline Pt/C catalysts evaluated using standardized protocols and best practices are necessary to expedite ultra-low-platinum group metal (PGM) catalyst development that is crucial for the imminent commercialization of fuel cell vehicles. We report results of evaluation in three independent laboratories of Pt/C electrocatalysts provided by commercial fuel cell catalyst manufacturers (Johnson Matthey, Umicore, Tanaka Kikinzoku Kogyo - TKK). The studies were conducted using identical evaluation protocols/ink formulation/film fabrication albeit employing unique electrochemical cell designs specific to each laboratory. Furthermore, the ORR activities reported in this work provide a baseline and criteria for selection and scale-up of novel high activity ORR electrocatalysts for implementation in proton exchange membrane fuel cells (PEMFCs).

  5. The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool

    EPA Pesticide Factsheets

    Provides quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model to produce benchmark dose and benchmark dose lower-bound estimates.

  6. Benchmarking--Measuring and Comparing for Continuous Improvement.

    ERIC Educational Resources Information Center

    Henczel, Sue

    2002-01-01

    Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…

  7. Multi-criteria evaluation of wastewater treatment plant control strategies under uncertainty.

    PubMed

    Flores-Alsina, Xavier; Rodríguez-Roda, Ignasi; Sin, Gürkan; Gernaey, Krist V

    2008-11-01

    The evaluation of activated sludge control strategies in wastewater treatment plants (WWTPs) via mathematical modelling is a complex activity because several objectives (e.g., economic, environmental, technical, and legal) must be taken into account at the same time; that is, the evaluation of the alternatives is a multi-criteria problem. Activated sludge models are not well characterized, and some of the parameters can present uncertainty (e.g., the influent fractions arriving at the facility and the effect of either temperature or toxic compounds on the kinetic parameters), which has a strong influence on the model predictions used during the evaluation of the alternatives and affects the resulting rank of preferences. Using a simplified version of the IWA Benchmark Simulation Model No. 2 as a case study, this article shows the variations in decision making when the uncertainty in activated sludge model (ASM) parameters is either included or not included during the evaluation of WWTP control strategies. This paper comprises two main sections. First, six WWTP control strategies are evaluated using multi-criteria decision analysis with the ASM parameters set at their default values. In the following section, input uncertainty is introduced, characterized by probability distribution functions based on the available process knowledge. Next, Monte Carlo simulations are run to propagate the input uncertainty through the model and quantify its effect on the different outcomes. Thus, (i) the variation in the overall degree of satisfaction of the control objectives for the generated WWTP control strategies is quantified, (ii) the contributions of environmental, legal, technical, and economic objectives to the existing variance are identified, and finally (iii) the influence of the relative importance of the control objectives on the selection of alternatives is analyzed.
The results show that the control strategies with an external carbon source reduce the output uncertainty in the criteria used to quantify the degree of satisfaction of environmental, technical and legal objectives, but increase the economic costs and their variability as a trade-off. It is also shown how a preliminarily selected alternative with a cascade ammonium controller becomes less desirable when input uncertainty is included, while simpler alternatives have a greater chance of success.
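    The Monte Carlo uncertainty-propagation step described above can be sketched in Python. The one-line "model" and the parameter distributions below are purely illustrative stand-ins, not the BSM2 plant model or the paper's actual process-knowledge distributions:

```python
import random

def effluent_quality(mu_max, temp_factor):
    # Hypothetical one-line stand-in for an activated sludge model output;
    # the real BSM2 plant model is far more complex.
    return 10.0 / (mu_max * temp_factor)

random.seed(42)
N = 10_000
outcomes = []
for _ in range(N):
    # Input uncertainty: distributions chosen for illustration only
    mu_max = random.uniform(3.0, 6.0)               # kinetic parameter (assumed range)
    temp_factor = max(random.gauss(1.0, 0.1), 0.5)  # temperature effect (assumed)
    outcomes.append(effluent_quality(mu_max, temp_factor))

# Output uncertainty: summarize the spread induced by the inputs
mean = sum(outcomes) / N
var = sum((x - mean) ** 2 for x in outcomes) / (N - 1)
print(f"mean effluent index: {mean:.2f}, variance: {var:.4f}")
```

    The same pattern scales to the paper's setting: sample all uncertain ASM parameters jointly, run the plant model once per sample, and apportion the resulting output variance among the evaluation criteria.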

  8. Health and productivity management: establishing key performance measures, benchmarks, and best practices.

    PubMed

    Goetzel, R Z; Guindon, A M; Turshen, I J; Ozminkowski, R J

    2001-01-01

    Major areas considered under the rubric of health and productivity management (HPM) in American business include absenteeism, employee turnover, and the use of medical, disability, and workers' compensation programs. Until recently, few normative data existed for most HPM areas. To meet the need for normative information in HPM, a series of Consortium Benchmarking Studies was conducted. In the most recent application of the study, 1998 HPM costs, incidence, duration, and other program data were collected from 43 employers on almost one million workers. The median HPM costs for these organizations were $9992 per employee, distributed among group health (47%), turnover (37%), unscheduled absence (8%), nonoccupational disability (5%), and workers' compensation programs (3%). Achieving "best-practice" levels of performance (operationally defined as the 25th percentile for program expenditures in each HPM area) would realize savings of $2562 per employee (a 26% reduction). The results indicate substantial opportunities for improvement through effective coordination and management of HPM programs. Examples of best-practice activities collated from on-site visits to "benchmark" organizations are also reviewed.
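    The study's "best-practice" yardstick, the 25th percentile of program expenditures, reduces to a simple percentile computation. A minimal sketch follows; the cost figures are made up for illustration and are not the consortium's data:

```python
def percentile(values, p):
    # Linear-interpolated percentile over the sorted values
    vs = sorted(values)
    k = (len(vs) - 1) * p / 100
    lo = int(k)
    hi = min(lo + 1, len(vs) - 1)
    return vs[lo] + (vs[hi] - vs[lo]) * (k - lo)

# Hypothetical per-employee costs ($) for one HPM area across employers
costs = [5200, 4100, 6800, 4700, 5900, 4400, 7200, 5000]
median_cost = percentile(costs, 50)     # typical performance
best_practice = percentile(costs, 25)   # the study's "best-practice" level
savings = median_cost - best_practice   # per-employee opportunity
print(median_cost, best_practice, savings)  # 5100.0 4625.0 475.0
```

    Repeating this per HPM area and summing the gaps gives the kind of aggregate savings figure the study reports.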

  9. Reinforcement learning or active inference?

    PubMed

    Friston, Karl J; Daunizeau, Jean; Kiebel, Stefan J

    2009-07-29

    This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimize their free-energy. Such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming; namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof-of-concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.
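    The mountain-car benchmark cited above is easy to state but nontrivial to solve. A minimal sketch of its standard dynamics (the Sutton-and-Barto / Gym-style constants, not the paper's active-inference agent) shows why: full throttle from the valley floor cannot climb the right hill directly, so any solver must first swing backward to build momentum.

```python
import math

def step(pos, vel, action):
    # One step of the classic mountain-car dynamics; action is the
    # thrust direction: -1 (reverse), 0 (coast), or +1 (forward).
    vel += 0.001 * action - 0.0025 * math.cos(3 * pos)
    vel = max(-0.07, min(0.07, vel))          # velocity bound
    pos = max(-1.2, min(0.6, pos + vel))      # position bound
    if pos == -1.2:
        vel = 0.0                             # inelastic wall on the left
    return pos, vel

# Constant full throttle from rest near the valley floor: the engine is
# weaker than gravity on the slope, so the car oscillates below the
# barrier and never reaches the goal at pos = 0.6.
pos, vel = -0.5, 0.0
for _ in range(200):
    pos, vel = step(pos, vel, +1)
print(pos)
```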

  10. Nanoscale Imaging of Light-Matter Coupling Inside Metal-Coated Cavities with a Pulsed Electron Beam.

    PubMed

    Moerland, Robert J; Weppelman, I Gerward C; Scotuzzi, Marijke; Hoogenboom, Jacob P

    2018-05-02

    Many applications in (quantum) nanophotonics rely on controlling light-matter interaction through strong, nanoscale modification of the local density of states (LDOS). All-optical techniques probing emission dynamics in active media are commonly used to measure the LDOS and benchmark experimental performance against theoretical predictions. However, metal coatings needed to obtain strong LDOS modifications in, for instance, nanocavities, are incompatible with all-optical characterization. So far, no reliable method exists to validate theoretical predictions. Here, we use subnanosecond pulses of focused electrons to penetrate the metal and excite a buried active medium at precisely defined locations inside subwavelength resonant nanocavities. We reveal the spatial layout of the spontaneous-emission decay dynamics inside the cavities with deep-subwavelength detail, directly mapping the LDOS. We show that emission enhancement converts to inhibition despite an increased number of modes, emphasizing the critical role of optimal emitter location. Our approach yields fundamental insight in dynamics at deep-subwavelength scales for a wide range of nano-optical systems.

  11. Enhanced Photoelectrochemical Activity by Autologous Cd/CdO/CdS Heterojunction Photoanodes with High Conductivity and Separation Efficiency.

    PubMed

    Xie, Shilei; Zhang, Peng; Zhang, Min; Liu, Peng; Li, Wei; Lu, Xihong; Cheng, Faliang; Tong, Yexiang

    2017-07-18

    The development of hydrogen production from solar energy has attracted great attention due to the global demand for clean, environmentally friendly energy. Herein, autologous Cd/CdO/CdS heterojunctions were prepared in a carefully controlled process with metallic Cd as the inner layer and CdO as the interlayer. Further research revealed that the transport and separation of photogenerated pairs were enhanced due to the low resistance of the Cd inner layer and the type II CdO/CdS heterojunction. As a result, the optimized Cd/CdO/CdS heterojunction photoanode showed outstanding, long-term photoelectrochemical activity for water splitting, with a current density of 3.52 mA cm⁻² and a benchmark specific hydrogen production rate of 1.65 μmol cm⁻² min⁻¹ at -0.3 V versus Ag/AgCl, using the environmental pollutants sulfide and sulfite as sacrificial agents. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Defect measurement and analysis of JPL ground software: a case study

    NASA Technical Reports Server (NTRS)

    Powell, John D.; Spagnuolo, John N., Jr.

    2004-01-01

    Ground software systems at JPL must meet high assurance standards while remaining on schedule due to relatively immovable launch dates for spacecraft that will be controlled by such systems. Toward this end, the Software Quality Improvement (SQI) project's Measurement and Benchmarking (M&B) team is collecting and analyzing defect data of JPL ground system software projects to build software defect prediction models. The aim of these models is to improve predictability with regard to software quality activities. Predictive models will quantitatively define typical trends for JPL ground systems as well as Critical Discriminators (CDs) to provide explanations for atypical deviations from the norm at JPL. CDs are software characteristics that can be estimated or foreseen early in a software project's planning. Thus, these CDs will assist in planning for the degree to which software quality activities for a project are likely to deviate from the norm for JPL ground systems, based on past experience across the laboratory.

  13. Changing roles: the radiologist in management.

    PubMed

    Siegle, R L; Nelsen, L

    1999-05-01

    Radiologists have rarely had direct administrative control of the hospital departments in which they practice. Several years ago, the administration of the medical school at the University of Texas Health Science Center at San Antonio and its county-owned principal teaching hospital agreed to integrate the physician and administrative management of the radiology department in an attempt to improve operations and reduce expenses. This integration is a pilot plan that will eventually be extended to most departments. The authors have collected data that measure department function for the seven quarters of physician management and compared these data with those of the four quarters prior to physician management. There are substantial increases in department activity, together with reductions in expenses and waiting time for procedures. These changes have occurred while overall hospital activity has decreased. Benchmarking studies show favorable comparison with peer radiology departments. Radiologists can have an important effect on department operations when given the responsibility and authority for managing these affairs.

  14. Developing Benchmarks for Solar Radio Bursts

    NASA Astrophysics Data System (ADS)

    Biesecker, D. A.; White, S. M.; Gopalswamy, N.; Black, C.; Domm, P.; Love, J. J.; Pierson, J.

    2016-12-01

    Solar radio bursts can interfere with radar, communication, and tracking signals. In severe cases, radio bursts can inhibit the successful use of radio communications and disrupt a wide range of systems that are reliant on Position, Navigation, and Timing services on timescales ranging from minutes to hours across wide areas on the dayside of Earth. The White House's Space Weather Action Plan has asked for solar radio burst intensity benchmarks for an event occurrence frequency of 1 in 100 years and also a theoretical maximum intensity benchmark. The solar radio benchmark team was also asked to define the wavelength/frequency bands of interest. The benchmark team developed preliminary (phase 1) benchmarks for the VHF (30-300 MHz), UHF (300-3000 MHz), GPS (1176-1602 MHz), F10.7 (2800 MHz), and Microwave (4000-20000 MHz) bands. The preliminary benchmarks were derived based on previously published work. Limitations in the published work will be addressed in phase 2 of the benchmark process. In addition, deriving theoretical maxima requires further work, where doing so is even possible, in order to meet the Action Plan objectives. In this presentation, we will present the phase 1 benchmarks and the basis used to derive them. We will also present the work that needs to be done in order to complete the final, phase 2 benchmarks.

  15. Targeting the affordability of cigarettes: a new benchmark for taxation policy in low-income and middle-income countries.

    PubMed

    Blecher, Evan

    2010-08-01

    To investigate the appropriateness of tax incidence (the percentage of the retail price occupied by taxes) benchmarking in low-income and middle-income countries (LMICs) with rapidly growing economies and to explore the viability of an alternative tax policy rule based on the affordability of cigarettes. The paper outlines criticisms of tax incidence benchmarking, particularly in the context of LMICs. It then considers an affordability-based benchmark using relative income price (RIP) as a measure of affordability. The RIP measures the percentage of annual per capita GDP required to purchase 100 packs of cigarettes. Using South Africa as a case study of an LMIC, future consumption is simulated using both tax incidence benchmarks and affordability benchmarks. I show that a tax incidence benchmark is not an optimal policy tool in South Africa and that an affordability benchmark could be a more effective means of reducing tobacco consumption in the future. Although a tax incidence benchmark was successful in increasing prices and reducing tobacco consumption in South Africa in the past, this approach has drawbacks, particularly in the context of a rapidly growing LMIC economy. An affordability benchmark represents an appropriate alternative that would be more effective in reducing future cigarette consumption.
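    The RIP measure defined above is a simple ratio, sketched below with illustrative, made-up price and GDP figures (not the paper's South African data):

```python
def relative_income_price(price_per_pack, gdp_per_capita):
    # RIP: share of annual per-capita GDP needed to buy 100 packs of
    # cigarettes, expressed as a percentage. A rising RIP means
    # cigarettes are becoming less affordable.
    cost_of_100_packs = 100 * price_per_pack
    return cost_of_100_packs / gdp_per_capita * 100

# Illustrative figures only
rip = relative_income_price(price_per_pack=2.50, gdp_per_capita=6000)
print(f"RIP = {rip:.2f}%")  # RIP = 4.17%
```

    An affordability benchmark would then target a minimum RIP over time, so that taxes rise with income growth rather than only with retail price.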

  16. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Specific entrustable professional activities for undergraduate medical internships: a method compatible with the academic curriculum.

    PubMed

    Hamui-Sutton, Alicia; Monterrosas-Rojas, Ana María; Ortiz-Montalvo, Armando; Flores-Morones, Felipe; Torruco-García, Uri; Navarrete-Martínez, Andrea; Arrioja-Guerrero, Araceli

    2017-08-25

    Competency-based education has been considered the most important pedagogical trend in Medicine in the last two decades. In clinical contexts, competencies are implemented through Entrustable Professional Activities (EPAs), which are observable and measurable. The aim of this paper is to describe the methodology used in the design of educational tools to assess students' competencies in clinical practice during their undergraduate internship (UI). In this paper, we present the construction of specific APROCs (Actividades Profesionales Confiables) in Surgery (S), Gynecology and Obstetrics (GO) and Family Medicine (FM) rotations with three levels of performance. The study used an exploratory mixed-methods design: a qualitative phase followed by a quantitative validation exercise. In the first stage, data were obtained from three rotations (FM, GO and S) through focus groups about real and expected activities of medical interns. Triangulation with other sources was performed to construct benchmarks. In the second stage, narrative descriptions with the three levels were validated, using the Delphi technique, by professors who teach the different subjects. The results may be described in both curricular and methodological terms. From the curricular point of view, APROCs were identified in three UI rotations within clinical contexts in Mexico City; benchmarks were developed by levels and validated by experts' consensus. In regard to methodological issues, this research contributed to the development of a six-step strategy for building APROCs using mixed methods. Developing benchmarks provides a regular and standardized language that helps to evaluate students' performance and define educational strategies efficiently and accurately. The university academic program was aligned with APROCs in clinical contexts to assure the acquisition of competencies by students.

  18. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  19. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  20. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...
