Integrated control/structure optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Zeiler, Thomas A.; Gilbert, Michael G.
1990-01-01
A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.
Semi-active control of a cable-stayed bridge under multiple-support excitations.
Dai, Ze-Bing; Huang, Jin-Zhi; Wang, Hong-Xia
2004-03-01
This paper presents a semi-active strategy for seismic protection of a benchmark cable-stayed bridge with consideration of multiple-support excitations. In this control strategy, magnetorheological (MR) dampers are proposed as control devices, and an LQG clipped-optimal control algorithm is employed. An active control strategy, shown in previous research to perform well at controlling the benchmark bridge when uniform earthquake motion was assumed, is also used in this study to control this benchmark bridge with consideration of multiple-support excitations. The performance of the active control system is compared to that of the presented semi-active control strategy. Because the MR fluid damper is a controllable energy-dissipation device that cannot add mechanical energy to the structural system, the proposed control strategy is fail-safe in that bounded-input, bounded-output stability of the controlled structure is guaranteed. The numerical results demonstrate that the performance of the presented control design is nearly the same as that of the active control system, and that the MR dampers can effectively be used to control seismically excited cable-stayed bridges with multiple-support excitations.
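As an illustration of the clipped-optimal idea (command full voltage only when the measured damper force should grow toward the LQG-demanded force), here is a minimal Python sketch. The voltage limit, force values, and function name are assumptions for this example, not quantities from the study.

```python
import numpy as np

V_MAX = 10.0  # assumed maximum command voltage for the MR damper (illustrative)

def clipped_optimal_command(f_desired, f_measured, v_max=V_MAX):
    """Clipped-optimal command law often paired with an LQG force demand:
    apply full voltage only when the measured damper force must grow
    toward the desired (LQG-optimal) force, otherwise command zero."""
    if f_measured * (f_desired - f_measured) > 0.0:
        return v_max
    return 0.0

# Example: the LQG controller asks for -12 kN while the damper currently produces -5 kN
print(clipped_optimal_command(-12e3, -5e3))   # -> 10.0 (increase damping force)
print(clipped_optimal_command(-12e3, -15e3))  # -> 0.0 (force already exceeds demand)
```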
BACT Simulation User Guide (Version 7.0)
NASA Technical Reports Server (NTRS)
Waszak, Martin R.
1997-01-01
This report documents the structure and operation of a simulation model of the Benchmark Active Control Technology (BACT) Wind-Tunnel Model. The BACT system was designed, built, and tested at NASA Langley Research Center as part of the Benchmark Models Program and was developed to perform wind-tunnel experiments to obtain benchmark quality data to validate computational fluid dynamics and computational aeroelasticity codes, to verify the accuracy of current aeroservoelasticity design and analysis tools, and to provide an active controls testbed for evaluating new and innovative control algorithms for flutter suppression and gust load alleviation. The BACT system has been especially valuable as a control system testbed.
Memory-Intensive Benchmarks: IRAM vs. Cache-Based Machines
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Gaeke, Brian R.; Husbands, Parry; Li, Xiaoye S.; Oliker, Leonid; Yelick, Katherine A.; Biegel, Bryan (Technical Monitor)
2002-01-01
The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic control structures, and the ratio of computation to memory operations.
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem, predominantly by applying traditional optimization theory. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has previously been applied to compliant mechanism design. This technique combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation, genetic algorithms and differential evolution, to optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, a multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
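As a hedged illustration of how differential evolution can size truss members under a stress constraint, the sketch below minimizes the mass of a toy two-bar truss with a penalty on overstressed members. The geometry, loads, and penalty weighting are assumed values and do not reproduce the TP's formulation, which also involves topology optimization and geometric refinement.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy two-bar truss: minimize mass subject to a stress limit in each member.
# All numbers below are illustrative, not taken from the NASA TP.
RHO = 2700.0          # density, kg/m^3
L = (1.0, 1.4)        # member lengths, m
F = (20e3, 35e3)      # axial force magnitude carried by each member, N
SIGMA_ALLOW = 150e6   # allowable stress, Pa

def mass_with_penalty(areas):
    mass = sum(RHO * l * a for l, a in zip(L, areas))
    # Penalize any member whose stress exceeds the allowable value.
    penalty = sum(max(0.0, f / a - SIGMA_ALLOW) for f, a in zip(F, areas))
    return mass + 1e-3 * penalty

bounds = [(1e-5, 1e-2)] * 2  # cross-sectional area bounds, m^2
result = differential_evolution(mass_with_penalty, bounds, seed=1, tol=1e-10)
print(result.x, result.fun)  # near-minimum areas sit close to the stress constraint
```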
Microgravity Vibration Control and Civil Applications
NASA Technical Reports Server (NTRS)
Whorton, Mark Stephen; Alhorn, Dean Carl
1998-01-01
Controlling vibration of structures is essential for both space structures as well as terrestrial structures. Due to the ambient acceleration levels anticipated for the International Space Station, active vibration isolation is required to provide a quiescent acceleration environment for many science experiments. An overview is given of systems developed and flight tested in orbit for microgravity vibration isolation. Technology developed for vibration control of flexible space structures may also be applied to control of terrestrial structures such as buildings and bridges subject to wind loading or earthquake excitation. Recent developments in modern robust control for flexible space structures are shown to provide good structural vibration control while maintaining robustness to model uncertainties. Results of a mixed H-2/H-infinity control design are provided for a benchmark problem in structural control for earthquake resistant buildings.
NASA Technical Reports Server (NTRS)
Dcruz, Jonathan
1993-01-01
In view of the strong need for a well-documented set of experimental data which is suitable for the validation and/or calibration of modern Computational Fluid Dynamics codes, the Benchmark Models Program was initiated by the Structural Dynamics Division of the NASA Langley Research Center. One of the models in the program, the Benchmark Active Controls Testing Model, consists of a rigid wing of rectangular planform with a NACA 0012 profile and three control surfaces (a trailing-edge control surface, a lower-surface spoiler, and an upper-surface spoiler). The model is affixed to a flexible mount system which allows only plunging and/or pitching motion. An approximate analytical determination of the forces required to move this model, with its control surfaces fixed, in pure plunge and pure pitch at a number of test conditions is included. This provides a good indication of the type of actuator system required to generate the aerodynamic data resulting from pure plunging and pure pitching motion, in which much interest was expressed. The analysis makes use of previously obtained numerical results.
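As a rough indication of the actuator loads such an analysis addresses, the sketch below estimates the peak inertial force needed to drive a rigid model in pure sinusoidal plunge. The mass, amplitude, and frequency are assumed values and aerodynamic loads are neglected, so this is not the report's analysis.

```python
import math

# Illustrative numbers only; not the BACT model's actual properties.
m = 80.0        # plunging mass, kg
h0 = 0.01       # plunge amplitude, m
f_hz = 5.0      # oscillation frequency, Hz

omega = 2.0 * math.pi * f_hz
# For h(t) = h0*sin(omega*t), peak acceleration is h0*omega**2,
# so the peak inertial force the actuator must react is:
F_peak = m * h0 * omega**2
print(f"peak inertial force ~ {F_peak:.0f} N")  # ~790 N for these numbers
```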
A Study of Fixed-Order Mixed Norm Designs for a Benchmark Problem in Structural Control
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.; Hsu, C. C.
1998-01-01
This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full-order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodelled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full-order compensators that are robust to both unmodelled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.
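To make the H2 ingredient of such designs concrete, the following is a minimal LQR (H2-type) state-feedback sketch on a toy single-degree-of-freedom structure. It does not represent the paper's fixed-order mixed H2/mu synthesis or its acceleration-feedback architecture, and all numerical values are assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy single-DOF structure (mass-spring-damper); values are illustrative.
m, c, k = 1.0, 0.2, 100.0
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])

# An LQR gain is the state-feedback core of an H2-type design:
# minimize the integral of x'Qx + u'Ru.
Q = np.diag([100.0, 1.0])
R = np.array([[0.01]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)      # u = -K x
print("gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```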
NASA Technical Reports Server (NTRS)
Krause, David L.; Brewer, Ethan J.; Pawlik, Ralph
2013-01-01
This report provides test methodology details and qualitative results for the first structural benchmark creep test of an Advanced Stirling Convertor (ASC) heater head of ASC-E2 design heritage. The test article was recovered from a flight-like Microcast MarM-247 heater head specimen previously used in helium permeability testing. The test article was utilized for benchmark creep test rig preparation, wall thickness and diametral laser scan hardware metrological developments, and induction heater custom coil experiments. In addition, a benchmark creep test was performed; it was terminated after one week when through-thickness cracks propagated at thermocouple weld locations. Following this, the test article was used to develop a unique temperature measurement methodology using contact thermocouples, thereby enabling future benchmark testing to be performed without the use of conventional welded thermocouples, which have proven problematic for the alloy. This report includes an overview of heater head structural benchmark creep testing, the origin of this particular test article, test configuration developments accomplished using the test article, creep predictions for its benchmark creep test, qualitative structural benchmark creep test results, and a short summary.
MARC calculations for the second WIPP structural benchmark problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, H.S.
1981-05-01
This report describes calculations made with the MARC structural finite element code for the second WIPP structural benchmark problem. Specific aspects of problem implementation such as element choice, slip line modeling, creep law implementation, and thermal-mechanical coupling are discussed in detail. Also included are the computational results specified in the benchmark problem formulation.
Semi-active friction damper for buildings subject to seismic excitation
NASA Astrophysics Data System (ADS)
Mantilla, Juan S.; Solarte, Alexander; Gomez, Daniel; Marulanda, Johannio; Thomson, Peter
2016-04-01
Structural control systems are considered an effective alternative for reducing vibrations in civil structures and are classified according to their energy supply requirement: passive, semi-active, active and hybrid. Commonly used structural control systems in buildings are passive friction dampers, which add energy dissipation through damping mechanisms induced by sliding friction between their surfaces. Semi-Active Variable Friction Dampers (SAVFD) allow the optimum efficiency range of friction dampers to be enhanced by controlling the clamping force in real time. This paper describes the development and performance evaluation of a low-cost SAVFD for the reduction of vibrations of structures subject to earthquakes. The SAVFD and a benchmark structural control test structure were experimentally characterized and analytical models were developed and updated based on the dynamic characterization. Decentralized control algorithms were implemented and tested on a shaking table. Relative displacements and accelerations of the structure controlled with the SAVFD were 80% less than those of the uncontrolled structure.
Benchmarking Distance Control and Virtual Drilling for Lateral Skull Base Surgery.
Voormolen, Eduard H J; Diederen, Sander; van Stralen, Marijn; Woerdeman, Peter A; Noordmans, Herke Jan; Viergever, Max A; Regli, Luca; Robe, Pierre A; Berkelbach van der Sprenkel, Jan Willem
2018-01-01
Novel audiovisual feedback methods were developed to improve image guidance during skull base surgery by providing audiovisual warnings when the drill tip enters a protective perimeter set at a distance around anatomic structures ("distance control") and visualizing bone drilling ("virtual drilling"). To benchmark the drill damage risk reduction provided by distance control, to quantify the accuracy of virtual drilling, and to investigate whether the proposed feedback methods are clinically feasible. In a simulated surgical scenario using human cadavers, 12 inexperienced users (medical students) drilled 12 mastoidectomies. Users were divided into a control group using standard image guidance and 3 groups using distance control with protective perimeters of 1, 2, or 3 mm. Damage to critical structures (sigmoid sinus, semicircular canals, facial nerve) was assessed. Neurosurgeons performed another 6 mastoidectomy/trans-labyrinthine and retro-labyrinthine approaches. Virtual errors as compared with real postoperative drill cavities were calculated. In a clinical setting, 3 patients received lateral skull base surgery with the proposed feedback methods. Users drilling with distance control protective perimeters of 3 mm did not damage structures, whereas the groups using smaller protective perimeters and the control group injured structures. Virtual drilling maximum cavity underestimations and overestimations were 2.8 ± 0.1 and 3.3 ± 0.4 mm, respectively. Feedback methods functioned properly in the clinical setting. Distance control reduced the risks of drill damage proportional to the protective perimeter distance. Errors in virtual drilling reflect spatial errors of the image guidance system. These feedback methods are clinically feasible.
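The "distance control" warning reduces to a nearest-distance test between the tracked drill tip and a segmented structure. The sketch below is a minimal illustration with made-up coordinates and a 3 mm perimeter; it is not the authors' implementation.

```python
import numpy as np

def distance_warning(drill_tip, structure_points, perimeter_mm=3.0):
    """Minimal sketch of the 'distance control' idea: warn when the tracked
    drill tip comes within a protective perimeter of a segmented structure.
    Inputs are 3-D coordinates in millimetres; names are illustrative."""
    d = np.min(np.linalg.norm(structure_points - drill_tip, axis=1))
    return d, d < perimeter_mm

# Example: a structure sampled as a small point cloud and a nearby drill tip
structure = np.array([[10.0, 0.0, 0.0], [11.0, 0.5, 0.0], [12.0, 1.0, 0.0]])
tip = np.array([10.0, 2.2, 0.0])
dist, warn = distance_warning(tip, structure)
print(f"distance = {dist:.1f} mm, warning = {warn}")  # ~2.0 mm, True with a 3 mm perimeter
```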
Development and Applications of Benchmark Examples for Static Delamination Propagation Predictions
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2013-01-01
The development and application of benchmark examples for the assessment of quasi-static delamination propagation capabilities was demonstrated for ANSYS® and Abaqus/Standard®. The examples selected were based on finite element models of Double Cantilever Beam (DCB) and Mixed-Mode Bending (MMB) specimens. First, quasi-static benchmark results were created based on an approach developed previously. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in ANSYS® and Abaqus/Standard®. Input control parameters were varied to study the effect on the computed delamination propagation. Overall, the benchmarking procedure proved valuable by highlighting the issues associated with choosing the appropriate input parameters for the VCCT implementations in ANSYS® and Abaqus/Standard®. However, further assessment for mixed-mode delamination fatigue onset and growth is required. Additionally, studies should include the assessment of the propagation capabilities in more complex specimens and on a structural level.
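For context, the mode I energy release rate that VCCT-based propagation analyses evaluate can be written as G_I = F·Δw / (2·Δa·b). The worked example below uses assumed nodal force, opening displacement, and element dimensions; it is not taken from the benchmark cases.

```python
# Mode I energy release rate from the virtual crack closure technique (VCCT):
# G_I = (F * dw) / (2 * da * b), with F the crack-tip nodal force,
# dw the relative opening displacement behind the tip, da the element length
# along the crack advance direction and b the width associated with the node.
# Numbers are illustrative.
F = 50.0       # N, crack-tip nodal force normal to the crack plane
dw = 2.0e-5    # m, opening displacement one element behind the tip
da = 5.0e-4    # m, crack-tip element length
b = 2.5e-2     # m, width associated with the node

G_I = (F * dw) / (2.0 * da * b)
print(f"G_I = {G_I:.1f} J/m^2")  # ~40 J/m^2; propagation predicted if G_I >= G_Ic
```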
Puton, Tomasz; Kozlowski, Lukasz P.; Rother, Kristian M.; Bujnicki, Janusz M.
2013-01-01
We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks. PMID:23435231
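Per-structure accuracy scores of the kind such a benchmark aggregates can be computed from sets of predicted and reference base pairs. The sketch below shows sensitivity, positive predictive value, and a commonly used MCC approximation (their geometric mean); it is illustrative and not the CompaRNA scoring code.

```python
import math

def pair_metrics(predicted, reference):
    """Sensitivity, PPV and an MCC approximation (their geometric mean)
    from sets of predicted and reference base pairs (i, j) with i < j."""
    tp = len(predicted & reference)
    sens = tp / len(reference) if reference else 0.0
    ppv = tp / len(predicted) if predicted else 0.0
    mcc = math.sqrt(sens * ppv)   # common approximation for RNA 2D benchmarks
    return sens, ppv, mcc

reference = {(1, 20), (2, 19), (3, 18), (5, 15)}
predicted = {(1, 20), (2, 19), (4, 16)}
print(pair_metrics(predicted, reference))  # (0.5, 0.667, 0.577)
```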
Evaluation of control strategies using an oxidation ditch benchmark.
Abusam, A; Keesman, K J; Spanjers, H; van, Straten G; Meinema, K
2002-01-01
This paper presents validation and implementation results of a benchmark developed for a specific full-scale oxidation ditch wastewater treatment plant. A benchmark is a standard simulation procedure that can be used as a tool in evaluating various control strategies proposed for wastewater treatment plants. It is based on model and performance criteria development. Testing of this benchmark, by comparing benchmark predictions to real measurements of the electrical energy consumption and the amount of disposed sludge for a specific oxidation ditch WWTP, has shown that it can reasonably be used for evaluating the performance of this WWTP. The validated benchmark was then used in evaluating some basic and advanced control strategies. Some of the interesting results obtained are the following: (i) the influent flow splitting ratio, between the first and the fourth aerated compartments of the ditch, has no significant effect on the TN concentrations in the effluent, and (ii) for evaluation of long-term control strategies, future benchmarks need to be able to assess settler performance.
Benchmarking specialty hospitals, a scoping review on theory and practice.
Wind, A; van Harten, W H
2017-04-04
Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design, and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.
Use of benchmarking and public reporting for infection control in four high-income countries.
Haustein, Thomas; Gastmeier, Petra; Holmes, Alison; Lucet, Jean-Christophe; Shannon, Richard P; Pittet, Didier; Harbarth, Stephan
2011-06-01
Benchmarking of surveillance data for health-care-associated infection (HCAI) has been used for more than three decades to inform prevention strategies and improve patients' safety. In recent years, public reporting of HCAI indicators has been mandated in several countries because of an increasing demand for transparency, although many methodological issues surrounding benchmarking remain unresolved and are highly debated. In this Review, we describe developments in benchmarking and public reporting of HCAI indicators in England, France, Germany, and the USA. Although benchmarking networks in these countries are derived from a common model and use similar methods, approaches to public reporting have been more diverse. The USA and England have predominantly focused on reporting of infection rates, whereas France has put emphasis on process and structure indicators. In Germany, HCAI indicators of individual institutions are treated confidentially and are not disseminated publicly. Although evidence for a direct effect of public reporting of indicators alone on incidence of HCAIs is weak at present, it has been associated with substantial organisational change. An opportunity now exists to learn from the different strategies that have been adopted.
Protein Models Docking Benchmark 2
Anishchenko, Ivan; Kundrotas, Petras J.; Tuzikov, Alexander V.; Vakser, Ilya A.
2015-01-01
Structural characterization of protein-protein interactions is essential for our ability to understand life processes. However, only a fraction of known proteins have experimentally determined structures. Such structures provide templates for modeling of a large part of the proteome, where individual proteins can be docked by template-free or template-based techniques. Still, the sensitivity of the docking methods to the inherent inaccuracies of protein models, as opposed to the experimentally determined high-resolution structures, remains largely untested, primarily due to the absence of appropriate benchmark set(s). Structures in such a set should have pre-defined inaccuracy levels and, at the same time, resemble actual protein models in terms of structural motifs/packing. The set should also be large enough to ensure statistical reliability of the benchmarking results. We present a major update of the previously developed benchmark set of protein models. For each interactor, six models were generated with the model-to-native Cα RMSD in the 1 to 6 Å range. The models in the set were generated by a new approach, which corresponds to the actual modeling of new protein structures in the “real case scenario,” as opposed to the previous set, where a significant number of structures were model-like only. In addition, the larger number of complexes (165 vs. 63 in the previous set) increases the statistical reliability of the benchmarking. We estimated the highest accuracy of the predicted complexes (according to CAPRI criteria), which can be attained using the benchmark structures. The set is available at http://dockground.bioinformatics.ku.edu. PMID:25712716
Acher, O; Bernard, J M L; Maréchal, P; Bardaine, A; Levassort, F
2009-04-01
Recent fundamental results concerning the ultimate performance of electromagnetic absorbers were adapted and extrapolated to the field of sound waves. It was possible to deduce some appropriate figures of merit indicating whether a particular structure was close to the best possible matching properties. These figures of merit had simple expressions and were easy to compute in practical cases. Numerical examples illustrated that conventional state-of-the-art matching structures had an overall efficiency of approximately 50% of the fundamental limit. However, if the bandwidth at -6 dB was retained as a benchmark, the achieved bandwidth would be, at most, 12% of the fundamental limit associated with the same mass for the matching structure. Consequently, both encouragement for future improvements and accurate estimates of the surface mass required to obtain certain desired broadband properties could be provided. The results presented here can be used to investigate the broadband sound absorption and to benchmark passive and active noise control systems.
Structured Uncertainty Bound Determination From Data for Control and Performance Validation
NASA Technical Reports Server (NTRS)
Lim, Kyong B.
2003-01-01
This report attempts to document the broad scope of issues that must be satisfactorily resolved before one can expect to methodically obtain, with reasonable confidence, near-optimal robust closed-loop performance in physical applications. These include elements of signal processing, noise identification, system identification, model validation, and uncertainty modeling. Based on a recently developed methodology involving a parameterization of all model-validating uncertainty sets for a given linear fractional transformation (LFT) structure and noise allowance, a new software package, the Uncertainty Bound Identification (UBID) toolbox, which conveniently executes model validation tests and determines uncertainty bounds from data, has been designed and is currently available. This toolbox also serves to benchmark the current state of the art in uncertainty bound determination and in turn facilitates benchmarking of robust control technology. To help clarify the methodology and the use of the new software, two tutorial examples are provided. The first involves the uncertainty characterization of a flexible structure's dynamics, and the second involves a closed-loop performance validation of a ducted fan based on an uncertainty bound from data. These examples, along with other simulation and experimental results, also help describe the many factors and assumptions that determine the degree of success in applying robust control theory to practical problems.
How Benchmarking and Higher Education Came Together
ERIC Educational Resources Information Center
Levy, Gary D.; Ronco, Sharron L.
2012-01-01
This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…
The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example
ERIC Educational Resources Information Center
Steyn, H. J.
2015-01-01
Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…
[Controlling instruments in radiology].
Maurer, M
2013-10-01
Due to the rising costs and competitive pressures radiological clinics and practices are now facing, controlling instruments are gaining importance in the optimization of structures and processes of the various diagnostic examinations and interventional procedures. It will be shown how the use of selected controlling instruments can secure and improve the performance of radiological facilities. A definition of the concept of controlling will be provided. It will be shown which controlling instruments can be applied in radiological departments and practices. As an example, two of the controlling instruments, material cost analysis and benchmarking, will be illustrated.
NASA Technical Reports Server (NTRS)
Mason, Gregory S.; Berg, Martin C.; Mukhopadhyay, Vivek
2002-01-01
To study the effectiveness of various control system design methodologies, the NASA Langley Research Center initiated the Benchmark Active Controls Project. In this project, the various methodologies were applied to design a flutter suppression system for the Benchmark Active Controls Technology (BACT) Wing. This report describes the user's manual and software toolbox developed at the University of Washington to design a multirate flutter suppression control law for the BACT wing.
RNA-seq mixology: designing realistic control experiments to compare protocols and analysis methods
Holik, Aliaksei Z.; Law, Charity W.; Liu, Ruijie; Wang, Zeya; Wang, Wenyi; Ahn, Jaeil; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.
2017-01-01
Carefully designed control experiments provide a gold standard for benchmarking different genomics research tools. A shortcoming of many gene expression control studies is that replication involves profiling the same reference RNA sample multiple times. This leads to low, pure technical noise that is atypical of regular studies. To achieve a more realistic noise structure, we generated an RNA-sequencing mixture experiment using two cell lines of the same cancer type. Variability was added by extracting RNA from independent cell cultures and degrading particular samples. The systematic gene expression changes induced by this design allowed benchmarking of different library preparation kits (standard poly-A versus total RNA with Ribozero depletion) and analysis pipelines. Data generated using the total RNA kit had more signal for introns and various RNA classes (ncRNA, snRNA, snoRNA) and less variability after degradation. For differential expression analysis, voom with quality weights marginally outperformed other popular methods, while for differential splicing, DEXSeq was simultaneously the most sensitive and the most inconsistent method. For sample deconvolution analysis, DeMix outperformed IsoPure convincingly. Our RNA-sequencing data set provides a valuable resource for benchmarking different protocols and data pre-processing workflows. The extra noise mimics routine lab experiments more closely, ensuring any conclusions are widely applicable. PMID:27899618
Direct Adaptive Aircraft Control Using Dynamic Cell Structure Neural Networks
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1997-01-01
A Dynamic Cell Structure (DCS) neural network was developed which learns topology-representing networks (TRNs) of F-15 aircraft aerodynamic stability and control derivatives. The network is integrated into a direct adaptive tracking controller. The combination produces a robust adaptive architecture capable of handling multiple accident and off-nominal flight scenarios. This paper describes the DCS network and modifications to the parameter estimation procedure. The work represents one step towards an integrated real-time reconfiguration control architecture for rapid prototyping of new aircraft designs. Performance was evaluated using three off-line benchmarks and on-line nonlinear Virtual Reality simulation. Flight control was evaluated under scenarios including differential stabilator lock, soft sensor failure, control and stability derivative variations, and air turbulence.
Michel, G
2012-01-01
The OPTIMISE study (NCT00681850) was run in six European countries, including Luxembourg, to prospectively assess the effect of benchmarking on the quality of primary care in patients with type 2 diabetes, using major modifiable vascular risk factors as critical quality indicators. Primary care centers treating type 2 diabetic patients were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). The primary endpoint was the percentage of patients in the benchmarking group achieving pre-set targets of the critical quality indicators: glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein (LDL) cholesterol after 12 months of follow-up. In Luxembourg, more patients in the benchmarking group achieved the target for SBP (40.2% vs. 20%) and for LDL cholesterol (50.4% vs. 44.2%). 12.9% of patients in the benchmarking group met all three targets, compared with 8.3% in the control group. In this randomized, controlled study, benchmarking was shown to be an effective tool for improving critical quality indicator targets, which are the principal modifiable vascular risk factors in type 2 diabetes.
How to benchmark methods for structure-based virtual screening of large compound libraries.
Christofferson, Andrew J; Huang, Niu
2012-01-01
Structure-based virtual screening is a useful computational technique for ligand discovery. To systematically evaluate different docking approaches, it is important to have a consistent benchmarking protocol that is both relevant and unbiased. Here, we describe the design of a benchmarking data set for docking screen assessment, a standard docking screening process, and the analysis and presentation of the enrichment of annotated ligands among a background decoy database.
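A common way to express the enrichment of annotated ligands over decoys is the enrichment factor at a fixed fraction of the ranked database. The sketch below computes it for synthetic docking scores; the score distributions and the 5% cutoff are assumptions for illustration only.

```python
import numpy as np

def enrichment_factor(scores, labels, fraction=0.01):
    """Enrichment factor at a given fraction of the ranked database:
    how much more often actives appear in the top-scoring fraction than
    expected by chance. Lower scores are assumed better (docking energies)."""
    order = np.argsort(scores)
    n_top = max(1, int(round(fraction * len(scores))))
    hits_top = np.sum(np.asarray(labels)[order][:n_top])
    hit_rate_top = hits_top / n_top
    hit_rate_all = np.sum(labels) / len(labels)
    return hit_rate_top / hit_rate_all

# Illustrative data: 1 = annotated ligand, 0 = decoy; scores mimic docking energies.
rng = np.random.default_rng(0)
labels = np.array([1] * 50 + [0] * 950)
scores = np.where(labels == 1, rng.normal(-9, 1, 1000), rng.normal(-6, 1, 1000))
print(enrichment_factor(scores, labels, fraction=0.05))
```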
Manktelow, Bradley N; Seaton, Sarah E; Evans, T Alun
2016-12-01
There is an increasing use of statistical methods, such as funnel plots, to identify poorly performing healthcare providers. Funnel plots comprise the construction of control limits around a benchmark, and providers with outcomes falling outside the limits are investigated as potential outliers. The benchmark is usually estimated from observed data, but uncertainty in this estimate is usually ignored when constructing control limits. In this paper, the use of funnel plots in the presence of uncertainty in the value of the benchmark is reviewed for outcomes from a Binomial distribution. Two methods to derive the control limits are shown: (i) prediction intervals; (ii) tolerance intervals. Tolerance intervals formally include the uncertainty in the value of the benchmark while prediction intervals do not. The probability properties of 95% control limits derived using each method were investigated through hypothesised scenarios. Neither prediction intervals nor tolerance intervals produce funnel plot control limits that satisfy the nominal probability characteristics when there is uncertainty in the value of the benchmark. This is not necessarily to say that funnel plots have no role to play in healthcare, but that without the development of intervals satisfying the nominal probability characteristics they must be interpreted with care.
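For orientation, the sketch below computes prediction-interval-style 95% funnel plot limits around a fixed benchmark proportion using exact binomial quantiles; it deliberately ignores uncertainty in the benchmark, which is precisely the issue the paper examines. The benchmark value and provider sizes are assumed.

```python
import numpy as np
from scipy.stats import binom

def funnel_limits(p0, n_values, alpha=0.05):
    """95% prediction-interval-style funnel plot limits for a binomial outcome
    around a fixed benchmark proportion p0 (benchmark uncertainty ignored)."""
    lower = binom.ppf(alpha / 2, n_values, p0) / n_values
    upper = binom.ppf(1 - alpha / 2, n_values, p0) / n_values
    return lower, upper

n = np.array([50, 100, 200, 400, 800])   # provider volumes (illustrative)
lo, hi = funnel_limits(p0=0.10, n_values=n)
for ni, l, h in zip(n, lo, hi):
    print(f"n={ni:4d}  limits = ({l:.3f}, {h:.3f})")  # limits narrow as n grows
```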
Benchmark Simulation Model No 2: finalisation of plant layout and default control strategy.
Nopens, I; Benedetti, L; Jeppsson, U; Pons, M-N; Alex, J; Copp, J B; Gernaey, K V; Rosen, C; Steyer, J-P; Vanrolleghem, P A
2010-01-01
The COST/IWA Benchmark Simulation Model No 1 (BSM1) has been available for almost a decade. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the research work related to the benchmark simulation models has resulted in more than 300 publications worldwide demonstrates the interest in and need of such tools within the research community. Recent efforts within the IWA Task Group on "Benchmarking of control strategies for WWTPs" have focused on an extension of the benchmark simulation model. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently, includes both pretreatment of wastewater as well as the processes describing sludge treatment. The motivation for the extension is the increasing interest and need to operate and control wastewater treatment systems not only at an individual process level but also on a plant-wide basis. To facilitate the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows for long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion in the one week BSM1 evaluation period. In this paper, the finalised plant layout is summarised and, as was done for BSM1, a default control strategy is proposed. A demonstration of how BSM2 can be used to evaluate control strategies is also given.
Internal Benchmarking for Institutional Effectiveness
ERIC Educational Resources Information Center
Ronco, Sharron L.
2012-01-01
Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…
van Lent, Wineke A M; de Beer, Relinde D; van Harten, Wim H
2010-08-31
Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCC), three chemotherapy day units (CDU) were involved in the second study, and four radiotherapy departments were included in the final study. For each multiple case study, a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to study the research objectives. We adapted and evaluated existing benchmarking processes through formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. Three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, success factors were found, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to the own setting. The improved benchmarking process and the success factors can produce relevant input to improve the operations management of specialty hospitals.
Jimenez-Del-Toro, Oscar; Muller, Henning; Krenn, Markus; Gruenberg, Katharina; Taha, Abdel Aziz; Winterstein, Marianne; Eggel, Ivan; Foncubierta-Rodriguez, Antonio; Goksel, Orcun; Jakab, Andras; Kontokotsios, Georgios; Langs, Georg; Menze, Bjoern H; Salas Fernandez, Tomas; Schaer, Roger; Walleyo, Anna; Weber, Marc-Andre; Dicente Cid, Yashin; Gass, Tobias; Heinrich, Mattias; Jia, Fucang; Kahl, Fredrik; Kechichian, Razmig; Mai, Dominic; Spanier, Assaf B; Vincent, Graham; Wang, Chunliang; Wyeth, Daniel; Hanbury, Allan
2016-11-01
Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease. Automatic tools can help automate parts of this manual process. A cloud-based evaluation framework is presented in this paper including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the cloud where participants can only access the training data and can be run privately by the benchmark administrators to objectively compare their performance in an unseen common test set. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed with automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and Silver Corpus generated with the fusion of the participant algorithms on a larger set of non-manually-annotated medical images are available to the research community.
Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking
ERIC Educational Resources Information Center
Building State Capacity and Productivity Center, 2013
2013-01-01
This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…
NASA Astrophysics Data System (ADS)
Jung, Sang-Young
Design procedures for aircraft wing structures with control surfaces are presented using multidisciplinary design optimization. Several disciplines such as stress analysis, structural vibration, aerodynamics, and controls are considered simultaneously and combined for design optimization. Vibration data and aerodynamic data including those in the transonic regime are calculated by existing codes. Flutter analyses are performed using those data. A flutter suppression method is studied using control laws in the closed-loop flutter equation. For the design optimization, optimization techniques such as approximation, design variable linking, temporary constraint deletion, and optimality criteria are used. Sensitivity derivatives of stresses and displacements for static loads, natural frequency, flutter characteristics, and control characteristics with respect to design variables are calculated for an approximate optimization. The objective function is the structural weight. The design variables are the section properties of the structural elements and the control gain factors. Existing multidisciplinary optimization codes (ASTROS* and MSC/NASTRAN) are used to perform single and multiple constraint optimizations of fully built up finite element wing structures. Three benchmark wing models are developed and/or modified for this purpose. The models are tested extensively.
Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu
2016-01-01
A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limits, that is, show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need to have a more predictable process prompted the need to control variation by an action plan. The action plan was successful, as noted by the shift in the 2014 data compared to the historical average; in addition, the variation was reduced. The model is subject to limitations: for example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
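A minimal sketch of the "check stability, then benchmark" logic (Steps 2-5) is shown below, using u-chart-style 3-sigma limits for monthly infection rates; the counts, patient-days, and external benchmark value are assumed and do not come from the cited hospital data.

```python
import numpy as np

# Monthly HAI counts and patient-days; all values illustrative.
counts = np.array([12, 9, 15, 11, 10, 14, 13, 8, 12, 11, 9, 13])
patient_days = np.array([3000, 2800, 3200, 3100, 2900, 3300,
                         3200, 2700, 3000, 3100, 2800, 3200])

rates = counts / patient_days
u_bar = counts.sum() / patient_days.sum()      # centre line (overall rate)
sigma = np.sqrt(u_bar / patient_days)          # u-chart standard error per month
ucl = u_bar + 3 * sigma
lcl = np.maximum(0, u_bar - 3 * sigma)

stable = np.all((rates <= ucl) & (rates >= lcl))
print(f"process stable (no special-cause signal): {stable}")

BENCHMARK = 0.0035  # assumed external benchmark rate per patient-day
if stable:
    print(f"mean rate {u_bar:.4f} vs benchmark {BENCHMARK:.4f}")
else:
    print("control variation first; benchmarking deferred")
```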
Creating a meaningful infection control program: one home healthcare agency's lessons.
Poff, Renee McCoy; Browning, Sarah Via
2014-03-01
Creating a meaningful infection control program in the home care setting proved to be challenging for agency leaders of one hospital-based home healthcare agency. Challenges arose when agency leaders provided infection control (IC) data to the hospital's IC Committee. The IC Section Chief asked for national benchmark comparisons to align home healthcare reporting to that of the hospital level. At that point, it was evident that the home healthcare IC program lacked definition and structure. The purpose of this article is to share how one agency built a meaningful IC program.
ERIC Educational Resources Information Center
Galloway, Melissa Ritchie
2016-01-01
The purpose of this causal comparative study was to test the theory of assessment that relates benchmark assessments to the Georgia middle grades science Criterion Referenced Competency Test (CRCT) percentages, controlling for schools who do not administer benchmark assessments versus schools who do administer benchmark assessments for all middle…
NASA Astrophysics Data System (ADS)
Goupil, Ph.; Puyou, G.
2013-12-01
This paper presents a high-fidelity generic twin engine civil aircraft model developed by Airbus for advanced flight control system research. The main features of this benchmark are described to make the reader aware of the model complexity and representativeness. It is a complete representation including the nonlinear rigid-body aircraft model with a full set of control surfaces, actuator models, sensor models, flight control laws (FCL), and pilot inputs. Two applications of this benchmark in the framework of European projects are presented: FCL clearance using optimization and advanced fault detection and diagnosis (FDD).
Online Object Tracking, Learning and Parsing with And-Or Graphs.
Wu, Tianfu; Lu, Yang; Zhu, Song-Chun
2017-12-01
This paper presents a method, called AOGTracker, for simultaneously tracking, learning and parsing (TLP) of unknown objects in video sequences with a hierarchical and compositional And-Or graph (AOG) representation. The TLP method is formulated in the Bayesian framework with spatial and temporal dynamic programming (DP) algorithms inferring object bounding boxes on-the-fly. During online learning, the AOG is discriminatively learned using latent SVM [1] to account for appearance (e.g., lighting and partial occlusion) and structural (e.g., different poses and viewpoints) variations of a tracked object, as well as distractors (e.g., similar objects) in the background. Three key issues in online inference and learning are addressed: (i) maintaining purity of positive and negative examples collected online, (ii) controlling model complexity in latent structure learning, and (iii) identifying critical moments to re-learn the structure of the AOG based on its intrackability. The intrackability measures uncertainty of an AOG based on its score maps in a frame. In experiments, our AOGTracker is tested on two popular tracking benchmarks with the same parameter setting: the TB-100/50/CVPR2013 benchmarks [3] and the VOT benchmarks [4] (VOT 2013, 2014, 2015 and TIR2015, thermal imagery tracking). In the former, our AOGTracker outperforms state-of-the-art tracking algorithms, including two trackers based on deep convolutional networks [5], [6]. In the latter, our AOGTracker outperforms all other trackers in VOT2013 and is comparable to the state-of-the-art methods in VOT2014, 2015 and TIR2015.
Adaptive unified continuum FEM modeling of a 3D FSI benchmark problem.
Jansson, Johan; Degirmenci, Niyazi Cem; Hoffman, Johan
2017-09-01
In this paper, we address a 3D fluid-structure interaction benchmark problem that represents important characteristics of biomedical modeling. We present a goal-oriented adaptive finite element methodology for incompressible fluid-structure interaction based on a streamline diffusion-type stabilization of the balance equations for mass and momentum for the entire continuum in the domain, which is implemented in the Unicorn/FEniCS software framework. A phase marker function and its corresponding transport equation are introduced to select the constitutive law, where the mesh tracks the discontinuous fluid-structure interface. This results in a unified simulation method for fluids and structures. We present detailed results for the benchmark problem compared with experiments, together with a mesh convergence study.
Li, Yang; Yang, Jianyi
2017-04-24
The prediction of protein-ligand binding affinity has recently been improved remarkably by machine-learning-based scoring functions. For example, using a set of simple descriptors representing the atomic distance counts, the RF-Score improves the Pearson correlation coefficient to about 0.8 on the core set of the PDBbind 2007 database, which is significantly higher than the performance of any conventional scoring function on the same benchmark. A few studies have discussed the performance of machine-learning-based methods, but the reason for this improvement remains unclear. In this study, by systematically controlling the structural and sequence similarity between the training and test proteins of the PDBbind benchmark, we demonstrate that protein structural and sequence similarity makes a significant impact on machine-learning-based methods. After removal of training proteins that are highly similar to the test proteins, as identified by structure alignment and sequence alignment, machine-learning-based methods trained on the new training sets no longer outperform the conventional scoring functions. On the contrary, the performance of conventional functions like X-Score is relatively stable no matter what training data are used to fit the weights of its energy terms.
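A minimal sketch of the paper's central control, removing training complexes whose protein is too similar to any test protein before fitting a machine-learning scoring function, is shown below. The similarity function, cutoff, features, and affinities are placeholders, not the study's actual data or thresholds.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def filter_training(train, test, similarity, cutoff=0.3):
    """Drop training entries whose protein is too similar to any test protein.
    `similarity` is a user-supplied function (e.g., TM-score or sequence
    identity); 0.3 is a placeholder cutoff, not the paper's exact value."""
    return [t for t in train
            if all(similarity(t["protein"], q["protein"]) < cutoff for q in test)]

# Illustrative training data: feature vectors stand in for atomic distance counts.
rng = np.random.default_rng(0)
train = [{"protein": f"P{i}", "x": rng.random(36), "y": rng.normal(6, 2)}
         for i in range(200)]
test = [{"protein": "P5", "x": rng.random(36), "y": 7.0}]

sim = lambda a, b: 1.0 if a == b else 0.0   # toy similarity: identical IDs only
kept = filter_training(train, test, sim)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(np.array([t["x"] for t in kept]), np.array([t["y"] for t in kept]))
print(len(kept), "training complexes retained")  # 199: the P5 complex was removed
```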
Lagarde, Nathalie; Zagury, Jean-François; Montes, Matthieu
2015-07-27
Virtual screening methods are commonly used nowadays in drug discovery processes. However, to ensure their reliability, they have to be carefully evaluated. The evaluation of these methods is often performed retrospectively, notably by studying the enrichment of benchmarking data sets. To this end, numerous benchmarking data sets were developed over the years, and the resulting improvements led to the availability of high-quality benchmarking data sets. However, some points still have to be considered in the selection of the active compounds, decoys, and protein structures to obtain optimal benchmarking data sets.
Benchmarking Is Associated With Improved Quality of Care in Type 2 Diabetes
Hermans, Michel P.; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos
2013-01-01
OBJECTIVE To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. RESEARCH DESIGN AND METHODS Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. RESULTS Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. Primary end point of HbA1c target was achieved in the benchmarking group by 58.9 vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% patients met the SBP target (P < 0.001); 54.3 vs. 49.7% met the LDL cholesterol target (P = 0.006). Percentages of patients meeting all three targets increased during the study in both groups, with a statistically significant increase observed in the benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%; P < 0.001). CONCLUSIONS In this prospective, randomized, controlled study, benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patient cardiovascular residual risk profile. PMID:23846810
McIlrath, Carole; Keeney, Sinead; McKenna, Hugh; McLaughlin, Derek
2010-02-01
This paper is a report of a study conducted to identify and gain consensus on appropriate benchmarks for effective primary care-based nursing services for adults with depression. Worldwide evidence suggests that between 5% and 16% of the population have a diagnosis of depression. Most of their care and treatment takes place in primary care. In recent years, primary care nurses, including community mental health nurses, have become more involved in the identification and management of patients with depression; however, there are no appropriate benchmarks to guide, develop and support their practice. In 2006, a three-round electronic Delphi survey was completed by a United Kingdom multi-professional expert panel (n = 67). Round 1 generated 1216 statements relating to structures (such as training and protocols), processes (such as access and screening) and outcomes (such as patient satisfaction and treatments). Content analysis was used to collapse statements into 140 benchmarks. Seventy-three benchmarks achieved consensus during subsequent rounds. Of these, 45 (61%) were related to structures, 18 (25%) to processes and 10 (14%) to outcomes. Multi-professional primary care staff have similar views about the appropriate benchmarks for care of adults with depression. These benchmarks could serve as a foundation for depression improvement initiatives in primary care and ongoing research into depression management by nurses.
A benchmark for fault tolerant flight control evaluation
NASA Astrophysics Data System (ADS)
Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.
2013-12-01
A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.
Tsimihodimos, Vasilis; Kostapanos, Michael S.; Moulis, Alexandros; Nikas, Nikos; Elisaf, Moses S.
2015-01-01
Objectives: To investigate the effect of benchmarking on the quality of type 2 diabetes (T2DM) care in Greece. Methods: The OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study [ClinicalTrials.gov identifier: NCT00681850] was an international multicenter, prospective cohort study. It included physicians randomized 3:1 to either receive benchmarking for glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein cholesterol (LDL-C) treatment targets (benchmarking group) or not (control group). The proportions of patients achieving the targets of the above-mentioned parameters were compared between groups after 12 months of treatment. Also, the proportions of patients achieving those targets at 12 months were compared with baseline in the benchmarking group. Results: In the Greek region, the OPTIMISE study included 797 adults with T2DM (570 in the benchmarking group). At month 12 the proportion of patients within the predefined targets for SBP and LDL-C was greater in the benchmarking compared with the control group (50.6 versus 35.8%, and 45.3 versus 36.1%, respectively). However, these differences were not statistically significant. No difference between groups was noted in the percentage of patients achieving the predefined target for HbA1c. At month 12 the increase in the percentage of patients achieving all three targets was greater in the benchmarking (5.9–15.0%) than in the control group (2.7–8.1%). In the benchmarking group more patients were on target regarding SBP (50.6% versus 29.8%), LDL-C (45.3% versus 31.3%) and HbA1c (63.8% versus 51.2%) at 12 months compared with baseline (p < 0.001 for all comparisons). Conclusion: Benchmarking may comprise a promising tool for improving the quality of T2DM care. Nevertheless, target achievement rates of each, and of all three, quality indicators were suboptimal, indicating there are still unmet needs in the management of T2DM. PMID:26445642
Statistics based sampling for controller and estimator design
NASA Astrophysics Data System (ADS)
Tenne, Dirk
The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation is threefold, addressing nonlinear estimation, target tracking, and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher-order accuracy. The so-called unscented transformation has been extended to capture higher-order moments. Furthermore, higher-order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. Furthermore, the unscented transformation has been utilized to develop a three-dimensional geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three-dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant velocity filter by utilizing the Covariance Intersection. This combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight-line maneuvers. The third part of this dissertation addresses the design of controllers that incorporate knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points which are calculated by the unscented transformation. This set of points is used to design robust controllers that minimize a statistical performance measure of the plant over the domain of uncertainty, consisting of a combination of the mean and variance. The proposed technique is illustrated on three benchmark problems. The first relates to the design of prefilters for a linear and a nonlinear spring-mass-dashpot system, and the second applies a feedback controller to a hovering helicopter. Lastly, the statistical robust controller design is applied to a concurrent feed-forward/feedback controller structure for a high-speed, low-tension tape drive.
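For readers unfamiliar with the baseline technique being extended, the following is a minimal sketch of the standard (second-order) unscented transformation; the higher-order moment extensions developed in the dissertation are not shown, and the example system and kappa value are arbitrary assumptions.

```python
import numpy as np

def unscented_transform(mean, cov, func, kappa=0.0):
    """Propagate a mean/covariance pair through a nonlinearity via sigma points
    (standard second-order unscented transform; higher-order extensions not shown)."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)          # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 sigma points
    w = np.full(2 * n + 1, 0.5 / (n + kappa))
    w[0] = kappa / (n + kappa)
    y = np.array([func(p) for p in sigma])             # propagate each point
    y_mean = w @ y
    y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

# Arbitrary example: a Gaussian pushed through a polar-to-Cartesian map.
m, P = np.array([1.0, 0.5]), np.diag([0.1, 0.2])
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
print(unscented_transform(m, P, f, kappa=1.0))
```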
Benchmarking Ada tasking on tightly coupled multiprocessor architectures
NASA Technical Reports Server (NTRS)
Collard, Philippe; Goforth, Andre; Marquardt, Matthew
1989-01-01
The development of benchmarks and performance measures for parallel Ada tasking is reported with emphasis on the macroscopic behavior of the benchmark across a set of load parameters. The application chosen for the study was the NASREM model for telerobot control, relevant to many NASA missions. The results of the study demonstrate the potential of parallel Ada in accomplishing the task of developing a control system for a system such as the Flight Telerobotic Servicer using the NASREM framework.
2018-01-01
Selective digestive decontamination (SDD, topical antibiotic regimens applied to the respiratory tract) appears effective for preventing ventilator associated pneumonia (VAP) in intensive care unit (ICU) patients. However, potential contextual effects of SDD on Staphylococcus aureus infections in the ICU remain unclear. The S. aureus ventilator associated pneumonia (S. aureus VAP), VAP overall and S. aureus bacteremia incidences within component (control and intervention) groups within 27 SDD studies were benchmarked against 115 observational groups. Component groups from 66 studies of various interventions other than SDD provided additional points of reference. In 27 SDD study control groups, the mean S. aureus VAP incidence is 9.6% (95% CI; 6.9–13.2) versus a benchmark derived from 115 observational groups being 4.8% (95% CI; 4.2–5.6). In nine SDD study control groups the mean S. aureus bacteremia incidence is 3.8% (95% CI; 2.1–5.7) versus a benchmark derived from 10 observational groups being 2.1% (95% CI; 1.1–4.1). The incidences of S. aureus VAP and S. aureus bacteremia within the control groups of SDD studies are each higher than literature derived benchmarks. Paradoxically, within the SDD intervention groups, the incidences of both S. aureus VAP and VAP overall are more similar to the benchmarks. PMID:29300363
Benchmark Design and Installation: A synthesis of Existing Information.
1987-07-01
Recommended benchmark types include casings (15 ft deep) drilled to rock and filled with concrete; disks set on vertically stable structures (e.g., dam monoliths) or set in rock; and, for structural movement surveys, chiseled squares on high points of rock outcrops (first choice), cut squares on massive concrete structures (second choice), or bolt markers (type 2). Table C1 (Recommended benchmarks) lists the recommended marker type for each type of condition or terrain (e.g., bedrock, rock outcrops).
Hurley, J C
2018-04-10
Regimens containing topical polymyxin appear to be more effective in preventing ventilator-associated pneumonia (VAP) than other methods. The aim was to benchmark the incidence rates of Acinetobacter-associated VAP (AAVAP) within component (control and intervention) groups from concurrent controlled studies of polymyxin, compared with studies of various VAP prevention methods other than polymyxin (non-polymyxin studies). An AAVAP benchmark was derived using data from 77 observational groups without any VAP prevention method under study. Data from 41 non-polymyxin studies provided additional points of reference. The benchmarking was undertaken by meta-regression using generalized estimating equation methods. Within 20 studies of topical polymyxin, the mean AAVAP was 4.6% [95% confidence interval (CI) 3.0-6.9] and 3.7% (95% CI 2.0-5.3) for control and intervention groups, respectively. In contrast, the AAVAP benchmark was 1.5% (95% CI 1.2-2.0). In the AAVAP meta-regression model, group origin from a trauma intensive care unit (+0.55; +0.16 to +0.94, P = 0.006) or membership of a polymyxin control group (+0.64; +0.21 to +1.31, P = 0.023), but not membership of a polymyxin intervention group (+0.24; -0.37 to +0.84, P = 0.45), were significant positive correlates. The mean incidence of AAVAP within the control groups of studies of topical polymyxin is more than double the benchmark, whereas the incidence rates within the groups of non-polymyxin studies and, paradoxically, polymyxin intervention groups are more similar to the benchmark. These incidence rates, which are paradoxical in the context of an apparent effect against VAP within controlled trials of topical polymyxin-based interventions, force a re-appraisal. Copyright © 2018 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
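The study's benchmarking uses meta-regression with generalized estimating equations; as a much simpler stand-in, the sketch below pools hypothetical group-level incidences on the logit scale and returns an approximate 95% CI, which could then be compared against a literature-derived benchmark. All counts are invented for illustration.

```python
import numpy as np

def pooled_incidence(events, totals):
    """Crude logit-scale pooling of group-level incidence proportions with an
    approximate 95% CI (a simplified stand-in for the GEE meta-regression)."""
    p = (np.asarray(events) + 0.5) / (np.asarray(totals) + 1.0)  # continuity correction
    logit = np.log(p / (1 - p))
    m, se = logit.mean(), logit.std(ddof=1) / np.sqrt(len(logit))
    inv = lambda x: 1.0 / (1.0 + np.exp(-x))
    return inv(m), inv(m - 1.96 * se), inv(m + 1.96 * se)

# Invented group-level counts: study control groups vs. observational benchmark groups.
print("control groups  :", pooled_incidence([5, 7, 3, 9], [110, 140, 95, 180]))
print("benchmark groups:", pooled_incidence([2, 1, 3, 2, 4], [150, 120, 210, 160, 240]))
```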
NASA Astrophysics Data System (ADS)
Lee, Yi-Kang
2017-09-01
Nuclear decommissioning takes place in several stages because of the radioactivity in the reactor structural materials. A good estimate of the neutron activation products distributed in the reactor structural materials has an obvious impact on decommissioning planning and low-level radioactive waste management. The continuous-energy Monte Carlo radiation transport code TRIPOLI-4 has been applied to radiation protection and shielding analyses. To extend the use of TRIPOLI-4 to nuclear decommissioning activities, both experimental and computational benchmarks are being performed. To calculate the neutron activation of the shielding and structural materials of nuclear facilities, the 3D neutron flux map and energy spectra must first be determined. To perform this type of deep-penetration neutron calculation with a Monte Carlo transport code, variance reduction techniques are necessary to reduce the uncertainty of the neutron activation estimates. In this study, variance reduction options of the TRIPOLI-4 code were used on the NAIADE 1 light-water shielding benchmark. The benchmark documentation is available from the OECD/NEA SINBAD shielding benchmark database. From this benchmark, a simplified NAIADE 1 water shielding model was first proposed in this work to make the code validation easier. Fission neutron transport was determined in light water for penetrations up to 50 cm for fast neutrons and up to about 180 cm for thermal neutrons. Measurement and calculation results were benchmarked, and the variance reduction options and their performance were discussed and compared.
NASA Technical Reports Server (NTRS)
Mason, Gregory S.; Berg, Martin C.; Mukhopadhyay, Vivek
2002-01-01
To study the effectiveness of various control system design methodologies, the NASA Langley Research Center initiated the Benchmark Active Controls Project. In this project, the various methodologies were applied to design a flutter suppression system for the Benchmark Active Controls Technology (BACT) Wing. This report describes a project at the University of Washington to design a multirate flutter suppression system for the BACT wing. The objective of the project was twofold: first, to develop a methodology for designing robust multirate compensators, and second, to demonstrate the methodology by applying it to the design of a multirate flutter suppression system for the BACT wing.
Closed-Loop Neuromorphic Benchmarks
Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris
2015-01-01
Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
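A toy closed-loop sketch in the spirit of the benchmarks described above (not the authors' neuromorphic implementation): a one-joint "arm" modeled as a unit-mass double integrator is driven by a PD controller plus an error-driven adaptive bias that gradually learns to cancel an unknown external force. All gains and the disturbance value are assumptions.

```python
import numpy as np

dt, steps = 0.001, 20000                 # 20 s of simulated time
target, q, dq = 1.0, 0.0, 0.0            # reach the target position from rest
unknown_force = -2.0                     # external force the controller must learn to cancel
kp, kd, eta = 25.0, 8.0, 5.0             # PD gains and adaptation rate (assumed values)
bias = 0.0                               # adaptive term driven by the error signal

for _ in range(steps):
    err = target - q
    u = kp * err - kd * dq + bias        # PD control plus the learned bias
    bias += eta * err * dt               # error-driven update (integral-like learning)
    ddq = u + unknown_force              # unit-mass plant with unknown external force
    dq += ddq * dt
    q += dq * dt

print(f"final position {q:.3f} (target {target}), learned bias {bias:.2f}")
```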
Cimino, James J
2006-06-01
A 1998 paper that delineated desirable characteristics, or desiderata for controlled medical terminologies attempted to summarize emerging consensus regarding structural issues of such terminologies. Among the Desiderata was a call for terminologies to be "concept oriented." Since then, research has trended toward the extension of terminologies into ontologies. A paper by Smith, entitled "From Concepts to Clinical Reality: An Essay on the Benchmarking of Biomedical Terminologies" urges a realist approach that seeks terminologies composed of universals, rather than concepts. The current paper addresses issues raised by Smith and attempts to extend the Desiderata, not away from concepts, but towards recognition that concepts and universals must both be embraced and can coexist peaceably in controlled terminologies. To that end, additional Desiderata are defined that deal with the purpose, rather than the structure, of controlled medical terminologies.
Bauer, Matthias R; Ibrahim, Tamer M; Vogel, Simon M; Boeckler, Frank M
2013-06-24
The application of molecular benchmarking sets helps to assess the actual performance of virtual screening (VS) workflows. To improve the efficiency of structure-based VS approaches, the selection and optimization of various parameters can be guided by benchmarking. With the DEKOIS 2.0 library, we aim to further extend and complement the collection of publicly available decoy sets. Based on BindingDB bioactivity data, we provide 81 new and structurally diverse benchmark sets for a wide variety of different target classes. To ensure a meaningful selection of ligands, we address several issues that can be found in bioactivity data. We have improved our previously introduced DEKOIS methodology with enhanced physicochemical matching, now including the consideration of molecular charges, as well as a more sophisticated elimination of latent actives in the decoy set (LADS). We evaluate the docking performance of Glide, GOLD, and AutoDock Vina with our data sets and highlight existing challenges for VS tools. All DEKOIS 2.0 benchmark sets will be made accessible at http://www.dekois.com.
Benchmarking an Unstructured-Grid Model for Tsunami Current Modeling
NASA Astrophysics Data System (ADS)
Zhang, Yinglong J.; Priest, George; Allan, Jonathan; Stimely, Laura
2016-12-01
We present model results derived from a tsunami current benchmarking workshop held by the NTHMP (National Tsunami Hazard Mitigation Program) in February 2015. Modeling was undertaken using our own 3D unstructured-grid model that has been previously certified by the NTHMP for tsunami inundation. Results for two benchmark tests are described here, including: (1) vortex structure in the wake of a submerged shoal and (2) impact of tsunami waves on Hilo Harbor in the 2011 Tohoku event. The modeled current velocities are compared with available lab and field data. We demonstrate that the model is able to accurately capture the velocity field in the two benchmark tests; in particular, the 3D model gives a much more accurate wake structure than the 2D model for the first test, with the root-mean-square error and mean bias no more than 2 cm/s and 8 mm/s, respectively, for the modeled velocity.
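The two comparison metrics quoted above are straightforward to compute; a small sketch with hypothetical velocity samples follows.

```python
import numpy as np

def rmse_and_bias(modeled, observed):
    """Root-mean-square error and mean bias between modeled and observed
    current velocities (both in the same units, e.g. m/s)."""
    d = np.asarray(modeled, float) - np.asarray(observed, float)
    return np.sqrt(np.mean(d ** 2)), np.mean(d)

# Hypothetical velocity samples (m/s) at one gauge.
model = [0.12, 0.30, 0.25, 0.18]
obs = [0.10, 0.33, 0.22, 0.20]
print(rmse_and_bias(model, obs))
```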
Outcome Benchmarks for Adaptations of Research-Supported Treatments for Adult Traumatic Stress
ERIC Educational Resources Information Center
Rubin, Allen; Parrish, Danielle E.; Washburn, Micki
2016-01-01
This article provides benchmark data on within-group effect sizes from published randomized controlled trials (RCTs) that evaluated the efficacy of research-supported treatments (RSTs) for adult traumatic stress. Agencies can compare these benchmarks to their treatment group effect size to inform their decisions as to whether the way they are…
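One common way to compute the within-group (uncontrolled pre-post) effect size that such benchmarks are based on is the standardized mean difference shown below; the exact formula used in the article may differ, and the scores are invented.

```python
import numpy as np

def within_group_effect_size(pre, post):
    """Uncontrolled (within-group) standardized mean difference:
    (mean post - mean pre) / SD of the pre-treatment scores."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return (post.mean() - pre.mean()) / pre.std(ddof=1)

# Invented symptom scores before and after treatment (lower is better),
# so a negative d indicates improvement.
pre = [62, 70, 55, 68, 61, 66]
post = [40, 52, 38, 47, 44, 50]
print(f"within-group d = {within_group_effect_size(pre, post):.2f}")
```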
Method and system for benchmarking computers
Gustafson, John L.
1993-09-14
A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
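A sketch of the patented idea rather than the patented implementation: tasks from a scalable set are executed for a fixed benchmarking interval, and the benchmark rating reflects how far through the set the computer progressed. The integration tasks are an arbitrary example workload.

```python
import time

def run_benchmark(scalable_tasks, interval_seconds):
    """Perform tasks from a scalable set for a fixed benchmarking interval and
    report the degree of progress reached when the interval expires."""
    completed = 0
    deadline = time.monotonic() + interval_seconds
    for task in scalable_tasks:
        if time.monotonic() >= deadline:
            break
        task()
        completed += 1
    return completed  # more progress in the same interval -> better rating

# Example workload: numerical integration of x**2 on [0, 1] at ever finer resolution.
def make_task(n):
    return lambda: sum((i / n) ** 2 for i in range(n)) / n

tasks = [make_task(2 ** k) for k in range(8, 26)]
print("progress level reached:", run_benchmark(tasks, interval_seconds=1.0))
```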
NASA Astrophysics Data System (ADS)
Chróścielewski, Jacek; Schmidt, Rüdiger; Eremeyev, Victor A.
2018-05-01
This paper addresses modeling and finite element analysis of the transient large-amplitude vibration response of thin rod-type structures (e.g., plane curved beams, arches, ring shells) and its control by integrated piezoelectric layers. A geometrically nonlinear finite beam element for the analysis of piezolaminated structures is developed that is based on the Bernoulli hypothesis and the assumptions of small strains and finite rotations of the normal. The finite element model can be applied to static, stability, and transient analysis of smart structures consisting of a master structure and integrated piezoelectric actuator layers or patches attached to the upper and lower surfaces. Two problems are studied extensively: (i) FE analyses of a clamped semicircular ring shell that has been used as a benchmark problem for linear vibration control in several recent papers are critically reviewed and extended to account for the effects of structural nonlinearity and (ii) a smart circular arch subjected to a hydrostatic pressure load is investigated statically and dynamically in order to study the shift of bifurcation and limit points, eigenfrequencies, and eigenvectors, as well as vibration control for loading conditions which may lead to dynamic loss of stability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Will, M.E.
1994-01-01
This report presents a standard method for deriving benchmarks for the purpose of 'contaminant screening', performed by comparing measured ambient concentrations of chemicals with the derived benchmark values. The work was performed under Work Breakdown Structure 1.4.12.2.3.04.07.02 (Activity Data Sheet 8304). In addition, this report presents sets of data concerning the effects of chemicals in soil on invertebrates and soil microbial processes, benchmarks for chemicals potentially associated with United States Department of Energy sites, and literature describing the experiments from which the data were drawn for benchmark derivation.
Benchmarking methods and data sets for ligand enrichment assessment in virtual screening.
Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon
2015-01-01
Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. "analogue bias", "artificial enrichment" and "false negative". In addition, we introduce our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylases (HDACs) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The leave-one-out cross-validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased as measured by property matching, ROC curves and AUCs. Copyright © 2014 Elsevier Inc. All rights reserved.
Benchmarking Methods and Data Sets for Ligand Enrichment Assessment in Virtual Screening
Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon
2014-01-01
Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. “analogue bias”, “artificial enrichment” and “false negative”. In addition, we introduced our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylase (HDAC) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The Leave-One-Out Cross-Validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased in terms of property matching, ROC curves and AUCs. PMID:25481478
Experimental physics characteristics of a heavy-metal-reflected fast-spectrum critical assembly
NASA Technical Reports Server (NTRS)
Heneveld, W. H.; Paschall, R. K.; Springer, T. H.; Swanson, V. A.; Thiele, A. W.; Tuttle, R. J.
1971-01-01
A zero-power critical assembly was designed, constructed, and operated for the purpose of conducting a series of benchmark experiments dealing with the physics characteristics of a UN-fueled, Li-7 cooled, Mo-reflected, drum-controlled compact fast reactor for use with a space-power electric conversion system. The experimental program consisted basically of measuring the differential neutron spectra and the changes in critical mass that accompanied the stepwise addition of (Li-7)3N, Hf, Ta, and W to a basic core fueled with U metal in a pin-type Ta honeycomb structure. In addition, experimental results were obtained on power distributions, control characteristics, neutron lifetime, and reactivity worths of numerous absorber, structural, and scattering materials.
A benchmarking method to measure dietary absorption efficiency of chemicals by fish.
Xiao, Ruiyang; Adolfsson-Erici, Margaretha; Åkerman, Gun; McLachlan, Michael S; MacLeod, Matthew
2013-12-01
Understanding the dietary absorption efficiency of chemicals in the gastrointestinal tract of fish is important from both a scientific and a regulatory point of view. However, reported fish absorption efficiencies for well-studied chemicals are highly variable. In the present study, the authors developed and exploited an internal chemical benchmarking method that has the potential to reduce uncertainty and variability and, thus, to improve the precision of measurements of fish absorption efficiency. The authors applied the benchmarking method to measure the gross absorption efficiency for 15 chemicals with a wide range of physicochemical properties and structures. They selected 2,2',5,6'-tetrachlorobiphenyl (PCB53) and decabromodiphenyl ethane as absorbable and nonabsorbable benchmarks, respectively. Quantities of chemicals determined in fish were benchmarked to the fraction of PCB53 recovered in fish, and quantities of chemicals determined in feces were benchmarked to the fraction of decabromodiphenyl ethane recovered in feces. The performance of the benchmarking procedure was evaluated based on the recovery of the test chemicals and precision of absorption efficiency from repeated tests. Benchmarking did not improve the precision of the measurements; after benchmarking, however, the median recovery for 15 chemicals was 106%, and variability of recoveries was reduced compared with before benchmarking, suggesting that benchmarking could account for incomplete extraction of chemical in fish and incomplete collection of feces from different tests. © 2013 SETAC.
2016-11-01
This report covers an introduction, background, and the Yahoo! Cloud Serving Benchmark (YCSB), a freely available data loading and performance testing framework that was adopted for performance testing of the transactional system under study.
Using a health promotion model to promote benchmarking.
Welby, Jane
2006-07-01
The North East (England) Neonatal Benchmarking Group has been established for almost a decade and has researched and developed a substantial number of evidence-based benchmarks. With no firm evidence that these were being used or that there was any standardisation of neonatal care throughout the region, the group embarked on a programme to review the benchmarks and determine what evidence-based guidelines were needed to support standardisation. A health promotion planning model was used by one subgroup to structure the programme; it enabled all members of the sub group to engage in the review process and provided the motivation and supporting documentation for implementation of changes in practice. The need for a regional guideline development group to complement the activity of the benchmarking group is being addressed.
Kohn-Sham Band Structure Benchmark Including Spin-Orbit Coupling for 2D and 3D Solids
NASA Astrophysics Data System (ADS)
Huhn, William; Blum, Volker
2015-03-01
Accurate electronic band structures serve as a primary indicator of the suitability of a material for a given application, e.g., as electronic or catalytic materials. Computed band structures, however, are subject to a host of approximations, some of which are more obvious (e.g., the treatment of exchange-correlation or the self-energy) and others less obvious (e.g., the treatment of core, semicore, or valence electrons, the handling of relativistic effects, or the accuracy of the underlying basis set used). We here provide a set of accurate Kohn-Sham band structure benchmarks, using the numeric atom-centered all-electron electronic structure code FHI-aims combined with the "traditional" PBE functional and the hybrid HSE functional, to calculate core, valence, and low-lying conduction bands of a set of 2D and 3D materials. Benchmarks are provided with and without the effects of spin-orbit coupling, using quasi-degenerate perturbation theory to predict spin-orbit splittings. This work is funded by Fritz-Haber-Institut der Max-Planck-Gesellschaft.
Nobels, Frank; Debacker, Noëmi; Brotons, Carlos; Elisaf, Moses; Hermans, Michel P; Michel, Georges; Muls, Erik
2011-09-22
To investigate the effect of physician- and patient-specific feedback with benchmarking on the quality of care in adults with type 2 diabetes mellitus (T2DM). Study centres in six European countries were randomised to either a benchmarking or control group. Physicians in both groups received feedback on modifiable outcome indicators (glycated haemoglobin [HbA1c], glycaemia, total cholesterol, high density lipoprotein-cholesterol, low density lipoprotein [LDL]-cholesterol and triglycerides) for each patient at 0, 4, 8 and 12 months, based on the four times yearly control visits recommended by international guidelines. The benchmarking group also received comparative results on three critical quality indicators of vascular risk (HbA1c, LDL-cholesterol and systolic blood pressure [SBP]), checked against the results of their colleagues from the same country, and versus pre-set targets. After 12 months of follow up, the percentage of patients achieving the pre-determined targets for the three critical quality indicators will be assessed in the two groups. Recruitment was completed in December 2008 with 3994 evaluable patients. This paper discusses the study rationale and design of OPTIMISE, a randomised controlled study, that will help assess whether benchmarking is a useful clinical tool for improving outcomes in T2DM in primary care. NCT00681850.
2011-01-01
Background To investigate the effect of physician- and patient-specific feedback with benchmarking on the quality of care in adults with type 2 diabetes mellitus (T2DM). Methods Study centres in six European countries were randomised to either a benchmarking or control group. Physicians in both groups received feedback on modifiable outcome indicators (glycated haemoglobin [HbA1c], glycaemia, total cholesterol, high density lipoprotein-cholesterol, low density lipoprotein [LDL]-cholesterol and triglycerides) for each patient at 0, 4, 8 and 12 months, based on the four times yearly control visits recommended by international guidelines. The benchmarking group also received comparative results on three critical quality indicators of vascular risk (HbA1c, LDL-cholesterol and systolic blood pressure [SBP]), checked against the results of their colleagues from the same country, and versus pre-set targets. After 12 months of follow up, the percentage of patients achieving the pre-determined targets for the three critical quality indicators will be assessed in the two groups. Results Recruitment was completed in December 2008 with 3994 evaluable patients. Conclusions This paper discusses the study rationale and design of OPTIMISE, a randomised controlled study, that will help assess whether benchmarking is a useful clinical tool for improving outcomes in T2DM in primary care. Trial registration NCT00681850 PMID:21939502
Development of risk-based nanomaterial groups for occupational exposure control
NASA Astrophysics Data System (ADS)
Kuempel, E. D.; Castranova, V.; Geraci, C. L.; Schulte, P. A.
2012-09-01
Given the almost limitless variety of nanomaterials, it will be virtually impossible to assess the possible occupational health hazard of each nanomaterial individually. The development of science-based hazard and risk categories for nanomaterials is needed for decision-making about exposure control practices in the workplace. A possible strategy would be to select representative (benchmark) materials from various mode of action (MOA) classes, evaluate the hazard and develop risk estimates, and then apply a systematic comparison of new nanomaterials with the benchmark materials in the same MOA class. Poorly soluble particles are used here as an example to illustrate quantitative risk assessment methods for possible benchmark particles and occupational exposure control groups, given mode of action and relative toxicity. Linking such benchmark particles to specific exposure control bands would facilitate the translation of health hazard and quantitative risk information to the development of effective exposure control practices in the workplace. A key challenge is obtaining sufficient dose-response data, based on standard testing, to systematically evaluate the nanomaterials' physical-chemical factors influencing their biological activity. Categorization processes involve both science-based analyses and default assumptions in the absence of substance-specific information. Utilizing data and information from related materials may facilitate initial determinations of exposure control systems for nanomaterials.
Benchmark matrix and guide: Part II.
1991-01-01
In the last issue of the Journal of Quality Assurance (September/October 1991, Volume 13, Number 5, pp. 14-19), the benchmark matrix developed by Headquarters Air Force Logistics Command was published. Five horizontal levels on the matrix delineate progress in TQM: business as usual, initiation, implementation, expansion, and integration. The six vertical categories that are critical to the success of TQM are leadership, structure, training, recognition, process improvement, and customer focus. In this issue, "Benchmark Matrix and Guide: Part II" will show specifically how to apply the categories of leadership, structure, and training to the benchmark matrix progress levels. At the intersection of each category and level, specific behavior objectives are listed with supporting behaviors and guidelines. Some categories will have objectives that are relatively easy to accomplish, allowing quick progress from one level to the next. Other categories will take considerable time and effort to complete. In the next issue, Part III of this series will focus on recognition, process improvement, and customer focus.
Benchmarking: applications to transfusion medicine.
Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M
2012-10-01
Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.
A Flow Solver for Three-Dimensional DRAGON Grids
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Zheng, Yao
2002-01-01
The DRAGONFLOW code has been developed to solve the three-dimensional Navier-Stokes equations over complex geometries whose flow domains are discretized with the DRAGON grid, a combination of a Chimera grid and a collection of unstructured grids. In the DRAGONFLOW suite, both OVERFLOW and USM3D are provided in the form of module libraries, and a master module controls the invocation of these individual modules. This report covers the essential aspects, programming structures, benchmark tests, and numerical simulations.
Issues in Benchmarking and Assessing Institutional Engagement
ERIC Educational Resources Information Center
Furco, Andrew; Miller, William
2009-01-01
The process of assessing and benchmarking community engagement can take many forms. To date, more than two dozen assessment tools for measuring community engagement institutionalization have been published. These tools vary substantially in purpose, level of complexity, scope, process, structure, and focus. While some instruments are designed to…
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1986-01-01
An assessment is made of the potential of different global-local analysis strategies for predicting the nonlinear and postbuckling responses of structures. Two postbuckling problems of composite panels are used as benchmarks and the application of different global-local methodologies to these benchmarks is outlined. The key elements of each of the global-local strategies are discussed and future research areas needed to realize the full potential of global-local methodologies are identified.
WWTP dynamic disturbance modelling--an essential module for long-term benchmarking development.
Gernaey, K V; Rosen, C; Jeppsson, U
2006-01-01
Intensive use of the benchmark simulation model No. 1 (BSM1), a protocol for objective comparison of the effectiveness of control strategies in biological nitrogen removal activated sludge plants, has also revealed a number of limitations. Preliminary definitions of the long-term benchmark simulation model No. 1 (BSM1_LT) and the benchmark simulation model No. 2 (BSM2) have been made to extend BSM1 for evaluation of process monitoring methods and plant-wide control strategies, respectively. Influent-related disturbances for BSM1_LT/BSM2 are to be generated with a model, and this paper provides a general overview of the modelling methods used. Typical influent dynamic phenomena generated with the BSM1_LT/BSM2 influent disturbance model, including diurnal, weekend, seasonal and holiday effects, as well as rainfall, are illustrated with simulation results. As a result of the work described in this paper, a proposed influent model/file has been released to the benchmark developers for evaluation purposes. Pending this evaluation, a final BSM1_LT/BSM2 influent disturbance model definition is foreseen. Preliminary simulations with dynamic influent data generated by the influent disturbance model indicate that default BSM1 activated sludge plant control strategies will need extensions for BSM1_LT/BSM2 to efficiently handle 1 year of influent dynamics.
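A highly simplified illustration of model-generated influent dynamics of the kind described above: a diurnal sinusoid, a weekend reduction, and random noise superimposed on an average dry-weather flow. The average flow and the amplitudes are assumptions, not the BSM1_LT/BSM2 influent model parameters.

```python
import numpy as np

def synthetic_influent_flow(days=7, dt_hours=0.25, q_avg=20000.0):
    """Toy influent flow generator: average dry-weather flow (m3/d, assumed)
    modulated by a 24 h diurnal component, a weekend reduction and noise."""
    t = np.arange(0, 24 * days, dt_hours)                 # time in hours
    diurnal = 0.25 * np.sin(2 * np.pi * (t - 6) / 24.0)   # peak around midday
    weekend = np.where((t // 24) % 7 >= 5, -0.1, 0.0)     # assumes day 0 is a Monday
    noise = 0.05 * np.random.default_rng(1).standard_normal(t.size)
    return t, q_avg * (1.0 + diurnal + weekend + noise)

t, q = synthetic_influent_flow()
print(f"flow range {q.min():.0f}-{q.max():.0f} m3/d, mean {q.mean():.0f} m3/d")
```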
XWeB: The XML Warehouse Benchmark
NASA Astrophysics Data System (ADS)
Mahboubi, Hadj; Darmont, Jérôme
With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision-support workload. XWeB's usage is illustrated by experiments on several XML database management systems.
Robust on-off pulse control of flexible space vehicles
NASA Technical Reports Server (NTRS)
Wie, Bong; Sinha, Ravi
1993-01-01
The on-off reaction jet control system is often used for attitude and orbital maneuvering of various spacecraft. Future space vehicles such as orbital transfer vehicles, orbital maneuvering vehicles, and the space station will extensively use reaction jets for orbital maneuvering and attitude stabilization. The proposed robust fuel- and time-optimal control algorithm is applied to a three-mass spring model of a flexible spacecraft. A fuel-efficient on-off control logic is developed for robust rest-to-rest maneuvers of a flexible vehicle with minimum excitation of structural modes. The first part of this report is concerned with the problem of selecting a proper pair of jets for practical trade-offs among the maneuvering time, fuel consumption, structural mode excitation, and performance robustness. A time-optimal control problem subject to parameter robustness constraints is formulated and solved. The second part of this report deals with obtaining parameter-insensitive fuel- and time-optimal control inputs by solving a constrained optimization problem subject to robustness constraints. It is shown that sensitivity to modeling errors can be significantly reduced by the proposed, robustified open-loop control approach. The final part of this report deals with sliding mode control design for uncertain flexible structures. The flexible structure benchmark problem is used as an example for feedback sliding mode controller design with bounded control inputs, and robustness to parameter variations is investigated.
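As a point of reference for the on-off control logic discussed above, the sketch below simulates a plain bang-off-bang rest-to-rest maneuver on a rigid double integrator; the report's three-mass flexible model and robustified fuel/time-optimal profiles are considerably more elaborate. The slew angle and thrust level are arbitrary.

```python
import numpy as np

theta_f, u_max = 1.0, 0.1            # slew angle (rad) and on-off acceleration level (rad/s^2)
t_burn = np.sqrt(theta_f / u_max)    # switch time of the time-optimal (bang-bang) profile

dt = 1e-3
t = np.arange(0.0, 2.0 * t_burn + dt, dt)
u = np.where(t < t_burn, u_max, -u_max)  # accelerate, then decelerate
u[t > 2.0 * t_burn] = 0.0                # jets off after the maneuver

theta, omega = 0.0, 0.0
for uk in u:                             # simple Euler integration of a rigid body
    omega += uk * dt
    theta += omega * dt

print(f"final angle {theta:.4f} rad (target {theta_f}), final rate {omega:.2e} rad/s")
```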
A benchmark testing ground for integrating homology modeling and protein docking.
Bohnuud, Tanggis; Luo, Lingqi; Wodak, Shoshana J; Bonvin, Alexandre M J J; Weng, Zhiping; Vajda, Sandor; Schueler-Furman, Ora; Kozakov, Dima
2017-01-01
Protein docking procedures carry out the task of predicting the structure of a protein-protein complex starting from the known structures of the individual protein components. More often than not, however, the structure of one or both components is not known, but can be derived by homology modeling on the basis of known structures of related proteins deposited in the Protein Data Bank (PDB). Thus, the problem is to develop methods that optimally integrate homology modeling and docking with the goal of predicting the structure of a complex directly from the amino acid sequences of its component proteins. One possibility is to use the best available homology modeling and docking methods. However, the models built for the individual subunits often differ to a significant degree from the bound conformation in the complex, often much more so than the differences observed between free and bound structures of the same protein, and therefore additional conformational adjustments, both at the backbone and side chain levels need to be modeled to achieve an accurate docking prediction. In particular, even homology models of overall good accuracy frequently include localized errors that unfavorably impact docking results. The predicted reliability of the different regions in the model can also serve as a useful input for the docking calculations. Here we present a benchmark dataset that should help to explore and solve combined modeling and docking problems. This dataset comprises a subset of the experimentally solved 'target' complexes from the widely used Docking Benchmark from the Weng Lab (excluding antibody-antigen complexes). This subset is extended to include the structures from the PDB related to those of the individual components of each complex, and hence represent potential templates for investigating and benchmarking integrated homology modeling and docking approaches. Template sets can be dynamically customized by specifying ranges in sequence similarity and in PDB release dates, or using other filtering options, such as excluding sets of specific structures from the template list. Multiple sequence alignments, as well as structural alignments of the templates to their corresponding subunits in the target are also provided. The resource is accessible online or can be downloaded at http://cluspro.org/benchmark, and is updated on a weekly basis in synchrony with new PDB releases. Proteins 2016; 85:10-16. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
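Template-set customization of the kind described (capping sequence similarity, restricting PDB release dates, excluding specific structures) reduces to a simple filter; the sketch below uses invented metadata and is not the resource's own implementation.

```python
from datetime import date

# Invented metadata for candidate templates of one subunit; in practice these
# values would come from the benchmark's alignments and PDB records.
templates = [
    {"pdb": "1abcA", "seq_id": 0.92, "released": date(2003, 5, 1)},
    {"pdb": "2defB", "seq_id": 0.41, "released": date(2010, 7, 15)},
    {"pdb": "3ghiA", "seq_id": 0.27, "released": date(2014, 2, 3)},
]

def select_templates(entries, max_seq_id=0.70, before=date(2012, 1, 1), exclude=()):
    """Keep templates below a sequence-identity cap, released before a cutoff
    date, and not on an exclusion list (illustrative filter only)."""
    return [e for e in entries
            if e["seq_id"] <= max_seq_id
            and e["released"] < before
            and e["pdb"] not in exclude]

print(select_templates(templates, exclude={"3ghiA"}))
```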
Benchmarking to improve the quality of cystic fibrosis care.
Schechter, Michael S
2012-11-01
Benchmarking involves the ascertainment of healthcare programs with most favorable outcomes as a means to identify and spread effective strategies for delivery of care. The recent interest in the development of patient registries for patients with cystic fibrosis (CF) has been fueled in part by an interest in using them to facilitate benchmarking. This review summarizes reports of how benchmarking has been operationalized in attempts to improve CF care. Although certain goals of benchmarking can be accomplished with an exclusive focus on registry data analysis, benchmarking programs in Germany and the United States have supplemented these data analyses with exploratory interactions and discussions to better understand successful approaches to care and encourage their spread throughout the care network. Benchmarking allows the discovery and facilitates the spread of effective approaches to care. It provides a pragmatic alternative to traditional research methods such as randomized controlled trials, providing insights into methods that optimize delivery of care and allowing judgments about the relative effectiveness of different therapeutic approaches.
2015-01-01
Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus had been placed on the structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date these ready-to-apply data sets for LBVS are fairly limited, and the direct usage of benchmarking sets designed for SBVS could bring the biases to the evaluation of LBVS. Herein, we propose an unbiased method to build benchmarking sets for LBVS and validate it on a multitude of GPCRs targets. To be more specific, our methods can (1) ensure chemical diversity of ligands, (2) maintain the physicochemical similarity between ligands and decoys, (3) make the decoys dissimilar in chemical topology to all ligands to avoid false negatives, and (4) maximize spatial random distribution of ligands and decoys. We evaluated the quality of our Unbiased Ligand Set (ULS) and Unbiased Decoy Set (UDS) using three common LBVS approaches, with Leave-One-Out (LOO) Cross-Validation (CV) and a metric of average AUC of the ROC curves. Our method has greatly reduced the “artificial enrichment” and “analogue bias” of a published GPCRs benchmarking set, i.e., GPCR Ligand Library (GLL)/GPCR Decoy Database (GDD). In addition, we addressed an important issue about the ratio of decoys per ligand and found that for a range of 30 to 100 it does not affect the quality of the benchmarking set, so we kept the original ratio of 39 from the GLL/GDD. PMID:24749745
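The AUC metric used above can be computed directly from the ranking of actives and decoys; the sketch below implements the rank-based (Mann-Whitney) form with invented similarity scores, and is not the authors' ULS/UDS evaluation pipeline.

```python
import numpy as np

def roc_auc(scores, labels):
    """Rank-based AUC: probability that a randomly chosen active outscores a
    randomly chosen decoy (ties counted as one half)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Invented similarity scores of actives (1) and decoys (0) against a query ligand.
scores = [0.91, 0.78, 0.40, 0.66, 0.35, 0.22, 0.58, 0.30]
labels = [1, 1, 0, 1, 0, 0, 0, 0]
print(f"AUC = {roc_auc(scores, labels):.2f}")
```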
Benchmarking in pathology: development of an activity-based costing model.
Burnett, Leslie; Wilson, Roger; Pfeffer, Sally; Lowry, John
2012-12-01
Benchmarking in Pathology (BiP) allows pathology laboratories to determine the unit cost of all laboratory tests and procedures, and also provides organisational productivity indices allowing comparisons of performance with other BiP participants. We describe 14 years of progressive enhancement to a BiP program, including the implementation of 'avoidable costs' as the accounting basis for allocation of costs rather than previous approaches using 'total costs'. A hierarchical tree-structured activity-based costing model distributes 'avoidable costs' attributable to the pathology activities component of a pathology laboratory operation. The hierarchical tree model permits costs to be allocated across multiple laboratory sites and organisational structures. This has enabled benchmarking on a number of levels, including test profiles and non-testing related workload activities. The development of methods for dealing with variable cost inputs, allocation of indirect costs using imputation techniques, panels of tests, and blood-bank record keeping, have been successfully integrated into the costing model. A variety of laboratory management reports are produced, including the 'cost per test' of each pathology 'test' output. Benchmarking comparisons may be undertaken at any and all of the 'cost per test' and 'cost per Benchmarking Complexity Unit' level, 'discipline/department' (sub-specialty) level, or overall laboratory/site and organisational levels. We have completed development of a national BiP program. An activity-based costing methodology based on avoidable costs overcomes many problems of previous benchmarking studies based on total costs. The use of benchmarking complexity adjustment permits correction for varying test-mix and diagnostic complexity between laboratories. Use of iterative communication strategies with program participants can overcome many obstacles and lead to innovations.
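A toy illustration of hierarchical, tree-structured activity-based cost allocation: avoidable costs at each node are pushed down to leaf activities in proportion to workload, yielding a cost per test. The structure and figures are invented and do not reflect BiP data or its allocation rules.

```python
# Invented cost and workload figures; structure and values are illustrative only.
tree = {
    "cost": 100000.0,            # laboratory-level avoidable overhead
    "children": {
        "chemistry":   {"cost": 50000.0, "children": {}, "tests": 40000},
        "haematology": {"cost": 30000.0, "children": {}, "tests": 25000},
    },
}

def allocate(node, inherited=0.0):
    """Push a node's avoidable cost (plus anything inherited from above) down
    the tree in proportion to test workload; leaves report a cost per test."""
    total = node["cost"] + inherited
    children = node["children"]
    if not children:
        return {"cost_per_test": total / node["tests"]}
    workload = sum(c["tests"] for c in children.values())
    return {name: allocate(c, total * c["tests"] / workload)
            for name, c in children.items()}

print(allocate(tree))
```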
Xia, Jie; Jin, Hongwei; Liu, Zhenming; Zhang, Liangren; Wang, Xiang Simon
2014-05-27
Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus had been placed on the structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date these ready-to-apply data sets for LBVS are fairly limited, and the direct usage of benchmarking sets designed for SBVS could bring the biases to the evaluation of LBVS. Herein, we propose an unbiased method to build benchmarking sets for LBVS and validate it on a multitude of GPCRs targets. To be more specific, our methods can (1) ensure chemical diversity of ligands, (2) maintain the physicochemical similarity between ligands and decoys, (3) make the decoys dissimilar in chemical topology to all ligands to avoid false negatives, and (4) maximize spatial random distribution of ligands and decoys. We evaluated the quality of our Unbiased Ligand Set (ULS) and Unbiased Decoy Set (UDS) using three common LBVS approaches, with Leave-One-Out (LOO) Cross-Validation (CV) and a metric of average AUC of the ROC curves. Our method has greatly reduced the "artificial enrichment" and "analogue bias" of a published GPCRs benchmarking set, i.e., GPCR Ligand Library (GLL)/GPCR Decoy Database (GDD). In addition, we addressed an important issue about the ratio of decoys per ligand and found that for a range of 30 to 100 it does not affect the quality of the benchmarking set, so we kept the original ratio of 39 from the GLL/GDD.
Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja
2015-01-01
The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.
Benchmarking on Tsunami Currents with ComMIT
NASA Astrophysics Data System (ADS)
Sharghi vand, N.; Kanoglu, U.
2015-12-01
There were no standards for the validation and verification of tsunami numerical models before the 2004 Indian Ocean tsunami. Even so, a number of numerical models had been used for inundation mapping efforts, evaluation of critical structures, etc., without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which will be used in the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental, and field benchmark problems aimed at estimating maximum runup, which are widely accepted by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents, held on February 9-10, 2015 in Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems concentrate on the validation and verification of tsunami numerical models for tsunami currents. Three of the benchmark problems were: current measurements of the 2011 Japan tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), a user-friendly interface to the validated and verified Method of Splitting Tsunami (MOST) model (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316) developed by NCTR. The modeling results are compared with the required benchmark data, show good agreement, and are discussed. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe).
A Benchmark Problem for Development of Autonomous Structural Modal Identification
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Woodard, Stanley E.; Juang, Jer-Nan
1996-01-01
This paper summarizes modal identification results obtained using an autonomous version of the Eigensystem Realization Algorithm on a dynamically complex, laboratory structure. The benchmark problem uses 48 of 768 free-decay responses measured in a complete modal survey test. The true modal parameters of the structure are well known from two previous, independent investigations. Without user involvement, the autonomous data analysis identified 24 to 33 structural modes with good to excellent accuracy in 62 seconds of CPU time (on a DEC Alpha 4000 computer). The modal identification technique described in the paper is the baseline algorithm for NASA's Autonomous Dynamics Determination (ADD) experiment scheduled to fly on International Space Station assembly flights in 1997-1999.
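To make the identification step concrete, the following is a minimal sketch of the core Eigensystem Realization Algorithm computation (block-Hankel matrices, truncated SVD, reduced-order state matrix, discrete-to-continuous pole conversion). It is an illustration only, not the autonomous version described in the paper; the window sizes, model order, and the single-channel free-decay input are assumptions.

```python
import numpy as np

def era_modes(h, dt, order=6, rows=40, cols=40):
    """Minimal Eigensystem Realization Algorithm (ERA) sketch.

    h  : 1-D array of a sampled free-decay / impulse response
    dt : sample time in seconds
    Returns (frequencies_hz, damping_ratios) of the identified modes.
    """
    # Block-Hankel matrices shifted by one sample
    H0 = np.array([[h[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[h[i + j + 1] for j in range(cols)] for i in range(rows)])

    # Reduced-order realization from a truncated SVD of H0
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
    S_inv_sqrt = np.diag(1.0 / np.sqrt(s))
    A = S_inv_sqrt @ U.T @ H1 @ Vt.T @ S_inv_sqrt   # discrete-time state matrix

    # Discrete eigenvalues -> continuous poles -> modal parameters
    lam = np.linalg.eigvals(A)
    poles = np.log(lam) / dt
    freqs = np.abs(poles) / (2.0 * np.pi)
    damping = -np.real(poles) / np.abs(poles)
    return freqs, damping

# Quick check on a synthetic two-mode free decay (5 Hz at 1% and 12 Hz at 2% damping)
dt = 0.01
t = np.arange(0, 4, dt)
h = (np.exp(-0.01 * 2 * np.pi * 5 * t) * np.cos(2 * np.pi * 5 * t)
     + 0.5 * np.exp(-0.02 * 2 * np.pi * 12 * t) * np.cos(2 * np.pi * 12 * t))
print(era_modes(h, dt, order=4))
```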
Recommendations for Benchmarking Web Site Usage among Academic Libraries.
ERIC Educational Resources Information Center
Hightower, Christy; Sih, Julie; Tilghman, Adam
1998-01-01
To help library directors and Web developers create a benchmarking program to compare statistics of academic Web sites, the authors analyzed the Web server log files of 14 university science and engineering libraries. Recommends a centralized voluntary reporting structure coordinated by the Association of Research Libraries (ARL) and a method for…
Academic Achievement and Extracurricular School Activities of At-Risk High School Students
ERIC Educational Resources Information Center
Marchetti, Ryan; Wilson, Randal H.; Dunham, Mardis
2016-01-01
This study compared the employment, extracurricular participation, and family structure status of students from low socioeconomic families that achieved state-approved benchmarks on ACT reading and mathematics tests to those that did not achieve the benchmarks. Free and reduced lunch eligibility was used to determine SES. Participants included 211…
Benchmarking Alumni Relations in Community Colleges: Findings from a 2015 CASE Survey
ERIC Educational Resources Information Center
Paradise, Andrew
2016-01-01
The Benchmarking Alumni Relations in Community Colleges white paper features key data on alumni relations programs at community colleges across the United States. The paper compares results from 2015 and 2012 across such areas as the structure, operations and budget for alumni relations, alumni data collection and management, alumni communications…
Ashrafi, Parivash; Sun, Yi; Davey, Neil; Adams, Roderick G; Wilkinson, Simon C; Moss, Gary Patrick
2018-03-01
The aim of this study was to investigate how to improve predictions from Gaussian Process models by optimising the model hyperparameters. Optimisation methods, including Grid Search, Conjugate Gradient, Random Search, Evolutionary Algorithm and Hyper-prior, were evaluated and applied to previously published data. Data sets were also altered in a structured manner to reduce their size, which retained the range, or 'chemical space', of the key descriptors, in order to assess the effect of the data range on model quality. The Hyper-prior Smoothbox kernel resulted in the best models for the majority of data sets, and these exhibited significantly better performance than benchmark quantitative structure-permeability relationship (QSPR) models. When the data sets were systematically reduced in size, the different optimisation methods generally retained their statistical quality, whereas benchmark QSPR models performed poorly. The design of the data set, and possibly also the approach to validation of the model, is critical in the development of improved models. The size of the data set, if carefully controlled, was not generally a significant factor for these models, and models of excellent statistical quality could be produced from substantially smaller data sets. © 2018 Royal Pharmaceutical Society.
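As an illustration of one of the simpler strategies evaluated (Grid Search over hyperparameters), the sketch below tunes a Gaussian Process regressor with scikit-learn on synthetic stand-in data. The descriptors, kernel grid, and noise levels are assumptions; the Hyper-prior Smoothbox kernel used in the paper is not part of scikit-learn and is not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import GridSearchCV

# Toy permeability-like data: descriptors X, response y (synthetic stand-ins)
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=60)

# Grid Search over kernel length-scales and noise levels; the GP's internal
# marginal-likelihood optimizer is disabled so the grid controls the hyperparameters
param_grid = {
    "kernel": [ConstantKernel(1.0) * RBF(length_scale=l) for l in (0.3, 1.0, 3.0)],
    "alpha": [1e-4, 1e-2, 1e-1],
}
search = GridSearchCV(
    GaussianProcessRegressor(normalize_y=True, optimizer=None), param_grid, cv=5
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```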
Benchmarking initiatives in the water industry.
Parena, R; Smeets, E
2001-01-01
Customer satisfaction and service care are pushing professionals in the water industry, every day, to seek to improve their performance, lowering costs and increasing the level of service provided. Process Benchmarking is generally recognised as a systematic mechanism of comparing one's own utility with other utilities or businesses with the intent of self-improvement by adopting structures or methods used elsewhere. The IWA Task Force on Benchmarking, operating inside the Statistics and Economics Committee, has been committed to developing a generally accepted concept of Process Benchmarking to support water decision-makers in addressing issues of efficiency. As a first step, the Task Force disseminated among the Committee members a questionnaire designed to gather suggestions about the kind, degree of evolution and main concepts of benchmarking adopted in the countries represented. A comparison of the guidelines adopted in The Netherlands and Scandinavia has recently led the Task Force to draft a methodology for worldwide process benchmarking in the water industry. The paper provides a framework of the most interesting benchmarking experiences in the water sector and describes in detail both the final results of the survey and the methodology focused on the identification of possible improvement areas.
QUASAR--scoring and ranking of sequence-structure alignments.
Birzele, Fabian; Gewehr, Jan E; Zimmer, Ralf
2005-12-15
Sequence-structure alignments are a common means for protein structure prediction in the fields of fold recognition and homology modeling, and there is a broad variety of programs that provide such alignments based on sequence similarity, secondary structure or contact potentials. Nevertheless, finding the best sequence-structure alignment in a pool of alignments remains a difficult problem. QUASAR (quality of sequence-structure alignments ranking) provides a unifying framework for scoring sequence-structure alignments that aids finding well-performing combinations of well-known and custom-made scoring schemes. Those scoring functions can be benchmarked against widely accepted quality scores like MaxSub, TMScore, Touch and APDB, thus enabling users to test their own alignment scores against 'standard-of-truth' structure-based scores. Furthermore, individual score combinations can be optimized with respect to benchmark sets based on known structural relationships using QUASAR's in-built optimization routines.
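A minimal sketch of the general idea of optimizing a score combination against a structure-based 'standard of truth' is given below; the scoring schemes, data, and the choice of Spearman rank correlation as the objective are assumptions and do not reproduce QUASAR's built-in routines.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import spearmanr

# scores[i, j] = value of scoring scheme j for alignment i (hypothetical data);
# reference[i] = structure-based quality of alignment i, e.g. a TM-score
rng = np.random.default_rng(0)
scores = rng.random((200, 4))
reference = 0.6 * scores[:, 0] + 0.3 * scores[:, 2] + 0.1 * rng.random(200)

def neg_rank_corr(w):
    # Negative Spearman correlation between the combined score and the reference
    rho, _ = spearmanr(scores @ w, reference)
    return -rho

# Optimize the weights of the score combination against the reference quality score
res = minimize(neg_rank_corr, x0=np.full(4, 0.25), method="Nelder-Mead")
print("weights:", np.round(res.x, 3), "Spearman:", -res.fun)
```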
Experimental benchmarking of quantum control in zero-field nuclear magnetic resonance.
Jiang, Min; Wu, Teng; Blanchard, John W; Feng, Guanru; Peng, Xinhua; Budker, Dmitry
2018-06-01
Demonstration of coherent control and characterization of the control fidelity is important for the development of quantum architectures such as nuclear magnetic resonance (NMR). We introduce an experimental approach to realize universal quantum control, and benchmarking thereof, in zero-field NMR, an analog of conventional high-field NMR that features less-constrained spin dynamics. We design a composite pulse technique for both arbitrary one-spin rotations and a two-spin controlled-not (CNOT) gate in a heteronuclear two-spin system at zero field, which experimentally demonstrates universal quantum control in such a system. Moreover, using quantum information-inspired randomized benchmarking and partial quantum process tomography, we evaluate the quality of the control, achieving single-spin control for 13 C with an average fidelity of 0.9960(2) and two-spin control via a CNOT gate with a fidelity of 0.9877(2). Our method can also be extended to more general multispin heteronuclear systems at zero field. The realization of universal quantum control in zero-field NMR is important for quantum state/coherence preparation, pulse sequence design, and is an essential step toward applications to materials science, chemical analysis, and fundamental physics.
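For readers unfamiliar with how an average gate fidelity is extracted from randomized benchmarking data, the following sketch fits the standard exponential decay model F(m) = A p^m + B and converts the decay parameter to an average error per gate for a single spin (d = 2). The sequence lengths and fidelity values are illustrative, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Randomized-benchmarking-style decay: average sequence fidelity vs sequence length m
m = np.array([1, 2, 4, 8, 16, 32, 64])
F = np.array([0.995, 0.991, 0.982, 0.966, 0.936, 0.880, 0.782])  # illustrative data

def rb_decay(m, A, B, p):
    return A * p**m + B

(A, B, p), _ = curve_fit(rb_decay, m, F, p0=[0.5, 0.5, 0.99])
d = 2                                   # Hilbert-space dimension of a single spin-1/2
r = (1 - p) * (d - 1) / d               # average error per gate
print(f"decay p = {p:.4f}, average gate fidelity = {1 - r:.4f}")
```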
Experimental benchmarking of quantum control in zero-field nuclear magnetic resonance
Feng, Guanru
2018-01-01
Demonstration of coherent control and characterization of the control fidelity is important for the development of quantum architectures such as nuclear magnetic resonance (NMR). We introduce an experimental approach to realize universal quantum control, and benchmarking thereof, in zero-field NMR, an analog of conventional high-field NMR that features less-constrained spin dynamics. We design a composite pulse technique for both arbitrary one-spin rotations and a two-spin controlled-not (CNOT) gate in a heteronuclear two-spin system at zero field, which experimentally demonstrates universal quantum control in such a system. Moreover, using quantum information–inspired randomized benchmarking and partial quantum process tomography, we evaluate the quality of the control, achieving single-spin control for 13C with an average fidelity of 0.9960(2) and two-spin control via a CNOT gate with a fidelity of 0.9877(2). Our method can also be extended to more general multispin heteronuclear systems at zero field. The realization of universal quantum control in zero-field NMR is important for quantum state/coherence preparation, pulse sequence design, and is an essential step toward applications to materials science, chemical analysis, and fundamental physics. PMID:29922714
Ab initio calculations, structure, NBO and NCI analyses of X–H⋯π interactions
NASA Astrophysics Data System (ADS)
Wu, Qiyang; Su, He; Wang, Hongyan; Wang, Hui
2018-02-01
The performance of ab initio methods (MP2, DFT/B3LYP, the random-phase approximation (RPA), CCSD(T) and QCISD(T)) in predicting the interaction energies of X–H⋯π (X–H = HCCH, HCl, HF; π = C2H2, C2H4, C6H6) hydrogen-bonded complexes is assessed systematically. CCSD(T)/CBS benchmarks of the interaction energies are reported. RPA is found to agree well with the CCSD(T)/CBS benchmarks and with experimental results. CCSD(T) and QCISD(T) perform best only when compared with the CCSD(T)/CBS benchmarks, whereas MP2 performs well only against the experimental data. B3LYP provides the worst accuracy. Additionally, the equilibrium structures and interaction types of the X–H⋯π complexes are investigated with natural bond orbital (NBO) and non-covalent interaction index (NCI) analyses.
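For reference, the supermolecular interaction energy and a commonly used two-point complete-basis-set (CBS) extrapolation of the correlation energy take the following forms; the abstract does not state which extrapolation scheme was actually used, so this is only the standard textbook expression with X denoting the cardinal number of the larger basis set.

```latex
E_{\mathrm{int}} = E_{AB} - E_{A} - E_{B},
\qquad
E^{\mathrm{corr}}_{\mathrm{CBS}} \approx
\frac{X^{3}\,E^{\mathrm{corr}}_{X} - (X-1)^{3}\,E^{\mathrm{corr}}_{X-1}}
     {X^{3} - (X-1)^{3}}
```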
Antibody-protein interactions: benchmark datasets and prediction tools evaluation
Ponomarenko, Julia V; Bourne, Philip E
2007-01-01
Background The ability to predict antibody binding sites (also known as antigenic determinants or B-cell epitopes) for a given protein is a precursor to new vaccine design and diagnostics. Among the various methods of B-cell epitope identification, X-ray crystallography is one of the most reliable. Computational methods for B-cell epitope prediction have been developed using these experimental data. As the number of structures of antibody-protein complexes grows, further interest in prediction methods using 3D structure is anticipated. This work aims to establish a benchmark for 3D structure-based epitope prediction methods. Results Two B-cell epitope benchmark datasets inferred from the 3D structures of antibody-protein complexes were defined. The first is a dataset of 62 representative 3D structures of protein antigens with inferred structural epitopes. The second is a dataset of 82 structures of antibody-protein complexes containing different structural epitopes. Using these datasets, eight web servers developed for antibody and protein binding site prediction were evaluated. No method exceeded 40% precision and 46% recall. The values of the area under the receiver operating characteristic curve for the evaluated methods were about 0.6 for the ConSurf, DiscoTope, and PPI-PRED methods and above 0.65 but not exceeding 0.70 for protein-protein docking methods when the best of the top ten models for the bound docking were considered; the remaining methods performed close to random. The benchmark datasets are included as a supplement to this paper. Conclusion It may be possible to improve epitope prediction methods through training on datasets which include only immune epitopes and through utilizing more features characterizing epitopes, for example, the evolutionary conservation score. Notwithstanding, the overall poor performance may reflect the generality of antigenicity and hence the inability to decipher B-cell epitopes as an intrinsic feature of the protein. It is an open question as to whether ultimately discriminatory features can be found. PMID:17910770
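A minimal sketch of how the per-residue evaluation metrics reported above (precision, recall, and ROC AUC) can be computed with scikit-learn; the labels and scores are hypothetical and the 0.5 decision threshold is an assumption.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, roc_auc_score

# Per-residue epitope prediction for one antigen: 1 = epitope residue, 0 = non-epitope
y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])            # from the antibody-antigen structure
y_score = np.array([.1, .4, .8, .3, .2, .9, .6, .1, .7, .2])  # server output (hypothetical)
y_pred  = (y_score >= 0.5).astype(int)                        # assumed decision threshold

print("precision", precision_score(y_true, y_pred))
print("recall   ", recall_score(y_true, y_pred))
print("ROC AUC  ", roc_auc_score(y_true, y_score))
```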
Zuckerman, Stephen; Skopec, Laura; Guterman, Stuart
2017-12-01
Medicare Advantage (MA), the program that allows people to receive their Medicare benefits through private health plans, uses a benchmark-and-bidding system to induce plans to provide benefits at lower costs. However, prior research suggests medical costs, profits, and other plan costs are not as low under this system as they might otherwise be. Our objective was to examine how well the current system encourages MA plans to bid their lowest cost by examining the relationship between costs and bonuses (rebates) and the benchmarks Medicare uses in determining plan payments. We performed regression analysis using 2015 data for HMO and local PPO plans. Costs and rebates are higher for MA plans in areas with higher benchmarks, and plan costs vary less than benchmarks do. A one-dollar increase in benchmarks is associated with 32-cent-higher plan costs and a 52-cent-higher rebate, even when controlling for market and plan factors that can affect costs. This suggests the current benchmark-and-bidding system allows plans to bid higher than local input prices and other market conditions would seem to warrant. To incentivize MA plans to maximize efficiency and minimize costs, Medicare could change the way benchmarks are set or used.
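A sketch of the kind of regression described above, relating plan costs and rebates to benchmarks while controlling for plan characteristics; the data frame, variable names, and the single `hmo` control are hypothetical stand-ins for the paper's 2015 plan-level data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical plan-level data: per-member costs, rebate, county benchmark, plan type
df = pd.DataFrame({
    "plan_cost": [780, 820, 760, 900, 840, 870],
    "rebate":    [60, 75, 55, 110, 90, 100],
    "benchmark": [850, 900, 830, 1000, 930, 960],
    "hmo":       [1, 0, 1, 0, 1, 0],
})

# The coefficient on `benchmark` plays the role of the paper's
# "32 cents per dollar" (costs) and "52 cents per dollar" (rebates) estimates
cost_model   = smf.ols("plan_cost ~ benchmark + hmo", data=df).fit()
rebate_model = smf.ols("rebate ~ benchmark + hmo", data=df).fit()
print(cost_model.params["benchmark"], rebate_model.params["benchmark"])
```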
Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects
Lovaglio, Pietro Giorgio
2012-01-01
Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. In particular, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed. PMID:22666140
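One standard way to operationalize the case-mix control discussed above is indirect standardization, in which a pooled risk model supplies expected outcomes and each provider is summarized by an observed-to-expected ratio. The sketch below illustrates this with simulated data; it is not the paper's methodology, and the covariates and hospital labels are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical patient-level data: case-mix covariates, hospital id, and a binary outcome
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                   # e.g. age, severity, comorbidity scores
hospital = rng.integers(0, 5, size=500)
y = (rng.random(500) < 1 / (1 + np.exp(-(X[:, 1] - 1.5)))).astype(int)

# Expected risk from a case-mix model fitted on the pooled data
risk_model = LogisticRegression().fit(X, y)
expected = risk_model.predict_proba(X)[:, 1]

# Risk-adjusted observed/expected ratio per hospital (indirect standardization)
for h in range(5):
    mask = hospital == h
    oe = y[mask].sum() / expected[mask].sum()
    print(f"hospital {h}: O/E = {oe:.2f}")
```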
Benchmarking strategies for measuring the quality of healthcare: problems and prospects.
Lovaglio, Pietro Giorgio
2012-01-01
Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. In particular, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed.
Hermans, Michel P; Brotons, Carlos; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank
2013-12-01
Micro- and macrovascular complications of type 2 diabetes have an adverse impact on survival, quality of life and healthcare costs. The OPTIMISE (OPtimal Type 2 dIabetes Management Including benchmarking and Standard trEatment) trial comparing physicians' individual performances with a peer group evaluates the hypothesis that benchmarking, using assessments of change in three critical quality indicators of vascular risk: glycated haemoglobin (HbA1c), low-density lipoprotein-cholesterol (LDL-C) and systolic blood pressure (SBP), may improve quality of care in type 2 diabetes in the primary care setting. This was a randomised, controlled study of 3980 patients with type 2 diabetes. Six European countries participated in the OPTIMISE study (NCT00681850). Quality of care was assessed by the percentage of patients achieving pre-set targets for the three critical quality indicators over 12 months. Physicians were randomly assigned to receive either benchmarked or non-benchmarked feedback. All physicians received feedback on six of their patients' modifiable outcome indicators (HbA1c, fasting glycaemia, total cholesterol, high-density lipoprotein-cholesterol (HDL-C), LDL-C and triglycerides). Physicians in the benchmarking group additionally received information on levels of control achieved for the three critical quality indicators compared with colleagues. At baseline, the percentage of evaluable patients (N = 3980) achieving pre-set targets was 51.2% (HbA1c; n = 2028/3964); 34.9% (LDL-C; n = 1350/3865); 27.3% (systolic blood pressure; n = 911/3337). OPTIMISE confirms that target achievement in the primary care setting is suboptimal for all three critical quality indicators. This represents an unmet but modifiable need to revisit the mechanisms and management of improving care in type 2 diabetes. OPTIMISE will help to assess whether benchmarking is a useful clinical tool for improving outcomes in type 2 diabetes.
Maximal Unbiased Benchmarking Data Sets for Human Chemokine Receptors and Comparative Analysis.
Xia, Jie; Reid, Terry-Elinor; Wu, Song; Zhang, Liangren; Wang, Xiang Simon
2018-05-29
Chemokine receptors (CRs) have long been druggable targets for the treatment of inflammatory diseases and HIV-1 infection. As a powerful technique, virtual screening (VS) has been widely applied to identify small-molecule leads for modern drug targets including CRs. For the rational selection among a wide variety of VS approaches, ligand enrichment assessment based on a benchmarking data set has become an indispensable practice. However, the lack of versatile benchmarking sets for the whole CR family that can unbiasedly evaluate every single approach, including both structure- and ligand-based VS, somewhat hinders modern drug discovery efforts. To address this issue, we constructed Maximal Unbiased Benchmarking Data sets for human Chemokine Receptors (MUBD-hCRs) using our recently developed tool, MUBD-DecoyMaker. The MUBD-hCRs encompass 13 subtypes out of 20 chemokine receptors, are composed of 404 ligands and 15756 decoys so far, and are readily expandable in the future. We thoroughly validated that the MUBD-hCRs ligands are chemically diverse while the decoys are maximally unbiased in terms of "artificial enrichment" and "analogue bias". In addition, we studied the performance of the MUBD-hCRs, in particular the CXCR4 and CCR5 data sets, in ligand enrichment assessments of both structure- and ligand-based VS approaches in comparison with other benchmarking data sets available in the public domain, and demonstrated that the MUBD-hCRs are very capable of designating the optimal VS approach. MUBD-hCRs is a unique and maximally unbiased benchmarking set that covers the major CR subtypes.
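A minimal sketch of an early-enrichment calculation of the kind used in ligand enrichment assessment; the scoring convention, the EF1% fraction, and the random scores are assumptions, and a real assessment would use docking or similarity scores for the MUBD-hCRs ligands and decoys.

```python
import numpy as np

def enrichment_factor(scores_ligands, scores_decoys, fraction=0.01):
    """Early-enrichment metric for a virtual screen against a benchmarking set.

    Higher scores are assumed to mean 'more likely active'. Returns the
    enrichment factor at the given top fraction of the ranked library
    (e.g. EF1% for fraction=0.01).
    """
    scores = np.concatenate([scores_ligands, scores_decoys])
    labels = np.concatenate([np.ones(len(scores_ligands)), np.zeros(len(scores_decoys))])
    order = np.argsort(-scores)                       # rank best scores first
    n_top = max(1, int(round(fraction * len(scores))))
    hit_rate_top = labels[order][:n_top].sum() / n_top
    hit_rate_all = labels.sum() / len(scores)
    return hit_rate_top / hit_rate_all

# Illustrative use with random scores and the MUBD-hCRs ligand/decoy counts
rng = np.random.default_rng(0)
print(enrichment_factor(rng.normal(1.0, 1.0, 404), rng.normal(0.0, 1.0, 15756)))
```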
Nonlinear system identification of smart structures under high impact loads
NASA Astrophysics Data System (ADS)
Sarp Arsava, Kemal; Kim, Yeesock; El-Korchi, Tahar; Park, Hyo Seon
2013-05-01
The main purpose of this paper is to develop numerical models for the prediction and analysis of the highly nonlinear behavior of integrated structure control systems subjected to high impact loading. A time-delayed adaptive neuro-fuzzy inference system (TANFIS) is proposed for modeling of the complex nonlinear behavior of smart structures equipped with magnetorheological (MR) dampers under high impact forces. Experimental studies are performed to generate sets of input and output data for training and validation of the TANFIS models. The high impact load and current signals are used as the input disturbance and control signals while the displacement and acceleration responses from the structure-MR damper system are used as the output signals. The benchmark adaptive neuro-fuzzy inference system (ANFIS) is used as a baseline. Comparisons of the trained TANFIS models with experimental results demonstrate that the TANFIS modeling framework is an effective way to capture nonlinear behavior of integrated structure-MR damper systems under high impact loading. In addition, the performance of the TANFIS model is much better than that of ANFIS in both the training and the validation processes.
NASA Astrophysics Data System (ADS)
Capo-Lugo, Pedro A.
Formation flying consists of multiple spacecraft orbiting in a required configuration about a planet or through space. The National Aeronautics and Space Administration (NASA) Benchmark Tetrahedron Constellation is one of the proposed constellations to be launched in the year 2009 and provides the motivation for this investigation. The problem that will be researched here consists of three stages. The first stage contains the deployment of the satellites; the second stage is the reconfiguration process to transfer the satellites through different specific sizes of the NASA benchmark problem; and the third stage is the station-keeping procedure for the tetrahedron constellation. Every stage contains different control schemes and transfer procedures to obtain and maintain the proposed tetrahedron constellation. In the first stage, the deployment procedure will depend on a combination of two techniques in which impulsive maneuvers and a digital controller are used to deploy the satellites and to maintain the tetrahedron constellation at the following apogee point. The second stage, which corresponds to the reconfiguration procedure, uses a different control scheme in which intelligent control systems are implemented to perform this procedure. In this research work, intelligent systems will eliminate the use of complex mathematical models and will reduce the computational time to perform different maneuvers. Finally, the station-keeping process, which is the third stage of this research problem, will be implemented with a two-level hierarchical control scheme to maintain the separation distance constraints of the NASA Benchmark Tetrahedron Constellation. For this station-keeping procedure, the system of equations defining the dynamics of a pair of satellites is transformed to take into account the perturbation due to the oblateness of the Earth and the disturbances due to solar pressure. The control procedures used in this research will be transformed from a continuous control system to a digital control system, which will simplify the implementation in the computer onboard the satellite. In addition, this research will include an introductory chapter on attitude dynamics that can be used to maintain the orientation of the satellites, and an adaptive intelligent control scheme will be proposed to maintain the desired orientation of the spacecraft. In conclusion, a solution for the dynamics of the NASA Benchmark Tetrahedron Constellation will be presented in this research work. The main contribution of this work is the use of discrete control schemes, impulsive maneuvers, and intelligent control schemes that reduce the computational time and can be easily implemented in the computer onboard the satellite. These contributions are explained through the deployment, reconfiguration, and station-keeping processes of the proposed NASA Benchmark Tetrahedron Constellation.
Benchmark Analysis of Career and Technical Education in Lenawee County. Final Report.
ERIC Educational Resources Information Center
Hollenbeck, Kevin
The career and technical education (CTE) provided in grades K-12 in the county's vocational-technical center and 12 local public school districts of Lenawee County, Michigan, was benchmarked with respect to its attention to career development. Data were collected from the following sources: structured interviews with a number of key respondents…
Robust control of seismically excited cable stayed bridges with MR dampers
NASA Astrophysics Data System (ADS)
YeganehFallah, Arash; Khajeh Ahamd Attari, Nader
2017-03-01
In recent decades, active and semi-active structural control have become attractive alternatives for enhancing the performance of civil infrastructure subjected to seismic and wind loads. However, in order to have reliable active and semi-active control, information about uncertainties needs to be included in the design of the controller. In the real world, civil-structure parameters such as loading locations, stiffness, mass and damping are time variant and uncertain. These uncertainties are in many cases modeled as parametric uncertainties. The motivation of this research is to design a robust controller for attenuating the vibrational responses of civil infrastructure while accounting for its dynamical uncertainties. Uncertainties in the structural dynamic parameters are modeled as affine uncertainties in the state-space model. These uncertainties are decoupled from the system through a Linear Fractional Transformation (LFT) and are treated as unknown but norm-bounded inputs to the system. A robust H∞ controller is designed for the decoupled system to regulate the evaluation outputs and to be robust to the effects of uncertainties, disturbances and sensor noise. The cable-stayed bridge benchmark equipped with MR dampers is considered for the numerical simulation. The simulation results show that the proposed robust controller can effectively mitigate the undesired effects of uncertainties on the system's response under seismic loading.
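In a standard notation for this kind of design (ours, not necessarily the paper's), the affine parametric uncertainty can be written as

```latex
\dot{x} = A(\delta)\,x + B_{1}w + B_{2}u,
\qquad
A(\delta) = A_{0} + \sum_{i=1}^{q}\delta_{i}A_{i},
\qquad |\delta_{i}| \le 1
```

where the uncertain parameters are collected into a block Δ = diag(δ1 I, ..., δq I) that is pulled out of the loop by the LFT, and the H∞ controller K is then synthesized for the nominal augmented plant P so that the closed-loop norm satisfies ||F_l(P, K)||∞ ≤ γ.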
U.S. Solar Photovoltaic System Cost Benchmark: Q1 2017
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Ran; Feldman, David; Margolis, Robert
This report benchmarks U.S. solar photovoltaic (PV) system installed costs as of the first quarter of 2017 (Q1 2017). We use a bottom-up methodology, accounting for all system and project-development costs incurred during the installation to model the costs for residential, commercial, and utility-scale systems. In general, we attempt to model the typical installation techniques and business operations from an installed-cost perspective. Costs are represented from the perspective of the developer/installer; thus, all hardware costs represent the price at which components are purchased by the developer/installer, not accounting for preexisting supply agreements or other contracts. Importantly, the benchmark also represents the sales price paid to the installer; therefore, it includes profit in the cost of the hardware, along with the profit the installer/developer receives, as a separate cost category. However, it does not include any additional net profit, such as a developer fee or price gross-up, which is common in the marketplace. We adopt this approach owing to the wide variation in developer profits in all three sectors, where project pricing is highly dependent on region and project specifics such as local retail electricity rate structures, local rebate and incentive structures, competitive environment, and overall project or deal structures. Finally, our benchmarks are national averages weighted by state installed capacities.
Structural Benchmark Creep Testing for the Advanced Stirling Convertor Heater Head
NASA Technical Reports Server (NTRS)
Krause, David L.; Kalluri, Sreeramesh; Bowman, Randy R.; Shah, Ashwin R.
2008-01-01
The National Aeronautics and Space Administration (NASA) has identified the high-efficiency Advanced Stirling Radioisotope Generator (ASRG) as a candidate power source for use on long-duration science missions such as lunar applications, Mars rovers, and deep space missions. For the inherently long lifetimes required, a structurally significant design limit for the heater head component of the ASRG Advanced Stirling Convertor (ASC) is creep deformation induced at low stress levels and high temperatures. Demonstrating proof of adequate margins on creep deformation and rupture for the operating conditions and the MarM-247 material of construction is a challenge that the NASA Glenn Research Center is addressing. The combined analytical and experimental program ensures integrity and high reliability of the heater head for its 17-year design life. The life assessment approach starts with an extensive series of uniaxial creep tests on thin MarM-247 specimens that comprise the same chemistry, microstructure, and heat treatment processing as the heater head itself. This effort addresses a scarcity of openly available creep properties for the material as well as the virtual absence of understanding of the effect on creep properties due to very thin walls, fine grains, low stress levels, and high-temperature fabrication steps. The approach continues with a considerable analytical effort, both deterministically to evaluate the median creep life using nonlinear finite element analysis, and probabilistically to calculate the heater head's reliability to a higher degree. Finally, the approach includes a substantial structural benchmark creep testing activity to calibrate and validate the analytical work. This last element provides high fidelity testing of prototypical heater head test articles; the testing includes the relevant material issues and the essential multiaxial stress state, and applies prototypical and accelerated temperature profiles for timely results in a highly controlled laboratory environment. This paper focuses on the last element and presents a preliminary methodology for creep rate prediction, the experimental methods, test challenges, and results from benchmark testing of a trial MarM-247 heater head test article. The results compare favorably with the analytical strain predictions. A description of other test findings is provided, and recommendations for future test procedures are suggested. The manuscript concludes by describing the potential impact of the heater head creep life assessment and benchmark testing effort on the ASC program.
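The abstract does not state the creep law used in the preliminary creep-rate methodology; for orientation only, a commonly assumed form for secondary (steady-state) creep in nickel-base superalloys such as MarM-247 is the Norton-Arrhenius power law

```latex
\dot{\varepsilon}_{\mathrm{cr}} = A\,\sigma^{\,n}\exp\!\left(-\frac{Q}{RT}\right)
```

where σ is the applied stress, n the stress exponent, Q an activation energy, R the gas constant, and T the absolute temperature; in practice the constants would be fit to uniaxial creep tests such as those described above.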
Polarization Control with Piezoelectric and LiNbO3 Transducers
NASA Astrophysics Data System (ADS)
Bradley, E.; Miles, E.; Loginov, B.; Vu, N.
Several polarization control transducers have appeared on the market, and now automated, endless polarization control systems using these transducers are becoming available. Unfortunately, it is not entirely clear what benchmark performance tests a polarization control system must pass, and the polarization disturbances a system must handle are open to some debate. We present quantitative measurements of realistic polarization disturbances and two benchmark tests we have successfully used to evaluate the performance of an automated, endless polarization control system. We use these tests to compare the performance of a system using piezoelectric transducers to that of a system using LiNbO3 transducers.
Vreck, D; Gernaey, K V; Rosen, C; Jeppsson, U
2006-01-01
In this paper, implementation of the Benchmark Simulation Model No 2 (BSM2) within Matlab-Simulink is presented. The BSM2 is developed for plant-wide WWTP control strategy evaluation on a long-term basis. It consists of a pre-treatment process, an activated sludge process and sludge treatment processes. Extended evaluation criteria are proposed for plant-wide control strategy assessment. Default open-loop and closed-loop strategies are also proposed to be used as references with which to compare other control strategies. Simulations indicate that the BSM2 is an appropriate tool for plant-wide control strategy evaluation.
Yu, Zhenpeng; Wang, Jiandong
2016-09-01
This paper assesses the performance of feedforward controllers for disturbance rejection in univariate feedback plus feedforward control loops. The structures of feedback and feedforward controllers are confined to proportional-integral-derivative and static-lead-lag forms, respectively, and the effects of feedback controllers are not considered. The integral squared error (ISE) and total squared variation (TSV) are used as performance metrics. A performance index is formulated by comparing the current ISE and TSV metrics to their own lower bounds as performance benchmarks. A controller performance assessment (CPA) method is proposed to calculate the performance index from measurements. The proposed CPA method resolves two critical limitations in the existing CPA methods, in order to be consistent with industrial scenarios. Numerical and experimental examples illustrate the effectiveness of the obtained results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
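A minimal sketch of the two metrics and a benchmark-ratio performance index computed from a measured disturbance-response episode; the exact index definition and the lower-bound benchmarks in the paper may differ, and the equal weighting here is an assumption.

```python
import numpy as np

def feedforward_performance_index(e, ise_benchmark, tsv_benchmark, w=0.5):
    """Performance index for feedforward disturbance rejection from closed-loop data.

    e : sampled control error during a measured disturbance episode.
    The benchmarks are lower bounds of ISE and TSV for the assumed controller
    structures (the paper derives such bounds; here they are simply inputs).
    """
    ise = float(np.sum(e**2))            # integral (sum) of squared error
    tsv = float(np.sum(np.diff(e)**2))   # total squared variation of the error
    # Index in (0, 1]; closer to 1 means closer to the achievable lower bounds
    return w * (ise_benchmark / ise) + (1 - w) * (tsv_benchmark / tsv)

e = np.array([0.0, 0.8, 0.5, 0.2, 0.05, 0.0, 0.0])
print(feedforward_performance_index(e, ise_benchmark=0.6, tsv_benchmark=0.5))
```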
Benchmark Problems for Spacecraft Formation Flying Missions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Leitner, Jesse A.; Burns, Richard D.; Folta, David C.
2003-01-01
To provide high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions.
NASA Astrophysics Data System (ADS)
Handley, Sean J.; Willis, Trevor J.; Cole, Russell G.; Bradley, Anna; Cairney, Daniel J.; Brown, Stephen N.; Carter, Megan E.
2014-02-01
Trawling and dredge fisheries remove vulnerable fauna, homogenise sediments and assemblages, and break down biogenic habitats, but the full extent of these effects can be difficult to quantify in the absence of adequate control sites. Our study utilised rare control sites containing biogenic habitat, the Separation Point exclusion zone, formally protected for 28 years, as the basis for assessing the degree of change experienced by adjacent areas subject to benthic fishing. Sidescan sonar surveys verified that intensive trawling and dredging occurred in areas adjacent to, but not inside, the exclusion area. We compared sediment composition, biogenic cover, macrofaunal assemblages, biomass, and productivity of the benthos, inside and outside the exclusion zone. Disturbed sites were dominated by fine mud, with little or no shell-gravel, reduced numbers of species, and loss of large-bodied animals, with concomitant reductions in biomass and productivity. At protected sites, large, rarer molluscs were more abundant and contributed the most to size-based estimates of productivity and biomass. Functional changes in fished assemblages were consistent with previously reported relative increases in scavengers, predators and deposit feeders at the expense of filter feeders and a grazer. We propose that the colonisation of biogenic species in protected sites was contingent on the presence of shell-gravel atop these soft sediments. The process of sediment homogenisation by bottom fishing and elimination of shell-gravels from surficial sediments appeared to have occurred over decades - a 'shifting baseline'. Therefore, benchmarking historical sediment structure at control sites like the Separation Point exclusion zone is necessary to determine the full extent of physical habitat change wrought by contact gears on sheltered soft-sediment habitats, to better underpin appropriate conservation, restoration or fisheries management goals.
Adsorption structures and energetics of molecules on metal surfaces: Bridging experiment and theory
NASA Astrophysics Data System (ADS)
Maurer, Reinhard J.; Ruiz, Victor G.; Camarillo-Cisneros, Javier; Liu, Wei; Ferri, Nicola; Reuter, Karsten; Tkatchenko, Alexandre
2016-05-01
Adsorption geometry and stability of organic molecules on surfaces are key parameters that determine the observable properties and functions of hybrid inorganic/organic systems (HIOSs). Despite many recent advances in precise experimental characterization and improvements in first-principles electronic structure methods, reliable databases of structures and energetics for large adsorbed molecules are still largely lacking. In this review, we present such a database for a range of molecules adsorbed on metal single-crystal surfaces. The systems we analyze include noble-gas atoms, conjugated aromatic molecules, carbon nanostructures, and heteroaromatic compounds adsorbed on five different metal surfaces. The overall objective is to establish a diverse benchmark dataset that enables an assessment of current and future electronic structure methods, and motivates further experimental studies that provide ever more reliable data. Specifically, the benchmark structures and energetics from experiment are here compared with the recently developed van der Waals (vdW) inclusive density-functional theory (DFT) method, DFT + vdWsurf. In comparison to 23 adsorption heights and 17 adsorption energies from experiment, we find a mean average deviation of 0.06 Å and 0.16 eV, respectively. This confirms the DFT + vdWsurf method as an accurate and efficient approach to treat HIOSs. A detailed discussion identifies remaining challenges to be addressed in future development of electronic structure methods, for which the benchmark database presented here may serve as an important reference.
Classification and assessment tools for structural motif discovery algorithms.
Badr, Ghada; Al-Turaiki, Isra; Mathkour, Hassan
2013-01-01
Motif discovery is the problem of finding recurring patterns in biological data. Patterns can be sequential, mainly when discovered in DNA sequences. They can also be structural (e.g. when discovering RNA motifs). Finding common structural patterns helps to gain a better understanding of the mechanism of action (e.g. post-transcriptional regulation). Unlike DNA motifs, which are sequentially conserved, RNA motifs exhibit conservation in structure, which may be common even if the sequences are different. Over the past few years, hundreds of algorithms have been developed to solve the sequential motif discovery problem, while less work has been done for the structural case. In this paper, we survey, classify, and compare different algorithms that solve the structural motif discovery problem, where the underlying sequences may be different. We highlight their strengths and weaknesses. We start by proposing a benchmark dataset and a measurement tool that can be used to evaluate different motif discovery approaches. Then, we proceed by proposing our experimental setup. Finally, results are obtained using the proposed benchmark to compare available tools. To the best of our knowledge, this is the first attempt to compare tools solely designed for structural motif discovery. Results show that the accuracy of discovered motifs is relatively low. The results also suggest a complementary behavior among tools where some tools perform well on simple structures, while other tools are better for complex structures. We have classified and evaluated the performance of available structural motif discovery tools. In addition, we have proposed a benchmark dataset with tools that can be used to evaluate newly developed tools.
U.S. Solar Photovoltaic System Cost Benchmark: Q1 2017
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Ran; Feldman, David J.; Margolis, Robert M.
NREL has been modeling U.S. photovoltaic (PV) system costs since 2009. This year, our report benchmarks costs of U.S. solar PV for residential, commercial, and utility-scale systems built in the first quarter of 2017 (Q1 2017). Costs are represented from the perspective of the developer/installer, thus all hardware costs represent the price at which components are purchased by the developer/installer, not accounting for preexisting supply agreements or other contracts. Importantly, the benchmark this year (2017) also represents the sales price paid to the installer; therefore, it includes profit in the cost of the hardware, along with the profit the installer/developer receives, as a separate cost category. However, it does not include any additional net profit, such as a developer fee or price gross-up, which are common in the marketplace. We adopt this approach owing to the wide variation in developer profits in all three sectors, where project pricing is highly dependent on region and project specifics such as local retail electricity rate structures, local rebate and incentive structures, competitive environment, and overall project or deal structures.
TRUST. I. A 3D externally illuminated slab benchmark for dust radiative transfer
NASA Astrophysics Data System (ADS)
Gordon, K. D.; Baes, M.; Bianchi, S.; Camps, P.; Juvela, M.; Kuiper, R.; Lunttila, T.; Misselt, K. A.; Natale, G.; Robitaille, T.; Steinacker, J.
2017-07-01
Context. The radiative transport of photons through arbitrary three-dimensional (3D) structures of dust is a challenging problem due to the anisotropic scattering of dust grains and strong coupling between different spatial regions. The radiative transfer problem in 3D is solved using Monte Carlo or Ray Tracing techniques as no full analytic solution exists for the true 3D structures. Aims: We provide the first 3D dust radiative transfer benchmark composed of a slab of dust with uniform density externally illuminated by a star. This simple 3D benchmark is explicitly formulated to provide tests of the different components of the radiative transfer problem including dust absorption, scattering, and emission. Methods: The details of the external star, the slab itself, and the dust properties are provided. This benchmark includes models with a range of dust optical depths fully probing cases that are optically thin at all wavelengths to optically thick at most wavelengths. The dust properties adopted are characteristic of the diffuse Milky Way interstellar medium. This benchmark includes solutions for the full dust emission including single photon (stochastic) heating as well as two simplifying approximations: One where all grains are considered in equilibrium with the radiation field and one where the emission is from a single effective grain with size-distribution-averaged properties. A total of six Monte Carlo codes and one Ray Tracing code provide solutions to this benchmark. Results: The solution to this benchmark is given as global spectral energy distributions (SEDs) and images at select diagnostic wavelengths from the ultraviolet through the infrared. Comparison of the results revealed that the global SEDs are consistent on average to a few percent for all but the scattered stellar flux at very high optical depths. The image results are consistent within 10%, again except for the stellar scattered flux at very high optical depths. The lack of agreement between different codes of the scattered flux at high optical depths is quantified for the first time. Convergence tests using one of the Monte Carlo codes illustrate the sensitivity of the solutions to various model parameters. Conclusions: We provide the first 3D dust radiative transfer benchmark and validate the accuracy of this benchmark through comparisons between multiple independent codes and detailed convergence tests.
Goodkind, Daniel; Lollock, Lisa; Choi, Yoonjoung; McDevitt, Thomas; West, Loraine
2018-01-01
Meeting demand for family planning can facilitate progress towards all major themes of the United Nations Sustainable Development Goals (SDGs): people, planet, prosperity, peace, and partnership. Many policymakers have embraced a benchmark goal that at least 75% of the demand for family planning in all countries be satisfied with modern contraceptive methods by the year 2030. This study examines the demographic impact (and development implications) of achieving the 75% benchmark in 13 developing countries that are expected to be the furthest from achieving that benchmark. Estimation of the demographic impact of achieving the 75% benchmark requires three steps in each country: 1) translate contraceptive prevalence assumptions (with and without intervention) into future fertility levels based on biometric models, 2) incorporate each pair of fertility assumptions into separate population projections, and 3) compare the demographic differences between the two population projections. Data are drawn from the United Nations, the US Census Bureau, and Demographic and Health Surveys. The demographic impact of meeting the 75% benchmark is examined via projected differences in fertility rates (average expected births per woman's reproductive lifetime), total population, growth rates, age structure, and youth dependency. On average, meeting the benchmark would imply a 16 percentage point increase in modern contraceptive prevalence by 2030 and a 20% decline in youth dependency, which portends a potential demographic dividend to spur economic growth. Improvements in meeting the demand for family planning with modern contraceptive methods can bring substantial benefits to developing countries. To our knowledge, this is the first study to show formally how such improvements can alter population size and age structure. Declines in youth dependency portend a demographic dividend, an added bonus to the already well-known benefits of meeting existing demands for family planning.
Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides
Nowell, Lisa H.; Norman, Julia E.; Ingersoll, Christopher G.; Moran, Patrick W.
2016-01-01
Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics of sediment, and uncertainty in TEB values. Additional evaluations of benchmarks in relation to sediment chemistry and toxicity are ongoing.
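A small sketch of the mixture screening step described above, summing benchmark quotients across detected pesticides; the TEB/LEB values and measured concentrations shown are hypothetical placeholders, not values from the paper, and the decision rules are simplified relative to the paper's interpretation.

```python
# Hypothetical benchmark values and measured whole-sediment concentrations
# (same units for benchmarks and measurements); real TEB/LEB values come from the paper
TEB = {"bifenthrin": 0.52, "chlorpyrifos": 0.30, "fipronil": 0.09}
LEB = {"bifenthrin": 3.60, "chlorpyrifos": 2.10, "fipronil": 0.70}
sample = {"bifenthrin": 0.40, "chlorpyrifos": 0.10, "fipronil": 0.05}

# Benchmark quotients are summed over all detected pesticides in the sample
sum_teb_quotient = sum(sample[p] / TEB[p] for p in sample)
sum_leb_quotient = sum(sample[p] / LEB[p] for p in sample)
print(f"sum TEB quotients = {sum_teb_quotient:.2f}, sum LEB quotients = {sum_leb_quotient:.2f}")

# Simplified screening interpretation of the summed quotients
if sum_leb_quotient >= 1:
    print("high probability of adverse effects on benthic invertebrates")
elif sum_teb_quotient >= 1:
    print("adverse effects possible")
else:
    print("adverse effects unlikely")
```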
NASA Astrophysics Data System (ADS)
Cao, S. Q.; Su, M. G.; Min, Q.; Sun, D. X.; O'Sullivan, G.; Dong, C. Z.
2018-02-01
A spatio-temporally resolved spectral measurement system of highly charged ions from laser-produced plasmas is presented. Corresponding semiautomated computer software for measurement control and spectral analysis has been written to achieve the best synchronicity possible among the instruments. This avoids the tedious comparative processes between experimental and theoretical results. To demonstrate the capabilities of this system, a series of spatio-temporally resolved experiments of laser-produced Al plasmas have been performed and applied to benchmark the software. The system is a useful tool for studying the spectral structures of highly charged ions and for evaluating the spatio-temporal evolution of laser-produced plasmas.
ICSBEP Benchmarks For Nuclear Data Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briggs, J. Blair
2005-05-24
The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) -- Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled ''International Handbook of Evaluated Criticality Safety Benchmark Experiments.'' The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.
Developing integrated benchmarks for DOE performance measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.
1992-09-30
The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE, which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.
NASA Technical Reports Server (NTRS)
deWit, A.; Cohn, N.
1999-01-01
The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.
NASA Technical Reports Server (NTRS)
de Wit, A.; Cohn, N.
1999-01-01
The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.
Model evaluation using a community benchmarking system for land surface models
NASA Astrophysics Data System (ADS)
Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Kluzek, E. B.; Koven, C. D.; Randerson, J. T.
2014-12-01
Evaluation of atmosphere, ocean, sea ice, and land surface models is an important step in identifying deficiencies in Earth system models and developing improved estimates of future change. For the land surface and carbon cycle, the design of an open-source system has been an important objective of the International Land Model Benchmarking (ILAMB) project. Here we evaluated CMIP5 and CLM models using a benchmarking system that enables users to specify models, data sets, and scoring systems so that results can be tailored to specific model intercomparison projects. Our scoring system used information from four different aspects of global datasets, including climatological mean spatial patterns, seasonal cycle dynamics, interannual variability, and long-term trends. Variable-to-variable comparisons enable investigation of the mechanistic underpinnings of model behavior, and allow for some control of biases in model drivers. Graphics modules allow users to evaluate model performance at local, regional, and global scales. Use of modular structures makes it relatively easy for users to add new variables, diagnostic metrics, benchmarking datasets, or model simulations. Diagnostic results are automatically organized into HTML files, so users can conveniently share results with colleagues. We used this system to evaluate atmospheric carbon dioxide, burned area, global biomass and soil carbon stocks, net ecosystem exchange, gross primary production, ecosystem respiration, terrestrial water storage, evapotranspiration, and surface radiation from CMIP5 historical and ESM historical simulations. We found that the multi-model mean often performed better than many of the individual models for most variables. We plan to publicly release a stable version of the software during fall of 2014 that has land surface, carbon cycle, hydrology, radiation and energy cycle components.
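A toy illustration of the four-aspect scoring idea (mean bias, seasonal cycle, interannual variability, long-term trend) on a single regional time series; the score functions, equal weights, and use of month-of-maximum as the seasonal metric are simplifications and do not reproduce the actual ILAMB scoring code.

```python
import numpy as np

def variable_score(model, obs, weights=(0.25, 0.25, 0.25, 0.25)):
    """Toy multi-aspect benchmarking score for one variable (ILAMB-like sketch).

    model, obs : 1-D monthly time series (same length, assumed to start in January).
    Combines scores for mean bias, seasonal cycle phase, interannual variability,
    and linear trend into a single weighted score in (0, 1].
    """
    def rel_err_score(a, b):
        return np.exp(-np.abs(a - b) / (np.abs(b) + 1e-12))

    months = np.arange(len(obs)) % 12
    seas_m = np.array([model[months == m].mean() for m in range(12)])
    seas_o = np.array([obs[months == m].mean() for m in range(12)])

    s_bias  = rel_err_score(model.mean(), obs.mean())
    s_seas  = np.exp(-abs(int(np.argmax(seas_m)) - int(np.argmax(seas_o))) / 6.0)
    s_iav   = rel_err_score(model.std(), obs.std())
    s_trend = rel_err_score(np.polyfit(np.arange(len(model)), model, 1)[0],
                            np.polyfit(np.arange(len(obs)), obs, 1)[0])
    return float(np.dot(weights, [s_bias, s_seas, s_iav, s_trend]))

# Synthetic 20-year monthly series standing in for an observed and a modeled variable
rng = np.random.default_rng(0)
t = np.arange(240)
obs = 10 + 3 * np.sin(2 * np.pi * t / 12) + 0.010 * t + rng.normal(0, 0.3, t.size)
mod = 11 + 2.5 * np.sin(2 * np.pi * (t - 1) / 12) + 0.008 * t + rng.normal(0, 0.3, t.size)
print(variable_score(mod, obs))
```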
NASA Astrophysics Data System (ADS)
Voorhoeve, Robbert; van der Maas, Annemiek; Oomen, Tom
2018-05-01
Frequency response function (FRF) identification is often used as a basis for control systems design and as a starting point for subsequent parametric system identification. The aim of this paper is to develop a multiple-input multiple-output (MIMO) local parametric modeling approach for FRF identification of lightly damped mechanical systems with improved speed and accuracy. The proposed method is based on local rational models, which can efficiently handle the lightly-damped resonant dynamics. A key aspect herein is the freedom in the multivariable rational model parametrizations. Several choices for such multivariable rational model parametrizations are proposed and investigated. For systems with many inputs and outputs the required number of model parameters can rapidly increase, adversely affecting the performance of the local modeling approach. Therefore, low-order model structures are investigated. The structure of these low-order parametrizations leads to an undesired directionality in the identification problem. To address this, an iterative local rational modeling algorithm is proposed. As a special case recently developed SISO algorithms are recovered. The proposed approach is successfully demonstrated on simulations and on an active vibration isolation system benchmark, confirming good performance of the method using significantly less parameters compared with alternative approaches.
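As a rough illustration of the local parametric idea, the sketch below fits a rational model to FRF samples in a small frequency window using a Levy-type linearized least squares. This is a SISO simplification with arbitrary model orders, not the authors' MIMO parametrizations or their iterative algorithm.

```python
# Illustrative sketch (not the authors' algorithm): fit a local rational model
# G(w) ~ N(jw)/D(jw) to a few FRF samples around a centre frequency using a
# Levy-type linearized least squares. Model orders are arbitrary assumptions.
import numpy as np

def local_rational_fit(omega, G, n_num=2, n_den=2):
    """Return numerator/denominator coefficients (ascending powers of jw), d0 fixed to 1."""
    s = 1j * np.asarray(omega, float)
    G = np.asarray(G, complex)
    # Unknowns: [n_0..n_nnum, d_1..d_nden]; residual N(s_k) - G_k * D(s_k), D = 1 + d_1 s + ...
    A = np.hstack([np.vander(s, n_num + 1, increasing=True),
                   -G[:, None] * np.vander(s, n_den + 1, increasing=True)[:, 1:]])
    b = G  # the D(s) = 1 term moved to the right-hand side
    # Solve the complex least-squares problem by stacking real and imaginary parts
    A_ri = np.vstack([A.real, A.imag])
    b_ri = np.concatenate([b.real, b.imag])
    theta, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
    num = theta[:n_num + 1]
    den = np.concatenate([[1.0], theta[n_num + 1:]])
    return num, den

# Example: a lightly damped second-order FRF sampled in a local window
w = np.linspace(9.0, 11.0, 15)
G_true = 1.0 / ((1j * w) ** 2 + 0.2 * (1j * w) + 100.0)
num, den = local_rational_fit(w, G_true)
print(num, den)
```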
Solution of the neutronics code dynamic benchmark by finite element method
NASA Astrophysics Data System (ADS)
Avvakumov, A. V.; Vabishchevich, P. N.; Vasilev, A. O.; Strizhov, V. F.
2016-10-01
The objective is to analyze the dynamic benchmark developed by Atomic Energy Research for the verification of best-estimate neutronics codes. The benchmark scenario includes asymmetrical ejection of a control rod in a water-type hexagonal reactor at hot zero power. A simple Doppler feedback mechanism assuming adiabatic fuel temperature heating is proposed. The finite element method on triangular calculation grids is used to solve the three-dimensional neutron kinetics problem. The software has been developed using the engineering and scientific calculation library FEniCS. The matrix spectral problem is solved using the scalable and flexible toolkit SLEPc. The solution accuracy of the dynamic benchmark is analyzed by refining the calculation grid and varying the degree of the finite elements.
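For readers unfamiliar with the k-eigenvalue formulation underlying such benchmarks, the following minimal sketch solves a one-group, one-dimensional diffusion eigenproblem with SciPy. It is an illustrative stand-in, not the FEniCS/SLEPc implementation used in the paper, and all cross-section values are invented.

```python
# Minimal illustrative sketch, not the paper's implementation: a one-group,
# one-dimensional diffusion k-eigenvalue problem A*phi = (1/k) * F*phi on a
# uniform grid, with the flux set to zero at the domain boundary.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, h = 200, 0.5                          # nodes, mesh spacing [cm]
D, sig_a, nu_sig_f = 1.0, 0.02, 0.025    # illustrative, made-up constants

# Loss operator A: -D d2/dx2 + sig_a (finite differences)
main = 2 * D / h**2 + sig_a
off = -D / h**2
A = sp.diags([off, main, off], [-1, 0, 1], shape=(n, n), format="csc")
F = sp.identity(n, format="csc") * nu_sig_f   # fission operator

# k_eff is the largest eigenvalue of A^{-1} F
solve_A = spla.factorized(A)
op = spla.LinearOperator((n, n), matvec=lambda x: solve_A(F @ x), dtype=float)
vals, vecs = spla.eigs(op, k=1, which="LM")
print("k_eff ~", float(vals[0].real))
```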
Benchmarking Controlled Trial--a novel concept covering all observational effectiveness studies.
Malmivaara, Antti
2015-06-01
The Benchmarking Controlled Trial (BCT) is a novel concept which covers all observational studies aiming to assess effectiveness. BCTs provide evidence of the comparative effectiveness between health service providers, and of effectiveness due to particular features of the health and social care systems. BCTs complement randomized controlled trials (RCTs) as the sources of evidence on effectiveness. This paper presents a definition of the BCT; compares the position of BCTs in assessing effectiveness with that of RCTs; presents a checklist for assessing methodological validity of a BCT; and pilot-tests the checklist with BCTs published recently in the leading medical journals.
NASA Astrophysics Data System (ADS)
Zhao, Erdong; Guo, Chaoran; Liu, Liwei; Dai, Sichen; Li, Shangqi
2017-04-01
In recent years, large-scale fog and haze have broken out across China, particularly in Beijing. Emissions from fossil fuel combustion in energy production and consumption are the main source of environmental pollution and haze, and the problem is most prominent in the power industry. In this paper, we evaluate the relationship between Beijing’s power structure and the prevention and control of atmospheric pollution using the Integrated Policy Assessment Model for China - Second Generation Model (IPAC-SGM). The paper explores the driving effect of the new energy industry on Beijing’s air pollution prevention and control by simulating the development of electric energy in Beijing under three scenarios: a benchmark scenario, a general policy scenario, and a reinforced policy scenario.
Dual linear structured support vector machine tracking method via scale correlation filter
NASA Astrophysics Data System (ADS)
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on structured support vector machines (SVMs) have performed well on recent visual tracking benchmarks. However, these methods do not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark of 100 challenging video sequences, the average precision of the proposed method is 82.8%.
Liebe, J D; Hübner, U
2013-01-01
Continuous improvement of IT-performance in healthcare organisations requires actionable performance indicators, regularly conducted, independent measurements, and meaningful and scalable reference groups. Existing IT-benchmarking initiatives have focussed on the development of reliable and valid indicators, but less on the question of how to implement an environment for conducting easily repeatable and scalable IT-benchmarks. This study aims at developing and trialling a procedure that meets the afore-mentioned requirements. We chose a well-established, regularly conducted (inter-)national IT-survey of healthcare organisations (IT-Report Healthcare) as the environment and invited the participants of the 2011 survey (hospital CIOs) to take part in a benchmark. The 61 structural and functional performance indicators covered, among other things, the implementation status and integration of IT-systems and functions, global user satisfaction, and the resources of the IT-department. Healthcare organisations were grouped by size and ownership. The benchmark results were made available electronically and feedback on the use of these results was requested after several months. Fifty-nine hospitals participated in the benchmarking. Reference groups consisted of up to 141 members depending on the number of beds (size) and the ownership (public vs. private). A total of 122 charts showing single indicator frequency views were sent to each participant. The evaluation showed that 94.1% of the CIOs who participated in the evaluation considered this benchmarking beneficial and reported that they would participate again. Based on the feedback of the participants, we developed two additional views that provide a more consolidated picture. The results demonstrate that establishing an independent, easily repeatable and scalable IT-benchmarking procedure is possible and was deemed desirable. Based on these encouraging results, a new benchmarking round, which includes process indicators, is currently being conducted.
Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen
2017-01-01
Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children’s strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line, positively affects third and fifth graders’ NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values, would have a positive effect on younger children’s NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders’ NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children’s benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children’s age and familiarity with the number range, these additional external benchmarks might need to be labeled. PMID:28713302
Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen
2017-01-01
Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children's strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line, positively affects third and fifth graders' NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values, would have a positive effect on younger children's NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders' NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children's benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children's age and familiarity with the number range, these additional external benchmarks might need to be labeled.
2011-01-01
Introduction: Selective digestive decontamination (SDD) appears to have a more compelling evidence base than non-antimicrobial methods for the prevention of ventilator associated pneumonia (VAP). However, the striking variability in ventilator associated pneumonia-incidence proportion (VAP-IP) among the SDD studies remains unexplained and a postulated contextual effect remains untested for. Methods: Nine reviews were used to source 45 observational (benchmark) groups and 137 component (control and intervention) groups of studies of SDD and studies of three non-antimicrobial methods of VAP prevention. The logit VAP-IP data were summarized by meta-analysis using random effects methods and the associated heterogeneity (tau²) was measured. As group level predictors of logit VAP-IP, the mode of VAP diagnosis, proportion of trauma admissions, the proportion receiving prolonged ventilation and the intervention method under study were examined in meta-regression models containing the benchmark groups together with either the control (models 1 to 3) or intervention (models 4 to 6) groups of the prevention studies. Results: The VAP-IP benchmark derived here is 22.1% (95% confidence interval; 95% CI; 19.2 to 25.5; tau² 0.34) whereas the mean VAP-IP of control groups from studies of SDD and of non-antimicrobial methods is 35.7 (29.7 to 41.8; tau² 0.63) versus 20.4 (17.2 to 24.0; tau² 0.41), respectively (P < 0.001). The disparity between the benchmark groups and the control groups of the SDD studies, which was most apparent for the highest quality studies, could not be explained in the meta-regression models after adjusting for various group level factors. The mean VAP-IP (95% CI) of intervention groups is 16.0 (12.6 to 20.3; tau² 0.59) and 17.1 (14.2 to 20.3; tau² 0.35) for SDD studies versus studies of non-antimicrobial methods, respectively. Conclusions: The VAP-IP among the intervention groups within the SDD evidence base is less variable and more similar to the benchmark than among the control groups. These paradoxical observations cannot readily be explained. The interpretation of the SDD evidence base cannot proceed without further consideration of this contextual effect. PMID:21214897
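The random-effects pooling of logit-transformed incidence proportions can be illustrated with a short DerSimonian-Laird sketch; the event counts below are invented and the code is not the authors' analysis.

```python
# Illustrative sketch (not the authors' exact models): DerSimonian-Laird
# random-effects pooling of logit-transformed incidence proportions, yielding a
# summary proportion and the between-group heterogeneity tau^2.
import numpy as np

def pool_logit_proportions(events, totals):
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = events / totals
    y = np.log(p / (1 - p))                     # logit proportions
    v = 1.0 / events + 1.0 / (totals - events)  # approximate within-group variances
    w = 1.0 / v                                 # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)          # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)               # between-group heterogeneity
    w_re = 1.0 / (v + tau2)                     # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    pooled = 1.0 / (1.0 + np.exp(-y_re))        # back-transform to a proportion
    ci = [1.0 / (1.0 + np.exp(-(y_re + z * se))) for z in (-1.96, 1.96)]
    return pooled, ci, tau2

# Invented example data: events and group sizes for four groups
print(pool_logit_proportions([22, 35, 18, 40], [100, 110, 90, 150]))
```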
Analogue experiments as benchmarks for models of lava flow emplacement
NASA Astrophysics Data System (ADS)
Garel, F.; Kaminski, E. C.; Tait, S.; Limare, A.
2013-12-01
During an effusive volcanic eruption, crisis management is mainly based on the prediction of lava flow advance and its velocity. The spreading of a lava flow, seen as a gravity current, depends on its "effective rheology" and on the effusion rate. Fast-computing models have arisen in the past decade in order to predict lava flow path and rate of advance in near real time. This type of model, crucial for mitigating volcanic hazards and organizing potential evacuations, has mainly been compared a posteriori to real cases of emplaced lava flows. The input parameters of such simulations applied to natural eruptions, especially effusion rate and topography, are often not known precisely, and are difficult to evaluate after the eruption. It is therefore not straightforward to identify the causes of discrepancies between model outputs and observed lava emplacement, whereas the comparison of models with controlled laboratory experiments appears easier. The challenge for numerical simulations of lava flow emplacement is to model the simultaneous advance and thermal structure of viscous lava flows. To provide original constraints later to be used in benchmark numerical simulations, we have performed lab-scale experiments investigating the cooling of isoviscous gravity currents. The simplest experimental set-up is as follows: silicone oil, whose viscosity, around 5 Pa.s, varies by less than a factor of 2 in the temperature range studied, is injected from a point source onto a horizontal plate and spreads axisymmetrically. The oil is injected hot, and progressively cools down to ambient temperature away from the source. Once the flow is developed, it presents a stationary radial thermal structure whose characteristics depend on the input flow rate. In addition to the experimental observations, we have developed in Garel et al., JGR, 2012 a theoretical model confirming the relationship between supply rate, flow advance and stationary surface thermal structure. We also provide experimental observations of the effect of wind on the surface thermal structure of a viscous flow, which could be used to benchmark a thermal heat loss model. We will also briefly present more complex analogue experiments using wax material. These experiments present discontinuous advance behavior, and a dual surface thermal structure with low (solidified) vs. high (hot liquid exposed at the surface) surface temperature regions. Emplacement models should aim to reproduce these two features, also observed on lava flows, to better predict the hazard of lava inundation.
The national hydrologic bench-mark network
Cobb, Ernest D.; Biesecker, J.E.
1971-01-01
The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.
Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems
NASA Astrophysics Data System (ADS)
Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald
A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no possibility for a just measurement of the quality of such optimizations. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today’s benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction access patterns are constantly shifting. We present a benchmark that simulates a web information system with interaction of large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark considers the temporal dependency of user interaction. Main focus is to measure the adaptability of a database management system according to shifting workloads. We will give details on our design approach that uses sophisticated pattern analysis and data mining techniques.
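The core idea of a shifting workload can be sketched as a request generator whose query-type mixture drifts over time; the query names, rates, and daily-cycle weights below are invented placeholders rather than the benchmark's actual workload model.

```python
# Hypothetical sketch of the core idea: draw requests from a query-type mixture
# whose weights drift over time (here a simple daily cycle), so the benchmark
# load shifts instead of being a homogeneous stream.
import math
import random

QUERY_TYPES = ["browse_course", "submit_quiz", "forum_post", "admin_report"]

def mixture_weights(t_hours):
    """Time-varying, non-negative weights for each query type."""
    day = 2 * math.pi * (t_hours % 24) / 24
    return [
        2.0 + math.sin(day),            # browsing peaks during the day
        1.0 + 0.8 * math.sin(day - 1),
        0.5 + 0.4 * math.cos(day),
        0.3 + 0.3 * math.cos(day + 2),  # reporting peaks off-hours
    ]

def generate_workload(duration_hours=48, requests_per_hour=60, seed=1):
    rng = random.Random(seed)
    trace = []
    for i in range(duration_hours * requests_per_hour):
        t = i / requests_per_hour
        q = rng.choices(QUERY_TYPES, weights=mixture_weights(t), k=1)[0]
        trace.append((t, q))
    return trace

print(generate_workload(duration_hours=1)[:5])
```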
Benchmarking facilities providing care: An international overview of initiatives
Thonon, Frédérique; Watson, Jonathan; Saghatchian, Mahasti
2015-01-01
We performed a literature review of existing benchmarking projects of health facilities to explore (1) the rationales for those projects, (2) the motivation for health facilities to participate, (3) the indicators used and (4) the success and threat factors linked to those projects. We studied both peer-reviewed and grey literature. We examined 23 benchmarking projects of different medical specialities. The majority of projects used a mix of structure, process and outcome indicators. For some projects, participants had a direct or indirect financial incentive to participate (such as reimbursement by Medicaid/Medicare or litigation costs related to quality of care). A positive impact was reported for most projects, mainly in terms of improvement of practice and adoption of guidelines and, to a lesser extent, improvement in communication. Only 1 project reported positive impact in terms of clinical outcomes. Success factors and threats are linked to both the benchmarking process (such as organisation of meetings, link with existing projects) and indicators used (such as adjustment for diagnostic-related groups). The results of this review will help coordinators of a benchmarking project to set it up successfully. PMID:26770800
Benchmarking forensic mental health organizations.
Coombs, Tim; Taylor, Monica; Pirkis, Jane
2011-04-01
This paper describes the forensic mental health forums that were conducted as part of the National Mental Health Benchmarking Project (NMHBP). These forums encouraged participating organizations to compare their performance on a range of key performance indicators (KPIs) with that of their peers. Four forensic mental health organizations took part in the NMHBP. Representatives from these organizations attended eight benchmarking forums at which they documented their performance against previously agreed KPIs. They also undertook three special projects which explored some of the factors that might explain inter-organizational variation in performance. The inter-organizational range for many of the indicators was substantial. Observing this led participants to conduct the special projects to explore three factors which might help explain the variability - seclusion practices, delivery of community mental health services, and provision of court liaison services. The process of conducting the special projects gave participants insights into the practices and structures employed by their counterparts, and provided them with some important lessons for quality improvement. The forensic mental health benchmarking forums have demonstrated that benchmarking is feasible and likely to be useful in improving service performance and quality.
Revisiting the PLUMBER Experiments from a Process-Diagnostics Perspective
NASA Astrophysics Data System (ADS)
Nearing, G. S.; Ruddell, B. L.; Clark, M. P.; Nijssen, B.; Peters-Lidard, C. D.
2017-12-01
The PLUMBER benchmarking experiments [1] showed that some of the most sophisticated land models (CABLE, CH-TESSEL, COLA-SSiB, ISBA-SURFEX, JULES, Mosaic, Noah, ORCHIDEE) were outperformed - in simulations of half-hourly surface energy fluxes - by instantaneous, out-of-sample, and globally-stationary regressions with no state memory. One criticism of PLUMBER is that the benchmarking methodology was not derived formally, so that applying a similar methodology with different performance metrics can result in qualitatively different results. Another common criticism of model intercomparison projects in general is that they offer little insight into process-level deficiencies in the models, and therefore are of marginal value for helping to improve the models. We address both of these issues by proposing a formal benchmarking methodology that also yields a formal and quantitative method for process-level diagnostics. We apply this to the PLUMBER experiments to show that (1) the PLUMBER conclusions were generally correct - the models use only a fraction of the information available to them from met forcing data (<50% by our analysis), and (2) all of the land models investigated by PLUMBER have similar process-level error structures, and therefore together do not represent a meaningful sample of structural or epistemic uncertainty. We conclude by suggesting two ways to improve the experimental design of model intercomparison and/or model benchmarking studies like PLUMBER. First, PLUMBER did not report model parameter values, and it is necessary to know these values to separate parameter uncertainty from structural uncertainty. This is a first order requirement if we want to use intercomparison studies to provide feedback to model development. Second, technical documentation of land models is inadequate. Future model intercomparison projects should begin with a collaborative effort by model developers to document specific differences between model structures. This could be done in a reproducible way using a unified, process-flexible system like SUMMA [2]. [1] Best, M.J. et al. (2015) 'The plumbing of land surface models: benchmarking model performance', J. Hydrometeor. [2] Clark, M.P. et al. (2015) 'A unified approach for process-based hydrologic modeling: 1. Modeling concept', Water Resour. Res.
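The kind of empirical benchmark PLUMBER uses can be sketched as an out-of-sample regression of the observed flux on meteorological forcings, against which a land model's error is compared; the arrays below are synthetic stand-ins for tower data and model output, not PLUMBER code or data.

```python
# Illustrative sketch of the benchmarking idea described above: compare a land
# model's flux error against an out-of-sample linear regression that predicts
# the flux from meteorological forcings alone.
import numpy as np

rng = np.random.default_rng(42)
n = 2000
forcing = rng.normal(size=(n, 3))            # e.g. shortwave, temperature, humidity
obs_flux = 200 * forcing[:, 0] + 10 * forcing[:, 1] + rng.normal(scale=20, size=n)
model_flux = obs_flux + rng.normal(scale=40, size=n)   # pretend land-model output

# Out-of-sample empirical benchmark: fit on the first half, evaluate on the second
train, test = slice(0, n // 2), slice(n // 2, n)
X_train = np.column_stack([np.ones(n // 2), forcing[train]])
coef, *_ = np.linalg.lstsq(X_train, obs_flux[train], rcond=None)
X_test = np.column_stack([np.ones(n - n // 2), forcing[test]])
bench_flux = X_test @ coef

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print("model RMSE    :", rmse(model_flux[test], obs_flux[test]))
print("benchmark RMSE:", rmse(bench_flux, obs_flux[test]))
# A model "fails" the benchmark when its RMSE exceeds that of the regression.
```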
Implementation of BT, SP, LU, and FT of NAS Parallel Benchmarks in Java
NASA Technical Reports Server (NTRS)
Schultz, Matthew; Frumkin, Michael; Jin, Hao-Qiang; Yan, Jerry
2000-01-01
A number of Java features make it an attractive but debatable choice for High Performance Computing. We have implemented the single-structured-grid benchmarks BT, SP, LU, and FT in Java. The performance and scalability of the Java code show that significant improvements in Java compiler technology and in Java thread implementation are necessary for Java to compete with Fortran in HPC applications.
Yu, Jinchao; Guerois, Raphaël
2016-12-15
Protein-protein docking methods are of great importance for understanding interactomes at the structural level. It has become increasingly appealing to use not only experimental structures but also homology models of unbound subunits as input for docking simulations. So far, a large-scale assessment of the success of rigid-body free docking methods on homology models has been missing. We explored how we could benefit from comparative modelling of unbound subunits to expand docking benchmark datasets. Starting from a collection of 3157 non-redundant, high X-ray resolution heterodimers, we developed the PPI4DOCK benchmark containing 1417 docking targets based on unbound homology models. Rigid-body docking by Zdock showed that for 1208 cases (85.2%), at least one correct decoy was generated, emphasizing the efficiency of rigid-body docking in generating correct assemblies. Overall, the PPI4DOCK benchmark contains a large set of realistic cases and provides new ground for assessing docking and scoring methodologies. Benchmark sets can be downloaded from http://biodev.cea.fr/interevol/ppi4dock/. Contact: guerois@cea.fr. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
A CPU benchmark for protein crystallographic refinement.
Bourne, P E; Hendrickson, W A
1990-01-01
The CPU time required to complete a cycle of restrained least-squares refinement of a protein structure from X-ray crystallographic data using the FORTRAN codes PROTIN and PROLSQ are reported for 48 different processors, ranging from single-user workstations to supercomputers. Sequential, vector, VLIW, multiprocessor, and RISC hardware architectures are compared using both a small and a large protein structure. Representative compile times for each hardware type are also given, and the improvement in run-time when coding for a specific hardware architecture considered. The benchmarks involve scalar integer and vector floating point arithmetic and are representative of the calculations performed in many scientific disciplines.
ForceGen 3D structure and conformer generation: from small lead-like molecules to macrocyclic drugs
NASA Astrophysics Data System (ADS)
Cleves, Ann E.; Jain, Ajay N.
2017-05-01
We introduce the ForceGen method for 3D structure generation and conformer elaboration of drug-like small molecules. ForceGen is novel, avoiding use of distance geometry, molecular templates, or simulation-oriented stochastic sampling. The method is primarily driven by the molecular force field, implemented using an extension of MMFF94s and a partial charge estimator based on electronegativity-equalization. The force field is coupled to algorithms for direct sampling of realistic physical movements made by small molecules. Results are presented on a standard benchmark from the Cambridge Crystallographic Database of 480 drug-like small molecules, including full structure generation from SMILES strings. Reproduction of protein-bound crystallographic ligand poses is demonstrated on four carefully curated data sets: the ConfGen Set (667 ligands), the PINC cross-docking benchmark (1062 ligands), a large set of macrocyclic ligands (182 total with typical ring sizes of 12-23 atoms), and a commonly used benchmark for evaluating macrocycle conformer generation (30 ligands total). Results compare favorably to alternative methods, and performance on macrocyclic compounds approaches that observed on non-macrocycles while yielding a roughly 100-fold speed improvement over alternative MD-based methods with comparable performance.
Benditz, A; Drescher, J; Greimel, F; Zeman, F; Grifka, J; Meißner, W; Völlner, F
2016-12-05
Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analyses and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in terms of activity-related pain and 9th in patient satisfaction among 47 anonymized hospitals participating in the benchmarking project. At the end of the study, we had improved to 1st in activity-related pain and to 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and for increasing patient satisfaction after TKA.
Benditz, A.; Drescher, J.; Greimel, F.; Zeman, F.; Grifka, J.; Meißner, W.; Völlner, F.
2016-01-01
Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analyses and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in terms of activity-related pain and 9th in patient satisfaction among 47 anonymized hospitals participating in the benchmarking project. At the end of the study, we had improved to 1st in activity-related pain and to 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and for increasing patient satisfaction after TKA. PMID:27917911
NASA Technical Reports Server (NTRS)
Rivera, Jose A., Jr.; Dansberry, Bryan E.; Farmer, Moses G.; Eckstrom, Clinton V.; Seidel, David A.; Bennett, Robert M.
1991-01-01
The Structural Dynamics Div. at NASA-Langley has started a wind tunnel activity referred to as the Benchmark Models Program. The objective is to acquire test data that will be useful for developing and evaluating aeroelastic-type Computational Fluid Dynamics codes currently in use or under development. The progress achieved in testing the first model in the Benchmark Models Program is described. Experimental flutter boundaries are presented for a rigid semispan model (NACA 0012 airfoil section) mounted on a flexible mount system. Also, steady and unsteady pressure measurements taken at the flutter condition are presented. The pressure data were acquired over the entire model chord at the 60 percent span station.
Benchmarking Controlled Trial—a novel concept covering all observational effectiveness studies
Malmivaara, Antti
2015-01-01
Abstract The Benchmarking Controlled Trial (BCT) is a novel concept which covers all observational studies aiming to assess effectiveness. BCTs provide evidence of the comparative effectiveness between health service providers, and of effectiveness due to particular features of the health and social care systems. BCTs complement randomized controlled trials (RCTs) as the sources of evidence on effectiveness. This paper presents a definition of the BCT; compares the position of BCTs in assessing effectiveness with that of RCTs; presents a checklist for assessing methodological validity of a BCT; and pilot-tests the checklist with BCTs published recently in the leading medical journals. PMID:25965700
Aeroservoelastic and Flight Dynamics Analysis Using Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Arena, Andrew S., Jr.
1999-01-01
This document in large part is based on the Masters Thesis of Cole Stephens. The document encompasses a variety of technical and practical issues involved when using the STARS codes for Aeroservoelastic analysis of vehicles. The document covers in great detail a number of technical issues and step-by-step details involved in the simulation of a system where aerodynamics, structures and controls are tightly coupled. Comparisons are made to a benchmark experimental program conducted at NASA Langley. One of the significant advantages of the methodology detailed is that as a result of the technique used to accelerate the CFD-based simulation, a systems model is produced which is very useful for developing the control law strategy, and subsequent high-speed simulations.
A benchmark for subduction zone modeling
NASA Astrophysics Data System (ADS)
van Keken, P.; King, S.; Peacock, S.
2003-04-01
Our understanding of subduction zones hinges critically on the ability to discern their thermal structure and dynamics. Computational modeling has become an essential complementary approach to observational and experimental studies. The accurate modeling of subduction zones is challenging due to the unique geometry, the complicated rheological description, and the influence of fluid and melt formation. The complicated physics causes problems for the accurate numerical solution of the governing equations. As a consequence, it is essential for the subduction zone community to be able to evaluate the abilities and limitations of various modeling approaches. The participants of a workshop on the modeling of subduction zones, held at the University of Michigan at Ann Arbor, MI, USA in 2002, formulated a number of case studies to be developed into a benchmark similar to previous mantle convection benchmarks (Blankenbach et al., 1989; Busse et al., 1991; Van Keken et al., 1997). Our initial benchmark focuses on the dynamics of the mantle wedge and investigates three different rheologies: constant viscosity, diffusion creep, and dislocation creep. In addition, we investigate the ability of codes to accurately model dynamic pressure and advection-dominated flows. Proceedings of the workshop and the formulation of the benchmark are available at www.geo.lsa.umich.edu/~keken/subduction02.html. We strongly encourage interested research groups to participate in this benchmark. At Nice 2003 we will provide an update and a first set of benchmark results. Interested researchers are encouraged to contact one of the authors for further details.
Global-local methodologies and their application to nonlinear analysis
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1989-01-01
An assessment is made of the potential of different global-local analysis strategies for predicting the nonlinear and postbuckling responses of structures. Two postbuckling problems of composite panels are used as benchmarks and the application of different global-local methodologies to these benchmarks is outlined. The key elements of each of the global-local strategies are discussed and future research areas needed to realize the full potential of global-local methodologies are identified.
A Knowledge Database on Thermal Control in Manufacturing Processes
NASA Astrophysics Data System (ADS)
Hirasawa, Shigeki; Satoh, Isao
A prototype version of a knowledge database on thermal control in manufacturing processes, specifically molding, semiconductor manufacturing, and micro-scale manufacturing, has been developed. The knowledge database has search functions for technical data, evaluated benchmark data, academic papers, and patents. The database also displays trends and future roadmaps for research topics. It has quick-calculation functions for basic design. This paper summarizes present research topics and future research on thermal control in manufacturing engineering in order to collate this information into the knowledge database. In the molding process, the initial mold and melt temperatures are very important parameters. In addition, thermal control is related to many semiconductor processes, and the main parameter is temperature variation across wafers; accurate in-situ temperature measurement of wafers is important. Furthermore, many technologies are being developed to manufacture micro-structures. Accordingly, the knowledge database will help further advance these technologies.
Higher representations on the lattice: Numerical simulations, SU(2) with adjoint fermions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Del Debbio, Luigi; Patella, Agostino; Pica, Claudio
2010-05-01
We discuss the lattice formulation of gauge theories with fermions in arbitrary representations of the color group and present in detail the implementation of the hybrid Monte Carlo (HMC)/rational HMC algorithm for simulating dynamical fermions. We discuss the validation of the implementation through an extensive set of tests and the stability of simulations by monitoring the distribution of the lowest eigenvalue of the Wilson-Dirac operator. Working with two flavors of Wilson fermions in the adjoint representation, benchmark results for realistic lattice simulations are presented. Runs are performed on different lattice sizes ranging from 4³×8 to 24³×64 sites. For the two smallest lattices we also report the measured values of benchmark mesonic observables. These results can be used as a baseline for rapid cross-checks of simulations in higher representations. The results presented here are the first steps toward more extensive investigations with controlled systematic errors, aiming at a detailed understanding of the phase structure of these theories, and of their viability as candidates for strong dynamics beyond the standard model.
von Websky, Martin W; Raptis, Dimitri A; Vitz, Martina; Rosenthal, Rachel; Clavien, P A; Hahnloser, Dieter
2013-11-01
Virtual reality (VR) simulators are widely used to familiarize surgical novices with laparoscopy, but VR training methods differ in efficacy. In the present trial, self-controlled basic VR training (SC-training) was tested against training based on peer-group-derived benchmarks (PGD-training). First, novice laparoscopic residents were randomized into a SC group (n = 34), and a group using PGD-benchmarks (n = 34) for basic laparoscopic training. After completing basic training, both groups performed 60 VR laparoscopic cholecystectomies for performance analysis. Primary endpoints were simulator metrics; secondary endpoints were program adherence, trainee motivation, and training efficacy. Altogether, 66 residents completed basic training, and 3,837 of 3,960 (96.8 %) cholecystectomies were available for analysis. Course adherence was good, with only two dropouts, both in the SC-group. The PGD-group spent more time and repetitions in basic training until the benchmarks were reached and subsequently showed better performance in the readout cholecystectomies: Median time (gallbladder extraction) showed significant differences of 520 s (IQR 354-738 s) in SC-training versus 390 s (IQR 278-536 s) in the PGD-group (p < 0.001) and 215 s (IQR 175-276 s) in experts, respectively. Path length of the right instrument also showed significant differences, again with the PGD-training group being more efficient. Basic VR laparoscopic training based on PGD benchmarks with external assessment is superior to SC training, resulting in higher trainee motivation and better performance in simulated laparoscopic cholecystectomies. We recommend such a basic course based on PGD benchmarks before advancing to more elaborate VR training.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-26
... Benchmark Engineering Corp., Civil Action No. 10-40131 was lodged with the United States District Court for... requires Defendants to pay a civil penalty of $150,000, perform a Supplemental Environmental Project, and.... Fafard Real Estate and Development Corp., FRE Building Co. Inc., and Benchmark Engineering Corp., D.J...
PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
In this paper, the effect of tuning the control parameters of the Lozi chaotic map, employed as a chaotic pseudo-random number generator for the particle swarm optimization (PSO) algorithm, is investigated. Three different benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
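For illustration, the sketch below iterates the Lozi map and rescales its output to [0, 1] so it can replace the uniform random numbers in a PSO velocity update; the parameter values and the rescaling are assumptions, not the tuned settings from the paper.

```python
# Minimal sketch of the idea described above: iterate the Lozi map
#   x_{n+1} = 1 - a*|x_n| + b*y_n,  y_{n+1} = x_n
# and rescale its output to [0, 1] as a chaotic pseudo-random number source
# for a PSO velocity update. Parameter values are illustrative assumptions.
import numpy as np

class LoziGenerator:
    def __init__(self, a=1.7, b=0.5, x0=0.1, y0=0.1):
        self.a, self.b, self.x, self.y = a, b, x0, y0

    def next(self):
        x_new = 1.0 - self.a * abs(self.x) + self.b * self.y
        self.x, self.y = x_new, self.x
        # Lozi iterates roughly span [-1.5, 1.5] for these parameters; clip and rescale
        return float(np.clip((x_new + 1.5) / 3.0, 0.0, 1.0))

def pso_velocity(v, x, pbest, gbest, gen, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO velocity update with chaotic numbers in place of rand()."""
    r1 = np.array([gen.next() for _ in x])
    r2 = np.array([gen.next() for _ in x])
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

gen = LoziGenerator()
v = pso_velocity(np.zeros(3), np.array([0.5, -1.0, 2.0]),
                 np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0]), gen)
print(v)
```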
von Eiff, Wilfried
2015-01-01
Hospitals worldwide are facing the same opportunities and threats: the demographics of an aging population; steady increases in chronic diseases and severe illnesses; and a steadily increasing demand for medical services with more intensive treatment for multi-morbid patients. Additionally, patients are becoming more demanding. They expect high-quality medicine within a dignity-driven and painless healing environment. The severe financial pressures that these developments entail oblige care providers to pursue ever greater cost containment and to apply process reengineering, as well as continuous performance improvement measures, so as to achieve future financial sustainability. At the same time, regulators are calling for improved patient outcomes. Benchmarking and best-practice management are proven performance improvement tools for enabling hospitals to achieve a higher level of clinical output quality, enhanced patient satisfaction, and care delivery capability, while simultaneously containing and reducing costs. This chapter aims to clarify what benchmarking is and what it is not. Furthermore, it argues that benchmarking is a powerful managerial tool for improving decision-making processes that can contribute to the above-mentioned improvement measures in health care delivery. The benchmarking approach described in this chapter is oriented toward the philosophy of an input-output model and is explained based on practical international examples from different industries in various countries. Benchmarking is not a project with a defined start and end point, but a continuous initiative of comparing key performance indicators, process structures, and best practices from best-in-class companies inside and outside one's own industry. Benchmarking is an ongoing process of measuring and searching for best-in-class performance: measure yourself against yourself over time using key performance indicators; measure yourself against others; identify best practices; equal or exceed these best practices in your institution; and focus on simple and effective ways to implement solutions. Comparing only figures, such as average length of stay, costs of procedures, infection rates, or out-of-stock rates, can easily lead to wrong conclusions and decisions, with often disastrous consequences. Looking at figures and ratios alone is not a basis for detecting potential excellence. It is necessary to look beyond the numbers to understand how processes work and contribute to best-in-class results. Best practices from even quite different industries can enable hospitals to leapfrog results in patient orientation, clinical excellence, and cost-effectiveness. In contrast to common benchmarking approaches, it is pointed out that a comparison which does not "look behind the figures" (that is, which is not informed by the process structure, process dynamics and drivers, process institutions/rules, and process-related incentive components) will be severely limited in the reliability and quality of its findings. In order to demonstrate the transferability of benchmarking results between different industries, practical examples from health care, the automotive industry, and hotel services have been selected. Additionally, it is shown that international comparisons between hospitals providing medical services in different health care systems have great potential for achieving leapfrog results in medical quality, organization of service provision, effective work structures, purchasing and logistics processes, and management.
Model and controller reduction of large-scale structures based on projection methods
NASA Astrophysics Data System (ADS)
Gildin, Eduardo
The design of low-order controllers for high-order plants is a challenging problem theoretically as well as from a computational point of view. Frequently, robust controller design techniques result in high-order controllers. It is then interesting to achieve reduced-order models and controllers while maintaining robustness properties. Controllers designed for large structures, based on models obtained by finite element techniques, have large state-space dimensions. In this case, problems related to storage, accuracy, and computational speed may arise. Thus, model reduction methods capable of addressing controller reduction problems are of primary importance to allow the practical applicability of advanced controller design methods for high-order systems. A challenging large-scale control problem that has emerged recently is the protection of civil structures, such as high-rise buildings and long-span bridges, from dynamic loadings such as earthquakes, high wind, heavy traffic, and deliberate attacks. Even though significant effort has been spent in the application of control theory to the design of civil structures in order to increase their safety and reliability, several challenging issues remain open problems for real-time implementation. This dissertation addresses the development of methodologies for controller reduction for real-time implementation in seismic protection of civil structures using projection methods. Three classes of schemes are analyzed for model and controller reduction: nodal truncation, singular value decomposition methods, and Krylov-based methods. A family of benchmark problems for structural control is used as a framework for a comparative study of model and controller reduction techniques. It is shown that classical model and controller reduction techniques, such as balanced truncation, modal truncation and moment matching by Krylov techniques, yield reduced-order controllers that do not guarantee stability of the closed-loop system, that is, the reduced-order controller implemented with the full-order plant. A controller reduction approach is proposed that guarantees closed-loop stability. It is based on the concept of dissipativity (or positivity) of linear dynamical systems. Utilizing passivity-preserving model reduction together with dissipative-LQG controllers, effective low-order optimal controllers are obtained. Results are shown through simulations.
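As a point of reference for the classical techniques mentioned above, the following sketch implements plain square-root balanced truncation with SciPy; it does not reproduce the dissertation's dissipativity-preserving approach, and the example system is random.

```python
# Illustrative sketch of classical square-root balanced truncation (one of the
# methods the dissertation compares against, not its stability-preserving
# approach). Assumes a stable continuous-time system (A, B, C).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce (A, B, C) to order r by truncating the balanced realization."""
    P = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
    R = cholesky(P, lower=True)                    # P = R R^T
    L = cholesky(Q, lower=True)                    # Q = L L^T
    U, s, Vt = svd(L.T @ R)                        # Hankel singular values in s
    S_inv_sqrt = np.diag(1.0 / np.sqrt(s[:r]))
    T = R @ Vt[:r].T @ S_inv_sqrt                  # right projection
    W = L @ U[:, :r] @ S_inv_sqrt                  # left projection
    return W.T @ A @ T, W.T @ B, C @ T, s

# Example: reduce a random stable 6th-order system to order 2
rng = np.random.default_rng(3)
A = rng.normal(size=(6, 6)); A = A - 6 * np.eye(6)   # shift to make it stable
B = rng.normal(size=(6, 1)); C = rng.normal(size=(1, 6))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print("Hankel singular values:", np.round(hsv, 4))
```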
Suwazono, Yasushi; Dochi, Mirei; Kobayashi, Etsuko; Oishi, Mitsuhiro; Okubo, Yasushi; Tanaka, Kumihiko; Sakata, Kouichi
2008-12-01
The objective of this study was to calculate benchmark durations and lower 95% confidence limits for benchmark durations of working hours associated with subjective fatigue symptoms by applying the benchmark dose approach while adjusting for job-related stress using multiple logistic regression analyses. A self-administered questionnaire was completed by 3,069 male and 412 female daytime workers (age 18-67 years) in a Japanese steel company. The eight dependent variables in the Cumulative Fatigue Symptoms Index were decreased vitality, general fatigue, physical disorders, irritability, decreased willingness to work, anxiety, depressive feelings, and chronic tiredness. Independent variables were daily working hours, four subscales (job demand, job control, interpersonal relationship, and job suitability) of the Brief Job Stress Questionnaire, and other potential covariates. Using significant parameters for working hours and those for other covariates, the benchmark durations of working hours were calculated for the corresponding Index property. Benchmark response was set at 5% or 10%. Assuming a condition of worst job stress, the benchmark duration/lower 95% confidence limit for benchmark duration of working hours per day with a benchmark response of 5% or 10% were 10.0/9.4 or 11.7/10.7 (irritability) and 9.2/8.9 or 10.4/9.8 (chronic tiredness) in men and 8.9/8.4 or 9.8/8.9 (chronic tiredness) in women. The threshold amounts of working hours for fatigue symptoms under the worst job-related stress were very close to the standard daily working hours in Japan. The results strongly suggest that special attention should be paid to employees whose working hours exceed threshold amounts based on individual levels of job-related stress.
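The benchmark-duration calculation can be illustrated as follows: given a fitted logistic model, solve for the working hours at which the extra risk over a baseline reaches the benchmark response. All coefficients below are invented, not the study's estimates.

```python
# Illustrative sketch of the benchmark-dose idea applied to working hours: find
# the daily hours at which the extra risk of a fatigue symptom, relative to a
# baseline, reaches the benchmark response (e.g. 5%). Coefficients are made up.
import numpy as np
from scipy.optimize import brentq

def risk(hours, b0=-3.0, b_hours=0.25, covariate_term=0.8):
    """P(symptom | hours) under a hypothetical fitted logistic model."""
    eta = b0 + b_hours * hours + covariate_term
    return 1.0 / (1.0 + np.exp(-eta))

def benchmark_duration(bmr=0.05, baseline_hours=8.0):
    """Hours at which extra risk relative to baseline hours equals the BMR."""
    p0 = risk(baseline_hours)
    extra = lambda h: (risk(h) - p0) / (1.0 - p0) - bmr
    return brentq(extra, baseline_hours, 24.0)

print("benchmark duration (BMR 5%):", round(benchmark_duration(0.05), 2))
print("benchmark duration (BMR 10%):", round(benchmark_duration(0.10), 2))
```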
PID controller tuning using metaheuristic optimization algorithms for benchmark problems
NASA Astrophysics Data System (ADS)
Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.
2017-11-01
This paper addresses finding optimal PID controller parameters using particle swarm optimization (PSO), a genetic algorithm (GA), and simulated annealing (SA). The algorithms were developed through simulation of a chemical process and an electrical system, and the PID controller is tuned. Two different fitness functions, the integral of time-weighted absolute error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA, and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled-tank system and a DC motor. Finally, a comparative study of the algorithms is presented based on best cost, number of iterations, and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system and each fitness function.
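A minimal sketch of the approach, assuming an illustrative second-order plant and standard PSO settings rather than the paper's benchmark systems, is given below: PSO searches PID gains that minimize the ITAE of a simulated unit-step response.

```python
# Minimal sketch (plant, PSO settings, and gain bounds are illustrative
# assumptions): a small PSO searches PID gains that minimize the ITAE of a
# simulated unit-step response of a second-order plant, integrated with Euler.
import numpy as np

def itae_cost(gains, dt=0.01, t_end=10.0):
    kp, ki, kd = gains
    y = ydot = integ = 0.0
    e_prev = 1.0
    cost = 0.0
    for k in range(int(t_end / dt)):
        e = 1.0 - y                        # unit-step reference
        integ += e * dt
        deriv = (e - e_prev) / dt
        u = kp * e + ki * integ + kd * deriv
        # Plant: yddot + 2*zeta*wn*ydot + wn^2*y = wn^2*u  (wn = 2, zeta = 0.3)
        yddot = 4.0 * (u - y) - 1.2 * ydot
        ydot += yddot * dt
        y += ydot * dt
        e_prev = e
        cost += (k * dt) * abs(e) * dt     # ITAE = integral of t*|e| dt
    return cost

def pso(n_particles=20, iters=50, bounds=(0.0, 20.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, 3))   # candidate (kp, ki, kd)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([itae_cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([itae_cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

print(pso())
```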
Gatemon Benchmarking and Two-Qubit Operation
NASA Astrophysics Data System (ADS)
Casparis, Lucas; Larsen, Thorvald; Olsen, Michael; Petersson, Karl; Kuemmeth, Ferdinand; Krogstrup, Peter; Nygard, Jesper; Marcus, Charles
Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize field effect tunability singular to semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors of ~0.5 % for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent SWAP operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of ~91 %, demonstrating the potential of gatemon qubits for building scalable quantum processors. We acknowledge financial support from Microsoft Project Q and the Danish National Research Foundation.
NASA Technical Reports Server (NTRS)
Stewart, H. E.; Blom, R.; Abrams, M.; Daily, M.
1980-01-01
Satellite synthetic aperture radar (SAR) imagery is evaluated in terms of its geologic applications. The benchmark to which the SAR images are compared is LANDSAT, used for both structural and lithologic interpretations.
Schnipper, Jeffrey Lawrence; Messler, Jordan; Ramos, Pedro; Kulasa, Kristen; Nolan, Ann; Rogers, Kendall
2014-01-01
Background: Insulin is a top source of adverse drug events in the hospital, and glycemic control is a focus of improvement efforts across the country. Yet, the majority of hospitals have no data to gauge their performance on glycemic control, hypoglycemia rates, or hypoglycemic management. Current tools to outsource glucometrics reports are limited in availability or function. Methods: Society of Hospital Medicine (SHM) faculty designed and implemented a web-based data and reporting center that calculates glucometrics on blood glucose data files securely uploaded by users. Unit labels, care type (critical care, non–critical care), and unit type (eg, medical, surgical, mixed, pediatrics) are defined on upload allowing for robust, flexible reporting. Reports for any date range, care type, unit type, or any combination of units are available on demand for review or downloading into a variety of file formats. Four reports with supporting graphics depict glycemic control, hypoglycemia, and hypoglycemia management by patient day or patient stay. Benchmarking and performance ranking reports are generated periodically for all hospitals in the database. Results: In all, 76 hospitals have uploaded at least 12 months of data for non–critical care areas and 67 sites have uploaded critical care data. Critical care benchmarking reveals wide variability in performance. Some hospitals achieve top quartile performance in both glycemic control and hypoglycemia parameters. Conclusions: This new web-based glucometrics data and reporting tool allows hospitals to track their performance with a flexible reporting system, and provides them with external benchmarking. Tools like this help to establish standardized glucometrics and performance standards. PMID:24876426
Maynard, Greg; Schnipper, Jeffrey Lawrence; Messler, Jordan; Ramos, Pedro; Kulasa, Kristen; Nolan, Ann; Rogers, Kendall
2014-07-01
Insulin is a top source of adverse drug events in the hospital, and glycemic control is a focus of improvement efforts across the country. Yet, the majority of hospitals have no data to gauge their performance on glycemic control, hypoglycemia rates, or hypoglycemic management. Current tools to outsource glucometrics reports are limited in availability or function. Society of Hospital Medicine (SHM) faculty designed and implemented a web-based data and reporting center that calculates glucometrics on blood glucose data files securely uploaded by users. Unit labels, care type (critical care, non-critical care), and unit type (eg, medical, surgical, mixed, pediatrics) are defined on upload allowing for robust, flexible reporting. Reports for any date range, care type, unit type, or any combination of units are available on demand for review or downloading into a variety of file formats. Four reports with supporting graphics depict glycemic control, hypoglycemia, and hypoglycemia management by patient day or patient stay. Benchmarking and performance ranking reports are generated periodically for all hospitals in the database. In all, 76 hospitals have uploaded at least 12 months of data for non-critical care areas and 67 sites have uploaded critical care data. Critical care benchmarking reveals wide variability in performance. Some hospitals achieve top quartile performance in both glycemic control and hypoglycemia parameters. This new web-based glucometrics data and reporting tool allows hospitals to track their performance with a flexible reporting system, and provides them with external benchmarking. Tools like this help to establish standardized glucometrics and performance standards. © 2014 Diabetes Technology Society.
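A day-weighted glucometric of the kind described can be sketched in a few lines; the threshold, field names, and example records below are hypothetical, not the SHM tool's definitions.

```python
# Hypothetical sketch of a day-weighted glucometric, not the SHM tool's code:
# from (patient, date, glucose) records, compute the percentage of patient-days
# with at least one hypoglycemic value (<70 mg/dL) and the mean patient-day glucose.
from collections import defaultdict

records = [  # (patient_id, date, glucose mg/dL) -- invented example data
    ("A", "2024-01-01", 180), ("A", "2024-01-01", 65),
    ("A", "2024-01-02", 150), ("B", "2024-01-01", 210),
    ("B", "2024-01-02", 95),  ("B", "2024-01-02", 240),
]

by_day = defaultdict(list)
for pid, day, glucose in records:
    by_day[(pid, day)].append(glucose)

n_days = len(by_day)
hypo_days = sum(1 for values in by_day.values() if min(values) < 70)
day_means = [sum(v) / len(v) for v in by_day.values()]

print(f"patient-days: {n_days}")
print(f"% patient-days with hypoglycemia: {100 * hypo_days / n_days:.1f}")
print(f"mean patient-day glucose: {sum(day_means) / len(day_means):.1f} mg/dL")
```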
NASA Astrophysics Data System (ADS)
Alloui, Mebarka; Belaidi, Salah; Othmani, Hasna; Jaidane, Nejm-Eddine; Hochlaf, Majdi
2018-03-01
We performed benchmark studies of the molecular geometry, electronic properties, and vibrational analysis of imidazole using semi-empirical, density functional theory, and post-Hartree-Fock methods. These studies validated the use of AM1 for the treatment of larger systems. We then examined the structural, physical, and chemical relationships for a series of imidazole derivatives acting as angiotensin II AT1 receptor blockers using AM1. QSAR studies were performed for these imidazole derivatives using a combination of physicochemical descriptors. A multiple linear regression procedure was used to model the relationships between the molecular descriptors and the activity of the imidazole derivatives. Results validate the derived QSAR model.
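The multiple-linear-regression step can be illustrated with a small sketch; the descriptor names and all numerical values below are invented placeholders, not the paper's AM1-derived data.

```python
# Illustrative sketch of the QSAR regression step described above: a multiple
# linear regression relating physicochemical descriptors to activity.
import numpy as np

descriptors = ["logP", "dipole_moment", "HOMO_energy", "polarizability"]
X = np.array([[2.1, 3.9, -9.1, 24.0],
              [1.4, 4.2, -9.3, 21.5],
              [2.8, 3.5, -8.9, 26.1],
              [0.9, 4.6, -9.5, 19.8],
              [3.2, 3.1, -8.7, 27.4],
              [1.7, 4.0, -9.2, 22.9]])
activity = np.array([6.2, 5.1, 6.9, 4.4, 7.3, 5.6])   # e.g. pIC50 (invented)

A = np.column_stack([np.ones(len(X)), X])              # add intercept column
coef, *_ = np.linalg.lstsq(A, activity, rcond=None)
pred = A @ coef
ss_res = np.sum((activity - pred) ** 2)
ss_tot = np.sum((activity - activity.mean()) ** 2)
print("intercept:", round(coef[0], 3))
print(dict(zip(descriptors, np.round(coef[1:], 3))))
print("R^2:", round(1 - ss_res / ss_tot, 3))
```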
NASA Astrophysics Data System (ADS)
May, Matthias M.; Lewerenz, Hans-Joachim; Lackner, David; Dimroth, Frank; Hannappel, Thomas
2015-09-01
Photosynthesis is nature's route to convert intermittent solar irradiation into storable energy, while its use for an industrial energy supply is impaired by low efficiency. Artificial photosynthesis provides a promising alternative for efficient robust carbon-neutral renewable energy generation. The approach of direct hydrogen generation by photoelectrochemical water splitting utilizes customized tandem absorber structures to mimic the Z-scheme of natural photosynthesis. Here a combined chemical surface transformation of a tandem structure and catalyst deposition at ambient temperature yields photocurrents approaching the theoretical limit of the absorber and results in a solar-to-hydrogen efficiency of 14%. The potentiostatically assisted photoelectrode efficiency is 17%. Present benchmarks for integrated systems are clearly exceeded. Details of the in situ interface transformation, the electronic improvement and chemical passivation are presented. The surface functionalization procedure is widely applicable and can be precisely controlled, allowing further developments of high-efficiency robust hydrogen generators.
May, Matthias M.; Lewerenz, Hans-Joachim; Lackner, David; Dimroth, Frank; Hannappel, Thomas
2015-01-01
Photosynthesis is nature's route to convert intermittent solar irradiation into storable energy, while its use for an industrial energy supply is impaired by low efficiency. Artificial photosynthesis provides a promising alternative for efficient robust carbon-neutral renewable energy generation. The approach of direct hydrogen generation by photoelectrochemical water splitting utilizes customized tandem absorber structures to mimic the Z-scheme of natural photosynthesis. Here a combined chemical surface transformation of a tandem structure and catalyst deposition at ambient temperature yields photocurrents approaching the theoretical limit of the absorber and results in a solar-to-hydrogen efficiency of 14%. The potentiostatically assisted photoelectrode efficiency is 17%. Present benchmarks for integrated systems are clearly exceeded. Details of the in situ interface transformation, the electronic improvement and chemical passivation are presented. The surface functionalization procedure is widely applicable and can be precisely controlled, allowing further developments of high-efficiency robust hydrogen generators. PMID:26369620
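As a back-of-the-envelope check on numbers like those reported, the solar-to-hydrogen efficiency of an unassisted water-splitting device is commonly estimated as STH = j_ph × 1.23 V × η_F / P_in. The snippet below evaluates this for an assumed photocurrent density under 1-sun illumination (100 mW/cm²); the specific photocurrent value is an illustrative assumption, not a figure from the paper.

```python
# Solar-to-hydrogen (STH) efficiency estimate for an unassisted tandem device.
def sth_efficiency(j_ph_ma_cm2, faradaic_eff=1.0, p_in_mw_cm2=100.0):
    # 1.23 V is the thermodynamic potential for water splitting.
    return j_ph_ma_cm2 * 1.23 * faradaic_eff / p_in_mw_cm2

# An assumed photocurrent of ~11.4 mA/cm^2 corresponds to ~14% STH at 1 sun.
print(f"STH ~ {100 * sth_efficiency(11.4):.1f} %")
```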
Benchmarking Brain-Computer Interfaces Outside the Laboratory: The Cybathlon 2016
Novak, Domen; Sigrist, Roland; Gerig, Nicolas J.; Wyss, Dario; Bauer, René; Götz, Ulrich; Riener, Robert
2018-01-01
This paper presents a new approach to benchmarking brain-computer interfaces (BCIs) outside the lab. A computer game was created that mimics a real-world application of assistive BCIs, with the main outcome metric being the time needed to complete the game. This approach was used at the Cybathlon 2016, a competition for people with disabilities who use assistive technology to achieve tasks. The paper summarizes the technical challenges of BCIs, describes the design of the benchmarking game, then describes the rules for acceptable hardware, software and inclusion of human pilots in the BCI competition at the Cybathlon. The 11 participating teams, their approaches, and their results at the Cybathlon are presented. Though the benchmarking procedure has some limitations (for instance, we were unable to identify any factors that clearly contribute to BCI performance), it can be successfully used to analyze BCI performance in realistic, less structured conditions. In the future, the parameters of the benchmarking game could be modified to better mimic different applications (e.g., the need to use some commands more frequently than others). Furthermore, the Cybathlon has the potential to showcase such devices to the general public. PMID:29375294
NASA Astrophysics Data System (ADS)
Nagy, Julia; Eilert, Tobias; Michaelis, Jens
2018-03-01
Modern hybrid structural analysis methods have opened new possibilities to analyze and resolve flexible protein complexes where conventional crystallographic methods have reached their limits. Here, the Fast-Nano-Positioning System (Fast-NPS), a Bayesian parameter estimation-based analysis method and software, is an interesting method since it allows for the localization of unknown fluorescent dye molecules attached to macromolecular complexes based on single-molecule Förster resonance energy transfer (smFRET) measurements. However, the precision, accuracy, and reliability of structural models derived from results based on such complex calculation schemes are oftentimes difficult to evaluate. Therefore, we present two proof-of-principle benchmark studies where we use smFRET data to localize supposedly unknown positions on a DNA as well as on a protein-nucleic acid complex. Since we use complexes where structural information is available, we can compare Fast-NPS localization to the existing structural data. In particular, we compare different dye models and discuss how both accuracy and precision can be optimized.
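The physical relation underlying smFRET-based localization schemes like Fast-NPS is the Förster dependence of transfer efficiency on dye separation. A minimal sketch of converting a measured efficiency to a distance is given below; the Förster radius value is an assumption for illustration and is specific to the dye pair in practice.

```python
# Convert a measured FRET efficiency E to a dye-dye distance r via
# E = 1 / (1 + (r / R0)**6), assuming a Förster radius R0 for the dye pair.
def fret_distance(E, R0_nm=6.0):
    return R0_nm * ((1.0 / E) - 1.0) ** (1.0 / 6.0)

for E in (0.2, 0.5, 0.8):
    print(f"E = {E:.1f}  ->  r = {fret_distance(E):.2f} nm")
```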
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thrower, A.W.; Patric, J.; Keister, M.
2008-07-01
The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in safely and efficiently shipping spent nuclear fuel and other radioactive materials. Additional business processes may be examined in this phase. The findings of these benchmarking efforts will help determine the organizational structure and requirements of the national transportation system. (authors)
Time and frequency structure of causal correlation networks in the China bond market
NASA Astrophysics Data System (ADS)
Wang, Zhongxing; Yan, Yan; Chen, Xiaosong
2017-07-01
There are more than eight hundred interest rates published in the China bond market every day. Identifying the benchmark interest rates that have broad influences on most other interest rates is a major concern for economists. In this paper, a multi-variable Granger causality test is developed and applied to construct a directed network of interest rates, whose important nodes, regarded as key interest rates, are evaluated with CheiRank scores. The results indicate that repo rates are the benchmark of short-term rates, the central bank bill rates are in the core position of mid-term interest rates network, and treasury bond rates lead the long-term bond rates. The evolution of benchmark interest rates from 2008 to 2014 is also studied, and it is found that SHIBOR has generally become the benchmark interest rate in China. In the frequency domain we identify the properties of information flows between interest rates, and the result confirms the existence of market segmentation in the China bond market.
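The paper uses a multi-variable Granger causality test together with CheiRank scoring. As a simplified sketch of the overall pipeline, the code below builds a directed graph from pairwise Granger tests (statsmodels) and ranks nodes with PageRank run on the reversed graph, which is how CheiRank is usually computed; the pairwise simplification, the significance threshold, the lag order, and the synthetic series are all assumptions.

```python
import numpy as np
import pandas as pd
import networkx as nx
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic stand-ins for daily interest-rate series (one column per rate).
rates = pd.DataFrame(np.random.randn(300, 4).cumsum(axis=0),
                     columns=["repo", "shibor", "cb_bill", "treasury"])

G, max_lag, alpha = nx.DiGraph(), 5, 0.01
for src in rates.columns:
    for dst in rates.columns:
        if src == dst:
            continue
        # Test whether `src` Granger-causes `dst`; keep the best p-value over lags.
        res = grangercausalitytests(rates[[dst, src]], maxlag=max_lag)
        p = min(res[lag][0]["ssr_ftest"][1] for lag in res)
        if p < alpha:
            G.add_edge(src, dst)

# CheiRank: PageRank computed on the graph with all edge directions reversed.
cheirank = nx.pagerank(G.reverse()) if G.number_of_edges() else {}
print(sorted(cheirank.items(), key=lambda kv: -kv[1]))
```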
The Filament Sensor for Near Real-Time Detection of Cytoskeletal Fiber Structures
Eltzner, Benjamin; Wollnik, Carina; Gottschlich, Carsten; Huckemann, Stephan; Rehfeldt, Florian
2015-01-01
A reliable extraction of filament data from microscopic images is of high interest in the analysis of acto-myosin structures as early morphological markers in mechanically guided differentiation of human mesenchymal stem cells and the understanding of the underlying fiber arrangement processes. In this paper, we propose the filament sensor (FS), a fast and robust processing sequence which detects and records location, orientation, length, and width for each single filament of an image, and thus allows for the above described analysis. The extraction of these features has previously not been possible with existing methods. We evaluate the performance of the proposed FS in terms of accuracy and speed in comparison to three existing methods with respect to their limited output. Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source. PMID:25996921
Mergner, Thomas; Lippi, Vittorio
2018-01-01
Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspirations from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, we focus in our tests on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field and contact forces) may affect the balancing in any given degree of freedom (DoF). Targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with "reactive" balancing of external disturbances and "proactive" balancing of self-produced disturbances from the voluntary movements. Our proposals therefore target both types of disturbances and their superposition. (3) Relevant for both versatility and robustness of the control, linkages between the posture control mechanisms across DoFs provide their functional cooperation and coordination at will and on functional demands. The suggested tests therefore include ankle-hip coordination. Suggested benchmarking criteria build on the evoked sway magnitude, normalized to robot weight and Center of mass (COM) height, in relation to reference ranges that remain to be established. The references may include human likeness features. The proposed benchmarking concept may in principle also be applied to wearable robots, where a human user may command movements, but may not be aware of the additionally required postural control, which then needs to be implemented into the robot.
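The article deliberately leaves the reference ranges and exact normalization to be established. As a tentative sketch of the kind of criterion it describes, the code below computes an evoked sway magnitude normalized to robot weight and COM height; the specific normalization form and the example numbers are assumptions for illustration only.

```python
import numpy as np

def normalized_sway(com_trajectory_m, weight_kg, com_height_m):
    """Peak-to-peak COM excursion evoked by a disturbance, normalized by
    robot weight and COM height (assumed normalization form)."""
    sway = np.ptp(com_trajectory_m)   # peak-to-peak sway in metres
    return sway / (weight_kg * com_height_m)

# Example: ~4 cm of evoked sway for an 80 kg robot with a 1.0 m COM height.
trace = 0.02 * np.sin(np.linspace(0, 2 * np.pi, 500))
print(f"normalized sway: {normalized_sway(trace, 80.0, 1.0):.6f}")
```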
Hamdy, M; Hamdan, I
2015-07-01
In this paper, a robust H∞ fuzzy output feedback controller is designed for a class of affine nonlinear systems with disturbance via Takagi-Sugeno (T-S) fuzzy bilinear model. The parallel distributed compensation (PDC) technique is utilized to design a fuzzy controller. The stability conditions of the overall closed loop T-S fuzzy bilinear model are formulated in terms of Lyapunov function via linear matrix inequality (LMI). The control law is robustified by H∞ sense to attenuate external disturbance. Moreover, the desired controller gains can be obtained by solving a set of LMI. A continuous stirred tank reactor (CSTR), which is a benchmark problem in nonlinear process control, is discussed in detail to verify the effectiveness of the proposed approach with a comparative study. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
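The full H∞ fuzzy bilinear design in the paper reduces to solving sets of LMIs. As a much simpler illustration of that machinery, the sketch below checks quadratic (Lyapunov) stability of a single linear subsystem by LMI feasibility with cvxpy; it is not the paper's controller synthesis, only the kind of feasibility problem involved, and the system matrix is invented.

```python
import numpy as np
import cvxpy as cp

# Stability of x' = A x is certified by finding P > 0 with A^T P + P A < 0.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
n = A.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI feasible (system stable):", prob.status == cp.OPTIMAL)
```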
Crampton, Andrew S.; Rötzer, Marian D.; Ridge, Claron J.; ...
2016-01-28
The sensitivity, or insensitivity, of catalysed reactions to catalyst structure is a commonly employed fundamental concept. Here we report on the nature of nano-catalysed ethylene hydrogenation, investigated through experiments on size-selected Pt n (n=8-15) clusters soft-landed on magnesia and first-principles simulations, yielding benchmark information about the validity of structure sensitivity/insensitivity at the bottom of the catalyst size range. Both ethylene-hydrogenation-to-ethane and the parallel hydrogenation–dehydrogenation ethylidyne-producing route are considered, uncovering that at the <1 nm size-scale the reaction exhibits characteristics consistent with structure sensitivity, in contrast to structure insensitivity found for larger particles. The onset of catalysed hydrogenation occurs for Pt n (n≥10) clusters at T>150 K, with maximum room temperature reactivity observed for Pt 13. Structure insensitivity, inherent for specific cluster sizes, is induced in the more active Pt 13 by a temperature increase up to 400 K leading to ethylidyne formation. As a result, control of sub-nanometre particle size may be used for tuning catalysed hydrogenation activity and selectivity.
Crampton, Andrew S.; Rötzer, Marian D.; Ridge, Claron J.; Schweinberger, Florian F.; Heiz, Ueli; Yoon, Bokwon; Landman, Uzi
2016-01-01
The sensitivity, or insensitivity, of catalysed reactions to catalyst structure is a commonly employed fundamental concept. Here we report on the nature of nano-catalysed ethylene hydrogenation, investigated through experiments on size-selected Ptn (n=8–15) clusters soft-landed on magnesia and first-principles simulations, yielding benchmark information about the validity of structure sensitivity/insensitivity at the bottom of the catalyst size range. Both ethylene-hydrogenation-to-ethane and the parallel hydrogenation–dehydrogenation ethylidyne-producing route are considered, uncovering that at the <1 nm size-scale the reaction exhibits characteristics consistent with structure sensitivity, in contrast to structure insensitivity found for larger particles. The onset of catalysed hydrogenation occurs for Ptn (n≥10) clusters at T>150 K, with maximum room temperature reactivity observed for Pt13. Structure insensitivity, inherent for specific cluster sizes, is induced in the more active Pt13 by a temperature increase up to 400 K leading to ethylidyne formation. Control of sub-nanometre particle size may be used for tuning catalysed hydrogenation activity and selectivity. PMID:26817713
NASA Astrophysics Data System (ADS)
Crampton, Andrew S.; Rötzer, Marian D.; Ridge, Claron J.; Schweinberger, Florian F.; Heiz, Ueli; Yoon, Bokwon; Landman, Uzi
2016-01-01
The sensitivity, or insensitivity, of catalysed reactions to catalyst structure is a commonly employed fundamental concept. Here we report on the nature of nano-catalysed ethylene hydrogenation, investigated through experiments on size-selected Ptn (n=8-15) clusters soft-landed on magnesia and first-principles simulations, yielding benchmark information about the validity of structure sensitivity/insensitivity at the bottom of the catalyst size range. Both ethylene-hydrogenation-to-ethane and the parallel hydrogenation-dehydrogenation ethylidyne-producing route are considered, uncovering that at the <1 nm size-scale the reaction exhibits characteristics consistent with structure sensitivity, in contrast to structure insensitivity found for larger particles. The onset of catalysed hydrogenation occurs for Ptn (n>=10) clusters at T>150 K, with maximum room temperature reactivity observed for Pt13. Structure insensitivity, inherent for specific cluster sizes, is induced in the more active Pt13 by a temperature increase up to 400 K leading to ethylidyne formation. Control of sub-nanometre particle size may be used for tuning catalysed hydrogenation activity and selectivity.
Space Operations Training Concepts Benchmark Study (Training in a Continuous Operations Environment)
NASA Technical Reports Server (NTRS)
Johnston, Alan E.; Gilchrist, Michael; Underwood, Debrah (Technical Monitor)
2002-01-01
The NASA/USAF Benchmark Space Operations Training Concepts Study will perform a comparative analysis of the space operations training programs utilized by the United States Air Force Space Command with those utilized by the National Aeronautics and Space Administration. The concentration of the study will be focused on Ground Controller/Flight Controller Training for the International Space Station Payload Program. The duration of the study is expected to be five months with report completion by 30 June 2002. The U.S. Air Force Space Command was chosen as the most likely candidate for this benchmark study because their experience in payload operations controller training and user interfaces compares favorably with the Payload Operations Integration Center's training and user interfaces. These similarities can be seen in the dynamics of missions/payloads, controller on-console requirements, and currency/proficiency challenges to name a few. It is expected that the report will look at the respective programs and investigate goals of each training program, unique training challenges posed by space operations ground controller environments, processes of setting up controller training programs, phases of controller training, methods of controller training, techniques to evaluate adequacy of controller knowledge and the training received, and approaches to training administration. The report will provide recommendations to the respective agencies based on the findings. Attached is a preliminary outline of the study. Following selection of participants and an approval to proceed, initial contact will be made with U.S. Air Force Space Command Directorate of Training to discuss steps to accomplish the study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hennig, J., E-mail: jonas.hennig@ovgu.de; Dadgar, A.; Witte, H.
2015-07-15
We report on GaN based field-effect transistor (FET) structures exhibiting sheet carrier densities of n = 2.9 × 10^13 cm^-2 for high-power transistor applications. By grading the indium content of InGaN layers grown prior to a conventional GaN/AlN/AlInN FET structure, control of the channel width at the GaN/AlN interface is obtained. The composition of the InGaN layer was graded from nominally x_In = 30% to pure GaN just below the AlN/AlInN interface. Simulations reveal the impact of the additional InGaN layer on the potential well width which controls the sheet carrier density within the channel region of the devices. Benchmarking the In_xGa_(1-x)N/GaN/AlN/Al_0.87In_0.13N based FETs against GaN/AlN/AlInN FET reference structures, we found increased maximum current densities of I_SD = 1300 mA/mm (560 mA/mm). In addition, the InGaN layer helps to achieve broader transconductance profiles as well as reduced leakage currents.
Mergner, Thomas; Lippi, Vittorio
2018-01-01
Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspirations from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, we focus in our tests on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field and contact forces) may affect the balancing in any given degree of freedom (DoF). Targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with “reactive” balancing of external disturbances and “proactive” balancing of self-produced disturbances from the voluntary movements. Our proposals therefore target both types of disturbances and their superposition. (3) Relevant for both versatility and robustness of the control, linkages between the posture control mechanisms across DoFs provide their functional cooperation and coordination at will and on functional demands. The suggested tests therefore include ankle-hip coordination. Suggested benchmarking criteria build on the evoked sway magnitude, normalized to robot weight and Center of mass (COM) height, in relation to reference ranges that remain to be established. The references may include human likeness features. The proposed benchmarking concept may in principle also be applied to wearable robots, where a human user may command movements, but may not be aware of the additionally required postural control, which then needs to be implemented into the robot. PMID:29867428
Tuppurainen, Kari; Viisas, Marja; Laatikainen, Reino; Peräkylä, Mikael
2002-01-01
A novel electronic eigenvalue (EEVA) descriptor of molecular structure for use in the derivation of predictive QSAR/QSPR models is described. Like other spectroscopic QSAR/QSPR descriptors, EEVA is also invariant as to the alignment of the structures concerned. Its performance was tested with respect to the CBG (corticosteroid binding globulin) affinity of 31 benchmark steroids. It appeared that the electronic structure of the steroids, i.e., the "spectra" derived from molecular orbital energies, is directly related to the CBG binding affinities. The predictive ability of EEVA is compared to other QSAR approaches, and its performance is discussed in the context of the Hammett equation. The good performance of EEVA is an indication of the essential quantum mechanical nature of QSAR. The EEVA method is a supplement to conventional 3D QSAR methods, which employ fields or surface properties derived from Coulombic and van der Waals interactions.
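EEVA-style descriptors turn molecular orbital energies into an alignment-free "spectrum". The sketch below reproduces that idea by Gaussian-broadening a set of MO eigenvalues onto a fixed energy grid; the broadening width, grid range, and toy orbital energies are assumptions, not the published EEVA parameterization.

```python
import numpy as np

def eeva_spectrum(mo_energies_ev, grid=None, sigma=0.05):
    """Sum of Gaussians centred at the MO energies, sampled on a fixed grid.
    The resulting vector is alignment-invariant and can feed a QSAR model."""
    if grid is None:
        grid = np.linspace(-45.0, 10.0, 1100)   # assumed energy window (eV)
    spectrum = np.zeros_like(grid)
    for e in mo_energies_ev:
        spectrum += np.exp(-((grid - e) ** 2) / (2.0 * sigma ** 2))
    return spectrum

# Toy example with a handful of orbital energies (eV).
print(eeva_spectrum([-30.1, -22.4, -15.7, -10.2, -9.8]).shape)
```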
Hedman, C.W.; Grace, S.L.; King, S.E.
2000-01-01
Longleaf pine (Pinus palustris) ecosystems are characterized by a diverse community of native groundcover species. Critics of plantation forestry claim that loblolly (Pinus taeda) and slash pine (Pinus elliottii) forests are devoid of native groundcover due to associated management practices. As a result of these practices, some believe that ecosystem functions characteristic of longleaf pine are lost under loblolly and slash pine plantation management. Our objective was to quantify and compare vegetation composition and structure of longleaf, loblolly, and slash pine forests of differing ages, management strategies, and land-use histories. Information from this study will further our understanding and lead to inferences about functional differences among pine cover types. Vegetation and environmental data were collected in 49 overstory plots across Southlands Experiment Forest in Bainbridge, GA. Nested plots, i.e. midstory, understory, and herbaceous, were replicated four times within each overstory plot. Over 400 species were identified. Herbaceous species richness was variable for all three pine cover types. Herbaceous richness for longleaf, slash, and loblolly pine averaged 15, 13, and 12 species per m2, respectively. Longleaf pine plots had significantly more (p < 0.029) herbaceous species and greater herbaceous cover (p < 0.001) than loblolly or slash pine plots. Longleaf and slash pine plots were otherwise similar in species richness and stand structure, both having lower overstory density, midstory density, and midstory cover than loblolly pine plots. Multivariate analyses provided additional perspectives on vegetation patterns. Ordination and classification procedures consistently placed herbaceous plots into two groups which we refer to as longleaf pine benchmark (34 plots) and non-benchmark (15 plots). Benchmark plots typically contained numerous herbaceous species characteristic of relic longleaf pine/wiregrass communities found in the area. Conversely, non-benchmark plots contained fewer species characteristic of relic longleaf pine/wiregrass communities and more ruderal species common to highly disturbed sites. The benchmark group included 12 naturally regenerated longleaf plots and 22 loblolly, slash, and longleaf pine plantation plots encompassing a broad range of silvicultural disturbances. Non-benchmark plots included eight afforested old-field plantation plots and seven cutover plantation plots. Regardless of overstory species, all afforested old fields were low either in native species richness or in abundance. Varying degrees of this groundcover condition were also found in some cutover plantation plots that were classified as non-benchmark. Environmental variables strongly influencing vegetation patterns included agricultural history and fire frequency. Results suggest that land-use history, particularly related to agriculture, has a greater influence on groundcover composition and structure in southern pine forests than more recent forest management activities or pine cover type. Additional research is needed to identify the potential for afforested old fields to recover native herbaceous species. In the interim, high-yield plantation management should initially target old-field sites which already support reduced numbers of groundcover species. Sites which have not been farmed in the past 50-60 years should be considered for longleaf pine restoration and multiple-use objectives, since they have the greatest potential for supporting diverse native vegetation. 
(C) 2000 Elsevier Science B.V.
Ramasamy, Thilagavathi; Selvam, Chelliah
2015-10-15
Virtual screening has become an important tool in the drug discovery process. Structure-based and ligand-based approaches are generally used in virtual screening. To date, several benchmark sets for evaluating the performance of virtual screening tools are available. In this study, our aim is to compare the performance of structure-based and ligand-based virtual screening methods. Ten anti-cancer targets and their corresponding benchmark sets from the 'Demanding Evaluation Kits for Objective In silico Screening' (DEKOIS) library were selected. X-ray crystal structures of protein-ligand complexes were selected based on their resolution. OpenEye tools such as FRED and vROCS were used, and the results were carefully analyzed. At EF1%, vROCS produced better results, but at EF5% and EF10%, FRED and ROCS produced similar results. It was noticed that the enrichment factor values decreased in many cases when going from EF1% to EF5% and EF10%. Published by Elsevier Ltd.
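The enrichment factors quoted (EF1%, EF5%, EF10%) follow the usual definition: the fraction of actives recovered in the top x% of the ranked list divided by the fraction expected at random. A small sketch with invented scores is below.

```python
def enrichment_factor(scores, is_active, fraction):
    """scores: higher = better; is_active: booleans; fraction: e.g. 0.01 for EF1%."""
    ranked = sorted(zip(scores, is_active), key=lambda t: -t[0])
    n_top = max(1, int(round(fraction * len(ranked))))
    hits_top = sum(active for _, active in ranked[:n_top])
    hits_total = sum(is_active)
    return (hits_top / n_top) / (hits_total / len(ranked))

scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
actives = [True, False, True, False, False, False, False, True, False, False]
print("EF10%:", enrichment_factor(scores, actives, 0.10))
```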
Automatic Classification of Protein Structure Using the Maximum Contact Map Overlap Metric
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andonov, Rumen; Djidjev, Hristo Nikolov; Klau, Gunnar W.
In this paper, we propose a new distance measure for comparing two protein structures based on their contact map representations. We show that our novel measure, which we refer to as the maximum contact map overlap (max-CMO) metric, satisfies all properties of a metric on the space of protein representations. Having a metric in that space allows one to avoid pairwise comparisons on the entire database and, thus, to significantly accelerate exploring the protein space compared to no-metric spaces. We show on a gold standard superfamily classification benchmark set of 6759 proteins that our exact k-nearest neighbor (k-NN) scheme classifies up to 224 out of 236 queries correctly and on a larger, extended version of the benchmark with 60,850 additional structures, up to 1361 out of 1369 queries. Our k-NN classification thus provides a promising approach for the automatic classification of protein structures based on flexible contact map overlap alignments.
Automatic Classification of Protein Structure Using the Maximum Contact Map Overlap Metric
Andonov, Rumen; Djidjev, Hristo Nikolov; Klau, Gunnar W.; ...
2015-10-09
In this paper, we propose a new distance measure for comparing two protein structures based on their contact map representations. We show that our novel measure, which we refer to as the maximum contact map overlap (max-CMO) metric, satisfies all properties of a metric on the space of protein representations. Having a metric in that space allows one to avoid pairwise comparisons on the entire database and, thus, to significantly accelerate exploring the protein space compared to no-metric spaces. We show on a gold standard superfamily classification benchmark set of 6759 proteins that our exact k-nearest neighbor (k-NN) scheme classifies up to 224 out of 236 queries correctly and on a larger, extended version of the benchmark with 60,850 additional structures, up to 1361 out of 1369 queries. Our k-NN classification thus provides a promising approach for the automatic classification of protein structures based on flexible contact map overlap alignments.
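Because max-CMO is a true metric, classification can be done by k-nearest neighbors directly on a precomputed distance matrix. A minimal sketch of that scheme (not the authors' implementation) follows, with random distances standing in for max-CMO values.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# D_train: pairwise distances between training structures (n x n, metric),
# y_train: their superfamily labels, D_query: query-vs-training distances (m x n).
rng = np.random.default_rng(0)
D_train = rng.random((6, 6))
D_train = (D_train + D_train.T) / 2          # symmetrize
np.fill_diagonal(D_train, 0.0)
y_train = np.array(["a", "a", "a", "b", "b", "b"])
D_query = rng.random((2, 6))

knn = KNeighborsClassifier(n_neighbors=3, metric="precomputed")
knn.fit(D_train, y_train)
print(knn.predict(D_query))
```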
Sintered Cathodes for All-Solid-State Structural Lithium-Ion Batteries
NASA Technical Reports Server (NTRS)
Huddleston, William; Dynys, Frederick; Sehirlioglu, Alp
2017-01-01
All-solid-state structural lithium ion batteries serve as both structural load-bearing components and as electrical energy storage devices to achieve system level weight savings in aerospace and other transportation applications. This multifunctional design goal is critical for the realization of next generation hybrid or all-electric propulsion systems. Additionally, transitioning to solid state technology improves upon battery safety from previous volatile architectures. This research established baseline solid state processing conditions and performance benchmarks for intercalation-type layered oxide materials for multifunctional application. Under consideration were lithium cobalt oxide and lithium nickel manganese cobalt oxide. Pertinent characteristics such as electrical conductivity, strength, chemical stability, and microstructure were characterized for future application in all-solid-state structural battery cathodes. The study includes characterization by XRD, ICP, SEM, ring-on-ring mechanical testing, and electrical impedance spectroscopy to elucidate optimal processing parameters, material characteristics, and multifunctional performance benchmarks. These findings provide initial conditions for implementing existing cathode materials in load bearing applications.
Improving quality of care in general practices by self-audit, benchmarking and quality circles.
Mahlknecht, Angelika; Abuzahra, Muna E; Piccoliori, Giuliano; Enthaler, Nina; Engl, Adolf; Sönnichsen, Andreas
2016-10-01
Guideline adherence of general practitioners (GP) regarding treatment of chronic conditions shows room for improvement. Thus, concepts have to be designed to promote quality of care. The aim of the interventional study "Improvement of Quality by Benchmarking" was to assess whether quality can be improved by self-auditing, benchmarking and quality circles in Salzburg (Austria) and South Tyrol (Italy). In this publication we present the Austrian results. Quality indicators were developed in a consensus process for eight chronic diseases based on pre-existing quality management systems. A quality score consisting of 35 indicators was calculated (0-5 points per indicator depending on fulfilment, maximum 175 points). Data were extracted from the electronic health records of participating practices in 2012, 2013 and 2014. A statistical pre-post analysis was performed using Wilcoxon signed-rank tests. A total of 20 GPs participated in the project. The mean quality score increased from 62.0 at baseline to 84.0 at the second follow-up (p = 0.003). Regarding the individual quality indicators, strong improvements were achieved between baseline and first follow-up, especially in process indicators concerning documentation. Between the first and second follow-up, quality remained in most cases at the same level. The validity of results is limited because of structural and technical problems. Due to the uncontrolled pre-post design we cannot exclude external influences on the results. Nevertheless, the intervention was able to improve measured quality of care. Barriers were detected that should be considered in a possible implementation of quality control programs.
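The quality score is simply a sum of 35 indicators scored 0-5 each (maximum 175), and the pre-post comparison uses Wilcoxon signed-rank tests. A minimal sketch of both steps with made-up data is below; the follow-up improvement is synthetic, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
n_gps, n_indicators = 20, 35

# Per-GP indicator scores (0-5 each); the quality score is their sum (max 175).
baseline = rng.integers(0, 6, size=(n_gps, n_indicators)).sum(axis=1)
followup = np.clip(baseline + rng.integers(5, 30, size=n_gps), 0, 175)

stat, p = wilcoxon(baseline, followup)
print(f"mean score {baseline.mean():.1f} -> {followup.mean():.1f}, p = {p:.4f}")
```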
Arithmetic Data Cube as a Data Intensive Benchmark
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Shabano, Leonid
2003-01-01
Data movement across computational grids and across the memory hierarchy of individual grid machines is known to be a limiting factor for applications involving large data sets. In this paper we introduce the Data Cube Operator on an Arithmetic Data Set which we call Arithmetic Data Cube (ADC). We propose to use the ADC to benchmark grid capabilities to handle large distributed data sets. The ADC stresses all levels of grid memory by producing 2^d views of an Arithmetic Data Set of d-tuples described by a small number of parameters. We control the data intensity of the ADC by controlling the sizes of the views through choice of the tuple parameters.
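A data cube over d attributes materializes one aggregate view per subset of the attributes, 2^d views in total. The sketch below enumerates those views for a small synthetic tuple set with pandas; the attribute names and the measure are placeholders, not the ADC's actual parameterization.

```python
from itertools import combinations
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dims = ["d1", "d2", "d3"]                       # d = 3 attributes -> 2^3 = 8 views
data = pd.DataFrame({c: rng.integers(0, 4, 1000) for c in dims})
data["measure"] = rng.integers(0, 100, 1000)

views = {}
for r in range(len(dims) + 1):
    for subset in combinations(dims, r):
        if subset:
            views[subset] = data.groupby(list(subset))["measure"].sum()
        else:
            views[subset] = data["measure"].sum()   # the fully aggregated view
print(f"{len(views)} views materialized")           # prints 8
```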
Novel Computational Approaches to Drug Discovery
NASA Astrophysics Data System (ADS)
Skolnick, Jeffrey; Brylinski, Michal
2010-01-01
New approaches to protein functional inference based on protein structure and evolution are described. First, FINDSITE, a threading based approach to protein function prediction, is summarized. Then, the results of large scale benchmarking of ligand binding site prediction, ligand screening, including applications to HIV protease, and GO molecular functional inference are presented. A key advantage of FINDSITE is its ability to use low resolution, predicted structures as well as high resolution experimental structures. Then, an extension of FINDSITE to ligand screening in GPCRs using predicted GPCR structures, FINDSITE/QDOCKX, is presented. This is a particularly difficult case as there are few experimentally solved GPCR structures. Thus, we first train on a subset of known binding ligands for a set of GPCRs; this is then followed by benchmarking against a large ligand library. For the virtual ligand screening of a number of Dopamine receptors, encouraging results are seen, with significant enrichment in identified ligands over those found in the training set. Thus, FINDSITE and its extensions represent a powerful approach to the successful prediction of a variety of molecular functions.
Alfa, Michelle J; Fatima, Iram; Olson, Nancy
2013-03-01
The study objective was to verify that the adenosine triphosphate (ATP) benchmark of <200 relative light units (RLUs) was achievable in a busy endoscopy clinic that followed the manufacturer's manual cleaning instructions. All channels from patient-used colonoscopes (20) and duodenoscopes (20) in a tertiary care hospital endoscopy clinic were sampled after manual cleaning and tested for residual ATP. The ATP test benchmark for adequate manual cleaning was set at <200 RLUs. The benchmark for protein was <6.4 μg/cm(2), and, for bioburden, it was <4-log10 colony-forming units/cm(2). Our data demonstrated that 96% (115/120) of channels from 20 colonoscopes and 20 duodenoscopes evaluated met the ATP benchmark of <200 RLUs. The 5 channels that exceeded 200 RLUs were all elevator guide-wire channels. All 120 of the manually cleaned endoscopes tested had protein and bioburden levels that were compliant with accepted benchmarks for manual cleaning for suction-biopsy, air-water, and auxiliary water channels. Our data confirmed that, by following the endoscope manufacturer's manual cleaning recommendations, 96% of channels in gastrointestinal endoscopes would have <200 RLUs for the ATP test kit evaluated and would meet the accepted clean benchmarks for protein and bioburden. Copyright © 2013 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
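The pass/fail logic behind the reported compliance rates is a straightforward comparison against the three benchmarks. A tiny sketch with invented channel readings is below; the channel names and values are illustrative only.

```python
# Benchmarks from the study: ATP < 200 RLU, protein < 6.4 ug/cm^2,
# bioburden < 4 log10 CFU/cm^2.
BENCHMARKS = {"atp_rlu": 200, "protein_ug_cm2": 6.4, "bioburden_log10cfu_cm2": 4}

channels = [
    {"id": "colonoscope-07 suction", "atp_rlu": 54,
     "protein_ug_cm2": 1.1, "bioburden_log10cfu_cm2": 2.3},
    {"id": "duodenoscope-03 elevator", "atp_rlu": 410,
     "protein_ug_cm2": 2.0, "bioburden_log10cfu_cm2": 2.9},
]
for ch in channels:
    fails = [k for k, limit in BENCHMARKS.items() if ch[k] >= limit]
    print(ch["id"], "PASS" if not fails else f"FAIL ({', '.join(fails)})")
```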
Evaluation of Neutron Radiography Reactor LEU-Core Start-Up Measurements
Bess, John D.; Maddock, Thomas L.; Smolinski, Andrew T.; ...
2014-11-04
Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. Experiments include criticality, control-rod worth measurements, shutdown margin, and excess reactivity for four core loadings with 56, 60, 62, and 64 fuel elements. The worth of four graphite reflector block assemblies and an empty dry tube used for experiment irradiations were also measured and evaluated for the 60-fuel-element core configuration. Dominant uncertainties in the experimental k_eff come from uncertainties in the manganese content and impurities in the stainless steel fuel cladding as well as the 236U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 neutron nuclear data are approximately 1.4% (9σ) greater than the benchmark model eigenvalues, which is commonly seen in Monte Carlo simulations of other TRIGA reactors. Simulations of the worth measurements are within the 2σ uncertainty for most of the benchmark experiment worth values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
Evaluation of Neutron Radiography Reactor LEU-Core Start-Up Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Maddock, Thomas L.; Smolinski, Andrew T.
Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. Experiments include criticality, control-rod worth measurements, shutdown margin, and excess reactivity for four core loadings with 56, 60, 62, and 64 fuel elements. The worth of four graphite reflector block assemblies and an empty dry tube used for experiment irradiations were also measured and evaluated for the 60-fuel-element core configuration. Dominant uncertainties in the experimental k_eff come from uncertainties in the manganese content and impurities in the stainless steel fuel cladding as well as the 236U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 neutron nuclear data are approximately 1.4% (9σ) greater than the benchmark model eigenvalues, which is commonly seen in Monte Carlo simulations of other TRIGA reactors. Simulations of the worth measurements are within the 2σ uncertainty for most of the benchmark experiment worth values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
An approach to radiation safety department benchmarking in academic and medical facilities.
Harvey, Richard P
2015-02-01
Based on anecdotal evidence and networking with colleagues at other facilities, it has become evident that some radiation safety departments are not adequately staffed and radiation safety professionals need to increase their staffing levels. Discussions with management regarding radiation safety department staffing often lead to similar conclusions. Management acknowledges the Radiation Safety Officer (RSO) or Director of Radiation Safety's concern but asks the RSO to provide benchmarking and justification for additional full-time equivalents (FTEs). The RSO must determine a method to benchmark and justify additional staffing needs while struggling to maintain a safe and compliant radiation safety program. Benchmarking and justification are extremely important tools that are commonly used to demonstrate the need for increased staffing in other disciplines and are tools that can be used by radiation safety professionals. Parameters that most RSOs would expect to be positive predictors of radiation safety staff size generally are and can be emphasized in benchmarking and justification report summaries. Facilities with large radiation safety departments tend to have large numbers of authorized users, be broad-scope programs, be subject to increased controls regulations, have large clinical operations, have significant numbers of academic radiation-producing machines, and have laser safety responsibilities.
The National Practice Benchmark for oncology, 2014 report on 2013 data.
Towle, Elaine L; Barr, Thomas R; Senese, James L
2014-11-01
The National Practice Benchmark (NPB) is a unique tool to measure oncology practices against others across the country in a way that allows meaningful comparisons despite differences in practice size or setting. In today's economic environment every oncology practice, regardless of business structure or affiliation, should be able to produce, monitor, and benchmark basic metrics to meet current business pressures for increased efficiency and efficacy of care. Although we recognize that the NPB survey results do not capture the experience of all oncology practices, practices that can and do participate demonstrate exceptional managerial capability, and this year those practices are recognized for their participation. In this report, we continue to emphasize the methodology introduced last year in which we reported medical revenue net of the cost of the drugs as net medical revenue for the hematology/oncology product line. The effect of this is to capture only the gross margin attributable to drugs as revenue. New this year, we introduce six measures of clinical data density and expand the radiation oncology benchmarks. Copyright © 2014 by American Society of Clinical Oncology.
Interface Pattern Selection in Directional Solidification
NASA Technical Reports Server (NTRS)
Trivedi, Rohit; Tewari, Surendra N.
2001-01-01
The central focus of this research is to establish key scientific concepts that govern the selection of cellular and dendritic patterns during the directional solidification of alloys. Ground-based studies have established that the conditions under which cellular and dendritic microstructures form are precisely where convection effects are dominant in bulk samples. Thus, experimental data cannot be obtained terrestrially under a purely diffusive regime. Furthermore, reliable theoretical models that quantitatively incorporate fluid flow into the pattern selection criterion are not yet available. Consequently, microgravity experiments on cellular and dendritic growth are designed to obtain benchmark data under diffusive growth conditions that can be quantitatively analyzed and compared with the rigorous theoretical model to establish the fundamental principles that govern the selection of specific microstructure and its length scales. In the cellular structure, different cells in an array are strongly coupled so that the cellular pattern evolution is controlled by complex interactions between thermal diffusion, solute diffusion and interface effects. These interactions admit an infinity of solutions, and the system selects only a narrow band of solutions. The aim of this investigation is to obtain benchmark data and develop a rigorous theoretical model that will allow us to quantitatively establish the physics of this selection process.
Benchmark for Numerical Models of Stented Coronary Bifurcation Flow.
García Carrascal, P; García García, J; Sierra Pallares, J; Castro Ruiz, F; Manuel Martín, F J
2018-09-01
In-stent restenosis ails many patients who have undergone stenting. When the stented artery is a bifurcation, the intervention is particularly critical because of the complex stent geometry involved in these structures. Computational fluid dynamics (CFD) has been shown to be an effective approach when modeling blood flow behavior and understanding the mechanisms that underlie in-stent restenosis. However, these CFD models require validation through experimental data in order to be reliable. It is with this purpose in mind that we performed particle image velocimetry (PIV) measurements of velocity fields within flows through a simplified coronary bifurcation. Although the flow in this simplified bifurcation differs from the actual blood flow, it emulates the main fluid dynamic mechanisms found in hemodynamic flow. Experimental measurements were performed for several stenting techniques in both steady and unsteady flow conditions. The test conditions were strictly controlled, and uncertainty was accurately predicted. The results obtained in this research represent readily accessible, easy to emulate, detailed velocity fields and geometry, and they have been successfully used to validate our numerical model. These data can be used as a benchmark for further development of numerical CFD modeling in terms of comparison of the main flow pattern characteristics.
NASA Technical Reports Server (NTRS)
Padovan, J.; Adams, M.; Fertis, J.; Zeid, I.; Lam, P.
1982-01-01
Finite element codes are used to model the rotor-bearing-stator structures common in the turbine industry. Strategies are developed that enable engine dynamic simulation with available finite element codes; the elements developed are benchmarked by incorporating them into a general-purpose code (ADINA); the numerical characteristics of finite-element rotor-bearing-stator simulations are evaluated using various types of explicit/implicit numerical integration operators; and the overall numerical efficiency of the procedure is improved.
Surflex-Dock: Docking benchmarks and real-world application
NASA Astrophysics Data System (ADS)
Spitzer, Russell; Jain, Ajay N.
2012-06-01
Benchmarks for molecular docking have historically focused on re-docking the cognate ligand of a well-determined protein-ligand complex to measure geometric pose prediction accuracy, and measurement of virtual screening performance has been focused on increasingly large and diverse sets of target protein structures, cognate ligands, and various types of decoy sets. Here, pose prediction is reported on the Astex Diverse set of 85 protein ligand complexes, and virtual screening performance is reported on the DUD set of 40 protein targets. In both cases, prepared structures of targets and ligands were provided by symposium organizers. The re-prepared data sets yielded results not significantly different than previous reports of Surflex-Dock on the two benchmarks. Minor changes to protein coordinates resulting from complex pre-optimization had large effects on observed performance, highlighting the limitations of cognate ligand re-docking for pose prediction assessment. Docking protocols developed for cross-docking, which address protein flexibility and produce discrete families of predicted poses, produced substantially better performance for pose prediction. Virtual screening performance was shown to benefit from employing and combining multiple screening methods: docking, 2D molecular similarity, and 3D molecular similarity. In addition, use of multiple protein conformations significantly improved screening enrichment.
Characterization of addressability by simultaneous randomized benchmarking.
Gambetta, Jay M; Córcoles, A D; Merkel, S T; Johnson, B R; Smolin, John A; Chow, Jerry M; Ryan, Colm A; Rigetti, Chad; Poletto, S; Ohki, Thomas A; Ketchen, Mark B; Steffen, M
2012-12-14
The control and handling of errors arising from cross talk and unwanted interactions in multiqubit systems is an important issue in quantum information processing architectures. We introduce a benchmarking protocol that provides information about the amount of addressability present in the system and implement it on coupled superconducting qubits. The protocol consists of randomized benchmarking experiments run both individually and simultaneously on pairs of qubits. A relevant figure of merit for the addressability is then related to the differences in the measured average gate fidelities in the two experiments. We present results from two similar samples with differing cross talk and unwanted qubit-qubit interactions. The results agree with predictions based on simple models of the classical cross talk and Stark shifts.
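A standard way to extract the numbers such a protocol compares is to fit the RB survival probability to F(m) = A·p^m + B and convert the decay parameter p into an average gate fidelity; an addressability-style figure of merit can then be formed from the difference between the individual and simultaneous results. The sketch below does this with scipy on synthetic data; the data and the exact figure of merit are illustrative assumptions, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    return A * p**m + B

def avg_gate_fidelity(p, d=2):
    # Standard RB relation for a d-dimensional system (d=2 for one qubit).
    return 1 - (1 - p) * (d - 1) / d

lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128])
surv_individual = 0.5 * 0.995**lengths + 0.5      # synthetic survival data
surv_simultaneous = 0.5 * 0.990**lengths + 0.5

p_ind = curve_fit(rb_decay, lengths, surv_individual, p0=(0.5, 0.99, 0.5))[0][1]
p_sim = curve_fit(rb_decay, lengths, surv_simultaneous, p0=(0.5, 0.99, 0.5))[0][1]

f_ind, f_sim = avg_gate_fidelity(p_ind), avg_gate_fidelity(p_sim)
print(f"individual fidelity:   {f_ind:.5f}")
print(f"simultaneous fidelity: {f_sim:.5f}")
print(f"addressability gap:    {f_ind - f_sim:.5f}")
```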
Modeling Blast Loading on Buried Reinforced Concrete Structures with Zapotec
Bessette, Greg C.
2008-01-01
A coupled Euler-Lagrange solution approach is used to model the response of a buried reinforced concrete structure subjected to a close-in detonation of a high explosive charge. The coupling algorithm is discussed along with a set of benchmark calculations involving detonations in clay and sand.
Gauterin, Eckhard; Kammerer, Philipp; Kühn, Martin; Schulte, Horst
2016-05-01
Advanced model-based control of wind turbines requires knowledge of the states and the wind speed. This paper benchmarks a nonlinear Takagi-Sugeno observer for wind speed estimation with enhanced Kalman Filter techniques: The performance and robustness towards model-structure uncertainties of the Takagi-Sugeno observer, a Linear, Extended and Unscented Kalman Filter are assessed. Hence the Takagi-Sugeno observer and enhanced Kalman Filter techniques are compared based on reduced-order models of a reference wind turbine with different modelling details. The objective is the systematic comparison with different design assumptions and requirements and the numerical evaluation of the reconstruction quality of the wind speed. Exemplified by a feedforward loop employing the reconstructed wind speed, the benefit of wind speed estimation within wind turbine control is illustrated. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
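As a highly simplified stand-in for the estimators compared in the paper, the sketch below runs a scalar Kalman filter that treats the effective wind speed as a random-walk state observed through a noisy proxy measurement. The real observers act on a turbine model rather than a direct wind measurement, so this is only an illustration of the filtering step; the noise levels and data are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 500
true_wind = 10 + np.cumsum(0.02 * rng.standard_normal(T))   # slowly drifting wind
meas = true_wind + 0.8 * rng.standard_normal(T)             # noisy proxy measurement

# Scalar Kalman filter: x_k = x_{k-1} + w (variance Q), z_k = x_k + v (variance R).
Q, R = 0.02**2, 0.8**2
x_hat, P = meas[0], 1.0
estimates = []
for z in meas:
    P = P + Q                        # predict (random-walk model)
    K = P / (P + R)                  # Kalman gain
    x_hat = x_hat + K * (z - x_hat)  # update
    P = (1 - K) * P
    estimates.append(x_hat)

rmse = np.sqrt(np.mean((np.array(estimates) - true_wind) ** 2))
print(f"RMSE of filtered wind-speed estimate: {rmse:.3f} m/s")
```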
Benchmarking hardware architecture candidates for the NFIRAOS real-time controller
NASA Astrophysics Data System (ADS)
Smith, Malcolm; Kerley, Dan; Herriot, Glen; Véran, Jean-Pierre
2014-07-01
As a part of the trade study for the Narrow Field Infrared Adaptive Optics System, the adaptive optics system for the Thirty Meter Telescope, we investigated the feasibility of performing real-time control computation using a Linux operating system and Intel Xeon E5 CPUs. We also investigated a Xeon Phi based architecture which allows higher levels of parallelism. This paper summarizes both the CPU based real-time controller architecture and the Xeon Phi based RTC. The Intel Xeon E5 CPU solution meets the requirements and performs the computation for one AO cycle in an average of 767 microseconds. The Xeon Phi solution did not meet the 1200 microsecond time requirement and also suffered from unpredictable execution times. More detailed benchmark results are reported for both architectures.
Impact of quality circles for improvement of asthma care: results of a randomized controlled trial
Schneider, Antonius; Wensing, Michel; Biessecker, Kathrin; Quinzler, Renate; Kaufmann-Kolle, Petra; Szecsenyi, Joachim
2008-01-01
Rationale and aims: Quality circles (QCs) are well established as a means of aiding doctors. New quality improvement strategies include benchmarking activities. The aim of this paper was to evaluate the efficacy of QCs for asthma care working either with general feedback or with an open benchmark. Methods: Twelve QCs, involving 96 general practitioners, were organized in a randomized controlled trial. Six worked with traditional anonymous feedback and six with an open benchmark; both had guided discussion from a trained moderator. Forty-three primary care practices agreed to give out questionnaires to patients to evaluate the efficacy of QCs. Results: A total of 256 patients participated in the survey, of whom 185 (72.3%) responded to the follow-up 1 year later. Use of inhaled steroids at baseline was high (69%) and self-management low (asthma education 27%, individual emergency plan 8%, and peak flow meter at home 21%). Guideline adherence in drug treatment increased (P = 0.19), and asthma steps improved (P = 0.02). Delivery of individual emergency plans increased (P = 0.008), and unscheduled emergency visits decreased (P = 0.064). There was no change in asthma education and peak flow meter usage. High medication guideline adherence was associated with reduced emergency visits (OR 0.24; 95% CI 0.07–0.89). Use of theophylline was associated with hospitalization (OR 7.1; 95% CI 1.5–34.3) and emergency visits (OR 4.9; 95% CI 1.6–14.7). There was no difference between traditional and benchmarking QCs. Conclusions: Quality circles working with individualized feedback are effective at improving asthma care. The trial may have been underpowered to detect specific benchmarking effects. Further research is necessary to evaluate strategies for improving the self-management of asthma patients. PMID:18093108
Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program
NASA Technical Reports Server (NTRS)
Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.
2010-01-01
The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the Aeroelasticity Branch will examine other experimental efforts within the Subsonic Fixed Wing (SFW) program (such as testing of the NASA Common Research Model (CRM)) and other NASA programs and assess aeroelasticity issues and research topics.
Theoretical Background and Prognostic Modeling for Benchmarking SHM Sensors for Composite Structures
2010-10-01
minimum flaw size can be detected by the existing SHM-based monitoring methods. Sandwich panels with foam, WebCore and honeycomb structures were considered for use in this study. Whether it be hat stiffened, corrugated sandwich, honeycomb sandwich, or foam filled sandwich, all composite structures have one basic handicap in... Eigenmode frequency
Matt: local flexibility aids protein multiple structure alignment.
Menke, Matthew; Berger, Bonnie; Cowen, Lenore
2008-01-01
Even when there is agreement on what measure a protein multiple structure alignment should be optimizing, finding the optimal alignment is computationally prohibitive. One approach used by many previous methods is aligned fragment pair chaining, where short structural fragments from all the proteins are aligned against each other optimally, and the final alignment chains these together in geometrically consistent ways. Ye and Godzik have recently suggested that adding geometric flexibility may help better model protein structures in a variety of contexts. We introduce the program Matt (Multiple Alignment with Translations and Twists), an aligned fragment pair chaining algorithm that, in intermediate steps, allows local flexibility between fragments: small translations and rotations are temporarily allowed to bring sets of aligned fragments closer, even if they are physically impossible under rigid body transformations. After a dynamic programming assembly guided by these "bent" alignments, geometric consistency is restored in the final step before the alignment is output. Matt is tested against other recent multiple protein structure alignment programs on the popular Homstrad and SABmark benchmark datasets. Matt's global performance is competitive with the other programs on Homstrad, but outperforms the other programs on SABmark, a benchmark of multiple structure alignments of proteins with more distant homology. On both datasets, Matt demonstrates an ability to better align the ends of alpha-helices and beta-strands, an important characteristic of any structure alignment program intended to help construct a structural template library for threading approaches to the inverse protein-folding problem. The related question of whether Matt alignments can be used to distinguish distantly homologous structure pairs from pairs of proteins that are not homologous is also considered. For this purpose, a p-value score based on the length of the common core and average root mean squared deviation (RMSD) of Matt alignments is shown to largely separate decoys from homologous protein structures in the SABmark benchmark dataset. We postulate that Matt's strong performance comes from its ability to model proteins in different conformational states and, perhaps even more important, its ability to model backbone distortions in more distantly related proteins.
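The fragment-pair chaining idea described above can be made concrete with a small sketch. The following Python fragment illustrates generic aligned-fragment-pair chaining by dynamic programming under assumed data structures and a hypothetical gap penalty; it is not Matt's actual scoring or its bent-alignment machinery.

```python
# Illustrative aligned-fragment-pair chaining by dynamic programming (not
# Matt's actual scoring or bent-alignment machinery). Each fragment pair
# aligns a short run of residues in protein A to a run in protein B; chaining
# selects an order-consistent, non-overlapping subset maximizing total score
# minus a hypothetical gap penalty.
from dataclasses import dataclass

@dataclass
class FragmentPair:
    start_a: int    # first aligned residue in protein A
    start_b: int    # first aligned residue in protein B
    length: int     # number of aligned residues
    score: float    # geometric fit of this pair (e.g., derived from RMSD)

def chain_fragment_pairs(pairs, gap_penalty=0.1):
    """Return the best-scoring chain of order-consistent fragment pairs."""
    pairs = sorted(pairs, key=lambda p: (p.start_a, p.start_b))
    best = [p.score for p in pairs]     # best chain score ending at pair i
    prev = [None] * len(pairs)          # back-pointers for traceback
    for i, pi in enumerate(pairs):
        for j in range(i):
            pj = pairs[j]
            compatible = (pj.start_a + pj.length <= pi.start_a and
                          pj.start_b + pj.length <= pi.start_b)
            if compatible:
                gap = ((pi.start_a - pj.start_a - pj.length) +
                       (pi.start_b - pj.start_b - pj.length))
                cand = best[j] + pi.score - gap_penalty * gap
                if cand > best[i]:
                    best[i], prev[i] = cand, j
    i = max(range(len(pairs)), key=lambda k: best[k])
    chain = []
    while i is not None:                # trace the winning chain back
        chain.append(pairs[i])
        i = prev[i]
    return list(reversed(chain)), max(best)

if __name__ == "__main__":
    demo = [FragmentPair(0, 0, 8, 5.0), FragmentPair(10, 9, 8, 4.0),
            FragmentPair(5, 20, 8, 6.0)]
    chain, score = chain_fragment_pairs(demo)
    print([(p.start_a, p.start_b) for p in chain], round(score, 2))
```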
Chen, Tsung-Tai; Chang, Yun-Jau; Ku, Shei-Ling; Chung, Kuo-Piao
2010-10-01
There is much research using statistical process control (SPC) to monitor surgical performance, including comparisons among groups to detect small process shifts, but few of these studies have included a stabilization process. This study aimed to analyse the performance of surgeons in the operating room (OR) and to set a benchmark by SPC after the process had been stabilized. The OR profiles of 499 patients who underwent laparoscopic cholecystectomy performed by 16 surgeons at a tertiary hospital in Taiwan during 2005 and 2006 were recorded. SPC was applied to analyse operative and non-operative times using the following five steps: first, the times were divided into two segments; second, they were normalized; third, they were evaluated as individual processes; fourth, the ARL(0) was calculated; and fifth, the different groups (surgeons) were compared. Outliers were excluded to ensure stability for each group and to facilitate inter-group comparison. The results showed that in the stabilized process, only one surgeon exhibited a significantly shorter total process time (including operative time and non-operative time). In this study, we use five steps to demonstrate how to control surgical and non-surgical time in phase I. There are some measures that can be taken to prevent skew and instability in the process. Also, using SPC, one surgeon can be shown to be a real benchmark. © 2010 Blackwell Publishing Ltd.
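As an illustration of the stabilization steps described above (normalize, chart individual values, exclude out-of-control points, re-estimate), the following Python sketch builds a phase-I individuals control chart for operative times. The 2.66 moving-range constant is the standard Shewhart value; the input data and the log transform used for normalization are assumptions for the example, not values from the study.

```python
# Phase-I individuals (I-MR) control chart for operative times, illustrating
# the normalize / chart / exclude / re-estimate steps. The 2.66 constant is
# the standard Shewhart moving-range factor; the data and the log transform
# used for normalization are made up for the example.
import math

def individuals_chart(times, rounds=3):
    """Return (center, lcl, ucl, stable_times) after excluding outliers."""
    x = [math.log(t) for t in times]          # normalise skewed durations
    center = lcl = ucl = None
    for _ in range(rounds):
        mr = [abs(a - b) for a, b in zip(x[1:], x[:-1])]
        mr_bar = sum(mr) / len(mr)
        center = sum(x) / len(x)
        ucl = center + 2.66 * mr_bar          # standard I-chart limits
        lcl = center - 2.66 * mr_bar
        stable = [v for v in x if lcl <= v <= ucl]
        if len(stable) == len(x):             # process is stable: stop
            break
        x = stable                            # drop outliers, re-estimate
    return center, lcl, ucl, [math.exp(v) for v in x]

if __name__ == "__main__":
    operative_minutes = [55, 62, 48, 70, 260, 58, 65, 61, 52, 57]
    c, lo, hi, stable = individuals_chart(operative_minutes)
    print(f"log-scale centre {c:.2f}, limits ({lo:.2f}, {hi:.2f}), "
          f"{len(stable)} stable cases")
```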
Development of Unsteady Aerodynamic and Aeroelastic Reduced-Order Models Using the FUN3D Code
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Vatsa, Veer N.; Biedron, Robert T.
2009-01-01
Recent significant improvements to the development of CFD-based unsteady aerodynamic reduced-order models (ROMs) are implemented into the FUN3D unstructured flow solver. These improvements include the simultaneous excitation of the structural modes of the CFD-based unsteady aerodynamic system via a single CFD solution, minimization of the error between the full CFD and the ROM unsteady aerodynamic solution, and computation of a root locus plot of the aeroelastic ROM. Results are presented for a viscous version of the two-dimensional Benchmark Active Controls Technology (BACT) model and an inviscid version of the AGARD 445.6 aeroelastic wing using the FUN3D code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.D.; Kornreich, D.E.
Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation as modified in the second year renewal application includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) single medium searchlight problem (SLP) and (b) two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) point source in arbitrary geometry, (b) single medium SLP, and (c) two-adjacent half-space SLP. Task 3 on code verification includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.
Kaufmann-Kolle, Petra; Szecsenyi, Joachim; Broge, Björn; Haefeli, Walter Emil; Schneider, Antonius
2011-01-01
The purpose of this cluster-randomised controlled trial was to evaluate the efficacy of quality circles (QCs) working either with general data-based feedback or with an open benchmark within the field of asthma care and drug-drug interactions. Twelve QCs, involving 96 general practitioners from 85 practices, were randomised. Six QCs worked with traditional anonymous feedback and six with an open benchmark. Two QC meetings supported with feedback reports were held covering the topics "drug-drug interactions" and "asthma"; in both cases discussions were guided by a trained moderator. Outcome measures included health-related quality of life and patient satisfaction with treatment, asthma severity and number of potentially inappropriate drug combinations as well as the general practitioners' satisfaction in relation to the performance of the QC. A significant improvement in the treatment of asthma was observed in both trial arms. However, there was only a slight improvement regarding inappropriate drug combinations. There were no relevant differences between the group with open benchmark (B-QC) and traditional quality circles (T-QC). The physicians' satisfaction with the QC performance was significantly higher in the T-QCs. General practitioners seem to take a critical perspective about open benchmarking in quality circles. Caution should be used when implementing benchmarking in a quality circle as it did not improve healthcare when compared to the traditional procedure with anonymised comparisons. Copyright © 2011. Published by Elsevier GmbH.
Escobar, Gabriel J; Baker, Jennifer M; Turk, Benjamin J; Draper, David; Liu, Vincent; Kipnis, Patricia
2017-01-01
This article is not a traditional research report. It describes how conducting a specific set of benchmarking analyses led us to broader reflections on hospital benchmarking. We reexamined an issue that has received far less attention from researchers than in the past: How variations in the hospital admission threshold might affect hospital rankings. Considering this threshold made us reconsider what benchmarking is and what future benchmarking studies might be like. Although we recognize that some of our assertions are speculative, they are based on our reading of the literature and previous and ongoing data analyses being conducted in our research unit. We describe the benchmarking analyses that led to these reflections. The Centers for Medicare and Medicaid Services' Hospital Compare Web site includes data on fee-for-service Medicare beneficiaries but does not control for severity of illness, which requires physiologic data now available in most electronic medical records. To address this limitation, we compared hospital processes and outcomes among Kaiser Permanente Northern California's (KPNC) Medicare Advantage beneficiaries and non-KPNC California Medicare beneficiaries between 2009 and 2010. We assigned a simulated severity of illness measure to each record and explored the effect of having the additional information on outcomes. We found that if the admission severity of illness in non-KPNC hospitals increased, KPNC hospitals' mortality performance would appear worse; conversely, if admission severity at non-KPNC hospitals decreased, KPNC hospitals' performance would appear better. Future hospital benchmarking should consider the impact of variation in admission thresholds.
NACA0012 benchmark model experimental flutter results with unsteady pressure distributions
NASA Technical Reports Server (NTRS)
Rivera, Jose A., Jr.; Dansberry, Bryan E.; Bennett, Robert M.; Durham, Michael H.; Silva, Walter A.
1992-01-01
The Structural Dynamics Division at NASA Langley Research Center has started a wind tunnel activity referred to as the Benchmark Models Program. The primary objective of this program is to acquire measured dynamic instability and corresponding pressure data that will be useful for developing and evaluating aeroelastic type computational fluid dynamics codes currently in use or under development. The program is a multi-year activity that will involve testing of several different models to investigate various aeroelastic phenomena. This paper describes results obtained from a second wind tunnel test of the first model in the Benchmark Models Program. This first model consisted of a rigid semispan wing having a rectangular planform and a NACA 0012 airfoil shape which was mounted on a flexible two degree of freedom mount system. Experimental flutter boundaries and corresponding unsteady pressure distribution data acquired over two model chords located at the 60 and 95 percent span stations are presented.
Anharmonic Vibrational Spectroscopy on Metal Transition Complexes
NASA Astrophysics Data System (ADS)
Latouche, Camille; Bloino, Julien; Barone, Vincenzo
2014-06-01
Advances in hardware performance and the availability of efficient and reliable computational models have made possible the application of computational spectroscopy to ever larger molecular systems. The systematic interpretation of experimental data and the full characterization of complex molecules can then be facilitated. Focusing on vibrational spectroscopy, several approaches have been proposed to simulate spectra beyond the double harmonic approximation, so that more details become available. However, routine use of such tools requires the preliminary definition of a valid protocol with the most appropriate combination of electronic structure and nuclear calculation models. Several benchmarks of anharmonic frequency calculations have been performed on organic molecules. Nevertheless, benchmarks of organometallic or inorganic metal complexes at this level are strongly lacking, despite the interest in these systems due to their strong emission and vibrational properties. Herein we report a benchmark study of anharmonic calculations on simple metal complexes, along with some pilot applications on systems of direct technological or biological interest.
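For readers unfamiliar with calculations beyond the double harmonic approximation, the standard second-order vibrational perturbation theory (VPT2) expression for the fundamentals is a common starting point; it is quoted here for orientation and is not necessarily the exact model used in this work.

```latex
% Standard VPT2 expression for anharmonic fundamental frequencies, quoted for
% orientation (not necessarily the exact model used in this work):
\nu_i = \omega_i + 2\chi_{ii} + \frac{1}{2}\sum_{j \neq i} \chi_{ij} ,
% where \omega_i are the harmonic frequencies and \chi_{ij} the anharmonic
% constants obtained from third and semi-diagonal fourth derivatives of the
% potential energy surface.
```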
Benchmark simulation model no 2: general protocol and exploratory case studies.
Jeppsson, U; Pons, M-N; Nopens, I; Alex, J; Copp, J B; Gernaey, K V; Rosen, C; Steyer, J-P; Vanrolleghem, P A
2007-01-01
Over a decade ago, the concept of objectively evaluating the performance of control strategies by simulating them using a standard model implementation was introduced for activated sludge wastewater treatment plants. The resulting Benchmark Simulation Model No 1 (BSM1) has been the basis for a significant new development that is reported on here: Rather than only evaluating control strategies at the level of the activated sludge unit (bioreactors and secondary clarifier) the new BSM2 now allows the evaluation of control strategies at the level of the whole plant, including primary clarifier and sludge treatment with anaerobic sludge digestion. In this contribution, the decisions that have been made over the past three years regarding the models used within the BSM2 are presented and argued, with particular emphasis on the ADM1 description of the digester, the interfaces between activated sludge and digester models, the included temperature dependencies and the reject water storage. BSM2-implementations are now available in a wide range of simulation platforms and a ring test has verified their proper implementation, consistent with the BSM2 definition. This guarantees that users can focus on the control strategy evaluation rather than on modelling issues. Finally, for illustration, twelve simple operational strategies have been implemented in BSM2 and their performance evaluated. Results show that it is an interesting control engineering challenge to further improve the performance of the BSM2 plant (which is the whole idea behind benchmarking) and that integrated control (i.e. acting at different places in the whole plant) is certainly worthwhile to achieve overall improvement.
Design of a self-tuning regulator for temperature control of a polymerization reactor.
Vasanthi, D; Pranavamoorthy, B; Pappa, N
2012-01-01
The temperature control of a polymerization reactor described by Chylla and Haase, a control engineering benchmark problem, is used to illustrate the potential of adaptive control design employing a self-tuning regulator concept. In the benchmark scenario, the operation of the reactor must be guaranteed under various disturbing influences, e.g., changing ambient temperatures or impurity of the monomer. Conventional cascade control provides robust operation, but often lacks control performance with respect to the required strict temperature tolerances. The self-tuning control concept presented in this contribution solves this problem. The design calculates a trajectory for the cooling jacket temperature in order to follow a predefined trajectory of the reactor temperature. The reaction heat and the heat transfer coefficient in the energy balance are estimated online using an unscented Kalman filter (UKF). Two simple physically motivated relations are employed, which allow the non-delayed estimation of both quantities. Simulation results under model uncertainties show the effectiveness of the self-tuning control concept. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
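The self-tuning idea can be sketched in a few lines. The paper estimates the reaction heat and heat transfer coefficient with an unscented Kalman filter; the Python fragment below substitutes a much simpler exponentially weighted recursive estimate of the lumped heat-transfer coefficient UA and then inverts the energy balance to obtain a cooling jacket temperature setpoint. All symbols and numbers are illustrative, not the Chylla-Haase benchmark values.

```python
# Sketch of the self-tuning idea only: the paper estimates the reaction heat
# and heat transfer coefficient with an unscented Kalman filter, whereas this
# fragment uses a simple exponentially weighted recursive estimate of the
# lumped coefficient UA. All symbols and numbers are illustrative, not the
# Chylla-Haase benchmark values.
def update_ua_estimate(ua_est, T, T_prev, T_jacket, Q_r, m_cp, dt, alpha=0.05):
    """Back out UA from the reactor energy balance and blend it in."""
    # m_cp * dT/dt = Q_r - UA * (T - T_jacket)  =>  solve for UA
    dT_dt = (T - T_prev) / dt
    if abs(T - T_jacket) > 1e-6:
        ua_obs = (Q_r - m_cp * dT_dt) / (T - T_jacket)
        ua_est = (1.0 - alpha) * ua_est + alpha * ua_obs
    return ua_est

def jacket_setpoint(T, T_ref_next, Q_r_est, ua_est, m_cp, dt):
    """Invert the energy balance so the reactor follows its reference."""
    dT_dt_des = (T_ref_next - T) / dt
    return T - (Q_r_est - m_cp * dT_dt_des) / ua_est

if __name__ == "__main__":
    ua = update_ua_estimate(300.0, 350.2, 350.0, 340.0,
                            Q_r=5000.0, m_cp=2.0e4, dt=10.0)
    print(round(ua, 1),
          round(jacket_setpoint(350.2, 350.5, 5000.0, ua, 2.0e4, 10.0), 2))
```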
RNA inverse folding using Monte Carlo tree search.
Yang, Xiufeng; Yoshizoe, Kazuki; Taneda, Akito; Tsuda, Koji
2017-11-06
Artificially synthesized RNA molecules provide important ways of creating a variety of novel functional molecules. State-of-the-art RNA inverse folding algorithms can design simple and short RNA sequences with specific GC content that fold into the target RNA structure. However, their performance is not satisfactory in complicated cases. We present a new inverse folding algorithm called MCTS-RNA, which uses Monte Carlo tree search (MCTS), a technique that has recently shown exceptional performance in computer Go, to represent and discover the essential part of the sequence space. To obtain high accuracy, initial sequences generated by MCTS are further improved by a series of local updates. Our algorithm has the ability to control the GC content precisely and can deal with pseudoknot structures. Using common benchmark datasets for evaluation, MCTS-RNA showed considerable promise as a standard method of RNA inverse folding. MCTS-RNA is available at https://github.com/tsudalab/MCTS-RNA.
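The following Python skeleton illustrates the Monte Carlo tree search loop (selection by UCB, expansion, random rollout, backpropagation) applied to left-to-right sequence design. It is a generic sketch, not the MCTS-RNA implementation: the reward function is a toy placeholder, whereas a real inverse folding reward would fold the candidate sequence and compare it with the target structure, and the GC-content control and local updates mentioned above are omitted.

```python
# Generic MCTS skeleton for sequence design (not the MCTS-RNA implementation).
# The tree assigns bases left to right; `reward` is a toy placeholder, whereas
# a real inverse folding objective would fold `seq` and compare structures.
import math, random

BASES = "ACGU"

def reward(seq, target):
    """Placeholder objective: fraction of positions matching a toy target
    sequence (stands in for a folding-based structure comparison)."""
    return sum(a == b for a, b in zip(seq, target)) / len(target)

class Node:
    def __init__(self, prefix):
        self.prefix = prefix        # bases assigned so far
        self.children = {}          # base -> Node
        self.visits = 0
        self.total = 0.0

    def ucb_child(self, c=1.4):
        # UCB1: exploit mean reward, explore rarely visited children
        return max(self.children.values(),
                   key=lambda n: n.total / n.visits
                   + c * math.sqrt(math.log(self.visits) / n.visits))

def mcts_design(target, iterations=2000):
    root, length = Node(""), len(target)
    best_seq, best_r = None, -1.0
    for _ in range(iterations):
        node = root
        # selection: descend while the node is fully expanded
        while len(node.prefix) < length and len(node.children) == len(BASES):
            node = node.ucb_child()
        # expansion: add one untried base
        if len(node.prefix) < length:
            base = random.choice([b for b in BASES if b not in node.children])
            node.children[base] = Node(node.prefix + base)
            node = node.children[base]
        # rollout: complete the sequence at random and score it
        seq = node.prefix + "".join(random.choice(BASES)
                                    for _ in range(length - len(node.prefix)))
        r = reward(seq, target)
        if r > best_r:
            best_seq, best_r = seq, r
        # backpropagation along the path from the root to the expanded node
        walk, path = root, [root]
        for base in node.prefix:
            walk = walk.children[base]
            path.append(walk)
        for n in path:
            n.visits += 1
            n.total += r
    return best_seq, best_r

if __name__ == "__main__":
    seq, score = mcts_design("GGGAAACCC", iterations=3000)
    print(seq, round(score, 2))
```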
1993-09-01
against Japanese competitors (Camp, 1989:6; Geber, 1990:38). Due to their incredible success in controlling costs, Xerox adopted the technique company...important first step to benchmarking outside the organization (Geber, 1990:40). Due to availability of information and cooperation of partners, this is...science (Geber, 1990:42). Several avenues can be pursued to find best-in-class companies. Search business publications for companies frequently
Structural Mechanics and Dynamics Branch
NASA Technical Reports Server (NTRS)
Stefko, George
2003-01-01
The 2002 annual report of the Structural Mechanics and Dynamics Branch reflects the majority of the work performed by the branch staff during the 2002 calendar year. Its purpose is to give a brief review of the branch's technical accomplishments. The Structural Mechanics and Dynamics Branch develops innovative computational tools, benchmark experimental data, and solutions to long-term barrier problems in the areas of propulsion aeroelasticity, active and passive damping, engine vibration control, rotor dynamics, magnetic suspension, structural mechanics, probabilistics, smart structures, engine system dynamics, and engine containment. Furthermore, the branch is developing a compact, nonpolluting, bearingless electric machine with electric power supplied by fuel cells for future "more electric" aircraft. An ultra-high-power-density machine that can generate projected power densities of 50 hp/lb or more, in comparison to conventional electric machines, which usually generate 0.2 hp/lb, is under development for application to electric drives for propulsive fans or propellers. In the future, propulsion and power systems will need to be lighter, to operate at higher temperatures, and to be more reliable in order to achieve higher performance and economic viability. The Structural Mechanics and Dynamics Branch is working to achieve these complex, challenging goals.
NASA Astrophysics Data System (ADS)
Bernier, Caroline; Gazzola, Mattia; Ronsse, Renaud; Chatelain, Philippe
2017-11-01
We present a 2D fluid-structure interaction simulation method with a specific focus on articulated and actuated structures. The proposed algorithm combines a viscous Vortex Particle-Mesh (VPM) method based on a penalization technique and a Multi-Body System (MBS) solver. The hydrodynamic forces and moments acting on the structure parts are not computed explicitly from the surface stresses; they are rather recovered from the projection and penalization steps within the VPM method. The MBS solver accounts for the body dynamics via the Euler-Lagrange formalism. The deformations of the structure are dictated by the hydrodynamic efforts and actuation torques. Here, we focus on simplified swimming structures composed of neutrally buoyant ellipses connected by virtual joints. The joints are actuated through a simple controller in order to reproduce the swimming patterns of an eel-like swimmer. The method makes it possible to recover the histories of the torques applied at each hinge along the body. The method is verified on several benchmarks: an impulsively started elastically mounted cylinder and free swimming articulated fish-like structures. Validation will be performed by means of an experimental swimming robot that reproduces the 2D articulated ellipses.
Role of the standard deviation in the estimation of benchmark doses with continuous data.
Gaylor, David W; Slikker, William
2004-12-01
For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
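The argument can be summarized with a short variance decomposition (notation introduced here for illustration, not taken from the paper).

```latex
% Notation introduced here for illustration. With the cutoff x_0 set at a
% control percentile (e.g., the 99th) and a dose-induced mean shift
% \Delta(d), the risk at dose d is
P(d) = 1 - \Phi\!\left( \frac{x_0 - \mu_0 - \Delta(d)}{s_a} \right) ,
\qquad
s_{\mathrm{obs}}^{2} = s_a^{2} + s_m^{2} ,
% where s_a is the between-animal standard deviation and s_m the
% within-animal measurement-error component of the observed spread s_obs.
% Substituting s_obs for s_a understates the risk produced by a given shift,
% so the dose estimated to reach the target risk (the benchmark dose) is
% biased upward; the bias grows as s_m grows relative to s_a.
```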
Raab, Mario; Jusuk, Ija; Molle, Julia; Buhr, Egbert; Bodermann, Bernd; Bergmann, Detlef; Bosse, Harald; Tinnefeld, Philip
2018-01-29
In recent years, DNA origami nanorulers for superresolution (SR) fluorescence microscopy have been developed from fundamental proof-of-principle experiments to commercially available test structures. The self-assembled nanostructures allow placing a defined number of fluorescent dye molecules in defined geometries in the nanometer range. Besides the unprecedented control over matter on the nanoscale, robust DNA origami nanorulers are reproducibly obtained in high yields. The distances between their fluorescent marks can be easily analysed yielding intermark distance histograms from many identical structures. Thus, DNA origami nanorulers have become excellent reference and training structures for superresolution microscopy. In this work, we go one step further and develop a calibration process for the measured distances between the fluorescent marks on DNA origami nanorulers. The superresolution technique DNA-PAINT is used to achieve nanometrological traceability of nanoruler distances following the guide to the expression of uncertainty in measurement (GUM). We further show two examples how these nanorulers are used to evaluate the performance of TIRF microscopes that are capable of single-molecule localization microscopy (SMLM).
Benchmark Testing of the Largest Titanium Aluminide Sheet Subelement Conducted
NASA Technical Reports Server (NTRS)
Bartolotta, Paul A.; Krause, David L.
2000-01-01
To evaluate wrought titanium aluminide (gamma TiAl) as a viable candidate material for the High-Speed Civil Transport (HSCT) exhaust nozzle, an international team led by the NASA Glenn Research Center at Lewis Field successfully fabricated and tested the largest gamma TiAl sheet structure ever manufactured. The gamma TiAl sheet structure, a 56-percent subscale divergent flap subelement, was fabricated for benchmark testing in three-point bending. Overall, the subelement was 84-cm (33-in.) long by 13-cm (5-in.) wide by 8-cm (3-in.) deep. Incorporated into the subelement were features that might be used in the fabrication of a full-scale divergent flap. These features include the use of: (1) gamma TiAl shear clips to join together sections of corrugations, (2) multiple gamma TiAl face sheets, (3) double hot-formed gamma TiAl corrugations, and (4) brazed joints. The structural integrity of the gamma TiAl sheet subelement was evaluated by conducting a room-temperature three-point static bend test.
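For orientation, the classical three-point bending relations are given below; a built-up corrugated subelement such as this one requires its actual section properties rather than the solid-rectangular-section formula.

```latex
% Classical three-point bending of a simply supported span L under a centre
% load F; the solid-rectangular-section stress formula is for orientation
% only and does not apply directly to a built-up corrugated section.
M_{\max} = \frac{F L}{4} ,
\qquad
\sigma_{\max} = \frac{M_{\max}\, c}{I}
             = \frac{3 F L}{2 b h^{2}}
\quad \text{(solid rectangular section of width } b \text{ and depth } h\text{)} .
```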
USDA-ARS's Scientific Manuscript database
Ecosystems that maximize soil organic matter and good soil structure maintain high soil biological functioning, soil health and plant growth. Natural ecosystems such as prairies are valuable benchmarks for developing sustainable crop and soil management practices. Soil biological properties critical...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lell, R. M.; Schaefer, R. W.; McKnight, R. D.
Over a period of 30 years more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited to form the basis for criticality safety benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was 235U or 239Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. The term 'benchmark' in a ZPR program connotes a particularly simple loading aimed at gaining basic reactor physics insight, as opposed to studying a reactor design. In fact, the ZPR-6/7 Benchmark Assembly (Reference 1) had a very simple core unit cell assembled from plates of depleted uranium, sodium, iron oxide, U3O8, and plutonium. The ZPR-6/7 core cell-average composition is typical of the interior region of liquid-metal fast breeder reactors (LMFBRs) of the era. It was one part of the Demonstration Reactor Benchmark Program, which provided integral experiments characterizing the important features of demonstration-size LMFBRs. As a benchmark, ZPR-6/7 was devoid of many 'real' reactor features, such as simulated control rods and multiple enrichment zones, in its reference form. Those kinds of features were investigated experimentally in variants of the reference ZPR-6/7 or in other critical assemblies in the Demonstration Reactor Benchmark Program.
NASA Technical Reports Server (NTRS)
Waszak, Martin R.
1998-01-01
This report describes the formulation of a model of the dynamic behavior of the Benchmark Active Controls Technology (BACT) wind tunnel model for active control design and analysis applications. The model is formed by combining the equations of motion for the BACT wind tunnel model with actuator models and a model of wind tunnel turbulence. The primary focus of this report is the development of the equations of motion from first principles by using Lagrange's equations and the principle of virtual work. A numerical form of the model is generated by making use of parameters obtained from both experiment and analysis. Comparisons between experimental and analytical data obtained from the numerical model show excellent agreement and suggest that simple coefficient-based aerodynamics are sufficient to accurately characterize the aeroelastic response of the BACT wind tunnel model. The equations of motion developed herein have been used to aid in the design and analysis of a number of flutter suppression controllers that have been successfully implemented.
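The machinery described above can be illustrated with a generic two-degree-of-freedom pitch-plunge form (shown for orientation only; the report's actual generalized coordinates, matrices, and coefficient values are not reproduced here).

```latex
% Lagrange's equations with generalized coordinates q_i and generalized
% forces Q_i obtained from the principle of virtual work:
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}_i}\right)
  - \frac{\partial L}{\partial q_i} = Q_i , \qquad L = T - V .
% For plunge h and pitch \alpha with coefficient-based aerodynamics this
% leads to equations of the familiar generic form
\begin{bmatrix} m & S_{\alpha} \\ S_{\alpha} & I_{\alpha} \end{bmatrix}
\begin{bmatrix} \ddot{h} \\ \ddot{\alpha} \end{bmatrix}
+
\begin{bmatrix} c_h & 0 \\ 0 & c_{\alpha} \end{bmatrix}
\begin{bmatrix} \dot{h} \\ \dot{\alpha} \end{bmatrix}
+
\begin{bmatrix} k_h & 0 \\ 0 & k_{\alpha} \end{bmatrix}
\begin{bmatrix} h \\ \alpha \end{bmatrix}
=
\bar{q} S
\begin{bmatrix} -C_L(\alpha, \delta, \dot{h}, \dot{\alpha}) \\
                 c\, C_m(\alpha, \delta, \dot{h}, \dot{\alpha}) \end{bmatrix} ,
% where \bar{q} is the dynamic pressure, S the reference area, c the chord,
% \delta the control-surface deflection, and S_\alpha, I_\alpha the static
% and mass moments about the pitch axis.
```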
NASA Astrophysics Data System (ADS)
L'Hostis, V.; Brunet, C.; Poupard, O.; Petre-Lazar, I.
2006-11-01
Several ageing models are available for the prediction of the mechanical consequences of rebar corrosion. They are used for service life prediction of reinforced concrete structures. Concerning corrosion diagnosis of reinforced concrete, some Non Destructive Testing (NDT) tools have been developed, and have been in use for some years. However, these developments require validation on existing concrete structures. The French project "Benchmark des Poutres de la Rance" contributes to this aspect. It has two main objectives: (i) validation of mechanical models to estimate the influence of rebar corrosion on the load bearing capacity of a structure, (ii) qualification of the use of the NDT results to collect information on steel corrosion within reinforced-concrete structures. Ten French and European institutions from both academic research laboratories and industrial companies contributed during the years 2004 and 2005. This paper presents the project, which was divided into several work packages: (i) the reinforced concrete beams were characterized using non-destructive testing tools, (ii) the mechanical behaviour of the beams was experimentally tested, (iii) complementary laboratory analyses were performed and (iv) finally, numerical simulation results were compared to the experimental results obtained with the mechanical tests.
Fixed-Order Mixed Norm Designs for Building Vibration Control
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.
2000-01-01
This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodeled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full order compensators that are robust to both unmodeled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.
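One common way of posing such a fixed-order mixed-norm problem is sketched below; this is an illustrative formulation, and the exact constraint structure used in the study may differ.

```latex
% Illustrative fixed-order mixed-norm formulation (the study's exact
% constraint structure may differ). Over controllers K of fixed order:
\min_{K}\ \|T_{zw}(K)\|_{2}
\quad\text{subject to}\quad
\sup_{\omega}\ \mu_{\Delta}\!\left( T_{z_{\Delta} w_{\Delta}}(K, j\omega) \right) \le 1 ,
% where T_{zw} is the closed-loop performance channel, T_{z_Delta w_Delta}
% the channel seen by the uncertainty, and \mu_\Delta the structured singular
% value with respect to the uncertainty set \Delta (unmodeled dynamics plus
% parametric natural-frequency uncertainty).
```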
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom; Cristian Rabiti; Andrea Alfonsi
2012-10-01
PHISICS is a neutronics code system currently under development at the Idaho National Laboratory (INL). Its goal is to provide state-of-the-art simulation capability to reactor designers. The different modules of PHISICS currently under development are a nodal and semi-structured transport core solver (INSTANT), a depletion module (MRTAU) and a cross section interpolation (MIXER) module. The INSTANT module is the most developed of those mentioned above. Basic functionalities are ready to use, but the code is still in continuous development to extend its capabilities. This paper reports on the effort of coupling the nodal kinetics code package PHISICS (INSTANT/MRTAU/MIXER) to the thermal hydraulics system code RELAP5-3D, to enable full core and system modeling. This will enable the possibility to model coupled (thermal-hydraulics and neutronics) problems with more options for 3D neutron kinetics, compared to the existing diffusion theory neutron kinetics module in RELAP5-3D (NESTLE). In the second part of the paper, an overview of the OECD/NEA MHTGR-350 MW benchmark is given. This benchmark has been approved by the OECD, and is based on the General Atomics 350 MW Modular High Temperature Gas Reactor (MHTGR) design. The benchmark includes coupled neutronics thermal hydraulics exercises that require more capabilities than RELAP5-3D with NESTLE offers. Therefore, the MHTGR benchmark makes extensive use of the new PHISICS/RELAP5-3D coupling capabilities. The paper presents the preliminary results of the three steady state exercises specified in Phase I of the benchmark using PHISICS/RELAP5-3D.
Watkinson, William; Raison, Nicholas; Abe, Takashige; Harrison, Patrick; Khan, Shamim; Van der Poel, Henk; Dasgupta, Prokar; Ahmed, Kamran
2018-05-01
To establish objective benchmarks at the level of a competent robotic surgeon across different exercises and metrics for the RobotiX Mentor virtual reality (VR) simulator suitable for use within a robotic surgical training curriculum. This retrospective observational study analysed results from multiple data sources, all of which used the RobotiX Mentor VR simulator. 123 participants with varying experience from novice to expert completed the exercises. Competency was established as the 25th centile of the mean advanced intermediate score. Three basic skill exercises and two advanced skill exercises were used. King's College London. 84 novices, 26 beginner intermediates, 9 advanced intermediates and 4 experts were included in this retrospective observational study. Objective benchmarks derived from the 25th centile of the mean scores of the advanced intermediates provided suitably challenging yet achievable targets for training surgeons. The disparity in scores was greatest for the advanced exercises. Novice surgeons are able to achieve the benchmarks across all exercises in the majority of metrics. We have successfully created this proof-of-concept study, which requires validation in a larger cohort. Objective benchmarks obtained from the 25th centile of the mean scores of advanced intermediates provide clinically relevant benchmarks at the standard of a competent robotic surgeon that are challenging yet attainable and that can be used within a VR training curriculum, allowing participants to track and monitor their progress in a structured and progressional manner through five exercises. This provides clearly defined targets, ensuring that a universal training standard is achieved across training surgeons. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Optimization of a solid-state electron spin qubit using Gate Set Tomography
Dehollain, Juan P.; Muhonen, Juha T.; Blume-Kohout, Robin J.; ...
2016-10-13
Here, state-of-the-art qubit systems are reaching the gate fidelities required for scalable quantum computation architectures. Further improvements in the fidelity of quantum gates demand characterization and benchmarking protocols that are efficient, reliable and extremely accurate. Ideally, a benchmarking protocol should also provide information on how to rectify residual errors. Gate Set Tomography (GST) is one such protocol designed to give detailed characterization of as-built qubits. We implemented GST on a high-fidelity electron-spin qubit confined by a single 31P atom in 28Si. The results reveal systematic errors that a randomized benchmarking analysis could measure but not identify, whereas GST indicated the need for improved calibration of the length of the control pulses. After introducing this modification, we measured a new benchmark average gate fidelity of 99.942(8)%, an improvement on the previous value of 99.90(2)%. Furthermore, GST revealed high levels of non-Markovian noise in the system, which will need to be understood and addressed when the qubit is used within a fault-tolerant quantum computation scheme.
2017-01-01
The authors use four criteria to examine a novel community detection algorithm: (a) effectiveness in terms of producing high values of normalized mutual information (NMI) and modularity, using well-known social networks for testing; (b) examination, meaning the ability to examine and mitigate resolution limit problems using NMI values and synthetic networks; (c) correctness, meaning the ability to identify useful community structure results in terms of NMI values and Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks; and (d) scalability, or the ability to produce comparable modularity values with fast execution times when working with large-scale real-world networks. In addition to describing a simple hierarchical arc-merging (HAM) algorithm that uses network topology information, we introduce rule-based arc-merging strategies for identifying community structures. Five well-studied social network datasets and eight sets of LFR benchmark networks were employed to validate correctness against ground-truth communities, eight large-scale real-world complex networks were used to measure efficiency, and two synthetic networks were used to determine susceptibility to two resolution limit problems. Our experimental results indicate that the proposed HAM algorithm exhibited satisfactory performance efficiency, and that HAM-identified and ground-truth communities were comparable in terms of social and LFR benchmark networks, while mitigating resolution limit problems. PMID:29121100
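The two effectiveness measures named above, NMI and modularity, can be computed with standard libraries. In the Python sketch below, networkx's greedy modularity communities stand in for the HAM algorithm, whose implementation is not given in the abstract.

```python
# Minimal sketch of the two effectiveness measures named above, computed with
# standard libraries (networkx's greedy modularity communities stand in for
# the HAM algorithm, whose implementation is not shown here).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity
from sklearn.metrics import normalized_mutual_info_score

G = nx.karate_club_graph()                       # well-studied social network
detected = greedy_modularity_communities(G)      # list of node sets
Q = modularity(G, detected)

# ground-truth labels from the karate-club split, detected labels by community
truth = [0 if G.nodes[v]["club"] == "Mr. Hi" else 1 for v in G]
labels = {v: i for i, comm in enumerate(detected) for v in comm}
pred = [labels[v] for v in G]
nmi = normalized_mutual_info_score(truth, pred)

print(f"modularity = {Q:.3f}, NMI vs. ground truth = {nmi:.3f}")
```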
NASA Astrophysics Data System (ADS)
Koscheev, Vladimir; Manturov, Gennady; Pronyaev, Vladimir; Rozhikhin, Evgeny; Semenov, Mikhail; Tsibulya, Anatoly
2017-09-01
Several k∞ experiments were performed on the KBR critical facility at the Institute of Physics and Power Engineering (IPPE), Obninsk, Russia during the 1970s and 80s for study of neutron absorption properties of Cr, Mn, Fe, Ni, Zr, and Mo. Calculations of these benchmarks with almost any modern evaluated nuclear data libraries demonstrate bad agreement with the experiment. Neutron capture cross sections of the odd isotopes of Cr, Mn, Fe, and Ni in the ROSFOND-2010 library have been reevaluated and another evaluation of the Zr nuclear data has been adopted. Use of the modified nuclear data for Cr, Mn, Fe, Ni, and Zr leads to significant improvement of the C/E ratio for the KBR assemblies. Also a significant improvement in agreement between calculated and evaluated values for benchmarks with Fe reflectors was observed. C/E results obtained with the modified ROSFOND library for complex benchmark models that are highly sensitive to the cross sections of structural materials are no worse than results obtained with other major evaluated data libraries. Possible improvement in results by decreasing the capture cross section for Zr and Mo at the energies above 1 keV is indicated.
Flexible missile autopilot design studies with PC-MATLAB/386
NASA Technical Reports Server (NTRS)
Ruth, Michael J.
1989-01-01
Development of a responsive, high-bandwidth missile autopilot for airframes which have structural modes of unusually low frequency presents a challenging design task. Such systems are viable candidates for modern, state-space control design methods. The PC-MATLAB interactive software package provides an environment well-suited to the development of candidate linear control laws for flexible missile autopilots. The strengths of MATLAB include: (1) exceptionally high speed (MATLAB's version for 80386-based PC's offers benchmarks approaching minicomputer and mainframe performance); (2) ability to handle large design models of several hundred degrees of freedom, if necessary; and (3) broad extensibility through user-defined functions. To characterize MATLAB capabilities, a simplified design example is presented. This involves interactive definition of an observer-based state-space compensator for a flexible missile autopilot design task. MATLAB capabilities and limitations, in the context of this design task, are then summarized.
Rare-earth nickelates RNiO3: thin films and heterostructures
NASA Astrophysics Data System (ADS)
Catalano, S.; Gibert, M.; Fowlie, J.; Íñiguez, J.; Triscone, J.-M.; Kreisel, J.
2018-04-01
This review stands in the larger framework of functional materials by focussing on heterostructures of rare-earth nickelates, described by the chemical formula RNiO3 where R is a trivalent rare-earth R = La, Pr, Nd, Sm, …, Lu. Nickelates are characterized by a rich phase diagram of structural and physical properties and serve as a benchmark for the physics of phase transitions in correlated oxides where electron–lattice coupling plays a key role. Much of the recent interest in nickelates concerns heterostructures, that is single layers of thin film, multilayers or superlattices, with the general objective of modulating their physical properties through strain control, confinement or interface effects. We will discuss the extensive studies on nickelate heterostructures as well as outline different approaches to tuning and controlling their physical properties and, finally, review application concepts for future devices.
Rare-earth nickelates RNiO3: thin films and heterostructures.
Catalano, S; Gibert, M; Fowlie, J; Íñiguez, J; Triscone, J-M; Kreisel, J
2018-04-01
This review stands in the larger framework of functional materials by focussing on heterostructures of rare-earth nickelates, described by the chemical formula RNiO3 where R is a trivalent rare-earth R = La, Pr, Nd, Sm, …, Lu. Nickelates are characterized by a rich phase diagram of structural and physical properties and serve as a benchmark for the physics of phase transitions in correlated oxides where electron-lattice coupling plays a key role. Much of the recent interest in nickelates concerns heterostructures, that is single layers of thin film, multilayers or superlattices, with the general objective of modulating their physical properties through strain control, confinement or interface effects. We will discuss the extensive studies on nickelate heterostructures as well as outline different approaches to tuning and controlling their physical properties and, finally, review application concepts for future devices.
An investigation of potential applications of OP-SAPS: Operational sampled analog processors
NASA Technical Reports Server (NTRS)
Parrish, E. A.; Mcvey, E. S.
1976-01-01
The impact of charge-coupled device (CCD) processors on future instrumentation was investigated. The CCD devices studied process sampled analog data and are referred to as OP-SAPS - operational sampled analog processors. Preliminary studies into various architectural configurations for systems composed of OP-SAPS show that they have potential in such diverse applications as pattern recognition and automatic control. It appears probable that OP-SAPS may be used to construct computing structures which can serve as special peripherals to large-scale computer complexes used in real time flight simulation. The research was limited to the following benchmark programs: (1) face recognition, (2) voice command and control, (3) terrain classification, and (4) terrain identification. A small amount of effort was spent on examining a method by which OP-SAPS may be used to decrease the limiting ground sampling distance encountered in remote sensing from satellites.
Pasler, Marlies; Kaas, Jochem; Perik, Thijs; Geuze, Job; Dreindl, Ralf; Künzler, Thomas; Wittkamper, Frits; Georg, Dietmar
2015-12-01
To systematically evaluate machine-specific quality assurance (QA) for volumetric modulated arc therapy (VMAT) based on log files by applying a dynamic benchmark plan. A VMAT benchmark plan was created and tested on 18 Elekta linacs (13 MLCi or MLCi2, 5 Agility) at 4 different institutions. Linac log files were analyzed and a delivery robustness index was introduced. For dosimetric measurements an ionization chamber array was used. Relative dose deviations were assessed by mean gamma for each control point and compared to the log file evaluation. Fourteen linacs delivered the VMAT benchmark plan, while 4 linacs failed by consistently terminating the delivery. The mean leaf error (±1 SD) was 0.3±0.2 mm for all linacs. Large MLC maximum errors up to 6.5 mm were observed at reversal positions. The delivery robustness index accounting for MLC position correction (0.8-1.0) correlated with delivery time (80-128 s) and depended on dose rate performance. Dosimetric evaluation indicated generally accurate plan reproducibility with γ(mean) (±1 SD) = 0.4±0.2 for 1 mm/1%. However, single control point analysis revealed larger deviations that corresponded well with the log file analysis. The designed benchmark plan helped identify linac-related malfunctions in dynamic mode for VMAT. Log files serve as an important additional QA measure to understand and visualize dynamic linac parameters. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
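A minimal sketch of the log-file leaf-error summary is shown below. The array layout and synthetic data are assumptions for illustration; vendor log formats differ, and the delivery robustness index introduced in the paper is not reproduced here.

```python
# Sketch of a log-file leaf-error summary (array layout and synthetic data
# are assumptions; vendor log formats differ). `planned` and `actual` hold
# MLC leaf positions in mm, one row per control point.
import numpy as np

def leaf_error_summary(planned: np.ndarray, actual: np.ndarray):
    """Return mean, SD and maximum absolute leaf position error in mm."""
    err = np.abs(actual - planned)
    per_cp_max = err.max(axis=1)        # worst leaf at each control point
    return err.mean(), err.std(), per_cp_max.max()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    planned = rng.uniform(-100, 100, size=(90, 80))     # 90 CPs x 80 leaves
    actual = planned + rng.normal(0, 0.3, size=planned.shape)
    mean_e, sd_e, max_e = leaf_error_summary(planned, actual)
    print(f"mean {mean_e:.2f} mm, SD {sd_e:.2f} mm, max {max_e:.2f} mm")
```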
NASA Astrophysics Data System (ADS)
Yoon, Ilsang; Weinberg, Martin D.; Katz, Neal
2011-06-01
We introduce a new galaxy image decomposition tool, GALPHAT (GALaxy PHotometric ATtributes), which is a front-end application of the Bayesian Inference Engine (BIE), a parallel Markov chain Monte Carlo package, to provide full posterior probability distributions and reliable confidence intervals for all model parameters. The BIE relies on GALPHAT to compute the likelihood function. GALPHAT generates scale-free cumulative image tables for the desired model family with precise error control. Interpolation of this table yields accurate pixellated images with any centre, scale and inclination angle. GALPHAT then rotates the image by position angle using a Fourier shift theorem, yielding high-speed, accurate likelihood computation. We benchmark this approach using an ensemble of simulated Sérsic model galaxies over a wide range of observational conditions: the signal-to-noise ratio S/N, the ratio of galaxy size to the point spread function (PSF) and the image size, and errors in the assumed PSF; and a range of structural parameters: the half-light radius re and the Sérsic index n. We characterize the strength of parameter covariance in the Sérsic model, which increases with S/N and n, and the results strongly motivate the need for the full posterior probability distribution in galaxy morphology analyses and later inferences. The test results for simulated galaxies successfully demonstrate that, with a careful choice of Markov chain Monte Carlo algorithms and fast model image generation, GALPHAT is a powerful analysis tool for reliably inferring morphological parameters from a large ensemble of galaxies over a wide range of different observational conditions.
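For reference, the Sérsic surface-brightness profile underlying the simulated galaxies has the standard form shown below.

```latex
% Standard Sersic surface-brightness profile:
I(r) = I_e \exp\!\left\{ -b_n \left[ \left( \frac{r}{r_e} \right)^{1/n} - 1 \right] \right\} ,
% where r_e is the half-light radius, n the Sersic index, I_e the surface
% brightness at r_e, and b_n is fixed by requiring r_e to enclose half the
% total light (b_n \approx 2n - 1/3 for moderate n).
```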
ERIC Educational Resources Information Center
Lindo, Endia J.; Weiser, Beverly; Cheatham, Jennifer P.; Allor, Jill H.
2018-01-01
This study examines the effectiveness of minimally trained tutors providing a highly structured tutoring intervention for struggling readers. We screened students in Grades K-6 for participation in an after-school tutoring program. We randomly assigned those students not meeting the benchmark on a reading screening measure to either a tutoring…
Escobar, Gabriel J; Baker, Jennifer M; Turk, Benjamin J; Draper, David; Liu, Vincent; Kipnis, Patricia
2017-01-01
Introduction This article is not a traditional research report. It describes how conducting a specific set of benchmarking analyses led us to broader reflections on hospital benchmarking. We reexamined an issue that has received far less attention from researchers than in the past: How variations in the hospital admission threshold might affect hospital rankings. Considering this threshold made us reconsider what benchmarking is and what future benchmarking studies might be like. Although we recognize that some of our assertions are speculative, they are based on our reading of the literature and previous and ongoing data analyses being conducted in our research unit. We describe the benchmarking analyses that led to these reflections. Objectives The Centers for Medicare and Medicaid Services' Hospital Compare Web site includes data on fee-for-service Medicare beneficiaries but does not control for severity of illness, which requires physiologic data now available in most electronic medical records. To address this limitation, we compared hospital processes and outcomes among Kaiser Permanente Northern California's (KPNC) Medicare Advantage beneficiaries and non-KPNC California Medicare beneficiaries between 2009 and 2010. Methods We assigned a simulated severity of illness measure to each record and explored the effect of having the additional information on outcomes. Results We found that if the admission severity of illness in non-KPNC hospitals increased, KPNC hospitals' mortality performance would appear worse; conversely, if admission severity at non-KPNC hospitals decreased, KPNC hospitals' performance would appear better. Conclusion Future hospital benchmarking should consider the impact of variation in admission thresholds. PMID:29035176
Transonic Flutter Suppression Control Law Design, Analysis and Wind-Tunnel Results
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1999-01-01
The Benchmark Active Controls Technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper presents the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using classical and minimax techniques are described. A unified general formulation and solution for the minimax approach, based on steady-state differential game theory, is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws, when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas media, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified under highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.
Information-Theoretic Benchmarking of Land Surface Models
NASA Astrophysics Data System (ADS)
Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong
2016-04-01
Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. We here extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and boundary conditions, where the boundary conditions describe the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed about 40%. There was relatively little difference between the different models. 1. G. Abramowitz, R. Leuning, M. Clark, A. Pitman, Evaluating the performance of land surface models. Journal of Climate 21, (2008). 2. W. Gong, H. V. Gupta, D. Yang, K. Sricharan, A. O. Hero, Estimating Epistemic & Aleatory Uncertainties During Hydrologic Modeling: An Information Theoretic Approach. Water Resources Research 49, 2253-2273 (2013). 3. G. S. Nearing, H. V. Gupta, The quantity and quality of information in hydrologic models. Water Resources Research 51, 524-538 (2015). 4. H. V. Gupta, G. S. Nearing, Using models and data to learn: A systems theoretic perspective on the future of hydrological science. Water Resources Research 50(6), 5351-5359 (2014). 5. H. V. Gupta et al., Large-sample hydrology: a need to balance depth with breadth. Hydrology and Earth System Sciences Discussions 10, 9147-9189 (2013).
Lessons Learned over Four Benchmark Exercises from the Community Structure-Activity Resource
Carlson, Heather A.
2016-01-01
Preparing datasets and analyzing the results is difficult and time-consuming, and I hope the points raised here will help other scientists avoid some of the thorny issues we wrestled with. PMID:27345761
High-Strength Composite Fabric Tested at Structural Benchmark Test Facility
NASA Technical Reports Server (NTRS)
Krause, David L.
2002-01-01
Large sheets of ultrahigh strength fabric were put to the test at NASA Glenn Research Center's Structural Benchmark Test Facility. The material was stretched like a snare drum head until the last ounce of strength was reached, when it burst with a cacophonous release of tension. Along the way, the 3-ft square samples were also pulled, warped, tweaked, pinched, and yanked to predict the material's physical reactions to the many loads that it will experience during its proposed use. The material tested was a unique multi-ply composite fabric, reinforced with fibers that had a tensile strength eight times that of common carbon steel. The fiber plies were oriented at 0° and 90° to provide great membrane stiffness, as well as at 45° to provide an unusually high resistance to shear distortion. The fabric's heritage is in astronaut space suits and other NASA programs.
Nema, Vijay; Pal, Sudhir Kumar
2013-01-01
This study was conducted to find the best suited freely available software for modelling of proteins by taking a few sample proteins. The proteins used ranged from small to large in size, with crystal structures available for the purpose of benchmarking. Key players like Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-V2, and Modweb were used for the comparison and model generation. The benchmarking process was done for four proteins, Icl, InhA, and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus, to get the most suited software. Parameters compared during analysis gave relatively better values for Phyre2 and Swiss-Model. This comparative study gave the information that Phyre2 and Swiss-Model make good models of small and large proteins as compared to the other screened software. The other software was also good but was often less efficient at providing full-length and properly folded structures.
NASA Technical Reports Server (NTRS)
Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Jost, Gabriele
2004-01-01
In this paper we describe the parallelization of the multi-zone code versions of the NAS Parallel Benchmarks employing multi-level OpenMP parallelism. For our study we use the NanosCompiler, which supports nesting of OpenMP directives and provides clauses to control the grouping of threads, load balancing, and synchronization. We report the benchmark results, compare the timings with those of different hybrid parallelization paradigms, and discuss OpenMP implementation issues which affect the performance of multi-level parallel applications.
Benchmark Problems for Space Mission Formation Flying
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Leitner, Jesse A.; Folta, David C.; Burns, Richard
2003-01-01
To provide a high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested for space mission formation flying. The problems cover formation flying in low-altitude, near-circular Earth orbit; high-altitude, highly elliptical Earth orbits; and large-amplitude Lissajous trajectories about the co-linear libration points of the Sun-Earth/Moon system. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions that are of interest to various agencies.
Better Medicare Cost Report data are needed to help hospitals benchmark costs and performance.
Magnus, S A; Smith, D G
2000-01-01
To evaluate costs and achieve cost control in the face of new technology and demands for efficiency from both managed care and governmental payers, hospitals need to benchmark their costs against those of other comparable hospitals. Since they typically use Medicare Cost Report (MCR) data for this purpose, a variety of cost accounting problems with the MCR may hamper hospitals' understanding of their relative costs and performance. Managers and researchers alike need to investigate the validity, accuracy, and timeliness of the MCR's cost accounting data.
Benchmark tests of JENDL-3.2 for thermal and fast reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takano, Hideki; Akie, Hiroshi; Kikuchi, Yasuyuki
1994-12-31
Benchmark calculations for a variety of thermal and fast reactors have been performed by using the newly evaluated JENDL-3 Version-2 (JENDL-3.2) file. In the thermal reactor calculations for the uranium and plutonium fueled cores of TRX and TCA, the k_eff and lattice parameters were well predicted. The fast reactor calculations for the ZPPR-9 and FCA assemblies showed that the k_eff, the reactivity worths of Doppler, sodium void, and control rod, and the reaction rate distributions were in very good agreement with the experiments.
NASA Astrophysics Data System (ADS)
Yasuda, K.; Tadokoro, K.; Ikuta, R.; Watanabe, T.; Nagai, S.; Sayanagi, K.
2013-12-01
Observation of seafloor crustal deformation is crucial for understanding megathrust earthquakes because most of the focal areas are located below the seafloor. Seafloor crustal deformation can be observed with the GPS/Acoustic technique, and this technique has been carried out at subduction margins in Japan, e.g., the Japan Trench, Suruga Trough, and Nankai Trough. At present, the accuracy of seafloor positioning is one to several centimeters for each epoch. Velocity vectors at seafloor sites are estimated through repeated observations. Co- and post-seismic slip distributions and interseismic deformation have been estimated from the results of seafloor geodetic measurements (e.g., Iinuma et al., 2012; Tadokoro et al., 2012). We repeatedly observed seafloor crustal deformation at two sites across the Suruga Trough from 2005 to investigate the interplate locking condition at the focal area of the anticipated megathrust Tokai earthquake. We made 12 and 16 observations at an east site (SNE) and a west site (SNW) of the Suruga Trough, respectively. We reinstalled the seafloor benchmarks at both sites in 2012 because their batteries had run out, and we calculated and removed the bias between the old and new seafloor benchmarks. Furthermore, we evaluated two types of analysis. One is the Fixed Triangular configuration Analysis (FTA): when determining the seafloor benchmark position, we fix the triangular configuration of the seafloor units, averaging all the measurements, to mitigate the trade-off between the seafloor benchmark position and the sound speed structure; the sound speed structure is assumed to be horizontally layered. The other is the Fixed Triangle and Gradient analysis (FTGA), in which the triangular configuration is fixed as in the FTA but the sound speed structure is assumed to have a gradient. Comparing the FTA with the FTGA, the RMS of the horizontal position analyzed with the FTA is smaller than that with the FTGA at the SNE site; on the other hand, it is larger at the SNW site. We estimated the displacement velocities relative to the Amurian plate from the results of the repeated observations. The estimated displacement velocity vectors at SNE and SNW are 42 ± 8 mm/y in the N94W direction and 46 ± 13 mm/y in the N77W direction, respectively. The directions are the same as those measured at the on-land GPS stations. The magnitudes of the velocity vectors indicate significant shortening of approximately 11 mm/y between SNW and the on-land GPS stations in the western part of the Suruga Trough. We also calculated the theoretical surface deformation pattern to depict the interplate locking condition. These results show that the plate interface at the shallow zone of the northernmost Suruga Trough is strongly locked.
NASA Technical Reports Server (NTRS)
Waszak, Martin R.; Fung, Jimmy
1998-01-01
This report describes the development of transfer function models for the trailing-edge and upper and lower spoiler actuators of the Benchmark Active Control Technology (BACT) wind tunnel model for application to control system analysis and design. A simple nonlinear least-squares parameter estimation approach is applied to determine transfer function parameters from frequency response data. Unconstrained quasi-Newton minimization of weighted frequency response error was employed to estimate the transfer function parameters. An analysis of the behavior of the actuators over time to assess the effects of wear and aerodynamic load by using the transfer function models is also presented. The frequency responses indicate consistent actuator behavior throughout the wind tunnel test and only slight degradation in effectiveness due to aerodynamic hinge loading. The resulting actuator models have been used in design, analysis, and simulation of controllers for the BACT to successfully suppress flutter over a wide range of conditions.
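As an illustration of the kind of fitting procedure summarized above, the following hedged sketch estimates second-order transfer-function parameters (gain, natural frequency, damping ratio) by unconstrained quasi-Newton minimization of a weighted frequency-response error; the synthetic "measured" response and all numerical values are assumptions, not the BACT actuator data:

```python
import numpy as np
from scipy.optimize import minimize

w = np.logspace(0, 2, 200)                               # frequency grid, rad/s

def tf(p, w):
    # second-order transfer function k*wn^2 / (s^2 + 2*zeta*wn*s + wn^2) at s = j*w
    k, wn, zeta = p
    return k * wn**2 / (-(w**2) + 2j * zeta * wn * w + wn**2)

rng = np.random.default_rng(1)
meas = tf([1.0, 40.0, 0.6], w) + 0.01 * (rng.normal(size=w.size) + 1j * rng.normal(size=w.size))

weight = 1.0 / np.abs(meas)                              # weight toward relative error
cost = lambda p: float(np.sum((weight * np.abs(tf(p, w) - meas)) ** 2))

fit = minimize(cost, x0=[0.5, 30.0, 0.3], method="BFGS") # unconstrained quasi-Newton minimization
print("estimated [k, wn, zeta]:", fit.x)                 # should recover roughly [1.0, 40.0, 0.6]
```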
Seismic Response Control Of Structures Using Semi-Active and Passive Variable Stiffness Devices
NASA Astrophysics Data System (ADS)
Salem, Mohamed M. A.
Controllable devices such as Magneto-Rheological Fluid Dampers, Electro-Rheological Dampers, and controllable friction devices have been studied extensively with limited implementation in real structures. Such devices have shown great potential in reducing seismic demands, either as smart base isolation systems, or as smart devices for multistory structures. Although variable stiffness devices can be used for seismic control of structures, the vast majority of research effort has been given to the control of damping. The primary focus of this dissertation is to evaluate the seismic control of structures using semi-active and passive variable stiffness characteristics. Smart base isolation systems employing variable stiffness devices have been studied, and two semi-active control strategies are proposed. The control algorithms were designed to reduce the superstructure and base accelerations of seismically isolated structures subject to near-fault and far-field ground motions. Computational simulations of the proposed control algorithms on the benchmark structure have shown that excessive base displacements associated with the near-fault ground motions may be better mitigated with the use of variable stiffness devices. However, the device properties must be controllable to produce a wide range of stiffness changes for an effective control of the base displacements. The potential of controllable stiffness devices in limiting the base displacement due to near-fault excitation without compromising the performance of conventionally isolated structures, is illustrated. The application of passive variable stiffness devices for seismic response mitigation of multistory structures is also investigated. A stiffening bracing system (SBS) is proposed to replace the conventional bracing systems of braced frames. An optimization process for the SBS parameters has been developed. The main objective of the design process is to maintain a uniform inter-story drift angle over the building's height, which in turn would evenly distribute the seismic demand over the building. This behavior is particularly essential so that any possible damage is not concentrated in a single story. Furthermore, the proposed design ensures that additional damping devices distributed over the building's height work efficiently with their maximum design capacity, leading to a cost efficient design. An integrated and comprehensive design procedure that can be readily adopted by the current seismic design codes is proposed. An equivalent lateral force distribution is developed that shows a good agreement with the response history analyses in terms of seismic performance and demand prediction. This lateral force pattern explicitly accounts for the higher mode effect, the dynamic characteristics of the structure, the supplemental damping, and the site specific seismic hazard. Therefore, the proposed design procedure is considered as a standalone method for the design of SBS equipped buildings.
A solid reactor core thermal model for nuclear thermal rockets
NASA Astrophysics Data System (ADS)
Rider, William J.; Cappiello, Michael W.; Liles, Dennis R.
1991-01-01
A Helium/Hydrogen Cooled Reactor Analysis (HERA) computer code has been developed. HERA has the ability to model arbitrary geometries in three dimensions, which allows the user to easily analyze reactor cores constructed of prismatic graphite elements. The code accounts for heat generation in the fuel, control rods, and other structures; conduction and radiation across gaps; convection to the coolant; and a variety of boundary conditions. The numerical solution scheme has been optimized for vector computers, making long transient analyses economical. Time integration is either explicit or implicit, which allows the model to accurately calculate both short- and long-term transients with an efficient use of computer time. Both the basic spatial and temporal integration schemes have been benchmarked against analytical solutions.
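The benchmarking of a transient conduction solver against an analytical solution, mentioned at the end of the abstract, can be illustrated with a minimal sketch: a 1-D explicit finite-difference scheme with made-up material properties and boundary conditions (this is not the HERA code):

```python
import numpy as np

L, alpha, nx = 1.0, 1e-4, 51                # rod length (m), thermal diffusivity (m^2/s), grid points
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / alpha                    # satisfies the explicit stability limit
T = np.sin(np.pi * x / L)                   # initial temperature profile, ends held at zero

t_end, t = 200.0, 0.0
while t < t_end:
    # explicit (FTCS) update of the interior nodes
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    t += dt

exact = np.sin(np.pi * x / L) * np.exp(-alpha * (np.pi / L)**2 * t)
print("max abs error vs analytical solution:", np.max(np.abs(T - exact)))
```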
Hierarchical ferroelectric and ferrotoroidic polarizations coexistent in nano-metamaterials
Shimada, Takahiro; Lich, Le Van; Nagano, Koyo; Wang, Jie; Kitamura, Takayuki
2015-01-01
Tailoring materials to obtain unique, or significantly enhanced, material properties through rationally designed structures rather than chemical constituents is the principle of the metamaterial concept, which has led to the realization of remarkable optical and mechanical properties. Inspired by the recent progress in electromagnetic and mechanical metamaterials, here we introduce the concept of ferroelectric nano-metamaterials and demonstrate it through an in silico experiment on hierarchical nanostructures of ferroelectrics using sophisticated real-space phase-field techniques. This new concept enables a variety of unusual and complex yet controllable domain patterns to be achieved, where the coexistence of hierarchical ferroelectric and ferrotoroidic polarizations establishes a new benchmark for the exploration of complexity in spontaneous polarization ordering. The concept opens a novel route to effectively tailor domain configurations through the control of internal structure, facilitating access to stabilization and control of complex domain patterns that offer high potential for novel functionalities. A key design parameter to achieve such complex patterns is explored based on the parity of the junctions that connect the constituent nanostructures. We further highlight the variety of additional functionalities that can potentially be obtained from ferroelectric nano-metamaterials, and provide promising perspectives for novel multifunctional devices. This study proposes an entirely new discipline of ferroelectric nano-metamaterials, further driving advances in metamaterials research. PMID:26424484
Schaub, Michael T.; Delvenne, Jean-Charles; Yaliraki, Sophia N.; Barahona, Mauricio
2012-01-01
In recent years, there has been a surge of interest in community detection algorithms for complex networks. A variety of computational heuristics, some with a long history, have been proposed for the identification of communities or, alternatively, of good graph partitions. In most cases, the algorithms maximize a particular objective function, thereby finding the ‘right’ split into communities. Although a thorough comparison of algorithms is still lacking, there has been an effort to design benchmarks, i.e., random graph models with known community structure against which algorithms can be evaluated. However, popular community detection methods and benchmarks normally assume an implicit notion of community based on clique-like subgraphs, a form of community structure that is not always characteristic of real networks. Specifically, networks that emerge from geometric constraints can have natural non clique-like substructures with large effective diameters, which can be interpreted as long-range communities. In this work, we show that long-range communities escape detection by popular methods, which are blinded by a restricted ‘field-of-view’ limit, an intrinsic upper scale on the communities they can detect. The field-of-view limit means that long-range communities tend to be overpartitioned. We show how by adopting a dynamical perspective towards community detection [1], [2], in which the evolution of a Markov process on the graph is used as a zooming lens over the structure of the network at all scales, one can detect both clique- or non clique-like communities without imposing an upper scale to the detection. Consequently, the performance of algorithms on inherently low-diameter, clique-like benchmarks may not always be indicative of equally good results in real networks with local, sparser connectivity. We illustrate our ideas with constructive examples and through the analysis of real-world networks from imaging, protein structures and the power grid, where a multiscale structure of non clique-like communities is revealed. PMID:22384178
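A toy illustration of the 'field-of-view' limit discussed above, assuming networkx's greedy modularity maximization as a stand-in detection method: a long cycle graph is a single elongated, non clique-like structure, yet modularity-based detection splits it into several pieces (the graph size is arbitrary and the example is not from the paper):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

ring = nx.cycle_graph(100)                       # one long-range, non clique-like structure
parts = greedy_modularity_communities(ring)
print(f"modularity maximization partitions the ring into {len(parts)} communities")
```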
2013-01-01
Background While a large body of work exists on comparing and benchmarking descriptors of molecular structures, a similar comparison of protein descriptor sets is lacking. Hence, in the current work a total of 13 amino acid descriptor sets have been benchmarked with respect to their ability to establish bioactivity models. The descriptor sets included in the study are Z-scales (3 variants), VHSE, T-scales, ST-scales, MS-WHIM, FASGAI, BLOSUM, a novel protein descriptor set (termed ProtFP (4 variants)), and in addition we created and benchmarked three pairs of descriptor combinations. Prediction performance was evaluated in seven structure-activity benchmarks which comprise Angiotensin Converting Enzyme (ACE) dipeptidic inhibitor data, and three proteochemometric data sets, namely (1) GPCR ligands modeled against a GPCR panel, (2) enzyme inhibitors (NNRTIs) with associated bioactivities against a set of HIV enzyme mutants, and (3) enzyme inhibitors (PIs) with associated bioactivities on a large set of HIV enzyme mutants. Results The amino acid descriptor sets compared here show similar performance (<0.1 log units RMSE difference and <0.1 difference in MCC), while errors for individual proteins were in some cases found to be larger than those resulting from descriptor set differences (>0.3 log units RMSE difference and >0.7 difference in MCC). Combining different descriptor sets generally leads to better modeling performance than utilizing individual sets. The best performers were Z-scales (3) combined with ProtFP (Feature), or Z-scales (3) combined with an average Z-scale value for each target, while ProtFP (PCA8), ST-scales, and ProtFP (Feature) rank last. Conclusions While amino acid descriptor sets capture different aspects of amino acids, their ability to be used for bioactivity modeling is still, on average, surprisingly similar. Still, combining sets that describe complementary information consistently leads to a small but consistent improvement in modeling performance (average MCC 0.01 better, average RMSE 0.01 log units lower). Finally, performance differences exist between the targets compared, thereby underlining that choosing an appropriate descriptor set is of fundamental importance for bioactivity modeling, both from the ligand as well as the protein side. PMID:24059743
Jacquin, Hugo; Gilson, Amy; Shakhnovich, Eugene; Cocco, Simona; Monasson, Rémi
2016-05-01
Inverse statistical approaches to determine protein structure and function from Multiple Sequence Alignments (MSA) are emerging as powerful tools in computational biology. However, the underlying assumptions about the relationship between the inferred effective Potts Hamiltonian and real protein structure and energetics remain untested so far. Here we use a lattice protein (LP) model to benchmark these inverse statistical approaches. We build MSAs of highly stable sequences in target LP structures, and infer the effective pairwise Potts Hamiltonians from those MSAs. We find that inferred Potts Hamiltonians reproduce many important aspects of 'true' LP structures and energetics. Careful analysis reveals that effective pairwise couplings in inferred Potts Hamiltonians depend not only on the energetics of the native structure but also on competing folds; in particular, the coupling values reflect both positive design (stabilization of the native conformation) and negative design (destabilization of competing folds). In addition to providing detailed structural information, the inferred Potts models used as protein Hamiltonians for the design of new sequences are able to generate, with high probability, completely new sequences with the desired folds, which is not possible using independent-site models. These are remarkable results, as the effective LP Hamiltonians used to generate the MSAs are not simple pairwise models due to the competition between the folds. Our findings elucidate the reasons for the success of inverse approaches to the modelling of proteins from sequence data, and their limitations.
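For readers unfamiliar with inverse statistical inference, the following hedged sketch shows one common flavour of it (naive mean-field inversion of the covariance matrix of a toy binary "alignment"); it is only a conceptual stand-in and not necessarily the inference scheme used by the authors, and the planted covariation is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_seq, L = 5000, 8
msa = rng.integers(0, 2, size=(n_seq, L)).astype(float)   # toy binary MSA
copy = rng.random(n_seq) < 0.9                             # plant strong covariation between sites 3 and 5
msa[copy, 5] = msa[copy, 3]

C = np.cov(msa.T) + 1e-3 * np.eye(L)                       # regularized covariance matrix
J = -np.linalg.inv(C)                                      # mean-field estimate of pairwise couplings
np.fill_diagonal(J, 0.0)

i, j = np.unravel_index(np.argmax(np.abs(J)), J.shape)
print(f"strongest inferred coupling: sites {i} and {j}")   # expect sites 3 and 5
```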
LYRA, a webserver for lymphocyte receptor structural modeling.
Klausen, Michael Schantz; Anderson, Mads Valdemar; Jespersen, Martin Closter; Nielsen, Morten; Marcatili, Paolo
2015-07-01
The accurate structural modeling of B- and T-cell receptors is fundamental to gain a detailed insight in the mechanisms underlying immunity and in developing new drugs and therapies. The LYRA (LYmphocyte Receptor Automated modeling) web server (http://www.cbs.dtu.dk/services/LYRA/) implements a complete and automated method for building of B- and T-cell receptor structural models starting from their amino acid sequence alone. The webserver is freely available and easy to use for non-specialists. Upon submission, LYRA automatically generates alignments using ad hoc profiles, predicts the structural class of each hypervariable loop, selects the best templates in an automatic fashion, and provides within minutes a complete 3D model that can be downloaded or inspected online. Experienced users can manually select or exclude template structures according to case specific information. LYRA is based on the canonical structure method, that in the last 30 years has been successfully used to generate antibody models of high accuracy, and in our benchmarks this approach proves to achieve similarly good results on TCR modeling, with a benchmarked average RMSD accuracy of 1.29 and 1.48 Å for B- and T-cell receptors, respectively. To the best of our knowledge, LYRA is the first automated server for the prediction of TCR structure. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Technical Reports Server (NTRS)
Radovcich, N. A.
1984-01-01
The design experience associated with a benchmark aeroelastic design of an out of production transport aircraft is discussed. Current work being performed on a high aspect ratio wing design is reported. The Preliminary Aeroelastic Design of Structures (PADS) system is briefly summarized and some operational aspects of generating the design in an automated aeroelastic design environment are discussed.
Lock, Nina; Jensen, Ellen M L; Mi, Jianli; Mamakhel, Aref; Norén, Katarina; Qingbo, Meng; Iversen, Bo B
2013-07-14
Metal functionalized nanoparticles potentially have improved properties e.g. in catalytic applications, but their precise structures are often very challenging to determine. Here we report a structural benchmark study based on tetragonal anatase TiO2 nanoparticles containing 0-2 wt% copper. The particles were synthesized by continuous flow synthesis under supercritical water-isopropanol conditions. Size determination using synchrotron PXRD, TEM, and X-ray total scattering reveals 5-7 nm monodisperse particles. The precise dopant structure and thermal stability of the highly crystalline powders were characterized by X-ray absorption spectroscopy and multi-temperature synchrotron PXRD (300-1000 K). The combined evidence reveals that copper is present as a dopant on the particle surfaces, most likely in an amorphous oxide or hydroxide shell. UV-VIS spectroscopy shows that copper presence at concentrations higher than 0.3 wt% lowers the band gap energy. The particles are unaffected by heating to 600 K, while growth and partial transformation to rutile TiO2 occur at higher temperatures. Anisotropic unit cell behavior of anatase is observed as a consequence of the particle growth (a decreases and c increases).
NASA Astrophysics Data System (ADS)
Yang, Xudong; Sun, Lingyu; Zhang, Cheng; Li, Lijun; Dai, Zongmiao; Xiong, Zhenkai
2018-03-01
The application of polymer composites as a substitute for metal is an effective approach to reducing vehicle weight. However, the final performance of composite structures is determined not only by the material types, structural designs, and manufacturing process, but also by their mutual constraints. Hence, an integrated "material-structure-process-performance" method is proposed for the conceptual and detail design of composite components. The material selection is based on principles of composite mechanics, such as the rule of mixtures for laminates. The design of the component geometry, dimensions, and stacking sequence is determined by parametric modeling and size optimization. The selection of process parameters is based on multi-physical-field simulation. The stiffness and modal constraint conditions were obtained from the numerical analysis of the metal benchmark under typical load conditions. The optimal design was found by multi-discipline optimization. Finally, the proposed method was validated by an application case of an automotive hatchback using carbon fiber reinforced polymer. Compared with the metal benchmark, the weight of the composite component is reduced by 38.8%, while its torsional and bending stiffness increase by 3.75% and 33.23%, respectively, and the first natural frequency increases by 44.78%.
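A minimal sketch of the ply-level rule of mixtures mentioned above, with assumed (typical, not the paper's) carbon/epoxy property values:

```python
# Rule of mixtures for a unidirectional ply: longitudinal and transverse moduli.
E_f, E_m = 230.0, 3.5          # fibre and matrix Young's moduli, GPa (assumed typical values)
V_f = 0.6                      # fibre volume fraction (assumed)

E_1 = V_f * E_f + (1 - V_f) * E_m                 # rule of mixtures (longitudinal direction)
E_2 = 1.0 / (V_f / E_f + (1 - V_f) / E_m)         # inverse rule of mixtures (transverse direction)
print(f"E_1 = {E_1:.1f} GPa, E_2 = {E_2:.1f} GPa")
```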
The National Practice Benchmark for Oncology: 2015 Report for 2014 Data
Balch, Carla; Ogle, John D.
2016-01-01
The National Practice Benchmark (NPB) is a unique tool used to measure oncology practices against others across the country in a meaningful way despite variations in practice demographics, size, and setting. In today’s challenging economic environment, each practice positions service offerings and competitive advantages to attract patients. Although the data in the NPB report are primarily reported by community oncology practices, the business structure and arrangements with regional health care systems are also reflected in the benchmark report. The ability to produce detailed metrics is an accomplishment of excellence in business and clinical management. With these metrics, a practice should be able to measure and analyze its current business practices and make appropriate changes, if necessary. In this report, we build on the foundation initially established by Oncology Metrics (acquired by Flatiron Health in 2014) over years of data collection and refine definitions to deliver the NPB, which is uniquely meaningful in the oncology market. PMID:27006357
Docking and scoring with ICM: the benchmarking results and strategies for improvement
Neves, Marco A. C.; Totrov, Maxim; Abagyan, Ruben
2012-01-01
Flexible docking and scoring using the Internal Coordinate Mechanics software (ICM) was benchmarked for ligand binding mode prediction against the 85 co-crystal structures in the modified Astex data set. The ICM virtual ligand screening was tested against the 40 DUD target benchmarks and the 11-target WOMBAT sets. The self-docking accuracy was evaluated for the top 1 and top 3 scoring poses at each ligand binding site, with near-native conformations below 2 Å RMSD found in 91% and 95% of the predictions, respectively. Virtual ligand screening using single rigid pocket conformations provided a median area under the ROC curve of 69.4, with 22.0% of true positives recovered at a 2% false positive rate. Significant improvements, up to ROC AUC = 82.2 and ROC(2%) = 45.2, were achieved following our best practices for flexible pocket refinement and out-of-pocket binding rescore. The virtual screening can be further improved by considering multiple conformations of the target. PMID:22569591
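The two screening metrics quoted above can be reproduced on synthetic data with a short sketch; the scores and active/decoy counts below are invented for illustration, not the ICM results:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y = np.r_[np.ones(40), np.zeros(2000)]                     # 40 actives among 2040 compounds (assumed)
scores = np.r_[rng.normal(1.5, 1.0, 40), rng.normal(0.0, 1.0, 2000)]

auc = roc_auc_score(y, scores)                             # area under the ROC curve
fpr, tpr, _ = roc_curve(y, scores)
tpr_at_2pct = np.interp(0.02, fpr, tpr)                    # true-positive rate at 2% false positives
print(f"ROC AUC = {100 * auc:.1f}, TPR at 2% FPR = {100 * tpr_at_2pct:.1f}%")
```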
Benchmarking a Soil Moisture Data Assimilation System for Agricultural Drought Monitoring
NASA Technical Reports Server (NTRS)
Hun, Eunjin; Crow, Wade T.; Holmes, Thomas; Bolten, John
2014-01-01
Despite considerable interest in the application of land surface data assimilation systems (LDAS) for agricultural drought applications, relatively little is known about the large-scale performance of such systems and, thus, the optimal methodological approach for implementing them. To address this need, this paper evaluates an LDAS for agricultural drought monitoring by benchmarking individual components of the system (i.e., a satellite soil moisture retrieval algorithm, a soil water balance model and a sequential data assimilation filter) against a series of linear models which perform the same function (i.e., have the same basic input/output structure) as the full system component. Benchmarking is based on the calculation of the lagged rank cross-correlation between the normalized difference vegetation index (NDVI) and soil moisture estimates acquired for various components of the system. Lagged soil moisture/NDVI correlations obtained using individual LDAS components versus their linear analogs reveal the degree to which non-linearities and/or complexities contained within each component actually contribute to the performance of the LDAS system as a whole. Here, a particular system based on surface soil moisture retrievals from the Land Parameter Retrieval Model (LPRM), a two-layer Palmer soil water balance model and an Ensemble Kalman filter (EnKF) is benchmarked. Results suggest significant room for improvement in each component of the system.
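A hedged sketch of the benchmarking metric described above, i.e., the lagged rank cross-correlation between a soil-moisture series and NDVI; the synthetic series and the planted lag are assumptions for illustration only, not the paper's data:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n, true_lag = 500, 8
sm = rng.normal(size=n).cumsum()                           # surrogate soil-moisture anomaly series
ndvi = np.roll(sm, true_lag) + 0.5 * rng.normal(size=n)    # vegetation responds with a delay

lags = range(20)
rho = [spearmanr(sm[:n - k], ndvi[k:])[0] for k in lags]   # lagged Spearman rank correlation
best = max(lags, key=lambda k: rho[k])
print(f"peak rank correlation {rho[best]:.2f} at lag {best}")
```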
MECHANICAL DESIGN CRITERIA FOR INTERVERTEBRAL DISC TISSUE ENGINEERING
Nerurkar, Nandan L.; Elliott, Dawn M.; Mauck, Robert L.
2009-01-01
Due to the inability of current clinical practices to restore function to degenerated intervertebral discs, the arena of disc tissue engineering has received substantial attention in recent years. Despite tremendous growth and progress in this field, translation to clinical implementation has been hindered by a lack of well-defined functional benchmarks. Because successful replacement of the disc is contingent upon replication of some or all of its complex mechanical behaviour, it is critically important that disc mechanics be well characterized in order to establish discrete functional goals for tissue engineering. In this review, the key functional signatures of the intervertebral disc are discussed and used to propose a series of native tissue benchmarks to guide the development of engineered replacement tissues. These benchmarks include measures of mechanical function under tensile, compressive and shear deformations for the disc and its substructures. In some cases, important functional measures are identified that have yet to be measured in the native tissue. Ultimately, native tissue benchmark values are compared to measurements that have been made on engineered disc tissues, identifying measures where functional equivalence was achieved, and others where there remain opportunities for advancement. Several excellent reviews exist regarding disc composition and structure, as well as recent tissue engineering strategies; therefore this review will remain focused on the functional aspects of disc tissue engineering. PMID:20080239
Benchmarking and Self-Assessment in the Wine Industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galitsky, Christina; Radspieler, Anthony; Worrell, Ernst
2005-12-01
Not all industrial facilities have the staff or the opportunity to perform a detailed audit of their operations. The lack of knowledge of energy efficiency opportunities provides an important barrier to improving efficiency. Benchmarking programs in the U.S. and abroad have been shown to improve knowledge of the energy performance of industrial facilities and buildings and to fuel energy management practices. Benchmarking provides a fair way to compare the energy intensity of plants, while accounting for structural differences (e.g., the mix of products produced, climate conditions) between different facilities. In California, the winemaking industry is not only one of the economic pillars of the economy; it is also a large energy consumer, with a considerable potential for energy-efficiency improvement. Lawrence Berkeley National Laboratory and Fetzer Vineyards developed the first benchmarking tool for the California wine industry, called "BEST (Benchmarking and Energy and water Savings Tool) Winery". BEST Winery enables a winery to compare its energy efficiency to a best practice reference winery. Besides overall performance, the tool enables the user to evaluate the impact of implementing efficiency measures. The tool facilitates strategic planning of efficiency measures, based on the estimated impact of the measures, their costs and savings. The tool will raise awareness of current energy intensities and offer an efficient way to evaluate the impact of future efficiency measures.
GoWeb: a semantic search engine for the life science web.
Dietze, Heiko; Schroeder, Michael
2009-10-01
Current search engines are keyword-based. Semantic technologies promise a next generation of semantic search engines, which will be able to answer questions. Current approaches either apply natural language processing to unstructured text or they assume the existence of structured statements over which they can reason. Here, we introduce a third approach, GoWeb, which combines classical keyword-based Web search with text-mining and ontologies to navigate large results sets and facilitate question answering. We evaluate GoWeb on three benchmarks of questions on genes and functions, on symptoms and diseases, and on proteins and diseases. The first benchmark is based on the BioCreAtivE 1 Task 2 and links 457 gene names with 1352 functions. GoWeb finds 58% of the functional GeneOntology annotations. The second benchmark is based on 26 case reports and links symptoms with diseases. GoWeb achieves 77% success rate improving an existing approach by nearly 20%. The third benchmark is based on 28 questions in the TREC genomics challenge and links proteins to diseases. GoWeb achieves a success rate of 79%. GoWeb's combination of classical Web search with text-mining and ontologies is a first step towards answering questions in the biomedical domain. GoWeb is online at: http://www.gopubmed.org/goweb.
Benchmarking reference services: step by step.
Buchanan, H S; Marshall, J G
1996-01-01
This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.
Code of Federal Regulations, 2012 CFR
2012-07-01
.... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...
Code of Federal Regulations, 2014 CFR
2014-07-01
.... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...
Code of Federal Regulations, 2010 CFR
2010-07-01
.... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...
Code of Federal Regulations, 2013 CFR
2013-07-01
.... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...
Code of Federal Regulations, 2011 CFR
2011-07-01
.... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...
Develop applications based on android: Teacher Engagement Control of Health (TECH)
NASA Astrophysics Data System (ADS)
Sasmoko; Manalu, S. R.; Widhoyoko, S. A.; Indrianti, Y.; Suparto
2018-03-01
The physical and psychological condition of teachers is very important because it helps determine the realization of a positive and productive school climate and allows them to practice their profession optimally. This research extends earlier work on the design of the ITEI application, which profiles teacher engagement in Indonesia; to optimize these conditions, an application is needed that can detect the health of teachers both physically and psychologically. The research method used is the neuroresearch method combined with the IT system design for TECH, which includes the server design, the database, and the Android TECH application interface. The study yielded 1) mental health benchmarks, 2) physical health benchmarks, and 3) the design of the Android application Teacher Engagement Control of Health (TECH).
Does neighborhood size really cause the word length effect?
Guitard, Dominic; Saint-Aubin, Jean; Tehan, Gerald; Tolan, Anne
2018-02-01
In short-term serial recall, it is well-known that short words are remembered better than long words. This word length effect has been the cornerstone of the working memory model and a benchmark effect that all models of immediate memory should account for. Currently, there is no consensus as to what determines the word length effect. Jalbert and colleagues (Jalbert, Neath, Bireta, & Surprenant, 2011a; Jalbert, Neath, & Surprenant, 2011b) suggested that neighborhood size is one causal factor. In six experiments we systematically examined their suggestion. In Experiment 1, with an immediate serial recall task, multiple word lengths, and a large pool of words controlled for neighborhood size, the typical word length effect was present. In Experiments 2 and 3, with an order reconstruction task and words with either many or few neighbors, we observed the typical word length effect. In Experiment 4 we tested the hypothesis that the previous abolition of the word length effect when neighborhood size was controlled was due to a confounded factor: frequency of orthographic structure. As predicted, we reversed the word length effect when using short words with less frequent orthographic structures than the long words, as was done in both of Jalbert et al.'s studies. In Experiments 5 and 6, we again observed the typical word length effect, even if we controlled for neighborhood size and frequency of orthographic structure. Overall, the results were not consistent with the predictions of Jalbert et al. and clearly showed a large and reliable word length effect after controlling for neighborhood size.
Zheng, Heping; Shabalin, Ivan G.; Handing, Katarzyna B.; Bujnicki, Janusz M.; Minor, Wladek
2015-01-01
The ubiquitous presence of magnesium ions in RNA has long been recognized as a key factor governing RNA folding, and is crucial for many diverse functions of RNA molecules. In this work, Mg2+-binding architectures in RNA were systematically studied using a database of RNA crystal structures from the Protein Data Bank (PDB). Due to the abundance of poorly modeled or incorrectly identified Mg2+ ions, the set of all sites was comprehensively validated and filtered to identify a benchmark dataset of 15 334 ‘reliable’ RNA-bound Mg2+ sites. The normalized frequencies by which specific RNA atoms coordinate Mg2+ were derived for both the inner and outer coordination spheres. A hierarchical classification system of Mg2+ sites in RNA structures was designed and applied to the benchmark dataset, yielding a set of 41 types of inner-sphere and 95 types of outer-sphere coordinating patterns. This classification system has also been applied to describe six previously reported Mg2+-binding motifs and detect them in new RNA structures. Investigation of the most populous site types resulted in the identification of seven novel Mg2+-binding motifs, and all RNA structures in the PDB were screened for the presence of these motifs. PMID:25800744
An Integrated Development Environment for Adiabatic Quantum Programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; McCaskey, Alex; Bennink, Ryan S
2014-01-01
Adiabatic quantum computing is a promising route to the computational power afforded by quantum information processing. The recent availability of adiabatic hardware raises the question of how well quantum programs perform. Benchmarking behavior is challenging since the multiple steps to synthesize an adiabatic quantum program are highly tunable. We present an adiabatic quantum programming environment called JADE that provides control over all the steps taken during program development. JADE captures the workflow needed to rigorously benchmark performance while also allowing a variety of problem types, programming techniques, and processor configurations. We have also integrated JADE with a quantum simulation engine that enables program profiling using numerical calculation. The computational engine supports plug-ins for simulation methodologies tailored to various metrics and computing resources. We present the design, integration, and deployment of JADE and discuss its use for benchmarking adiabatic quantum programs.
Benchmarking Memory Performance with the Data Cube Operator
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Shabanov, Leonid V.
2004-01-01
Data movement across a computer memory hierarchy and across computational grids is known to be a limiting factor for applications processing large data sets. We use the Data Cube Operator on an Arithmetic Data Set, called ADC, to benchmark the capabilities of computers and of computational grids to handle large distributed data sets. We present a prototype implementation of a parallel algorithm for computation of the operator. The algorithm follows a known approach for computing views from the smallest parent. The ADC stresses all levels of grid memory and storage by producing some of the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of integers. We control the data intensity of the ADC by selecting the tuple parameters, the sizes of the views, and the number of realized views. Benchmarking results of memory performance of a number of computer architectures and of a small computational grid are presented.
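As a small illustration of the Data Cube operator described above, the following sketch materializes all 2^d group-by views of a tiny table; the toy data and column names are assumptions, not the ADC specification:

```python
from itertools import combinations
import pandas as pd

df = pd.DataFrame({"a": [1, 1, 2, 2], "b": [0, 1, 0, 1], "c": [3, 3, 4, 4],
                   "measure": [10, 20, 30, 40]})
dims = ["a", "b", "c"]

views = {}
for r in range(len(dims) + 1):
    for keys in combinations(dims, r):
        if keys:
            views[keys] = df.groupby(list(keys))["measure"].sum()   # one group-by view per dimension subset
        else:
            views[()] = df["measure"].sum()                          # the apex view: grand total
print(f"materialized {len(views)} of the 2^{len(dims)} views")
```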
Passivity-based Robust Control of Aerospace Systems
NASA Technical Reports Server (NTRS)
Kelkar, Atul G.; Joshi, Suresh M. (Technical Monitor)
2000-01-01
This report provides a brief summary of the research work performed over the duration of the cooperative research agreement between NASA Langley Research Center and Kansas State University. The cooperative agreement, which was originally for a duration of three years, was extended by another year through a no-cost extension in order to accomplish the goals of the project. The main objective of the research was to develop passivity-based robust control methodology for passive and non-passive aerospace systems. The focus of the first year's research was limited to the investigation of passivity-based methods for the robust control of Linear Time-Invariant (LTI) single-input single-output (SISO), open-loop stable, minimum-phase non-passive systems. The second year's focus was mainly on extending the passivity-based methodology to a larger class of non-passive LTI systems which includes unstable and nonminimum-phase SISO systems. For LTI non-passive systems, five different passification methods were developed. The primary effort during years three and four was on the development of passification methodology for MIMO systems, development of methods for checking robustness of passification, and developing synthesis techniques for passifying compensators. For passive LTI systems, an optimal synthesis procedure was also developed for the design of constant-gain positive-real controllers. For nonlinear passive systems, a numerical optimization-based technique was developed for the synthesis of constant as well as time-varying gain positive-real controllers. The passivity-based control design methodology developed during the duration of this project was demonstrated by its application to various benchmark examples. These example systems included a longitudinal model of an F-18 High Alpha Research Vehicle (HARV) for pitch axis control, NASA's supersonic transport wind tunnel model, the ACC benchmark model, a 1-D acoustic duct model, a piezo-actuated flexible link model, and NASA's Benchmark Active Controls Technology (BACT) Wing model. Some of the stability results for linear passive systems were also extended to nonlinear passive systems. Several publications and conference presentations resulted from this research.
Global structure of forked DNA in solution revealed by high-resolution single-molecule FRET.
Sabir, Tara; Schröder, Gunnar F; Toulmin, Anita; McGlynn, Peter; Magennis, Steven W
2011-02-09
Branched DNA structures play critical roles in DNA replication, repair, and recombination in addition to being key building blocks for DNA nanotechnology. Here we combine single-molecule multiparameter fluorescence detection and molecular dynamics simulations to give a general approach to global structure determination of branched DNA in solution. We reveal an open, planar structure of a forked DNA molecule with three duplex arms and demonstrate an ion-induced conformational change. This structure will serve as a benchmark for DNA-protein interaction studies.
Nuclear power plant digital system PRA pilot study with the dynamic flow-graph methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yau, M.; Motamed, M.; Guarro, S.
2006-07-01
Current Probabilistic Risk Assessment (PRA) methodology is well established in analyzing hardware and some of the key human interactions. However, processes for analyzing the software functions of digital systems within a plant PRA framework, and accounting for the digital system contribution to the overall risk, are not generally available nor are they well understood and established. A recent study reviewed a number of methodologies that have potential applicability to modeling and analyzing digital systems within a PRA framework. This study identified the Dynamic Flow-graph Methodology (DFM) and the Markov Methodology as the most promising tools. As a result of this study, a task was defined under the framework of a collaborative agreement between the U.S. Nuclear Regulatory Commission (NRC) and the Ohio State Univ. (OSU). The objective of this task is to set up benchmark systems representative of digital systems used in nuclear power plants and to evaluate DFM and the Markov methodology with these benchmark systems. The first benchmark system is a typical Pressurized Water Reactor (PWR) Steam Generator (SG) Feedwater System (FWS) level control system based on an earlier ASCA work with the U.S. NRC, upgraded with modern control laws. ASCA, Inc. is currently under contract to OSU to apply DFM to this benchmark system. The goal is to investigate the feasibility of using DFM to analyze and quantify digital system risk, and to integrate the DFM analytical results back into the plant event tree/fault tree PRA model. (authors)
NASA Technical Reports Server (NTRS)
Waszak, Martin R.
1996-01-01
This paper describes the formulation of a model of the dynamic behavior of the Benchmark Active Controls Technology (BACT) wind-tunnel model for application to design and analysis of flutter suppression controllers. The model is formed by combining the equations of motion for the BACT wind-tunnel model with actuator models and a model of wind-tunnel turbulence. The primary focus of this paper is the development of the equations of motion from first principles using Lagrange's equations and the principle of virtual work. A numerical form of the model is generated using values for parameters obtained from both experiment and analysis. A unique aspect of the BACT wind-tunnel model is that it has upper- and lower-surface spoilers for active control. Comparisons with experimental frequency responses and other data show excellent agreement and suggest that simple coefficient-based aerodynamics are sufficient to accurately characterize the aeroelastic response of the BACT wind-tunnel model. The equations of motion developed herein have been used to assist the design and analysis of a number of flutter suppression controllers that have been successfully implemented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Ran; Feldman, David; Margolis, Robert
NREL has been modeling U.S. photovoltaic (PV) system costs since 2009. This year, our report benchmarks costs of U.S. solar PV for residential, commercial, and utility-scale systems built in the first quarter of 2017 (Q1 2017). Costs are represented from the perspective of the developer/installer, thus all hardware costs represent the price at which components are purchased by the developer/installer, not accounting for preexisting supply agreements or other contracts. Importantly, the benchmark this year (2017) also represents the sales price paid to the installer; therefore, it includes profit in the cost of the hardware, along with the profit the installer/developer receives, as a separate cost category. However, it does not include any additional net profit, such as a developer fee or price gross-up, which are common in the marketplace. We adopt this approach owing to the wide variation in developer profits in all three sectors, where project pricing is highly dependent on region and project specifics such as local retail electricity rate structures, local rebate and incentive structures, competitive environment, and overall project or deal structures.
Wildenhain, Jan; Spitzer, Michaela; Dolma, Sonam; Jarvik, Nick; White, Rachel; Roy, Marcia; Griffiths, Emma; Bellows, David S.; Wright, Gerard D.; Tyers, Mike
2016-01-01
The network structure of biological systems suggests that effective therapeutic intervention may require combinations of agents that act synergistically. However, a dearth of systematic chemical combination datasets has limited the development of predictive algorithms for chemical synergism. Here, we report two large datasets of linked chemical-genetic and chemical-chemical interactions in the budding yeast Saccharomyces cerevisiae. We screened 5,518 unique compounds against 242 diverse yeast gene deletion strains to generate an extended chemical-genetic matrix (CGM) of 492,126 chemical-gene interaction measurements. This CGM dataset contained 1,434 genotype-specific inhibitors, termed cryptagens. We selected 128 structurally diverse cryptagens and tested all pairwise combinations to generate a benchmark dataset of 8,128 pairwise chemical-chemical interaction tests for synergy prediction, termed the cryptagen matrix (CM). An accompanying database resource called ChemGRID was developed to enable analysis, visualisation and downloads of all data. The CGM and CM datasets will facilitate the benchmarking of computational approaches for synergy prediction, as well as chemical structure-activity relationship models for anti-fungal drug discovery. PMID:27874849
Augmented neural networks and problem structure-based heuristics for the bin-packing problem
NASA Astrophysics Data System (ADS)
Kasap, Nihat; Agarwal, Anurag
2012-08-01
In this article, we report on a research project where we applied the augmented-neural-networks (AugNN) approach to solving the classical bin-packing problem (BPP). AugNN is a metaheuristic that combines a priority rule heuristic with the iterative search approach of neural networks to generate good solutions fast. This is the first time this approach has been applied to the BPP. We also propose a decomposition approach for solving harder BPP instances, in which subproblems are solved using a combination of the AugNN approach and heuristics that exploit the problem structure. We discuss the characteristics of problems to which such problem structure-based heuristics could be applied. We empirically show the effectiveness of the AugNN and the decomposition approach on many benchmark problems from the literature. For the 1210 benchmark problems tested, 917 problems were solved to optimality, the average gap between the obtained solution and the upper bound for all the problems was reduced to under 0.66%, and computation time averaged below 33 s per problem. We also discuss the computational complexity of our approach.
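To make the "priority rule heuristic" component concrete, here is a short sketch of first-fit decreasing, a classical priority-rule baseline for the BPP; it only illustrates the flavour of heuristic that AugNN augments and is not the authors' implementation (item sizes and capacity are arbitrary):

```python
def first_fit_decreasing(items, capacity):
    bins = []                                   # remaining capacity of each open bin
    packing = []                                # item indices assigned to each bin
    order = sorted(range(len(items)), key=lambda i: -items[i])   # largest items first
    for i in order:
        for b, free in enumerate(bins):
            if items[i] <= free:                # place the item in the first bin it fits
                bins[b] -= items[i]
                packing[b].append(i)
                break
        else:                                   # no open bin fits: open a new one
            bins.append(capacity - items[i])
            packing.append([i])
    return packing

items = [4, 8, 1, 4, 2, 1, 7, 3, 6, 5]
print(len(first_fit_decreasing(items, capacity=10)), "bins used")
```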
Sousa, Filipa L; Parente, Daniel J; Shis, David L; Hessman, Jacob A; Chazelle, Allen; Bennett, Matthew R; Teichmann, Sarah A; Swint-Kruse, Liskin
2016-02-22
Protein families evolve functional variation by accumulating point mutations at functionally important amino acid positions. Homologs in the LacI/GalR family of transcription regulators have evolved to bind diverse DNA sequences and allosteric regulatory molecules. In addition to playing key roles in bacterial metabolism, these proteins have been widely used as a model family for benchmarking structural and functional prediction algorithms. We have collected manually curated sequence alignments for >3000 sequences, in vivo phenotypic and biochemical data for >5750 LacI/GalR mutational variants, and noncovalent residue contact networks for 65 LacI/GalR homolog structures. Using this rich data resource, we compared the noncovalent residue contact networks of the LacI/GalR subfamilies to design and experimentally validate an allosteric mutant of a synthetic LacI/GalR repressor for use in biotechnology. The AlloRep database (freely available at www.AlloRep.org) is a key resource for future evolutionary studies of LacI/GalR homologs and for benchmarking computational predictions of functional change. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Machnes, S.; Institute for Theoretical Physics, University of Ulm, D-89069 Ulm; Sander, U.
2011-08-15
For paving the way to novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. It is a typical task amenable to numerical optimal control to turn the time course of pulses, i.e., piecewise constant control amplitudes, iteratively into an optimized shape. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods which update all controls concurrently, and Krotov-type methods which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient matlab-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods as well as subspace choices. Open-source code including examples is made available at http://qlib.info.
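A deliberately simplified, GRAPE-flavoured sketch of the piecewise-constant pulse optimization described above; it uses finite-difference gradients and a toy qubit state-transfer problem, so it is purely conceptual and far less capable than DYNAMO or production GRAPE/Krotov codes (all Hamiltonians, step sizes, and iteration counts are assumptions):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H0, Hc = 0.5 * sz, 0.5 * sx                                # drift and control Hamiltonians
n_slices, dt = 20, 0.2
psi0 = np.array([1, 0], dtype=complex)
target = np.array([0, 1], dtype=complex)

def fidelity(u):
    psi = psi0.copy()
    for uk in u:                                           # piecewise-constant propagation
        psi = expm(-1j * dt * (H0 + uk * Hc)) @ psi
    return abs(np.vdot(target, psi)) ** 2

rng = np.random.default_rng(0)
u = rng.normal(0.0, 0.5, n_slices)                         # initial pulse guess
eps, lr = 1e-4, 1.0
for _ in range(300):                                       # crude gradient ascent on the pulse shape
    base = fidelity(u)
    grad = np.array([(fidelity(u + eps * np.eye(n_slices)[k]) - base) / eps
                     for k in range(n_slices)])
    u += lr * grad
print(f"state-transfer fidelity after optimization: {fidelity(u):.3f}")
```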
Control strategies for nitrous oxide emissions reduction on wastewater treatment plants operation.
Santín, I; Barbu, M; Pedret, C; Vilanova, R
2017-11-15
The present paper focuses on reducing greenhouse gas emissions in wastewater treatment plant operation by applying suitable control strategies. Specifically, the objective is to reduce nitrous oxide emissions during the nitrification process. Incomplete nitrification in the aerobic tanks can lead to an accumulation of nitrite that triggers nitrous oxide emissions. In order to avoid the peaks of nitrous oxide emissions, this paper proposes a cascade control configuration that manipulates the dissolved oxygen set-points in the aerobic tanks. This control strategy is combined with an ammonia cascade control already applied in the literature, with the objective of also taking into account effluent pollutants and operational costs. In addition, other greenhouse gas emission sources are also evaluated. Results have been obtained by simulation, using a modified version of Benchmark Simulation Model no. 2, which takes into account greenhouse gas emissions and is called Benchmark Simulation Model no. 2 Gas. The results show that the proposed control strategies reduce nitrous oxide emissions by 29.86% compared to the default control strategy, while maintaining a satisfactory trade-off between water quality and costs. Copyright © 2017 Elsevier Ltd. All rights reserved.
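A schematic sketch of the cascade idea described above, assuming toy first-order process models and made-up controller gains (this is not the BSM2 Gas implementation): an outer ammonia loop computes the dissolved-oxygen (DO) set-point and an inner DO loop computes the aeration intensity.

```python
import numpy as np

def pi_step(err, integ, kp, ki, dt):
    # one step of a discrete PI controller; returns (output, updated integral)
    integ += err * dt
    return kp * err + ki * integ, integ

dt, t_end = 0.01, 24.0                       # hours
nh4, do = 4.0, 1.0                           # states: ammonia (g N/m3) and dissolved oxygen (g O2/m3)
nh4_sp = 1.5                                 # ammonia set-point for the outer loop
i_outer = i_inner = 0.0

for k in range(int(t_end / dt)):
    load = 3.0 + (1.0 if 8.0 < k * dt < 12.0 else 0.0)      # influent ammonia load with a morning peak
    do_sp, i_outer = pi_step(nh4 - nh4_sp, i_outer, kp=0.5, ki=0.1, dt=dt)
    do_sp = float(np.clip(do_sp, 0.0, 3.0))                 # outer loop: ammonia error -> DO set-point
    kla, i_inner = pi_step(do_sp - do, i_inner, kp=50.0, ki=10.0, dt=dt)
    kla = float(np.clip(kla, 0.0, 300.0))                   # inner loop: DO error -> aeration intensity
    nh4 += dt * (load - 3.0 * nh4 * do / (do + 0.5))        # toy nitrification kinetics
    do += dt * (kla * (8.0 - do) / 8.0 - 30.0 * do / (do + 0.5))
print(f"end of day: NH4 = {nh4:.2f} g N/m3, DO = {do:.2f} g O2/m3")
```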
Yoga for military service personnel with PTSD: A single arm study.
Johnston, Jennifer M; Minami, Takuya; Greenwald, Deborah; Li, Chieh; Reinhardt, Kristen; Khalsa, Sat Bir S
2015-11-01
This study evaluated the effects of yoga on posttraumatic stress disorder (PTSD) symptoms, resilience, and mindfulness in military personnel. Participants completing the yoga intervention were 12 current or former military personnel who met the Diagnostic and Statistical Manual of Mental Disorders-Fourth Edition-Text Revision (DSM-IV-TR) diagnostic criteria for PTSD. Results were also benchmarked against other military intervention studies of PTSD using the Clinician Administered PTSD Scale (CAPS; Blake et al., 2000) as an outcome measure. Results of within-subject analyses supported the study's primary hypothesis that yoga would reduce PTSD symptoms (d = 0.768; t = 2.822; p = .009) but did not support the hypothesis that yoga would significantly increase mindfulness (d = 0.392; t = -0.950; p = .181) and resilience (d = 0.270; t = -1.220; p = .124) in this population. Benchmarking results indicated that, as compared with the aggregated treatment benchmark (d = 1.074) obtained from published clinical trials, the current study's treatment effect (d = 0.768) was visibly lower, and compared with the waitlist control benchmark (d = 0.156), the treatment effect in the current study was visibly higher. (c) 2015 APA, all rights reserved.
van de Streek, Jacco; Neumann, Marcus A
2014-12-01
In 2010 we energy-minimized 225 high-quality single-crystal (SX) structures with dispersion-corrected density functional theory (DFT-D) to establish a quantitative benchmark. For the current paper, 215 organic crystal structures determined from X-ray powder diffraction (XRPD) data and published in an IUCr journal were energy-minimized with DFT-D and compared to the SX benchmark. The on average slightly less accurate atomic coordinates of XRPD structures do lead to systematically higher root mean square Cartesian displacement (RMSCD) values upon energy minimization than for SX structures, but the RMSCD value is still a good indicator for the detection of structures that deserve a closer look. The upper RMSCD limit for a correct structure must be increased from 0.25 Å for SX structures to 0.35 Å for XRPD structures; the grey area must be extended from 0.30 to 0.40 Å. Based on the energy minimizations, three structures are re-refined to give more precise atomic coordinates. For six structures our calculations provide the missing positions for the H atoms, for five structures they provide corrected positions for some H atoms. Seven crystal structures showed a minor error for a non-H atom. For five structures the energy minimizations suggest a higher space-group symmetry. For the 225 SX structures, the only deviations observed upon energy minimization were three minor H-atom related issues. Preferred orientation is the most important cause of problems. A preferred-orientation correction is the only correction where the experimental data are modified to fit the model. We conclude that molecular crystal structures determined from powder diffraction data that are published in IUCr journals are of high quality, with less than 4% containing an error in a non-H atom.
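The thresholds quoted above can be read as a simple triage rule; the snippet below merely restates the 0.25/0.30 Å (single-crystal) and 0.35/0.40 Å (XRPD) limits from the abstract, with the wording of the categories being an illustrative choice.

```python
def rmscd_flag(rmscd_angstrom: float, from_powder: bool) -> str:
    """Classify an energy-minimized structure by its RMSCD, using the limits quoted above."""
    upper, grey = (0.35, 0.40) if from_powder else (0.25, 0.30)
    if rmscd_angstrom <= upper:
        return "likely correct"
    if rmscd_angstrom <= grey:
        return "grey area - deserves a closer look"
    return "suspect - re-examine the structure"

print(rmscd_flag(0.32, from_powder=True))   # within the XRPD limit
print(rmscd_flag(0.32, from_powder=False))  # beyond the single-crystal grey area
```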
van de Streek, Jacco; Neumann, Marcus A.
2014-01-01
In 2010 we energy-minimized 225 high-quality single-crystal (SX) structures with dispersion-corrected density functional theory (DFT-D) to establish a quantitative benchmark. For the current paper, 215 organic crystal structures determined from X-ray powder diffraction (XRPD) data and published in an IUCr journal were energy-minimized with DFT-D and compared to the SX benchmark. The on average slightly less accurate atomic coordinates of XRPD structures do lead to systematically higher root mean square Cartesian displacement (RMSCD) values upon energy minimization than for SX structures, but the RMSCD value is still a good indicator for the detection of structures that deserve a closer look. The upper RMSCD limit for a correct structure must be increased from 0.25 Å for SX structures to 0.35 Å for XRPD structures; the grey area must be extended from 0.30 to 0.40 Å. Based on the energy minimizations, three structures are re-refined to give more precise atomic coordinates. For six structures our calculations provide the missing positions for the H atoms, for five structures they provide corrected positions for some H atoms. Seven crystal structures showed a minor error for a non-H atom. For five structures the energy minimizations suggest a higher space-group symmetry. For the 225 SX structures, the only deviations observed upon energy minimization were three minor H-atom related issues. Preferred orientation is the most important cause of problems. A preferred-orientation correction is the only correction where the experimental data are modified to fit the model. We conclude that molecular crystal structures determined from powder diffraction data that are published in IUCr journals are of high quality, with less than 4% containing an error in a non-H atom. PMID:25449625
Benchmarking protein-protein interface predictions: why you should care about protein size.
Martin, Juliette
2014-07-01
A number of predictive methods have been developed to predict protein-protein binding sites. Each new method is traditionally benchmarked using sets of protein structures of various sizes, and global statistics are used to assess the quality of the prediction. Little attention has been paid to the potential bias due to protein size on these statistics. Indeed, small proteins involve proportionally more residues at interfaces than large ones. If a predictive method is biased toward small proteins, this can lead to an over-estimation of its performance. Here, we investigate the bias due to the size effect when benchmarking protein-protein interface prediction on the widely used docking benchmark 4.0. First, we simulate random scores that favor small proteins over large ones. Instead of the 0.5 AUC (Area Under the Curve) value expected by chance, these biased scores result in an AUC equal to 0.6 using hypergeometric distributions, and up to 0.65 using constant scores. We then use real prediction results to illustrate how to detect the size bias by shuffling, and subsequently correct it using a simple conversion of the scores into normalized ranks. In addition, we investigate the scores produced by eight published methods and show that they are all affected by the size effect, which can change their relative ranking. The size effect also has an impact on linear combination scores by modifying the relative contributions of each method. In the future, systematic corrections should be applied when benchmarking predictive methods using data sets with mixed protein sizes. © 2014 Wiley Periodicals, Inc.
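A minimal sketch of both points made above: pooled scores carrying a size-dependent offset inflate the AUC beyond the 0.5 expected by chance, while converting scores to per-protein normalized ranks removes the effect. The protein sizes, interface fractions, and score model are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
sizes = rng.integers(50, 500, size=200)          # residue counts per protein (assumed)

labels, biased, normalized = [], [], []
for n in sizes:
    n_int = max(1, int(0.4 * n ** 0.7))          # smaller proteins -> larger interface fraction
    y = np.zeros(n)
    y[rng.choice(n, n_int, replace=False)] = 1   # interface residues placed at random
    s = rng.random(n) + 5.0 / np.sqrt(n)         # uninformative score with a size-dependent offset
    r = (np.argsort(np.argsort(s)) + 1) / n      # per-protein normalized ranks drop the offset
    labels.append(y); biased.append(s); normalized.append(r)

labels = np.concatenate(labels)
print("pooled AUC, size-biased scores:   ", round(roc_auc_score(labels, np.concatenate(biased)), 3))
print("pooled AUC, normalized-rank scores:", round(roc_auc_score(labels, np.concatenate(normalized)), 3))
```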
Multirate flutter suppression system design for the Benchmark Active Controls Technology Wing
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.
1994-01-01
To study the effectiveness of various control system design methodologies, the NASA Langley Research Center initiated the Benchmark Active Controls Project. In this project, the various methodologies will be applied to design a flutter suppression system for the Benchmark Active Controls Technology (BACT) Wing (also called the PAPA wing). Eventually, the designs will be implemented in hardware and tested on the BACT wing in a wind tunnel. This report describes a project at the University of Washington to design a multirate flutter suppression system for the BACT wing. The objective of the project was twofold: first, to develop a methodology for designing robust multirate compensators, and second, to demonstrate the methodology by applying it to the design of a multirate flutter suppression system for the BACT wing. The contributions of this project are (1) development of an algorithm for synthesizing robust low-order multirate control laws (the algorithm is capable of synthesizing a single compensator which stabilizes both the nominal plant and multiple plant perturbations); (2) development of a multirate design methodology, and supporting software, for modeling, analyzing and synthesizing multirate compensators; and (3) design of a multirate flutter suppression system for NASA's BACT wing which satisfies the specified design criteria. This report describes each of these contributions in detail. Section 2.0 discusses our design methodology. Section 3.0 details the results of our multirate flutter suppression system design for the BACT wing. Finally, Section 4.0 presents our conclusions and suggestions for future research. The body of the report focuses primarily on the results. The associated theoretical background appears in the three technical papers that are included as Attachments 1-3. Attachment 4 is a user's manual for the software that is key to our design methodology.
Oncology practice trends from the national practice benchmark.
Barr, Thomas R; Towle, Elaine L
2012-09-01
In 2011, we made predictions on the basis of data from the National Practice Benchmark (NPB) reports from 2005 through 2010. With the new 2011 data in hand, we have revised last year's predictions and projected for the next 3 years. In addition, we make some new predictions that will be tracked in future benchmarking surveys. We also outline a conceptual framework for contemplating these data based on an ecological model of the oncology delivery system. The 2011 NPB data are consistent with last year's prediction of a decrease in the operating margins necessary to sustain a community oncology practice. With the new data in, we now predict these reductions to occur more slowly than previously forecast. We note an easing of the squeeze observed in last year's trend analysis, which will allow more time for practices to adapt their business models for survival and offer the best of these practices an opportunity to invest earnings into operations to prepare for the inevitable shift away from historic payment methodology for clinical service. This year, survey respondents reported changes in business structure, first measured in the 2010 data, indicating an increase in the percentage of respondents who believe that change is coming soon, but the majority still have confidence in the viability of their existing business structure. Although oncology practices are in for a bumpy ride, things are looking less dire this year for practices participating in our survey.
HDOCK: a web server for protein–protein and protein–DNA/RNA docking based on a hybrid strategy
Yan, Yumeng; Zhang, Di; Zhou, Pei; Li, Botong
2017-01-01
Protein–protein and protein–DNA/RNA interactions play a fundamental role in a variety of biological processes. Determining the complex structures of these interactions is valuable, and molecular docking has played an important role in doing so. To automatically make use of the binding information from the PDB in docking, here we have presented HDOCK, a novel web server of our hybrid docking algorithm of template-based modeling and free docking, in which cases with misleading templates can be rescued by the free docking protocol. The server supports protein–protein and protein–DNA/RNA docking and accepts both sequence and structure inputs for proteins. The docking process is fast and consumes about 10–20 min for a docking run. Tested on cases with weakly homologous complexes of <30% sequence identity from five docking benchmarks, the HDOCK pipeline tied with template-based modeling on the protein–protein and protein–DNA benchmarks and performed better than template-based modeling on the three protein–RNA benchmarks when the top 10 predictions were considered. The performance of HDOCK became better when more predictions were considered. Combining the results of HDOCK and template-based modeling by ranking the template-based model first further improved the predictive power of the server. The HDOCK web server is available at http://hdock.phys.hust.edu.cn/. PMID:28521030
Designing and benchmarking the MULTICOM protein structure prediction system
2013-01-01
Background: Predicting protein structure from sequence is one of the most significant and challenging problems in bioinformatics. Numerous bioinformatics techniques and tools have been developed to tackle almost every aspect of protein structure prediction, ranging from structural feature prediction, template identification and query-template alignment to structure sampling, model quality assessment, and model refinement. How to synergistically select, integrate and improve the strengths of the complementary techniques at each prediction stage and build a high-performance system is becoming a critical issue for constructing a successful, competitive protein structure predictor. Results: Over the past several years, we have constructed a standalone protein structure prediction system MULTICOM that combines multiple sources of information and complementary methods at all five stages of the protein structure prediction process including template identification, template combination, model generation, model assessment, and model refinement. The system was blindly tested during the ninth Critical Assessment of Techniques for Protein Structure Prediction (CASP9) in 2010 and yielded very good performance. In addition to studying the overall performance on the CASP9 benchmark, we thoroughly investigated the performance and contributions of each component at each stage of prediction. Conclusions: Our comprehensive and comparative study not only provides useful and practical insights about how to select, improve, and integrate complementary methods to build a cutting-edge protein structure prediction system but also identifies a few new sources of information that may help improve the design of a protein structure prediction system. Several components used in the MULTICOM system are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/. PMID:23442819
Chaput, Ludovic; Martinez-Sanz, Juan; Saettel, Nicolas; Mouawad, Liliane
2016-01-01
In a structure-based virtual screening, the choice of the docking program is essential for the success of hit identification. Benchmarks are meant to help in guiding this choice, especially when undertaken on a large variety of protein targets. Here, the performance of four popular virtual screening programs, Gold, Glide, Surflex and FlexX, is compared using the Directory of Useful Decoys-Enhanced database (DUD-E), which includes 102 targets with an average of 224 ligands per target and 50 decoys per ligand, generated to avoid biases in the benchmarking. Then, a relationship between these program performances and the properties of the targets or the small molecules was investigated. The comparison was based on two metrics, with three different parameters each. The BEDROC scores with α = 80.5 indicated that, on the overall database, Glide succeeded (score > 0.5) for 30 targets, Gold for 27, FlexX for 14 and Surflex for 11. The performance depended neither on the hydrophobicity nor on the openness of the protein cavities, nor on the families to which the proteins belong. However, despite the care in the construction of the DUD-E database, the small differences that remain between the actives and the decoys likely explain the successes of Gold, Surflex and FlexX. Moreover, the similarity between the actives of a target and its crystal structure ligand seems to be at the basis of the good performance of Glide. When all targets with significant biases are removed from the benchmarking, a subset of 47 targets remains, for which Glide succeeded for only 5 targets, Gold for 4, and FlexX and Surflex for 2. The dramatic performance drop of all four programs when the biases are removed shows that we should beware of virtual screening benchmarks, because good performances may be due to wrong reasons. Therefore, benchmarking would hardly provide guidelines for virtual screening experiments, despite the tendency that is maintained, i.e., Glide and Gold display better performance than FlexX and Surflex. We recommend always using several programs and combining their results. Graphical Abstract: Summary of the results obtained by virtual screening with the four programs, Glide, Gold, Surflex and FlexX, on the 102 targets of the DUD-E database. The percentage of targets with successful results, i.e., with BEDROC(α = 80.5) > 0.5, is shown in blue when the entire database is considered and in red when targets with biased chemical libraries are removed.
NASA Astrophysics Data System (ADS)
Rimov, A. A.; Chukanova, T. I.; Trofimov, Yu. V.
2016-12-01
Data on comparative quality analysis (benchmarking) of power installations applied in the power industry are systematized. It is shown that the most efficient way to implement the benchmarking technique is to analyze the statistical distributions of the indicators within a composed homogeneous group of uniform power installations. In furtherance of this approach, a benchmarking technique is developed that is aimed at revealing the available reserves for improving the reliability and heat-efficiency indicators of power installations at thermal power plants. The technique provides a possibility of reliable comparison of the quality of the power installations in their homogeneous group, limited as it is in number, and of adopting an adequate decision on improving particular technical characteristics of a given power installation. The technique provides a structured list of the comparison indicators and of the internal factors affecting them, represented according to the requirements of the sectoral standards and taking into account the price-formation characteristics of the Russian power industry. This structuring ensures traceability of the reasons for deviation of the internal influencing factors from the specified values. The starting point for a further detailed analysis of the lag of a particular power installation behind best practice, expressed in specific monetary terms, is the positioning of this power installation on the distribution of the key indicator, which is a convolution of the comparison indicators. The distribution of the key indicator is simulated by the Monte Carlo method after obtaining the actual distributions of the comparison indicators: specific lost profit due to the short supply of electric energy and short delivery of power, specific cost of losses due to non-optimal expenditures for repairs, and specific cost of excess fuel-equivalent consumption. Quality-loss indicators are developed to facilitate the analysis of the benchmarking results, permitting the quality loss of a power installation to be represented as the difference between the actual value of the key indicator or a comparison indicator and the best quartile of the existing distribution. The uncertainty of the obtained values of the quality-loss indicators was evaluated by transforming the standard uncertainties of the input values into the expanded uncertainties of the output values with a confidence level of 95%. The efficiency of the technique is demonstrated by benchmarking the main thermal and mechanical equipment of the T-250 extraction power-generating units and power installations of thermal power plants with a main steam pressure of 130 atm.
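A rough sketch of the Monte Carlo convolution and the "quality loss relative to the best quartile" idea described above; the distributions, the simple additive convolution, and the unit counts are illustrative assumptions rather than the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_mc = 40, 10_000

# Comparison indicators for a homogeneous group of units (all in specific monetary terms, assumed):
lost_profit   = rng.lognormal(mean=1.0, sigma=0.4, size=n_units)  # short supply of energy/power
repair_losses = rng.lognormal(mean=0.5, sigma=0.5, size=n_units)  # non-optimal repair expenditures
excess_fuel   = rng.lognormal(mean=0.8, sigma=0.3, size=n_units)  # excess fuel-equivalent use

key_indicator = lost_profit + repair_losses + excess_fuel  # convolution taken as a plain sum (assumed)

# Monte Carlo simulation of the key indicator's distribution within the group
samples = (rng.choice(lost_profit, (n_mc,))
           + rng.choice(repair_losses, (n_mc,))
           + rng.choice(excess_fuel, (n_mc,)))
best_quartile = np.percentile(samples, 25)  # lower values are better

quality_loss = np.maximum(key_indicator - best_quartile, 0.0)
print("unit 0 quality loss vs best quartile:", quality_loss[0])
```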
Noise Reduction Retrofit for a "New Look" Flexible Transit Bus Service Bulletin
DOT National Transportation Integrated Search
1980-09-01
This document presents instructions on how to apply a noise treatment to a contemporary city transit bus without extensive structural alteration. Baseline bus configuration, noise ratings, and performance benchmarks are presented for a Flexible 111DC...
2006-06-01
response (time domain) structural vibration model for mistuned rotor bladed disk based on the efficient SNM model has been developed. The vibration...airfoil and 3D wing, unsteady vortex shedding of a stationary cylinder, induced vibration of a cylinder, forced vibration of a pitching airfoil, induced... vibration and flutter boundary of 2D NACA 64A010 transonic airfoil, 3D plate wing structural response. The predicted results agree well with benchmark
Cavitation, Flow Structure and Turbulence in the Tip Region of a Rotor Blade
NASA Technical Reports Server (NTRS)
Wu, H.; Miorini, R.; Soranna, F.; Katz, J.; Michael, T.; Jessup, S.
2010-01-01
Objectives: Measure the flow structure and turbulence within a naval axial waterjet pump. Create a database for benchmarking and validation of parallel computational efforts. Address flow and turbulence modeling issues that are unique to this complex environment. Measure and model flow phenomena affecting cavitation within the pump and its effect on pump performance. This presentation focuses on cavitation phenomena and associated flow structure in the tip region of a rotor blade.
Innately Split Model for Job-shop Scheduling Problem
NASA Astrophysics Data System (ADS)
Ikeda, Kokolo; Kobayashi, Sigenobu
Job-shop Scheduling Problem (JSP) is one of the most difficult benchmark problems. GA approaches often fail to find the global optimum because of the deceptive UV-structure of JSPs. In this paper, we introduce a novel framework model of GA, the Innately Split Model (ISM), which prevents the UV-phenomenon, and discuss its power in particular. Next we analyze the structure of JSPs with the help of the UV-structure hypothesis, and finally we show ISM's excellent performance on JSPs.
Adaptive time stepping for fluid-structure interaction solvers
Mayr, M.; Wall, W. A.; Gee, M. W.
2017-12-22
In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additional cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.
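The abstract does not reproduce the estimator itself, but the general pattern of error-based step-size selection it relies on can be sketched as a standard proportional controller with accept/reject logic; the safety factor and growth limits below are conventional illustrative choices, not the authors' FSI-specific scheme.

```python
def adapt_dt(dt, err_estimate, tol, order, safety=0.9, grow=2.0, shrink=0.2):
    """Return (step_accepted, new_dt) from an a posteriori error estimate of a step of given order."""
    if err_estimate <= 0.0:
        return True, dt * grow
    factor = safety * (tol / err_estimate) ** (1.0 / (order + 1))
    factor = min(grow, max(shrink, factor))
    return err_estimate <= tol, dt * factor

# usage: retry the step with the smaller dt if it was rejected
accepted, dt = adapt_dt(dt=1e-3, err_estimate=4e-6, tol=1e-6, order=2)
print(accepted, dt)
```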
Adaptive time stepping for fluid-structure interaction solvers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayr, M.; Wall, W. A.; Gee, M. W.
In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additional cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.
Active control of bright electron beams with RF optics for femtosecond microscopy
Williams, J.; Zhou, F.; Sun, T.; ...
2017-08-01
A frontier challenge in implementing femtosecond electron microscopy is to gain precise optical control of intense beams to mitigate collective space charge effects for significantly improving the throughput. In this paper, we explore the flexible uses of an RF cavity as a longitudinal lens in a high-intensity beam column for condensing the electron beams both temporally and spectrally, relevant to the design of ultrafast electron microscopy. Through the introduction of a novel atomic grating approach for characterization of electron bunch phase space and control optics, we elucidate the principles for predicting and controlling the phase space dynamics to reach optimal compressions at various electron densities and generating conditions. We provide strategies to identify high-brightness modes, achieving ~100 fs and ~1 eV resolutions with 10⁶ electrons per bunch, and establish the scaling of performance for different bunch charges. These results benchmark the sensitivity and resolution from the fundamental beam brightness perspective and also validate the adaptive optics concept to enable delicate control of the density-dependent phase space structures to optimize the performance, including delivering ultrashort, monochromatic, high-dose, or coherent electron bunches.
Active control of bright electron beams with RF optics for femtosecond microscopy
Williams, J.; Zhou, F.; Sun, T.; Tao, Z.; Chang, K.; Makino, K.; Berz, M.; Duxbury, P. M.; Ruan, C.-Y.
2017-01-01
A frontier challenge in implementing femtosecond electron microscopy is to gain precise optical control of intense beams to mitigate collective space charge effects for significantly improving the throughput. Here, we explore the flexible uses of an RF cavity as a longitudinal lens in a high-intensity beam column for condensing the electron beams both temporally and spectrally, relevant to the design of ultrafast electron microscopy. Through the introduction of a novel atomic grating approach for characterization of electron bunch phase space and control optics, we elucidate the principles for predicting and controlling the phase space dynamics to reach optimal compressions at various electron densities and generating conditions. We provide strategies to identify high-brightness modes, achieving ∼100 fs and ∼1 eV resolutions with 10⁶ electrons per bunch, and establish the scaling of performance for different bunch charges. These results benchmark the sensitivity and resolution from the fundamental beam brightness perspective and also validate the adaptive optics concept to enable delicate control of the density-dependent phase space structures to optimize the performance, including delivering ultrashort, monochromatic, high-dose, or coherent electron bunches. PMID:28868325
Limitations of Community College Benchmarking and Benchmarks
ERIC Educational Resources Information Center
Bers, Trudy H.
2006-01-01
This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.
Transonic Flutter Suppression Control Law Design, Analysis and Wind Tunnel Results
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1999-01-01
The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on the steady state differential game theory is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.
Transonic Flutter Suppression Control Law Design, Analysis and Wind-Tunnel Results
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1999-01-01
The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on the steady state differential game theory is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1999-01-01
The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on the steady state differential game theory is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.
Benchmarking and Hardware-In-The-Loop Operation of a ...
Engine performance evaluation in support of LD MTE. EPA used elements of its ALPHA model to apply hardware-in-the-loop (HIL) controls to the SKYACTIV engine test setup to better understand how the engine would operate in a chassis test when combined with future leading-edge technologies: an advanced high-efficiency transmission, reduced mass, and reduced roadload. The aim is to predict future vehicle performance with the Atkinson engine. As part of its technology assessment for the upcoming midterm evaluation of the 2017-2025 LD vehicle GHG emissions regulation, EPA has been benchmarking engines and transmissions to generate inputs for use in its ALPHA model
Principles for Predicting RNA Secondary Structure Design Difficulty.
Anderson-Lee, Jeff; Fisker, Eli; Kosaraju, Vineet; Wu, Michelle; Kong, Justin; Lee, Jeehyung; Lee, Minjae; Zada, Mathew; Treuille, Adrien; Das, Rhiju
2016-02-27
Designing RNAs that form specific secondary structures is enabling better understanding and control of living systems through RNA-guided silencing, genome editing and protein organization. Little is known, however, about which RNA secondary structures might be tractable for downstream sequence design, increasing the time and expense of design efforts due to inefficient secondary structure choices. Here, we present insights into specific structural features that increase the difficulty of finding sequences that fold into a target RNA secondary structure, summarizing the design efforts of tens of thousands of human participants and three automated algorithms (RNAInverse, INFO-RNA and RNA-SSD) in the Eterna massive open laboratory. Subsequent tests through three independent RNA design algorithms (NUPACK, DSS-Opt and MODENA) confirmed the hypothesized importance of several features in determining design difficulty, including sequence length, mean stem length, symmetry and specific difficult-to-design motifs such as zigzags. Based on these results, we have compiled an Eterna100 benchmark of 100 secondary structure design challenges that span a large range in design difficulty to help test future efforts. Our in silico results suggest new routes for improving computational RNA design methods and for extending these insights to assess "designability" of single RNA structures, as well as of switches for in vitro and in vivo applications. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
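Two of the features named above, sequence length and mean stem length, can be read directly off a target structure in dot-bracket notation; the sketch below uses a generic pairing parser, and the definition of a stem as a maximal run of stacked pairs is an illustrative assumption.

```python
def dot_bracket_features(structure: str):
    """Compute length and mean stem length of a dot-bracket secondary structure."""
    stack, pairs = [], {}
    for i, c in enumerate(structure):
        if c == "(":
            stack.append(i)
        elif c == ")":
            pairs[stack.pop()] = i
    # a stem = maximal run of stacked pairs (i, j), (i+1, j-1), ...
    stems, seen = [], set()
    for i in sorted(pairs):
        if i in seen:
            continue
        length, a, b = 0, i, pairs[i]
        while a in pairs and pairs[a] == b:
            seen.add(a); length += 1; a += 1; b -= 1
        stems.append(length)
    mean_stem = sum(stems) / len(stems) if stems else 0.0
    return {"length": len(structure), "mean_stem_length": mean_stem}

print(dot_bracket_features("((((....))))..((..))"))  # two stems of lengths 4 and 2
```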
Treatment planning for spinal radiosurgery: A competitive multiplatform benchmark challenge.
Moustakis, Christos; Chan, Mark K H; Kim, Jinkoo; Nilsson, Joakim; Bergman, Alanah; Bichay, Tewfik J; Palazon Cano, Isabel; Cilla, Savino; Deodato, Francesco; Doro, Raffaela; Dunst, Jürgen; Eich, Hans Theodor; Fau, Pierre; Fong, Ming; Haverkamp, Uwe; Heinze, Simon; Hildebrandt, Guido; Imhoff, Detlef; de Klerck, Erik; Köhn, Janett; Lambrecht, Ulrike; Loutfi-Krauss, Britta; Ebrahimi, Fatemeh; Masi, Laura; Mayville, Alan H; Mestrovic, Ante; Milder, Maaike; Morganti, Alessio G; Rades, Dirk; Ramm, Ulla; Rödel, Claus; Siebert, Frank-Andre; den Toom, Wilhelm; Wang, Lei; Wurster, Stefan; Schweikard, Achim; Soltys, Scott G; Ryu, Samuel; Blanck, Oliver
2018-05-25
To investigate the quality of treatment plans of spinal radiosurgery derived from different planning and delivery systems. The comparisons include robotic delivery and intensity modulated arc therapy (IMAT) approaches. Multiple centers with equal systems were used to reduce a bias based on individual's planning abilities. The study used a series of three complex spine lesions to maximize the difference in plan quality among the various approaches. Internationally recognized experts in the field of treatment planning and spinal radiosurgery from 12 centers with various treatment planning systems participated. For a complex spinal lesion, the results were compared against a previously published benchmark plan derived for CyberKnife radiosurgery (CKRS) using circular cones only. For two additional cases, one with multiple small lesions infiltrating three vertebrae and a single vertebra lesion treated with integrated boost, the results were compared against a benchmark plan generated using a best practice guideline for CKRS. All plans were rated based on a previously established ranking system. All 12 centers could reach equality (n = 4) or outperform (n = 8) the benchmark plan. For the multiple lesions and the single vertebra lesion plan only 5 and 3 of the 12 centers, respectively, reached equality or outperformed the best practice benchmark plan. However, the absolute differences in target and critical structure dosimetry were small and strongly planner-dependent rather than system-dependent. Overall, gantry-based IMAT with simple planning techniques (two coplanar arcs) produced faster treatments and significantly outperformed static gantry intensity modulated radiation therapy (IMRT) and multileaf collimator (MLC) or non-MLC CKRS treatment plan quality regardless of the system (mean rank out of 4 was 1.2 vs. 3.1, p = 0.002). High plan quality for complex spinal radiosurgery was achieved among all systems and all participating centers in this planning challenge. This study concludes that simple IMAT techniques can generate significantly better plan quality compared to previous established CKRS benchmarks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Townsley, Dean M.; Miles, Broxton J.; Timmes, F. X.
2016-07-01
We refine our previously introduced parameterized model for explosive carbon–oxygen fusion during thermonuclear Type Ia supernovae (SNe Ia) by adding corrections to post-processing of recorded Lagrangian fluid-element histories to obtain more accurate isotopic yields. Deflagration and detonation products are verified for propagation in a medium of uniform density. A new method is introduced for reconstructing the temperature–density history within the artificially thick model deflagration front. We obtain better than 5% consistency between the electron capture computed by the burning model and yields from post-processing. For detonations, we compare to a benchmark calculation of the structure of driven steady-state planar detonations performed with a large nuclear reaction network and error-controlled integration. We verify that, for steady-state planar detonations down to a density of 5 × 10⁶ g cm⁻³, our post-processing matches the major abundances in the benchmark solution typically to better than 10% for times greater than 0.01 s after the passage of the shock front. As a test case to demonstrate the method, presented here with post-processing for the first time, we perform a two-dimensional simulation of a SN Ia in the scenario of a Chandrasekhar-mass deflagration–detonation transition (DDT). We find that reconstruction of deflagration tracks leads to slightly more complete silicon burning than without reconstruction. The resulting abundance structure of the ejecta is consistent with inferences from spectroscopic studies of observed SNe Ia. We confirm the absence of a central region of stable Fe-group material for the multi-dimensional DDT scenario. Detailed isotopic yields are tabulated and change only modestly when using deflagration reconstruction.
Web Site Design Benchmarking within Industry Groups.
ERIC Educational Resources Information Center
Kim, Sung-Eon; Shaw, Thomas; Schneider, Helmut
2003-01-01
Discussion of electronic commerce focuses on Web site evaluation criteria and applies them to different industry groups in Korea. Defines six categories of Web site evaluation criteria: business function, corporate credibility, contents reliability, Web site attractiveness, systematic structure, and navigation; and discusses differences between…
Organizational Alignment Supporting Distance Education in Post-Secondary Institutions.
ERIC Educational Resources Information Center
Prestera, Gustavo E.; Moller, Leslie A.
2001-01-01
Applies an established model of organizational alignment to distance education in postsecondary institutions and recommends performance-oriented approaches to support growth by analyzing goals, structure, and management practices across the organization. Presents performance improvement strategies such as benchmarking and documenting workflows,…
Development of a sensor coordinated kinematic model for neural network controller training
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1990-01-01
A robotic benchmark problem useful for evaluating alternative neural network controllers is presented. Specifically, it derives two camera models and the kinematic equations of a multiple degree of freedom manipulator whose end effector is under observation. The mappings developed include forward and inverse translations from binocular images to 3-D target position and the inverse kinematics of mapping point positions into manipulator commands in joint space. Implementation is detailed for a three degree of freedom manipulator with one revolute joint at the base and two prismatic joints on the arms. The example is restricted to operate within a unit cube with arm links of 0.6 and 0.4 units respectively. The development is presented in the context of more complex simulations and a logical path for extension of the benchmark to higher degree of freedom manipulators is presented.
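A sketch of the forward and inverse mappings for a three-degree-of-freedom arm with one revolute joint at the base and two prismatic joints, here assumed to be arranged as a cylindrical manipulator confined to the unit cube; the actual geometry and link arrangement in the report may differ, so treat this purely as an illustration.

```python
import math

def forward(theta, d_radial, d_vertical):
    """End-effector position for a base revolute joint plus radial and vertical prismatic joints."""
    return (d_radial * math.cos(theta), d_radial * math.sin(theta), d_vertical)

def inverse(x, y, z):
    """Joint commands reaching (x, y, z); raises if the point is outside the assumed joint ranges."""
    theta, d_radial, d_vertical = math.atan2(y, x), math.hypot(x, y), z
    if not (0.0 <= d_radial <= 1.0 and 0.0 <= d_vertical <= 1.0):
        raise ValueError("target outside the assumed workspace")
    return theta, d_radial, d_vertical

# round-trip check
x, y, z = forward(*inverse(0.3, 0.4, 0.5))
print(round(x, 6), round(y, 6), round(z, 6))  # 0.3 0.4 0.5
```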
Platinum adlayered ruthenium nanoparticles, method for preparing, and uses thereof
Tong, YuYe; Du, Bingchen
2015-08-11
A superior, industrially scalable one-pot ethylene glycol-based wet chemistry method to prepare platinum-adlayered ruthenium nanoparticles has been developed that offers exquisite control of the platinum packing density of the adlayers and effectively prevents sintering of the nanoparticles during the deposition process. The wet chemistry based method for the controlled deposition of submonolayer platinum is advantageous in terms of processing and maximizing the use of platinum and can, in principle, be scaled up straightforwardly to an industrial level. The reactivity of the Pt(31)-Ru sample was about 150% higher than that of the industrial benchmark PtRu (1:1) alloy sample but with 3.5 times less platinum loading. Using the Pt(31)-Ru nanoparticles would lower the electrode material cost compared to using the industrial benchmark alloy nanoparticles for direct methanol fuel cell applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mollerach, R.; Leszczynski, F.; Fink, J.
2006-07-01
In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which has been progressing slowly during the last ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO₂ rods with an active length of 530 cm. For the reactor physics area, a revision and update of calculation methods and models (cell, supercell and reactor) was recently carried out, covering cell, supercell (control rod) and core calculations. As a validation of the new models, some benchmark comparisons were done with Monte Carlo calculations with MCNP5. This paper presents comparisons of cell and supercell benchmark problems based on a slightly idealized model of the Atucha-I core, obtained with the WIMS-D5 and DRAGON codes, with MCNP5 results. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II. Cell parameters compared include cell k-infinity, relative power levels of the different rings of fuel rods, and some two-group macroscopic cross sections. Supercell comparisons include supercell k-infinity changes due to the control rods (tubes) of steel and hafnium. (authors)
Assessing validity of observational intervention studies - the Benchmarking Controlled Trials.
Malmivaara, Antti
2016-09-01
Benchmarking Controlled Trial (BCT) is a concept which covers all observational studies aiming to assess the impact of interventions or health care system features on patients and populations. To create and pilot-test a checklist for appraising the methodological validity of a BCT. The checklist was created by extracting the most essential elements from the comprehensive set of criteria in the previous paper on BCTs. Checklists and scientific papers on observational studies and respective systematic reviews were also utilized. Ten BCTs published in the Lancet and in the New England Journal of Medicine were used to assess the feasibility of the created checklist. The appraised studies seem to have several methodological limitations, some of which could be avoided in the planning, conducting and reporting phases of the studies. The checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies. However, the piloted checklist should be validated in further studies. Key messages: Benchmarking Controlled Trial (BCT) is a concept which covers all observational studies aiming to assess the impact of interventions or health care system features on patients and populations. This paper presents a checklist for appraising the methodological validity of BCTs and pilot-tests the checklist with ten BCTs published in leading medical journals. The appraised studies seem to have several methodological limitations, some of which could be avoided in the planning, conducting and reporting phases of the studies. The checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies.
Multisensor benchmark data for riot control
NASA Astrophysics Data System (ADS)
Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter
2008-10-01
Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to a de-escalation of the situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data with well-known quality are needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, after algorithm development has finished and marketing aspects emerge, compliance with specifications must be demonstrated. This paper describes a multisensor benchmark which serves exactly this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup and finally we give some insight into quality measures of multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.
SPH modeling of fluid-structure interaction
NASA Astrophysics Data System (ADS)
Han, Luhui; Hu, Xiangyu
2018-02-01
This work concerns numerical modeling of fluid-structure interaction (FSI) problems in a uniform smoothed particle hydrodynamics (SPH) framework. It combines a transport-velocity SPH scheme, advancing the fluid motion, with a total Lagrangian SPH formulation dealing with the structure deformations. Since both the fluid and solid governing equations are solved within the SPH framework, the coupling becomes straightforward and the momentum conservation of the FSI system is satisfied strictly. A well-known FSI benchmark test case has been performed to validate the modeling and to demonstrate its potential.
Ellis, Judith
2006-07-01
The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement, and to promote its acceptance as an integral and effective part of benchmarking activity in health services. The Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being effectively used by some frontline staff. However, use is inconsistent, with the value of the tool kit, or the support clinical practice benchmarking requires to be effective, not always recognized or provided by National Health Service managers, who are absorbed with the use of quantitative benchmarking approaches and the measurability of comparative performance data. This review of published benchmarking literature was conducted through an ever-narrowing search strategy, commencing with benchmarking within the quality improvement literature, moving through to benchmarking activity in health services, and including not only published examples of benchmarking approaches and models but also consideration of web-based benchmarking data. This supported identification of how benchmarking approaches have developed and been used, remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative and specifically performance benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and then applied to the health service (Bullivant 1998). The literature is, in the main, descriptive in its support of the effectiveness of benchmarking activity; although this does not seem to have restricted its popularity for quantitative activity, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks mean that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach which needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.
Monitoring land subsidence in Sacramento Valley, California, using GPS
Blodgett, J.C.; Ikehara, M.E.; Williams, Gary E.
1990-01-01
Land subsidence measurement is usually based on a comparison of bench-mark elevations surveyed at different times. These bench marks, established for mapping or the national vertical control network, are not necessarily suitable for measuring land subsidence. Also, many bench marks have been destroyed or are unstable. Conventional releveling of the study area would be costly and would require several years to complete. Differences of as much as 3.9 ft between recent leveling and published bench-mark elevations have been documented at seven locations in the Sacramento Valley. Estimates of land subsidence less than about 0.3 ft are questionable because elevation data are based on leveling and adjustment procedures that occurred over many years. A new vertical control network based on the Global Positioning System (GPS) provides highly accurate vertical control data at relatively low cost, and the survey points can be placed where needed to obtain adequate areal coverage of the area affected by land subsidence.
A broken promise: microbiome differential abundance methods do not control the false discovery rate.
Hawinkel, Stijn; Mattiello, Federico; Bijnens, Luc; Thas, Olivier
2017-08-22
High-throughput sequencing technologies allow easy characterization of the human microbiome, but the statistical methods to analyze microbiome data are still in their infancy. Differential abundance methods aim at detecting associations between the abundances of bacterial species and subject grouping factors. The results of such methods are important to identify the microbiome as a prognostic or diagnostic biomarker or to demonstrate efficacy of prodrug or antibiotic drugs. Because of a lack of benchmarking studies in the microbiome field, no consensus exists on the performance of the statistical methods. We have compared a large number of popular methods through extensive parametric and nonparametric simulation as well as real data shuffling algorithms. The results are consistent over the different approaches and all point to an alarming excess of false discoveries. This raises great doubts about the reliability of discoveries in past studies and imperils reproducibility of microbiome experiments. To further improve method benchmarking, we introduce a new simulation tool that allows one to generate correlated count data following any univariate count distribution; the correlation structure may be inferred from real data. Most simulation studies discard the correlation between species, but our results indicate that this correlation can negatively affect the performance of statistical methods. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
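The correlated-count idea mentioned at the end of the abstract can be sketched with a Gaussian copula: draw correlated normals, map them to uniforms, and push the uniforms through a negative-binomial quantile function. The correlation matrix and the negative-binomial parameters below are illustrative assumptions, and the authors' actual simulation tool may differ in detail.

```python
import numpy as np
from scipy.stats import norm, nbinom

rng = np.random.default_rng(42)
n_samples, n_taxa = 100, 4

# target correlation between taxa (could instead be estimated from real data)
corr = 0.6 * np.ones((n_taxa, n_taxa)) + 0.4 * np.eye(n_taxa)

z = rng.multivariate_normal(np.zeros(n_taxa), corr, size=n_samples)  # correlated Gaussians
u = norm.cdf(z)                                                      # Gaussian copula: uniforms in (0, 1)
counts = nbinom.ppf(u, n=5, p=0.1).astype(int)                       # NB marginals (assumed parameters)

print(counts[:3])
print(np.corrcoef(counts, rowvar=False).round(2))  # correlation largely survives the transform
```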
A Comparison of Coverage Restrictions for Biopharmaceuticals and Medical Procedures.
Chambers, James; Pope, Elle; Bungay, Kathy; Cohen, Joshua; Ciarametaro, Michael; Dubois, Robert; Neumann, Peter J
2018-04-01
Differences in payer evaluation and coverage of pharmaceuticals and medical procedures suggest that coverage may differ for medications and procedures independent of their clinical benefit. We hypothesized that coverage for medications is more restricted than corresponding coverage for nonmedication interventions. We included top-selling medications and highly utilized procedures. For each intervention-indication pair, we classified value in terms of cost-effectiveness (incremental cost per quality-adjusted life-year), as reported by the Tufts Medical Center Cost-Effectiveness Analysis Registry. For each intervention-indication pair and for each of 10 large payers, we classified coverage, when available, as either "more restrictive" or as "not more restrictive," compared with a benchmark. The benchmark reflected the US Food and Drug Administration label information, when available, or pertinent clinical guidelines. We compared coverage policies and the benchmark in terms of step edits and clinical restrictions. Finally, we regressed coverage restrictiveness against intervention type (medication or nonmedication), controlling for value (cost-effectiveness more or less favorable than a designated threshold). We identified 392 medication and 185 procedure coverage decisions. A total of 26.3% of the medication coverage and 38.4% of the procedure coverage decisions were more restrictive than their corresponding benchmarks. After controlling for value, the odds of being more restrictive were 42% lower for medications than for procedures. Including unfavorable tier placement in the definition of "more restrictive" greatly increased the proportion of medication coverage decisions classified as "more restrictive" and reversed our findings. Therapy access depends on factors other than cost and clinical benefit, suggesting potential health care system inefficiency. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madduri, Kamesh; Ediger, David; Jiang, Karl
2009-02-15
We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
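The kernel being parallelized above is betweenness centrality; for reference, a compact serial version of Brandes' algorithm for unweighted graphs is sketched below (the lock-free data structures and multithreading that are the point of the paper are not reproduced here).

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for unweighted graphs; adj maps vertex -> iterable of neighbours."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths
        dist, sigma, preds, order = {s: 0}, {v: 0.0 for v in adj}, {v: [] for v in adj}, []
        sigma[s] = 1.0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # accumulate dependencies in reverse BFS order
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
print(betweenness(graph))
```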
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madduri, Kamesh; Ediger, David; Jiang, Karl
2009-05-29
We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in the HPCS SSCA#2 Graph Analysis benchmark, which has been extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the ThreadStorm processor, and a single-socket Sun multicore server with the UltraSparc T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Risner, J.M.; Wiarda, D.; Miller, T.M.
2011-07-01
The U.S. Nuclear Regulatory Commission's Regulatory Guide 1.190 states that calculational methods used to estimate reactor pressure vessel (RPV) fluence should use the latest version of the evaluated nuclear data file (ENDF). The VITAMIN-B6 fine-group library and BUGLE-96 broad-group library, which are widely used for RPV fluence calculations, were generated using ENDF/B-VI.3 data, which was the most current data when Regulatory Guide 1.190 was issued. We have developed new fine-group (VITAMIN-B7) and broad-group (BUGLE-B7) libraries based on ENDF/B-VII.0. These new libraries, which were processed using the AMPX code system, maintain the same group structures as the VITAMIN-B6 and BUGLE-96 libraries. Verification and validation of the new libraries were accomplished using diagnostic checks in AMPX, 'unit tests' for each element in VITAMIN-B7, and a diverse set of benchmark experiments including critical evaluations for fast and thermal systems, a set of experimental benchmarks that are used for SCALE regression tests, and three RPV fluence benchmarks. The benchmark evaluation results demonstrate that VITAMIN-B7 and BUGLE-B7 are appropriate for use in RPV fluence calculations and meet the calculational uncertainty criterion in Regulatory Guide 1.190. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.
2016-01-19
An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT based first-principles methods have the potential to provide the accuracy and computational efficiency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional, and a quantification of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures at thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC based force estimators and use them to gain insights into how well the local liquid structure is captured by different density functionals. We find that TPSS, BLYP and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative differences exhibited by the major classes of functionals, and estimate the magnitudes of these effects when possible.
Mechanical design criteria for intervertebral disc tissue engineering.
Nerurkar, Nandan L; Elliott, Dawn M; Mauck, Robert L
2010-04-19
Due to the inability of current clinical practices to restore function to degenerated intervertebral discs, the arena of disc tissue engineering has received substantial attention in recent years. Despite tremendous growth and progress in this field, translation to clinical implementation has been hindered by a lack of well-defined functional benchmarks. Because successful replacement of the disc is contingent upon replication of some or all of its complex mechanical behaviors, it is critically important that disc mechanics be well characterized in order to establish discrete functional goals for tissue engineering. In this review, the key functional signatures of the intervertebral disc are discussed and used to propose a series of native tissue benchmarks to guide the development of engineered replacement tissues. These benchmarks include measures of mechanical function under tensile, compressive, and shear deformations for the disc and its substructures. In some cases, important functional measures are identified that have yet to be measured in the native tissue. Ultimately, native tissue benchmark values are compared to measurements that have been made on engineered disc tissues, identifying where functional equivalence was achieved, and where there remain opportunities for advancement. Several excellent reviews exist regarding disc composition and structure, as well as recent tissue engineering strategies; therefore this review will remain focused on the functional aspects of disc tissue engineering. Copyright 2009 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Risner, Joel M; Wiarda, Dorothea; Miller, Thomas Martin
2011-01-01
The U.S. Nuclear Regulatory Commission's Regulatory Guide 1.190 states that calculational methods used to estimate reactor pressure vessel (RPV) fluence should use the latest version of the Evaluated Nuclear Data File (ENDF). The VITAMIN-B6 fine-group library and BUGLE-96 broad-group library, which are widely used for RPV fluence calculations, were generated using ENDF/B-VI data, which was the most current data when Regulatory Guide 1.190 was issued. We have developed new fine-group (VITAMIN-B7) and broad-group (BUGLE-B7) libraries based on ENDF/B-VII. These new libraries, which were processed using the AMPX code system, maintain the same group structures as the VITAMIN-B6 and BUGLE-96 libraries. Verification and validation of the new libraries were accomplished using diagnostic checks in AMPX, unit tests for each element in VITAMIN-B7, and a diverse set of benchmark experiments including critical evaluations for fast and thermal systems, a set of experimental benchmarks that are used for SCALE regression tests, and three RPV fluence benchmarks. The benchmark evaluation results demonstrate that VITAMIN-B7 and BUGLE-B7 are appropriate for use in LWR shielding applications, and meet the calculational uncertainty criterion in Regulatory Guide 1.190.
Can Humans Fly? Action Understanding with Multiple Classes of Actors
2015-06-08
NASA Astrophysics Data System (ADS)
Mancho, Ana M.; Wiggins, Stephen; Curbelo, Jezabel; Mendoza, Carolina
2013-11-01
Lagrangian descriptors are a recently introduced technique that reveals geometrical structures in phase space and is valid for aperiodically time-dependent dynamical systems. We discuss a general methodology for constructing them and a ``heuristic argument'' that explains why this method is successful. We support this argument by explicit calculations on a benchmark problem. Several other benchmark examples are considered that allow us to assess the performance of Lagrangian descriptors against both finite time Lyapunov exponents (FTLEs) and finite time averages of certain components of the vector field (``time averages''). In all cases Lagrangian descriptors are shown to be both more accurate and computationally efficient than these methods. We thank CESGA for computing facilities. This research was supported by MINECO grants: MTM2011-26696, I-Math C3-0104, ICMAT Severo Ochoa project SEV-2011-0087, and CSIC grant OCEANTECH. SW acknowledges the support of the ONR (Grant No. N00014-01-1-0769).
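As an illustration of the kind of computation described above, the sketch below evaluates the arc-length form of a Lagrangian descriptor (the function often called M) for a simple aperiodically forced flow. The vector field, forcing amplitude, integration window, and step size are illustrative choices, not those used in the paper.

```python
import numpy as np

def velocity(state, t):
    """Forced Duffing oscillator, a common benchmark flow (illustrative parameters)."""
    x, y = state
    return np.array([y, x - x**3 + 0.1 * np.sin(t)])

def lagrangian_descriptor(x0, t0=0.0, tau=10.0, dt=0.01):
    """M(x0, t0): integral of the speed along the trajectory through x0,
    accumulated forward and backward over [t0 - tau, t0 + tau]."""
    M = 0.0
    for direction in (+1.0, -1.0):
        state, t = np.array(x0, dtype=float), t0
        for _ in range(int(tau / dt)):
            v = velocity(state, t)
            M += np.linalg.norm(v) * dt          # accumulate arc length
            state = state + direction * dt * v   # forward-Euler step (keep dt small)
            t += direction * dt
    return M

# Evaluate M on a coarse grid; sharp features in M reveal phase-space structures.
grid = [(x, y) for x in np.linspace(-1.5, 1.5, 5) for y in np.linspace(-1.5, 1.5, 5)]
values = [lagrangian_descriptor(p) for p in grid]
print(min(values), max(values))
```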
Applying Quantum Monte Carlo to the Electronic Structure Problem
NASA Astrophysics Data System (ADS)
Powell, Andrew D.; Dawes, Richard
2016-06-01
Two distinct types of Quantum Monte Carlo (QMC) calculations are applied to electronic structure problems such as calculating potential energy curves and producing benchmark values for reaction barriers. First, Variational and Diffusion Monte Carlo (VMC and DMC) methods using a trial wavefunction subject to the fixed node approximation were tested using the CASINO code.[1] Next, Full Configuration Interaction Quantum Monte Carlo (FCIQMC), along with its initiator extension (i-FCIQMC) were tested using the NECI code.[2] FCIQMC seeks the FCI energy for a specific basis set. At a reduced cost, the efficient i-FCIQMC method can be applied to systems in which the standard FCIQMC approach proves to be too costly. Since all of these methods are statistical approaches, uncertainties (error-bars) are introduced for each calculated energy. This study tests the performance of the methods relative to traditional quantum chemistry for some benchmark systems. References: [1] R. J. Needs et al., J. Phys.: Condensed Matter 22, 023201 (2010). [2] G. H. Booth et al., J. Chem. Phys. 131, 054106 (2009).
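For readers unfamiliar with how statistical error bars arise in these methods, here is a minimal Variational Monte Carlo sketch for the 1D harmonic oscillator with a Gaussian trial wavefunction. It is a toy illustration of the VMC idea only and has no connection to the CASINO or NECI calculations described above.

```python
import numpy as np

def vmc_energy(alpha, n_samples=200_000, step=1.0, seed=0):
    """Variational Monte Carlo for H = -1/2 d^2/dx^2 + 1/2 x^2 (atomic units)
    with trial wavefunction psi(x) = exp(-alpha x^2).
    Local energy: E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)."""
    rng = np.random.default_rng(seed)
    x = 0.0
    energies = []
    for i in range(n_samples):
        # Metropolis move sampling |psi|^2, proportional to exp(-2 alpha x^2).
        x_new = x + rng.uniform(-step, step)
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        if i > n_samples // 10:                  # discard equilibration samples
            energies.append(alpha + x**2 * (0.5 - 2.0 * alpha**2))
    e = np.array(energies)
    # Naive error bar (ignores autocorrelation, which real QMC codes account for).
    return e.mean(), e.std(ddof=1) / np.sqrt(len(e))

for a in (0.3, 0.5, 0.7):
    mean, err = vmc_energy(a)
    print(f"alpha={a}: E = {mean:.4f} +/- {err:.4f}")  # minimum (E = 0.5) at alpha = 0.5
```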
Nema, Vijay; Pal, Sudhir Kumar
2013-01-01
Aim: This study was conducted to find the best suited freely available software for modelling of proteins by taking a few sample proteins. The proteins used were small to big in size with available crystal structures for the purpose of benchmarking. Key players like Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-V2, Modweb were used for the comparison and model generation. Results: Benchmarking process was done for four proteins, Icl, InhA, and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus, to get the most suited software. Parameters compared during analysis gave relatively better values for Phyre2 and Swiss-Model. Conclusion: This comparative study showed that Phyre2 and Swiss-Model produce good models of small and large proteins as compared to other screened software. The other software packages were also good but often less efficient in providing a full-length and properly folded structure. PMID:24023424
Supply network configuration—A benchmarking problem
NASA Astrophysics Data System (ADS)
Brandenburg, Marcus
2018-03-01
Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.
NASA Astrophysics Data System (ADS)
Pulkkinen, A. A.; Bernabeu, E.; Weigel, R. S.; Kelbert, A.; Rigler, E. J.; Bedrosian, P.; Love, J. J.
2017-12-01
Development of realistic storm scenarios that can be played through the exposed systems is one of the key requirements for carrying out quantitative space weather hazards assessments. In the geomagnetically induced currents (GIC) and power grids context, these scenarios have to quantify the spatiotemporal evolution of the geoelectric field that drives the potentially hazardous currents in the system. In response to the Federal Energy Regulatory Commission (FERC) order 779, a team of scientists and engineers working under the auspices of the North American Electric Reliability Corporation (NERC) has developed extreme geomagnetic storm and geoelectric field benchmark(s) that use various scaling factors to account for the geomagnetic latitude and ground structure of the locations of interest. These benchmarks, together with the information generated in the National Space Weather Action Plan, are the foundation for the hazards assessments that the industry will be carrying out in response to the FERC order and under the auspices of the National Science and Technology Council. While the scaling factors developed in the past work were based on the best available information, there is now significant new information available for parts of the U.S. pertaining to the ground response to external geomagnetic field excitation. This new information includes the results of magnetotelluric surveys that have been conducted over the past few years across the contiguous U.S. and results from previous surveys that have been made available in a combined online database. In this paper, we distill this new information in the framework of the NERC benchmark and in terms of updated ground response scaling factors, thereby allowing straightforward utilization in the hazard assessments. We also outline the path forward for improving the overall extreme event benchmark scenario(s), including generalization of the storm waveforms and geoelectric field spatial patterns.
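A minimal sketch of how latitude and ground-structure scaling factors are applied to a reference geoelectric field amplitude is shown below. The reference amplitude and factor values are placeholders for illustration and are not taken from the NERC benchmark or from the magnetotelluric survey results.

```python
def scaled_peak_geoelectric_field(e_reference, alpha_latitude, beta_ground):
    """Scale a reference peak geoelectric field (V/km) to a site of interest.

    e_reference    -- benchmark peak field amplitude (placeholder value below)
    alpha_latitude -- geomagnetic-latitude scaling factor (dimensionless)
    beta_ground    -- ground-structure (conductivity) scaling factor (dimensionless)
    """
    return e_reference * alpha_latitude * beta_ground

# Illustrative numbers only, not values from the NERC benchmark or the MT surveys.
print(scaled_peak_geoelectric_field(e_reference=8.0, alpha_latitude=0.6, beta_ground=1.2))
```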
Majuru, Batsirai; Jagals, Paul; Hunter, Paul R
2012-10-01
Although a number of studies have reported on water supply improvements, few have simultaneously taken into account the reliability of the water services. The study aimed to assess whether upgrading water supply systems in small rural communities improved access, availability and potability of water by assessing the water services against selected benchmarks from the World Health Organisation and the South African Department of Water Affairs, and to determine the impact of unreliability on the services. These benchmarks were applied in three rural communities in Limpopo, South Africa, where rudimentary water supply services were being upgraded to basic services. Data were collected through structured interviews, observations and measurement, and multi-level linear regression models were used to assess the impact of water service upgrades on the key outcome measures of distance to source, daily per capita water quantity and Escherichia coli count. When the basic system was operational, 72% of households met the minimum benchmarks for distance and water quantity, but only 8% met both enhanced benchmarks. During non-operational periods of the basic service, daily per capita water consumption decreased by 5.19 L (p<0.001, 95% CI 4.06-6.31) and distances to water sources were 639 m further (p ≤ 0.001, 95% CI 560-718). Although both rudimentary and basic systems delivered water that met potability criteria at the sources, the quality of stored water sampled in the home was still unacceptable throughout the various service levels. These results show that basic water services can make substantial improvements to water access, availability and potability, but only if such services are reliable. Copyright © 2012 Elsevier B.V. All rights reserved.
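A multi-level (mixed-effects) regression of the kind described can be set up as in the sketch below, here with statsmodels and hypothetical column names and data; the grouping variable captures the clustering of households within communities.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: household observations nested within villages (values invented).
df = pd.DataFrame({
    "litres_per_capita": [18.2, 22.5, 15.0, 30.1, 12.4, 25.3, 19.8, 16.7, 21.0, 14.2, 27.5, 17.9],
    "system_operational": [1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0],   # basic service running or not
    "village": ["A", "A", "A", "B", "B", "B", "C", "C", "C", "D", "D", "D"],
})

# Random intercept per village; the fixed effect estimates the change in daily
# per capita consumption when the upgraded system is operational.
model = smf.mixedlm("litres_per_capita ~ system_operational", df, groups=df["village"])
print(model.fit().summary())
```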
Multi-Complementary Model for Long-Term Tracking
Zhang, Deng; Zhang, Junchang; Xia, Chenyang
2018-01-01
In recent years, video target tracking algorithms have been widely applied. However, many tracking algorithms do not achieve satisfactory performance, especially when dealing with problems such as object occlusions, background clutters, motion blur, low illumination color images, and sudden illumination changes in real scenes. In this paper, we incorporate an object model based on contour information into a Staple tracker that combines the correlation filter model and color model to greatly improve the tracking robustness. Since each model is responsible for tracking specific features, the three complementary models combine for more robust tracking. In addition, we propose an efficient object detection model with contour and color histogram features, which has good detection performance and better detection efficiency compared to the traditional target detection algorithm. Finally, we optimize the traditional scale calculation, which greatly improves the tracking execution speed. We evaluate our tracker on the Object Tracking Benchmarks 2013 (OTB-13) and Object Tracking Benchmarks 2015 (OTB-15) benchmark datasets. On the OTB-13 benchmark datasets, our algorithm improves on the classic LCT (Long-term Correlation Tracking) algorithm by 4.8%, 9.6%, and 10.9% on the success plots of OPE, TRE and SRE, respectively. On the OTB-15 benchmark datasets, when compared with the LCT algorithm, our algorithm achieves 10.4%, 12.5%, and 16.1% improvement on the success plots of OPE, TRE, and SRE, respectively. At the same time, it needs to be emphasized that, due to the high computational efficiency of the color model and the object detection model using efficient data structures, and the speed advantage of the correlation filters, our tracking algorithm still achieves good tracking speed. PMID:29425170
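The complementary-model idea can be illustrated with a simple fusion of normalized response maps, as in the sketch below; the weights and the min-max normalization are illustrative assumptions, not the fusion rule used by the authors.

```python
import numpy as np

def fuse_responses(cf_resp, color_resp, contour_resp, weights=(0.5, 0.3, 0.2)):
    """Combine complementary response maps (correlation filter, colour histogram,
    contour model) and return the peak location as the target estimate.
    Weights and normalization are illustrative only."""
    fused = sum(
        w * (m - m.min()) / (m.max() - m.min() + 1e-12)
        for w, m in zip(weights, (cf_resp, color_resp, contour_resp))
    )
    peak = np.unravel_index(np.argmax(fused), fused.shape)
    return peak, fused

# Toy usage with random maps standing in for per-model responses over a search window.
rng = np.random.default_rng(0)
maps = [rng.random((40, 40)) for _ in range(3)]
(peak_row, peak_col), _ = fuse_responses(*maps)
print("estimated target position (row, col):", peak_row, peak_col)
```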
Boyce, Maria B; Browne, John P; Greenhalgh, Joanne
2014-06-27
The use of patient-reported outcome measures (PROMs) to provide healthcare professionals with peer benchmarked feedback is growing. However, there is little evidence on the opinions of professionals on the value of this information in practice. The purpose of this research is to explore surgeons' experiences of receiving peer benchmarked PROMs feedback and to examine whether this information led to changes in their practice. This qualitative research employed a Framework approach. Semi-structured interviews were undertaken with surgeons who received peer benchmarked PROMs feedback. The participants included eleven consultant orthopaedic surgeons in the Republic of Ireland. Five themes were identified: conceptual, methodological, practical, attitudinal, and impact. A typology was developed based on the attitudinal and impact themes from which three distinct groups emerged. 'Advocates' had positive attitudes towards PROMs and confirmed that the information promoted a self-reflective process. 'Converts' were uncertain about the value of PROMs, which reduced their inclination to use the data. 'Sceptics' had negative attitudes towards PROMs and claimed that the information had no impact on their behaviour. The conceptual, methodological and practical factors were linked to the typology. Surgeons had mixed opinions on the value of peer benchmarked PROMs data. Many appreciated the feedback as it reassured them that their practice was similar to their peers. However, PROMs information alone was considered insufficient to help identify opportunities for quality improvements. The reasons for the observed reluctance of participants to embrace PROMs can be categorised into conceptual, methodological, and practical factors. Policy makers and researchers need to increase professionals' awareness of the numerous purposes and benefits of using PROMs, challenge the current methods to measure performance using PROMs, and reduce the burden of data collection and information dissemination on routine practice.
International health IT benchmarking: learning from cross-country comparisons.
Zelmer, Jennifer; Ronchi, Elettra; Hyppönen, Hannele; Lupiáñez-Villanueva, Francisco; Codagnone, Cristiano; Nøhr, Christian; Huebner, Ursula; Fazzalari, Anne; Adler-Milstein, Julia
2017-03-01
To pilot benchmark measures of health information and communication technology (ICT) availability and use to facilitate cross-country learning. A prior Organization for Economic Cooperation and Development-led effort involving 30 countries selected and defined functionality-based measures for availability and use of electronic health records, health information exchange, personal health records, and telehealth. In this pilot, an Organization for Economic Cooperation and Development Working Group compiled results for 38 countries for a subset of measures with broad coverage using new and/or adapted country-specific or multinational surveys and other sources from 2012 to 2015. We also synthesized country learnings to inform future benchmarking. While electronic records are widely used to store and manage patient information at the point of care (all but 2 pilot countries reported use by at least half of primary care physicians, and many had rates above 75%), patient information exchange across organizations/settings is less common. Large variations in the availability and use of telehealth and personal health records also exist. Pilot participation demonstrated interest in cross-national benchmarking. Using the most comparable measures available to date, it showed substantial diversity in health ICT availability and use in all domains. The project also identified methodological considerations (e.g., structural and health systems issues that can affect measurement) important for future comparisons. While health policies and priorities differ, many nations aim to increase access, quality, and/or efficiency of care through effective ICT use. By identifying variations and describing key contextual factors, benchmarking offers the potential to facilitate cross-national learning and accelerate the progress of individual countries. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
Combining Rosetta with molecular dynamics (MD): A benchmark of the MD-based ensemble protein design.
Ludwiczak, Jan; Jarmula, Adam; Dunin-Horkawicz, Stanislaw
2018-07-01
Computational protein design is a set of procedures for computing amino acid sequences that will fold into a specified structure. Rosetta Design, a commonly used software for protein design, allows for the effective identification of sequences compatible with a given backbone structure, while molecular dynamics (MD) simulations can thoroughly sample near-native conformations. We benchmarked a procedure in which Rosetta design is started on MD-derived structural ensembles and showed that such a combined approach generates 20-30% more diverse sequences than currently available methods with only a slight increase in computation time. Importantly, the increase in diversity is achieved without a loss in the quality of the designed sequences assessed by their resemblance to natural sequences. We demonstrate that the MD-based procedure is also applicable to de novo design tasks started from backbone structures without any sequence information. In addition, we implemented a protocol that can be used to assess the stability of designed models and to select the best candidates for experimental validation. In sum, our results demonstrate that the MD ensemble-based flexible backbone design can be a viable method for protein design, especially for tasks that require a large pool of diverse sequences. Copyright © 2018 Elsevier Inc. All rights reserved.
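The ensemble bookkeeping behind such a protocol is simple to sketch; the design step itself requires Rosetta and is only stubbed here with a hypothetical callable. The diversity measure (unique-sequence count and mean pairwise identity) is one generic way to quantify the kind of 20-30% gain reported, not the authors' exact metric.

```python
from itertools import combinations

def design_on_ensemble(frames, design_fn):
    """Run a design calculation on every MD snapshot and pool the sequences.
    design_fn is a stand-in for a Rosetta fixed-backbone design run (stubbed here)."""
    return [design_fn(frame) for frame in frames]

def mean_pairwise_identity(seqs):
    """Average fraction of identical positions over all pairs of equal-length sequences."""
    pairs = list(combinations(seqs, 2))
    if not pairs:
        return 1.0
    return sum(sum(a == b for a, b in zip(s1, s2)) / len(s1) for s1, s2 in pairs) / len(pairs)

# Toy stand-in: three MD frames, each "designed" to a short sequence.
toy_designs = {"frame1": "MKTAYIAK", "frame2": "MKSAYIAR", "frame3": "MKTAFIAK"}
designed = design_on_ensemble(sorted(toy_designs), toy_designs.get)
print("unique sequences:", len(set(designed)), "of", len(designed))
print("mean pairwise identity: %.2f" % mean_pairwise_identity(designed))
```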
Deterministic Modeling of the High Temperature Test Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortensi, J.; Cogliati, J. J.; Pope, M. A.
2010-06-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability of the Next Generation Nuclear Plant (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19 column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full core solver used in this study and is based on the Green's Function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and the nodal diffusion solver codes. The results from this study show a consistent bias of 2–3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model, the rod positions were fixed. In addition, this work includes a brief study of a cross section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.
76 FR 19257 - National Cancer Control Month, 2011
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-06
... Department of Health and Human Services, is tasked with outlining national objectives and benchmarks to... family member or friend, and too many of us understand the terrible toll of this disease. In memory of...
Our Profession Is Changing--Whether We Like It or Not.
ERIC Educational Resources Information Center
Eddison, Betty
1997-01-01
Discusses outsourcing in the information industry. Highlights include information technology; special libraries; outsourcing in Australia; public librarians; benchmarking and quality control; online searching; and outsourcing as a threat to information professionals. (LRW)
2010-01-01
Changes to the glycosylation profile on HIV gp120 can influence viral pathogenesis and alter AIDS disease progression. The characterization of glycosylation differences at the sequence level is inadequate as the placement of carbohydrates is structurally complex. However, no structural framework is available to date for the study of HIV disease progression. In this study, we propose a novel machine-learning based framework for the prediction of AIDS disease progression in three stages (RP, SP, and LTNP) using the HIV structural gp120 profile. This new intelligent framework proves to be accurate and provides an important benchmark for predicting AIDS disease progression computationally. The model is trained using a novel HIV gp120 glycosylation structural profile to detect possible stages of AIDS disease progression for the target sequences of HIV+ individuals. The performance of the proposed model was compared to seven existing different machine-learning models on newly proposed gp120-Benchmark_1 dataset in terms of error-rate (MSE), accuracy (CCI), stability (STD), and complexity (TBM). The novel framework showed better predictive performance with 67.82% CCI, 30.21 MSE, 0.8 STD, and 2.62 TBM on the three stages of AIDS disease progression of 50 HIV+ individuals. This framework is an invaluable bioinformatics tool that will be useful to the clinical assessment of viral pathogenesis. PMID:21143806
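The abstract does not give implementation details, so the sketch below is only a generic illustration of training and scoring a three-class progression model (stages encoded 0, 1, 2) with synthetic features; it is not the authors' framework, dataset, or glycosylation profile.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-patient structural-profile features (20 features each).
X = rng.normal(size=(150, 20))
y = rng.integers(0, 3, size=150)          # 0 = RP, 1 = SP, 2 = LTNP (encoded stages)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("accuracy (CCI-like):", accuracy_score(y_te, pred))
print("MSE on encoded labels:", mean_squared_error(y_te, pred))
```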
Ellis, D W; Srigley, J
2016-01-01
Key quality parameters in diagnostic pathology include timeliness, accuracy, completeness, conformance with current agreed standards, consistency and clarity in communication. In this review, we argue that with worldwide developments in eHealth and big data, generally, there are two further, often overlooked, parameters if our reports are to be fit for purpose. Firstly, population-level studies have clearly demonstrated the value of providing timely structured reporting data in standardised electronic format as part of system-wide quality improvement programmes. Moreover, when combined with multiple health data sources through eHealth and data linkage, structured pathology reports become central to population-level quality monitoring, benchmarking, interventions and benefit analyses in public health management. Secondly, population-level studies, particularly for benchmarking, require a single agreed international and evidence-based standard to ensure interoperability and comparability. This has been taken for granted in tumour classification and staging for many years, yet international standardisation of cancer datasets is only now underway through the International Collaboration on Cancer Reporting (ICCR). In this review, we present evidence supporting the role of structured pathology reporting in quality improvement for both clinical care and population-level health management. Although this review of available evidence largely relates to structured reporting of cancer, it is clear that the same principles can be applied throughout anatomical pathology generally, as they are elsewhere in the health system.
NASA Technical Reports Server (NTRS)
Bell, Michael A.
1999-01-01
Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking, and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts a foundation and infrastructure has been built which allows short duration benchmarking studies yielding results gleaned from world class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.
Optimal orientation in flows: providing a benchmark for animal movement strategies.
McLaren, James D; Shamoun-Baranes, Judy; Dokter, Adriaan M; Klaassen, Raymond H G; Bouten, Willem
2014-10-06
Animal movements in air and water can be strongly affected by experienced flow. While various flow-orientation strategies have been proposed and observed, their performance in variable flow conditions remains unclear. We apply control theory to establish a benchmark for time-minimizing (optimal) orientation. We then define optimal orientation for movement in steady flow patterns and, using dynamic wind data, for short-distance mass movements of thrushes (Turdus sp.) and 6000 km non-stop migratory flights by great snipes, Gallinago media. Relative to the optimal benchmark, we assess the efficiency (travel speed) and reliability (success rate) of three generic orientation strategies: full compensation for lateral drift, vector orientation (single-heading movement) and goal orientation (continually heading towards the goal). Optimal orientation is characterized by detours to regions of high flow support, especially when flow speeds approach and exceed the animal's self-propelled speed. In strong predictable flow (short distance thrush flights), vector orientation adjusted to flow on departure is nearly optimal, whereas for unpredictable flow (inter-continental snipe flights), only goal orientation was near-optimally reliable and efficient. Optimal orientation provides a benchmark for assessing efficiency of responses to complex flow conditions, thereby offering insight into adaptive flow-orientation across taxa in the light of flow strength, predictability and navigation capacity.
Optimal orientation in flows: providing a benchmark for animal movement strategies
McLaren, James D.; Shamoun-Baranes, Judy; Dokter, Adriaan M.; Klaassen, Raymond H. G.; Bouten, Willem
2014-01-01
Animal movements in air and water can be strongly affected by experienced flow. While various flow-orientation strategies have been proposed and observed, their performance in variable flow conditions remains unclear. We apply control theory to establish a benchmark for time-minimizing (optimal) orientation. We then define optimal orientation for movement in steady flow patterns and, using dynamic wind data, for short-distance mass movements of thrushes (Turdus sp.) and 6000 km non-stop migratory flights by great snipes, Gallinago media. Relative to the optimal benchmark, we assess the efficiency (travel speed) and reliability (success rate) of three generic orientation strategies: full compensation for lateral drift, vector orientation (single-heading movement) and goal orientation (continually heading towards the goal). Optimal orientation is characterized by detours to regions of high flow support, especially when flow speeds approach and exceed the animal's self-propelled speed. In strong predictable flow (short distance thrush flights), vector orientation adjusted to flow on departure is nearly optimal, whereas for unpredictable flow (inter-continental snipe flights), only goal orientation was near-optimally reliable and efficient. Optimal orientation provides a benchmark for assessing efficiency of responses to complex flow conditions, thereby offering insight into adaptive flow-orientation across taxa in the light of flow strength, predictability and navigation capacity. PMID:25056213
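Two of the generic strategies named above, vector orientation toward the goal and full drift compensation, can be contrasted with a few lines of arithmetic. The sketch below computes the speed made good along the goal direction in a uniform flow; the numbers are illustrative, and the time-minimizing optimal strategy from the paper is not reproduced here.

```python
import numpy as np

def speed_toward_goal(airspeed, wind, goal_dir, strategy):
    """Speed made good along the goal direction under a uniform flow.

    airspeed -- self-propelled speed (scalar)
    wind     -- 2D flow vector
    goal_dir -- 2D vector pointing at the goal
    strategy -- "vector" (head straight at the goal, accept lateral drift) or
                "compensate" (offset the heading so lateral drift cancels)
    """
    g = np.asarray(goal_dir, dtype=float)
    g = g / np.linalg.norm(g)
    w_along = float(np.dot(wind, g))                 # tailwind (+) or headwind (-)
    w_cross = g[0] * wind[1] - g[1] * wind[0]        # lateral flow component

    if strategy == "vector":
        # Heading fixed on the goal; w_cross pushes the animal off track (ignored here).
        return airspeed + w_along
    if strategy == "compensate":
        if abs(w_cross) > airspeed:
            return float("nan")                      # flow too strong to compensate
        # Part of the airspeed cancels the drift; the remainder advances the track.
        return (airspeed**2 - w_cross**2) ** 0.5 + w_along
    raise ValueError(strategy)

wind = np.array([3.0, 4.0])                          # illustrative flow (m/s)
for s in ("vector", "compensate"):
    print(s, round(speed_toward_goal(10.0, wind, [1.0, 0.0], s), 2))
```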
Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator
NASA Astrophysics Data System (ADS)
Chitarin, G.; Agostinetti, P.; Gallo, A.; Marconato, N.; Nakano, H.; Serianni, G.; Takeiri, Y.; Tsumori, K.
2011-09-01
For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.
Radiation Coupling with the FUN3D Unstructured-Grid CFD Code
NASA Technical Reports Server (NTRS)
Wood, William A.
2012-01-01
The HARA radiation code is fully-coupled to the FUN3D unstructured-grid CFD code for the purpose of simulating high-energy hypersonic flows. The radiation energy source terms and surface heat transfer, under the tangent slab approximation, are included within the fluid dynamic flow solver. The Fire II flight test, at the Mach-31 1643-second trajectory point, is used as a demonstration case. Comparisons are made with an existing structured-grid capability, the LAURA/HARA coupling. The radiative surface heat transfer rates from the present approach match the benchmark values within 6%. Although radiation coupling is the focus of the present work, convective surface heat transfer rates are also reported, and are seen to vary depending upon the choice of mesh connectivity and FUN3D flux reconstruction algorithm. On a tetrahedral-element mesh the convective heating matches the benchmark at the stagnation point, but under-predicts by 15% on the Fire II shoulder. Conversely, on a mixed-element mesh the convective heating over-predicts at the stagnation point by 20%, but matches the benchmark away from the stagnation region.
Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Haoqiang; an Mey, Dieter; Hatay, Ferhat F.
2003-01-01
With the advent of parallel hardware and software technologies users are faced with the challenge to choose a programming paradigm best suited for the underlying computer architecture. With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors (SMP), parallel programming techniques have evolved to support parallelism beyond a single level. Which programming paradigm is the best will depend on the nature of the given problem, the hardware architecture, and the available software. In this study we will compare different programming paradigms for the parallelization of a selected benchmark application on a cluster of SMP nodes. We compare the timings of different implementations of the same CFD benchmark application employing the same numerical algorithm on a cluster of Sun Fire SMP nodes. The rest of the paper is structured as follows: In section 2 we briefly discuss the programming models under consideration. We describe our compute platform in section 3. The different implementations of our benchmark code are described in section 4 and the performance results are presented in section 5. We conclude our study in section 6.
Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chitarin, G.; University of Padova, Dept. of Management and Engineering, strad. S. Nicola, 36100 Vicenza; Agostinetti, P.
2011-09-26
For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.
Additive Manufacturing of Thermoplastic Matrix Composites Using Ultrasonics
NASA Astrophysics Data System (ADS)
Olson, Meghan
Advanced composite materials have great potential for facilitating energy efficient product design and their manufacture if improvements are made to current composite manufacturing processes. This thesis focuses on the development of a novel manufacturing process for thermoplastic composite structures entitled Laser-Ultrasonic Additive Manufacturing ('LUAM'), which is intended to combine the benefits of laser processing technology, developed by Automated Dynamics Inc., with ultrasonic bonding technology that is used commercially for unreinforced polymers. These technologies used together have the potential to significantly reduce the energy consumption and void content of thermoplastic composites made using Automated Fiber Placement (AFP). To develop LUAM in a methodical manner with minimal risk, a staged approach was devised whereby coupon-level mechanical testing and prototyping utilizing existing equipment was accomplished. Four key tasks have been identified for this effort: Benchmarking, Ultrasonic Compaction, Laser Assisted Ultrasonic Compaction, and Demonstration and Characterization of LUAM. This thesis specifically addresses Tasks 1 and 2, i.e. Benchmarking and Ultrasonic Compaction, respectively. Task 1, fabricating test specimens using two traditional processes (autoclave and thermal press) and testing structural performance and dimensional accuracy, provides the results of a benchmarking study against which the performance of all future phases will be gauged. Task 2, fabricating test specimens using a non-traditional process (ultrasonic compaction) and evaluating them in a similar fashion, explores the role of ultrasonic processing parameters using three different thermoplastic composite materials. Further development of LUAM, although beyond the scope of this thesis, will combine laser and ultrasonic technology and eventually demonstrate a working system.
Oncology Practice Trends From the National Practice Benchmark
Barr, Thomas R.; Towle, Elaine L.
2012-01-01
In 2011, we made predictions on the basis of data from the National Practice Benchmark (NPB) reports from 2005 through 2010. With the new 2011 data in hand, we have revised last year's predictions and projected for the next 3 years. In addition, we make some new predictions that will be tracked in future benchmarking surveys. We also outline a conceptual framework for contemplating these data based on an ecological model of the oncology delivery system. The 2011 NPB data are consistent with last year's prediction of a decrease in the operating margins necessary to sustain a community oncology practice. With the new data in, we now predict these reductions to occur more slowly than previously forecast. We note an ease to the squeeze observed in last year's trend analysis, which will allow more time for practices to adapt their business models for survival and offer the best of these practices an opportunity to invest earnings into operations to prepare for the inevitable shift away from historic payment methodology for clinical service. This year, survey respondents reported changes in business structure, first measured in the 2010 data, indicating an increase in the percentage of respondents who believe that change is coming soon, but the majority still have confidence in the viability of their existing business structure. Although oncology practices are in for a bumpy ride, things are looking less dire this year for practices participating in our survey. PMID:23277766
HDOCK: a web server for protein-protein and protein-DNA/RNA docking based on a hybrid strategy.
Yan, Yumeng; Zhang, Di; Zhou, Pei; Li, Botong; Huang, Sheng-You
2017-07-03
Protein-protein and protein-DNA/RNA interactions play a fundamental role in a variety of biological processes. Determining the complex structures of these interactions is valuable, in which molecular docking has played an important role. To automatically make use of the binding information from the PDB in docking, here we have presented HDOCK, a novel web server of our hybrid docking algorithm of template-based modeling and free docking, in which cases with misleading templates can be rescued by the free docking protocol. The server supports protein-protein and protein-DNA/RNA docking and accepts both sequence and structure inputs for proteins. The docking process is fast and consumes about 10-20 min for a docking run. Tested on the cases with weakly homologous complexes of <30% sequence identity from five docking benchmarks, the HDOCK pipeline tied with template-based modeling on the protein-protein and protein-DNA benchmarks and performed better than template-based modeling on the three protein-RNA benchmarks when the top 10 predictions were considered. The performance of HDOCK became better when more predictions were considered. Combining the results of HDOCK and template-based modeling by ranking first of the template-based model further improved the predictive power of the server. The HDOCK web server is available at http://hdock.phys.hust.edu.cn/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W. II; Tsao, C.L.
1996-06-01
This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. This report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. Also included are updates of benchmark values where appropriate, new benchmark values, the replacement of secondary sources with primary sources, and more complete documentation of the sources and derivation of all values.
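Screening against such benchmarks is typically a hazard-quotient comparison: divide the measured concentration by the benchmark and retain the chemical for further assessment when the quotient exceeds one. The sketch below illustrates this with placeholder numbers that are not taken from the report.

```python
def hazard_quotient(concentration_ug_per_L, benchmark_ug_per_L):
    """HQ = measured concentration / screening benchmark; HQ > 1 flags the chemical."""
    return concentration_ug_per_L / benchmark_ug_per_L

# Placeholder (measured, benchmark) pairs in ug/L; not values from the report.
detections = {"copper": (12.0, 9.0), "zinc": (45.0, 120.0)}
for chemical, (measured, benchmark) in detections.items():
    hq = hazard_quotient(measured, benchmark)
    flag = "retain for further assessment" if hq > 1.0 else "screen out"
    print(f"{chemical}: HQ = {hq:.2f} -> {flag}")
```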
Blurring out hydrogen: The dynamical structure of teflic acid
NASA Astrophysics Data System (ADS)
Herbers, S.; Obenchain, D. A.; Kraus, P.; Wachsmuth, D.; Grabow, J.-U.
2018-05-01
The microwave spectra of 10 teflic acid isotopologues were recorded in the frequency range of 3-25 GHz using supersonic jet-expansion Fourier transform microwave spectroscopy. Despite being asymmetric in its equilibrium structure, the delocalization of the hydrogen atom leads to a symmetric top vibrational ground state structure. In this work, we present the zero point structure obtained from the experimental rotational constants and an approach to determine the semi-experimental equilibrium structure aided by ab initio data. The Te-O bond length determined in the equilibrium structure is accurate to the picometer and can be used as a benchmark for computational methods treating relativistic effects.
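The structural information in such a study enters through the standard relation between rotational constants and principal moments of inertia; the relations below are textbook material, included only as context for how bond lengths follow from the measured constants, not a derivation from the paper.

```latex
% Rotational constants (frequency units) from the principal moments of inertia,
% with I_alpha built from the atomic masses m_i and their perpendicular
% distances d_{i,alpha} to the principal axis alpha:
A = \frac{h}{8\pi^{2} I_a}, \qquad
B = \frac{h}{8\pi^{2} I_b}, \qquad
C = \frac{h}{8\pi^{2} I_c},
\qquad I_\alpha = \sum_i m_i\, d_{i\alpha}^{2}.
% For a (vibrationally averaged) symmetric-top structure, B = C.
```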
Optimization of the Manufacturing Process of Conical Shell Structures Using Prepreg Laminates
NASA Astrophysics Data System (ADS)
Khakimova, Regina; Zimmermann, Rolf; Burau, Florian; Siebert, Marc; Arbelo, Mariano; Castro, Saullo; Degenhardt, Richard
2014-06-01
The design and manufacture of an unstiffened composite conical structure, which is a scaled-down version of the Ariane 5 Midlife Evolution Equipment Bay Structure, is presented. For such benchmarking structures the fiber orientation error is critical, so the manufacturing process becomes a major challenge. The paper therefore focuses on the implementation of a tailoring study and on the manufacturing process. The conical structure will be tested to validate a new design approach. This study contributes to the European Union (EU) project DESICOS, whose aim is to develop less conservative design guidelines for imperfection sensitive thin-walled structures.
Saul, Katherine R.; Hu, Xiao; Goehler, Craig M.; Vidt, Meghan E.; Daly, Melissa; Velisar, Anca; Murray, Wendy M.
2014-01-01
Several opensource or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated to what extent differences in technical implementations in popular simulation software environments result in differences in kinematic predictions for single and multijoint movements using EMG- and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms. PMID:24995410
Saul, Katherine R; Hu, Xiao; Goehler, Craig M; Vidt, Meghan E; Daly, Melissa; Velisar, Anca; Murray, Wendy M
2015-01-01
Several opensource or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated to what extent differences in technical implementations in popular simulation software environments result in differences in kinematic predictions for single and multijoint movements using EMG- and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms.
Benchmarking Outcomes in the Critically Injured Burn Patient
Klein, Matthew B.; Goverman, Jeremy; Hayden, Douglas L.; Fagan, Shawn P.; McDonald-Smith, Grace P.; Alexander, Andrew K.; Gamelli, Richard L.; Gibran, Nicole S.; Finnerty, Celeste C.; Jeschke, Marc G.; Arnoldo, Brett; Wispelwey, Bram; Mindrinos, Michael N.; Xiao, Wenzhong; Honari, Shari E.; Mason, Philip H.; Schoenfeld, David A.; Herndon, David N.; Tompkins, Ronald G.
2014-01-01
Objective To determine and compare outcomes with accepted benchmarks in burn care at six academic burn centers. Background Since the 1960s, U.S. morbidity and mortality rates have declined tremendously for burn patients, likely related to improvements in surgical and critical care treatment. We describe the baseline patient characteristics and well-defined outcomes for major burn injuries. Methods We followed 300 adults and 241 children from 2003–2009 through hospitalization using standard operating procedures developed at study onset. We created an extensive database on patient and injury characteristics, anatomic and physiological derangement, clinical treatment, and outcomes. These data were compared with existing benchmarks in burn care. Results Study patients were critically injured as demonstrated by mean %TBSA (41.2±18.3 for adults and 57.8±18.2 for children) and presence of inhalation injury in 38% of the adults and 54.8% of the children. Mortality in adults was 14.1% for those less than 55 years old and 38.5% for those age ≥55 years. Mortality in patients less than 17 years old was 7.9%. Overall, the multiple organ failure rate was 27%. When controlling for age and %TBSA, presence of inhalation injury was not significant. Conclusions This study provides the current benchmark for major burn patients. Mortality rates, notwithstanding significant % TBSA and presence of inhalation injury, have significantly declined compared to previous benchmarks. Modern day surgical and medically intensive management has markedly improved to the point where we can expect patients less than 55 years old with severe burn injuries and inhalation injury to survive these devastating conditions. PMID:24722222
Stratification of unresponsive patients by an independently validated index of brain complexity
Casarotto, Silvia; Comanducci, Angela; Rosanova, Mario; Sarasso, Simone; Fecchio, Matteo; Napolitani, Martino; Pigorini, Andrea; Casali, Adenauer G.; Trimarchi, Pietro D.; Boly, Melanie; Gosseries, Olivia; Bodart, Olivier; Curto, Francesco; Landi, Cristina; Mariotti, Maurizio; Devalle, Guya; Laureys, Steven; Tononi, Giulio
2016-01-01
Objective Validating objective, brain‐based indices of consciousness in behaviorally unresponsive patients represents a challenge due to the impossibility of obtaining independent evidence through subjective reports. Here we address this problem by first validating a promising metric of consciousness—the Perturbational Complexity Index (PCI)—in a benchmark population who could confirm the presence or absence of consciousness through subjective reports, and then applying the same index to patients with disorders of consciousness (DOCs). Methods The benchmark population encompassed 150 healthy controls and communicative brain‐injured subjects in various states of conscious wakefulness, disconnected consciousness, and unconsciousness. Receiver operating characteristic curve analysis was performed to define an optimal cutoff for discriminating between the conscious and unconscious conditions. This cutoff was then applied to a cohort of noncommunicative DOC patients (38 in a minimally conscious state [MCS] and 43 in a vegetative state [VS]). Results We found an empirical cutoff that discriminated with 100% sensitivity and specificity between the conscious and the unconscious conditions in the benchmark population. This cutoff resulted in a sensitivity of 94.7% in detecting MCS and allowed the identification of a number of unresponsive VS patients (9 of 43) with high values of PCI, overlapping with the distribution of the benchmark conscious condition. Interpretation Given its high sensitivity and specificity in the benchmark and MCS population, PCI offers a reliable, independently validated stratification of unresponsive patients that has important physiopathological and therapeutic implications. In particular, the high‐PCI subgroup of VS patients may retain a capacity for consciousness that is not expressed in behavior. Ann Neurol 2016;80:718–729 PMID:27717082
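The empirical-cutoff step can be illustrated with a generic ROC analysis on synthetic scores, as below; this is not the PCI data, and the threshold rule shown (maximizing Youden's J) is only one common choice.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
# Synthetic complexity-index values: 1 = conscious condition, 0 = unconscious condition.
labels = np.r_[np.ones(60), np.zeros(60)]
scores = np.r_[rng.normal(0.45, 0.05, 60), rng.normal(0.25, 0.05, 60)]

fpr, tpr, thresholds = roc_curve(labels, scores)
youden = tpr - fpr                                   # Youden's J statistic
best = np.argmax(youden)
print(f"cutoff = {thresholds[best]:.3f}, sensitivity = {tpr[best]:.2f}, "
      f"specificity = {1 - fpr[best]:.2f}")

# The cutoff would then be applied, unchanged, to an independent patient cohort.
```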
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Sterbentz, James W.; Snoj, Luka
PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.
ERIC Educational Resources Information Center
Eaneman, Paulette S.; And Others
These materials are part of the Project Benchmark series designed to teach secondary students about our legal concepts and systems. This unit focuses on the structure and procedures of the civil court systems. The materials outline common law heritage, kinds of cases, jurisdiction, civil pretrial procedure, trial procedure, and a sample automobile…
Benchmarking in emergency health systems.
Kennedy, Marcus P; Allen, Jacqueline; Allen, Greg
2002-12-01
This paper discusses the role of benchmarking as a component of quality management. It describes the historical background of benchmarking, its competitive origin and the requirement in today's health environment for a more collaborative approach. The classical 'functional and generic' types of benchmarking are discussed with a suggestion to adopt a different terminology that describes the purpose and practicalities of benchmarking. Benchmarking is not without risks. The consequence of inappropriate focus and the need for a balanced overview of process is explored. The competition that is intrinsic to benchmarking is questioned and the negative impact it may have on improvement strategies in poorly performing organizations is recognized. The difficulty in achieving cross-organizational validity in benchmarking is emphasized, as is the need to scrutinize benchmarking measures. The cost effectiveness of benchmarking projects is questioned and the concept of 'best value, best practice' in an environment of fixed resources is examined.
NASA Technical Reports Server (NTRS)
Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)
1993-01-01
A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
A stable partitioned FSI algorithm for incompressible flow and deforming beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, L., E-mail: lil19@rpi.edu; Henshaw, W.D., E-mail: henshw@rpi.edu; Banks, J.W., E-mail: banksj3@rpi.edu
2016-05-01
An added-mass partitioned (AMP) algorithm is described for solving fluid–structure interaction (FSI) problems coupling incompressible flows with thin elastic structures undergoing finite deformations. The new AMP scheme is fully second-order accurate and stable, without sub-time-step iterations, even for very light structures when added-mass effects are strong. The fluid, governed by the incompressible Navier–Stokes equations, is solved in velocity-pressure form using a fractional-step method; large deformations are treated with a mixed Eulerian-Lagrangian approach on deforming composite grids. The motion of the thin structure is governed by a generalized Euler–Bernoulli beam model, and these equations are solved in a Lagrangian frame using two approaches, one based on finite differences and the other on finite elements. The key AMP interface condition is a generalized Robin (mixed) condition on the fluid pressure. This condition, which is derived at a continuous level, has no adjustable parameters and is applied at the discrete level to couple the partitioned domain solvers. Special treatment of the AMP condition is required to couple the finite-element beam solver with the finite-difference-based fluid solver, and two coupling approaches are described. A normal-mode stability analysis is performed for a linearized model problem involving a beam separating two fluid domains, and it is shown that the AMP scheme is stable independent of the ratio of the mass of the fluid to that of the structure. A traditional partitioned (TP) scheme using a Dirichlet–Neumann coupling for the same model problem is shown to be unconditionally unstable if the added mass of the fluid is too large. A series of benchmark problems of increasing complexity are considered to illustrate the behavior of the AMP algorithm, and to compare the behavior with that of the TP scheme. The results of all these benchmark problems verify the stability and accuracy of the AMP scheme. Results for one benchmark problem modeling blood flow in a deforming artery are also compared with corresponding results available in the literature.
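For reference, a Robin (mixed) interface condition couples the value of a field and its normal derivative on the interface. A schematic form on the fluid pressure is sketched below; the coefficients here are generic placeholders, not the specific AMP coefficients derived in the paper.

```latex
% Schematic Robin (mixed) interface condition on the fluid pressure p along the
% fluid-structure interface \Gamma; a and b are generic coefficients and g collects
% structural inertia/forcing terms. The AMP paper derives the specific coefficients.
a\, p \;+\; b\, \frac{\partial p}{\partial n} \;=\; g \qquad \text{on } \Gamma .
```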
[Management of human resources, materials, and organization processes in radioprotection].
Coppola, V
1999-06-01
The radiologist must learn to face daily management responsibilities and therefore needs the relevant knowledge. Aside from the mechanisms of management accounting, which differ only slightly from similar analysis methods used in other centers, the managing radiologist (the person in charge) is directly responsible for planning, organizing, coordinating, and controlling radiation protection, a major discipline characterizing diagnostic imaging. We provide some practical management hints, keeping in mind that radiation protection must not be considered a simple (or annoying) technical task, but rather an extraordinarily positive element for the radiologist's cultural differentiation and professional identity. The managing radiologist can use the theory and practice of management techniques successfully applied in business, customizing them to the ethics and economics of health care. Meeting the users' needs must obviously prevail over balancing the budget from both a logical and an accounting viewpoint, since non-profit organizations are involved. In radiological practice, distinguishing the management of human resources from that of structural resources (direct funding is not presently available) permits the use of internal benchmarking for the former and controlled acquisition and planned replacement of technologies for the latter, obviously after evaluation of specific indicators and in accordance with the relevant laws and technical guidelines. Managing human resources means safeguarding the patient, the operator, and the population, which can be achieved or improved using benchmarking in a diagnostic imaging department. The references for best practice will be set out in tabular form based on the relevant laws and (inter)national guidelines. The physical-technical and bureaucratic-administrative factors involved will be considered as process indices to evaluate the gap from normal standards. Among the different elements involved in managing structural resources, the appropriate acquisition of a piece of radiological equipment is important from both a radiation protection and an economic viewpoint. In the acquisition process, the first and last steps (technology assessment and planned replacement, respectively) are particularly important for the radiologist and play a major role in global management. In both cases the radiologist must be able to draw up autonomous and objective working plans, also using evaluation algorithms.
Benchmarking and Performance Measurement.
ERIC Educational Resources Information Center
Town, J. Stephen
This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…
HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.
2015-05-01
This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.
Muhonen, J T; Laucht, A; Simmons, S; Dehollain, J P; Kalra, R; Hudson, F E; Freer, S; Itoh, K M; Jamieson, D N; McCallum, J C; Dzurak, A S; Morello, A
2015-04-22
Building upon the demonstration of coherent control and single-shot readout of the electron and nuclear spins of individual (31)P atoms in silicon, we present here a systematic experimental estimate of quantum gate fidelities using randomized benchmarking of 1-qubit gates in the Clifford group. We apply this analysis to the electron and the ionized (31)P nucleus of a single P donor in isotopically purified (28)Si. We find average gate fidelities of 99.95% for the electron and 99.99% for the nuclear spin. These values are above certain error correction thresholds and demonstrate the potential of donor-based quantum computing in silicon. By studying the influence of the shape and power of the control pulses, we find evidence that the present limitation to the gate fidelity is mostly related to the external hardware and not the intrinsic behaviour of the qubit.
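The standard randomized-benchmarking analysis can be sketched as below (with synthetic numbers, not the published measurements): the sequence fidelity is fit to an exponential decay F(m) = A p^m + B and the depolarizing parameter p is converted to an average gate fidelity 1 - (1 - p)(d - 1)/d, with d = 2 for a single qubit.

```python
# Illustrative randomized-benchmarking fit on synthetic data (not the measured qubit data).
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, p, B):
    # Standard RB decay model: sequence fidelity versus number of Clifford gates m
    return A * p**m + B

m = np.array([1, 2, 5, 10, 20, 50, 100, 200])                    # Clifford sequence lengths
F = 0.5 * 0.999**m + 0.5 + np.random.normal(0, 0.002, m.size)    # synthetic sequence fidelities

(A, p, B), _ = curve_fit(decay, m, F, p0=[0.5, 0.99, 0.5])
d = 2                                                            # single-qubit Hilbert-space dimension
avg_gate_fidelity = 1 - (1 - p) * (d - 1) / d
print(f"p = {p:.5f}, average Clifford gate fidelity = {avg_gate_fidelity:.5f}")
```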
Bench-marking effects in the blaming of professionals for incidents of aggression and assault.
Carifio, J; Lanza, M
1994-01-01
This study compared all possible orders of responding to three vignettes describing incidents between a male patient and a female nurse in which the nurse is mildly assaulted, severely assaulted, or verbally abused by the patient (the control condition). Subjects were 32 female senior-year nursing students and 28 practicing nurses. It was found that response levels to a given vignette could predict a respondent's response to the other vignettes. Also, a significant "bench-marking" effect was found: if a subject responded to the mild assault vignette first, the subject's overall response pattern best fit the general nonlinear assignment-of-blame pattern observed, but if the subject responded to the severe assault or control vignette first, this vignette set a bench mark for responding from which the subject's subsequent responses did not deviate greatly, which slightly distorted the subject's V-shaped nonlinear response pattern.
Modification and benchmarking of SKYSHINE-III for use with ISFSI cask arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hertel, N.E.; Napolitano, D.G.
1997-12-01
Dry cask storage arrays are becoming more and more common at nuclear power plants in the United States. Title 10 of the Code of Federal Regulations, Part 72, limits doses at the controlled area boundary of these independent spent-fuel storage installations (ISFSI) to 0.25 mSv (25 mrem)/yr. The minimum controlled area boundaries of such a facility are determined by cask array dose calculations, which include direct radiation and radiation scattered by the atmosphere, also known as skyshine. NAC International (NAC) uses SKYSHINE-III to calculate the gamma-ray and neutron dose rates as a function of distance from ISFSI arrays. In this paper, we present modifications to the SKYSHINE-III that more explicitly model cask arrays. In addition, we have benchmarked the radiation transport methods used in SKYSHINE-III against (60)Co gamma-ray experiments and MCNP neutron calculations.
NASA Technical Reports Server (NTRS)
Lockard, David P.
2011-01-01
Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and experimental data from 2 facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details which would be necessary to compute the noise remains challenging. In particular, how to best simulate the effects of the experimental transition strip, and the associated high Reynolds number effects, was unclear. Furthermore, capturing the spanwise variation proved difficult.
Web-client based distributed generalization and geoprocessing
Wolf, E.B.; Howe, K.
2009-01-01
Generalization and geoprocessing operations on geospatial information were once the domain of complex software running on high-performance workstations. Currently, these computationally intensive processes are the domain of desktop applications. Recent efforts have been made to move geoprocessing operations server-side in a distributed, web accessible environment. This paper initiates research into portable client-side generalization and geoprocessing operations as part of a larger effort in user-centered design for the US Geological Survey's The National Map. An implementation of the Ramer-Douglas-Peucker (RDP) line simplification algorithm was created in the open source OpenLayers geoweb client. This algorithm implementation was benchmarked using differing data structures and browser platforms. The implementation and results of the benchmarks are discussed in the general context of client-side geoprocessing.
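The Ramer-Douglas-Peucker simplification mentioned above can be written compactly; the recursive sketch below is a generic Python version for illustration (the study's implementation was JavaScript inside the OpenLayers client).

```python
# Generic Ramer-Douglas-Peucker line simplification (illustrative; the study's
# version was implemented in JavaScript within the OpenLayers client).
import math

def perpendicular_distance(pt, start, end):
    (x, y), (x1, y1), (x2, y2) = pt, start, end
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    # Distance from pt to the infinite line through start and end
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def rdp(points, epsilon):
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the chord joining the endpoints
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        # Keep the far point and recurse on both halves
        left = rdp(points[: index + 1], epsilon)
        right = rdp(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

print(rdp([(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)], 1.0))
```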
Using 171,173Yb(d,p) to benchmark a surrogate reaction for neutron capture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hatarik, R; Bernstein, L; Burke, J
2008-08-08
Neutron capture cross sections on unstable nuclei are important for many applications in nuclear structure and astrophysics. Measuring these cross sections directly is a major challenge and often impossible. An indirect approach for measuring these cross sections is the surrogate reaction method, which makes it possible to relate the desired cross section to a cross section of an alternate reaction that proceeds through the same compound nucleus. To benchmark the validity of using the (d,pγ) reaction as a surrogate for (n,γ), the (171,173)Yb(d,pγ) reactions were measured with the goal to reproduce the known [1] neutron capture cross section ratios of these nuclei.
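Schematically, the surrogate method factorizes the desired capture cross section into a compound-nucleus formation term and a gamma-decay probability inferred from the surrogate measurement; a commonly used Weisskopf-Ewing-like form is sketched below, with symbols as generic placeholders rather than the paper's exact notation.

```latex
% Schematic Weisskopf-Ewing form of the surrogate method: the (n,\gamma) cross section
% is approximated by the compound-nucleus formation cross section times the gamma-decay
% probability, the latter extracted from the surrogate (d,p\gamma) measurement.
\sigma_{(n,\gamma)}(E_n) \;\approx\; \sigma^{\mathrm{CN}}_{n}(E_n)\; P^{\mathrm{CN}}_{\gamma}(E^{*}),
\qquad
P^{\mathrm{CN}}_{\gamma}(E^{*}) \;\approx\;
\frac{N_{(d,p\gamma)}(E^{*})}{N_{(d,p)}(E^{*})\;\epsilon_{\gamma}} .
```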
Ulmer, Candice Z; Ragland, Jared M; Koelmel, Jeremy P; Heckert, Alan; Jones, Christina M; Garrett, Timothy J; Yost, Richard A; Bowden, John A
2017-12-19
As advances in analytical separation techniques, mass spectrometry instrumentation, and data processing platforms continue to spur growth in the lipidomics field, more structurally unique lipid species are detected and annotated. The lipidomics community is in need of benchmark reference values to assess the validity of various lipidomics workflows in providing accurate quantitative measurements across the diverse lipidome. LipidQC addresses the harmonization challenge in lipid quantitation by providing a semiautomated process, independent of analytical platform, for visual comparison of experimental results of National Institute of Standards and Technology Standard Reference Material (SRM) 1950, "Metabolites in Frozen Human Plasma", against benchmark consensus mean concentrations derived from the NIST Lipidomics Interlaboratory Comparison Exercise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W., II
1993-01-01
One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
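The screening decision described above reduces to comparing each measured ambient concentration against a lower and an upper benchmark; a minimal sketch of that logic is given below, with hypothetical benchmark values rather than numbers from the report's tables.

```python
# Minimal contaminant-screening sketch; benchmark values are hypothetical,
# not taken from the report's tables.
benchmarks = {
    # chemical: (lower_screening_benchmark, upper_screening_benchmark) in mg/L
    "chemical_A": (0.005, 0.05),
    "chemical_B": (0.010, 0.20),
}

def screen(chemical, ambient_conc):
    lower, upper = benchmarks[chemical]
    if ambient_conc > upper:
        return "contaminant of concern (upper benchmark exceeded)"
    if ambient_conc > lower:
        return "potential concern (lower benchmark exceeded; review data quality)"
    return "not of concern (below lower benchmark, given adequate data)"

for chem, conc in [("chemical_A", 0.08), ("chemical_B", 0.015), ("chemical_A", 0.001)]:
    print(chem, conc, "->", screen(chem, conc))
```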
The KMAT: Benchmarking Knowledge Management.
ERIC Educational Resources Information Center
de Jager, Martha
Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…
Mapping monomeric threading to protein-protein structure prediction.
Guerler, Aysam; Govindarajoo, Brandon; Zhang, Yang
2013-03-25
The key step of template-based protein-protein structure prediction is the recognition of complexes from experimental structure libraries that have similar quaternary fold. Maintaining two monomer and dimer structure libraries is however laborious, and inappropriate library construction can degrade template recognition coverage. We propose a novel strategy, SPRING, to identify complexes by mapping monomeric threading alignments to protein-protein interactions based on the original oligomer entries in the PDB, which does not rely on library construction and increases the efficiency and quality of complex template recognition. SPRING is tested on 1838 nonhomologous protein complexes; it recognizes correct quaternary template structures with a TM-score >0.5 in 1115 cases after excluding homologous proteins. The average TM-score of the first model is 60% and 17% higher than that of HHsearch and COTH, respectively, while the number of targets with an interface RMSD <2.5 Å by SPRING is 134% and 167% higher than these competing methods. SPRING is compared with ZDOCK on 77 docking benchmark proteins. Although the relative performance of SPRING and ZDOCK depends on the level of homology filters, a combination of the two methods can result in a significantly higher model quality than ZDOCK at all homology thresholds. These data demonstrate a new efficient approach to quaternary structure recognition that is ready to use for genome-scale modeling of protein-protein interactions due to its high speed and accuracy.
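For reference, the TM-score used above to assess quaternary template quality is commonly defined as follows, where L_target is the target length, L_ali the number of aligned residue pairs, d_i the distance between aligned residues, and d_0 a length-dependent normalization.

```latex
% Standard TM-score definition, as commonly used for structure comparison;
% the paper applies it to quaternary (complex) templates.
\mathrm{TM\text{-}score} \;=\;
\frac{1}{L_{\mathrm{target}}}
\sum_{i=1}^{L_{\mathrm{ali}}}
\frac{1}{1 + \left( d_i / d_0 \right)^2},
\qquad
d_0 \;=\; 1.24\,\sqrt[3]{L_{\mathrm{target}} - 15} \;-\; 1.8 .
```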
Modeling the atomistic growth behavior of gold nanoparticles in solution
NASA Astrophysics Data System (ADS)
Turner, C. Heath; Lei, Yu; Bao, Yuping
2016-04-01
The properties of gold nanoparticles strongly depend on their three-dimensional atomic structure, leading to an increased emphasis on controlling and predicting nanoparticle structural evolution during the synthesis process. In order to provide this atomistic-level insight and establish a link to the experimentally-observed growth behavior, a kinetic Monte Carlo simulation (KMC) approach is developed for capturing Au nanoparticle growth characteristics. The advantage of this approach is that, compared to traditional molecular dynamics simulations, the atomistic nanoparticle structural evolution can be tracked on time scales that approach the actual experiments. This has enabled several different comparisons against experimental benchmarks, and it has helped transition the KMC simulations from a hypothetical toy model into a more experimentally-relevant test-bed. The model is initially parameterized by performing a series of automated comparisons of Au nanoparticle growth curves versus the experimental observations, and then the refined model allows for detailed structural analysis of the nanoparticle growth behavior. Although the Au nanoparticles are roughly spherical, the maximum/minimum dimensions deviate from the average by approximately 12.5%, which is consistent with the corresponding experiments. Also, a surface texture analysis highlights the changes in the surface structure as a function of time. While the nanoparticles show similar surface structures throughout the growth process, there can be some significant differences during the initial growth at different synthesis conditions.
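A kinetic Monte Carlo step of the kind referred to above selects the next event with probability proportional to its rate and advances time by an exponentially distributed increment. The generic Gillespie-style sketch below uses made-up event names and rates for illustration, not the parameterized Au growth rates of the study.

```python
# Generic rejection-free KMC (Gillespie-style) loop; the event list and rates are
# invented for illustration and are not the parameterized Au growth rates of the study.
import numpy as np

rng = np.random.default_rng(0)
events = ["adatom_attachment", "surface_diffusion", "detachment"]
rates  = np.array([5.0, 20.0, 0.5])          # events per unit time (illustrative)

t = 0.0
for _ in range(5):
    total = rates.sum()
    # Choose the event with probability proportional to its rate
    event = rng.choice(len(events), p=rates / total)
    # Advance the clock by an exponentially distributed waiting time
    t += rng.exponential(1.0 / total)
    print(f"t = {t:.4f}: {events[event]}")
```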
Benchmarking of surgical complications in gynaecological oncology: prospective multicentre study.
Burnell, M; Iyer, R; Gentry-Maharaj, A; Nordin, A; Liston, R; Manchanda, R; Das, N; Gornall, R; Beardmore-Gray, A; Hillaby, K; Leeson, S; Linder, A; Lopes, A; Meechan, D; Mould, T; Nevin, J; Olaitan, A; Rufford, B; Shanbhag, S; Thackeray, A; Wood, N; Reynolds, K; Ryan, A; Menon, U
2016-12-01
To explore the impact of risk-adjustment on surgical complication rates (CRs) for benchmarking gynaecological oncology centres. Prospective cohort study. Ten UK accredited gynaecological oncology centres. Women undergoing major surgery on a gynaecological oncology operating list. Patient co-morbidity, surgical procedures and intra-operative (IntraOp) complications were recorded contemporaneously by surgeons for 2948 major surgical procedures. Postoperative (PostOp) complications were collected from hospitals and patients. Risk-prediction models for IntraOp and PostOp complications were created using penalised (lasso) logistic regression using over 30 potential patient/surgical risk factors. Observed and risk-adjusted IntraOp and PostOp CRs for individual hospitals were calculated. Benchmarking using colour-coded funnel plots and observed-to-expected ratios was undertaken. Overall, IntraOp CR was 4.7% (95% CI 4.0-5.6) and PostOp CR was 25.7% (95% CI 23.7-28.2). The observed CRs for all hospitals were under the upper 95% control limit for both IntraOp and PostOp funnel plots. Risk-adjustment and use of observed-to-expected ratio resulted in one hospital moving to the >95-98% CI (red) band for IntraOp CRs. Use of only hospital-reported data for PostOp CRs would have resulted in one hospital being unfairly allocated to the red band. There was little concordance between IntraOp and PostOp CRs. The funnel plots and overall IntraOp (≈5%) and PostOp (≈26%) CRs could be used for benchmarking gynaecological oncology centres. Hospital benchmarking using risk-adjusted CRs allows fairer institutional comparison. IntraOp and PostOp CRs are best assessed separately. As hospital under-reporting is common for postoperative complications, use of patient-reported outcomes is important. Risk-adjusted benchmarking of surgical complications for ten UK gynaecological oncology centres allows fairer comparison. © 2016 Royal College of Obstetricians and Gynaecologists.
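The benchmarking step above can be summarized numerically: each centre's observed complication count is divided by the expectation from the risk model, and funnel-plot control limits are drawn around the overall rate as a function of caseload. The sketch below uses invented counts and a simple normal-approximation limit, not the study's penalised-regression expectations.

```python
# Observed-to-expected (O/E) ratios with approximate funnel-plot limits.
# Counts and expected values are invented; the study derived expectations from
# penalised (lasso) logistic regression risk models.
import numpy as np

observed = np.array([12, 30, 8, 41])          # complications per centre
expected = np.array([15.2, 26.8, 10.1, 33.5]) # model-based expected complications
n_cases  = np.array([300, 610, 220, 790])

oe_ratio = observed / expected
p_overall = observed.sum() / n_cases.sum()

# Approximate 95% control limits for a proportion-based funnel plot
se = np.sqrt(p_overall * (1 - p_overall) / n_cases)
upper95 = p_overall + 1.96 * se

for i, oe in enumerate(oe_ratio):
    rate = observed[i] / n_cases[i]
    flag = "above upper 95% limit" if rate > upper95[i] else "within limits"
    print(f"centre {i}: O/E = {oe:.2f}, crude rate = {rate:.3f} ({flag})")
```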
On the Helix Propensity in Generalized Born Solvent Descriptions of Modeling the Dark Proteome
2017-01-10
benchmarks of conformational sampling methods and their all-atom force fields plus solvent descriptions to accurately model structural transitions on a...atom simulations of proteins is the replacement of explicit water interactions with a continuum description of treating implicitly the bulk physical... structure was reported by Amarasinghe and coworkers (Leung et al., 2015) of the Ebola nucleoprotein NP in complex with a 28-residue peptide extracted
NASA Technical Reports Server (NTRS)
Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.
1991-01-01
A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification-all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
2-D Circulation Control Airfoil Benchmark Experiments Intended for CFD Code Validation
NASA Technical Reports Server (NTRS)
Englar, Robert J.; Jones, Gregory S.; Allan, Brian G.; Lin, John C.
2009-01-01
A current NASA Research Announcement (NRA) project being conducted by Georgia Tech Research Institute (GTRI) personnel and NASA collaborators includes the development of Circulation Control (CC) blown airfoils to improve subsonic aircraft high-lift and cruise performance. The emphasis of this program is the development of CC active flow control concepts for high-lift augmentation, drag control, and cruise efficiency. A collaboration in this project includes work by NASA research engineers, whereas CFD validation and flow physics experimental research are part of NASA's systematic approach to developing design and optimization tools for CC applications to fixed-wing aircraft. The design space for CESTOL type aircraft is focusing on geometries that depend on advanced flow control technologies that include Circulation Control aerodynamics. The ability to consistently predict advanced aircraft performance requires improvements in design tools to include these advanced concepts. Validation of these tools will be based on experimental methods applied to complex flows that go beyond conventional aircraft modeling techniques. This paper focuses on recent/ongoing benchmark high-lift experiments and CFD efforts intended to provide 2-D CFD validation data sets related to NASA's Cruise Efficient Short Take Off and Landing (CESTOL) study. Both the experimental data and related CFD predictions are discussed.
Using Toyota's A3 Thinking for Analyzing MBA Business Cases
ERIC Educational Resources Information Center
Anderson, Joe S.; Morgan, James N.; Williams, Susan K.
2011-01-01
A3 Thinking is fundamental to Toyota's benchmark management philosophy and to their lean production system. It is used to solve problems, gain agreement, mentor team members, and lead organizational improvements. A structured problem-solving approach, A3 Thinking builds improvement opportunities through experience. We used "The Toyota…
Object Recognition Memory and the Rodent Hippocampus
ERIC Educational Resources Information Center
Broadbent, Nicola J.; Gaskin, Stephane; Squire, Larry R.; Clark, Robert E.
2010-01-01
In rodents, the novel object recognition task (NOR) has become a benchmark task for assessing recognition memory. Yet, despite its widespread use, a consensus has not developed about which brain structures are important for task performance. We assessed both the anterograde and retrograde effects of hippocampal lesions on performance in the NOR…
Engineering Education as a Complex System
ERIC Educational Resources Information Center
Gattie, David K.; Kellam, Nadia N.; Schramski, John R.; Walther, Joachim
2011-01-01
This paper presents a theoretical basis for cultivating engineering education as a complex system that will prepare students to think critically and make decisions with regard to poorly understood, ill-structured issues. Integral to this theoretical basis is a solution space construct developed and presented as a benchmark for evaluating…
Selecting Peer Institutions with IPEDS and Other Nationally Available Data
ERIC Educational Resources Information Center
Carrigan, Sarah D.
2012-01-01
The process of identifying and selecting peers for a college or university is one of this volume's definitions for "benchmarking": "a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to identify opportunities for…
Solving Boltzmann and Fokker-Planck Equations Using Sparse Representation
2011-05-31
material science. We have computed the electronic structure of 2D quantum dot system, and compared the efficiency with the benchmark software OCTOPUS. For...one self-consistent iteration step with 512 electrons, OCTOPUS costs 1091 sec, and selected inversion costs 9.76 sec. The algorithm exhibits
Cascades/Aleutian Play Fairway Analysis: Data and Map Files
Lisa Shevenell
2015-11-15
Contains Excel data files used to quantifiably rank the geothermal potential of each of the young volcanic centers of the Cascade and Aleutian Arcs using world power production volcanic centers as benchmarks. Also contains shapefiles used in play fairway analysis with power plant, volcano, geochemistry and structural data.
7 CFR 1717.1204 - Policies and conditions applicable to settlements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... and action plans by the members to change their operations, management, and organizational structure... to meet its financial obligations will be based on analyses and documentation by RUS of the borrower... based on comparisons with benchmark electric utilities; and (H) The accuracy and completeness of the...
Test Cases for the Benchmark Active Controls: Spoiler and Control Surface Oscillations and Flutter
NASA Technical Reports Server (NTRS)
Bennett, Robert M.; Scott, Robert C.; Wieseman, Carol D.
2000-01-01
As a portion of the Benchmark Models Program at NASA Langley, a simple generic model was developed for active controls research and was called BACT for Benchmark Active Controls Technology model. This model was based on the previously-tested Benchmark Models rectangular wing with the NACA 0012 airfoil section that was mounted on the Pitch and Plunge Apparatus (PAPA) for flutter testing. The BACT model had an upper surface spoiler, a lower surface spoiler, and a trailing edge control surface for use in flutter suppression and dynamic response excitation. Previous experience with flutter suppression indicated a need for measured control surface aerodynamics for accurate control law design. Three different types of flutter instability boundaries had also been determined for the NACA 0012/PAPA model, a classical flutter boundary, a transonic stall flutter boundary at angle of attack, and a plunge instability near M = 0.9. Therefore an extensive set of steady and control surface oscillation data was generated spanning the range of the three types of instabilities. This information was subsequently used to design control laws to suppress each flutter instability. There have been three tests of the BACT model. The objective of the first test, TDT Test 485, was to generate a data set of steady and unsteady control surface effectiveness data, and to determine the open loop dynamic characteristics of the control systems including the actuators. Unsteady pressures, loads, and transfer functions were measured. The other two tests, TDT Test 502 and TDT Test 518, were primarily oriented towards active controls research, but some data supplementary to the first test were obtained. Dynamic response of the flexible system to control surface excitation and open loop flutter characteristics were determined during Test 502. Loads were not measured during the last two tests. During these tests, a database of over 3000 data sets was obtained. A reasonably extensive subset of the data sets from the first two tests have been chosen for Test Cases for computational comparisons concentrating on static conditions and cases with harmonically oscillating control surfaces. Several flutter Test Cases from both tests have also been included. Some aerodynamic comparisons with the BACT data have been made using computational fluid dynamics codes at the Navier-Stokes level (and in the accompanying chapter SC). Some mechanical and active control studies have been presented. In this report several Test Cases are selected to illustrate trends for a variety of different conditions with emphasis on transonic flow effects. Cases for static angles of attack, static trailing-edge and upper-surface spoiler deflections are included for a range of conditions near those for the oscillation cases. Cases for trailing-edge control and upper-surface spoiler oscillations for a range of Mach numbers, angle of attack, and static control deflections are included. Cases for all three types of flutter instability are selected. In addition some cases are included for dynamic response measurements during forced oscillations of the controls on the flexible mount. An overview of the model and tests is given, and the standard formulary for these data is listed. Some sample data and sample results of calculations are presented. Only the static pressures and the first harmonic real and imaginary parts of the pressures are included in the data for the Test Cases, but digitized time histories have been archived.
The data for the Test Cases are also available as separate electronic files.
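Since the Test Case data retain only the static pressures and the first-harmonic real and imaginary parts, a typical reduction of an archived time history looks like the sketch below; the signal is synthetic and the variable names are illustrative, not the BACT database fields.

```python
# Reducing a pressure time history to its mean and first-harmonic real/imaginary
# parts at the control-surface oscillation frequency. Synthetic signal for
# illustration; names do not correspond to the BACT database fields.
import numpy as np

fs = 1000.0                      # sample rate, Hz
f_osc = 5.0                      # control-surface oscillation frequency, Hz
t = np.arange(0, 2.0, 1.0 / fs)  # two seconds = 10 full oscillation periods
p = 0.3 + 0.12 * np.cos(2 * np.pi * f_osc * t) + 0.05 * np.sin(2 * np.pi * f_osc * t)

static = p.mean()
# Complex first-harmonic coefficient via a single-frequency Fourier projection
c1 = 2.0 / len(p) * np.sum(p * np.exp(-2j * np.pi * f_osc * t))
print(f"static = {static:.3f}, first harmonic: real = {c1.real:.3f}, imag = {c1.imag:.3f}")
```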
Assessing validity of observational intervention studies – the Benchmarking Controlled Trials
Malmivaara, Antti
2016-01-01
Background: Benchmarking Controlled Trial (BCT) is a concept which covers all observational studies aiming to assess the impact of interventions or health care system features on patients and populations. Aims: To create and pilot test a checklist for appraising the methodological validity of a BCT. Methods: The checklist was created by extracting the most essential elements from the comprehensive set of criteria in the previous paper on BCTs. Checklists and scientific papers on observational studies and respective systematic reviews were also utilized. Ten BCTs published in the Lancet and in the New England Journal of Medicine were used to assess the feasibility of the created checklist. Results: The appraised studies seem to have several methodological limitations, some of which could be avoided in the planning, conducting and reporting phases of the studies. Conclusions: The checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies. However, the piloted checklist should be validated in further studies. Key messages: Benchmarking Controlled Trial (BCT) is a concept which covers all observational studies aiming to assess the impact of interventions or health care system features on patients and populations. This paper presents a checklist for appraising the methodological validity of BCTs and pilot-tests the checklist with ten BCTs published in leading medical journals. The appraised studies seem to have several methodological limitations, some of which could be avoided in the planning, conducting and reporting phases of the studies. The checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies. PMID:27238631
42 CFR 440.335 - Benchmark-equivalent health benefits coverage.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...
42 CFR 440.335 - Benchmark-equivalent health benefits coverage.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...
Standley, Daron M; Toh, Hiroyuki; Nakamura, Haruki
2008-09-01
A method to functionally annotate structural genomics targets, based on a novel structural alignment scoring function, is proposed. In the proposed score, position-specific scoring matrices are used to weight structurally aligned residue pairs to highlight evolutionarily conserved motifs. The functional form of the score is first optimized for discriminating domains belonging to the same Pfam family from domains belonging to different families but the same CATH or SCOP superfamily. In the optimization stage, we consider four standard weighting functions as well as our own, the "maximum substitution probability," and combinations of these functions. The optimized score achieves an area of 0.87 under the receiver-operating characteristic curve with respect to identifying Pfam families within a sequence-unique benchmark set of domain pairs. Confidence measures are then derived from the benchmark distribution of true-positive scores. The alignment method is next applied to the task of functionally annotating 230 query proteins released to the public as part of the Protein 3000 structural genomics project in Japan. Of these queries, 78 were found to align to templates with the same Pfam family as the query or had sequence identities > or = 30%. Another 49 queries were found to match more distantly related templates. Within this group, the template predicted by our method to be the closest functional relative was often not the most structurally similar. Several nontrivial cases are discussed in detail. Finally, 103 queries matched templates at the fold level, but not the family or superfamily level, and remain functionally uncharacterized. 2008 Wiley-Liss, Inc.
Code of Federal Regulations, 2014 CFR
2014-10-01
... adjustments made pursuant to the benchmark standards described in § 156.110 of this subchapter. Benefit design... this subchapter. Enrollee satisfaction survey vendor means an organization that has relevant survey administration experience (for example, CAHPS® surveys), organizational survey capacity, and quality control...
42 CFR 440.330 - Benchmark health benefits coverage.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...
Valente, Thomas W; Chou, Chich Ping; Pentz, Mary Ann
2007-05-01
We examined the effect of community coalition network structure on the effectiveness of an intervention designed to accelerate the adoption of evidence-based substance abuse prevention programs. At baseline, 24 cities were matched and randomly assigned to 3 conditions (control, satellite TV training, and training plus technical assistance). We surveyed 415 community leaders at baseline and 406 at 18-month follow-up about their attitudes and practices toward substance abuse prevention programs. Network structure was measured by asking leaders whom in their coalition they turned to for advice about prevention programs. The outcome was a scale with 4 subscales: coalition function, planning, achievement of benchmarks, and progress in prevention activities. We used multiple linear regression and path analysis to test hypotheses. Intervention had a significant effect on decreasing the density of coalition networks. The change in density subsequently increased adoption of evidence-based practices. Optimal community network structures for the adoption of public health programs are unknown, but it should not be assumed that increasing network density or centralization are appropriate goals. Lower-density networks may be more efficient for organizing evidence-based prevention programs in communities.
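The density measure discussed above is simply the fraction of possible advice ties that are present in the coalition network; with networkx it can be computed directly, as sketched below with an invented edge list.

```python
# Coalition advice-network density; the tiny edge list is invented for illustration.
import networkx as nx

# Directed "whom do you turn to for advice" ties among five coalition leaders
G = nx.DiGraph([("A", "B"), ("A", "C"), ("B", "C"), ("D", "C"), ("E", "A")])

density = nx.density(G)          # existing ties divided by possible ties, n*(n-1) for a digraph
print(f"advice-network density = {density:.2f}")
```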
Reconstruction of network topology using status-time-series data
NASA Astrophysics Data System (ADS)
Pandey, Pradumn Kumar; Badarla, Venkataramana
2018-01-01
Uncovering the heterogeneous connection pattern of a networked system from the available status-time-series (STS) data of a dynamical process on the network is of great interest in network science and known as a reverse engineering problem. Dynamical processes on a network are affected by the structure of the network. The dependency between the diffusion dynamics and structure of the network can be utilized to retrieve the connection pattern from the diffusion data. Information of the network structure can help to devise the control of dynamics on the network. In this paper, we consider the problem of network reconstruction from the available status-time-series (STS) data using matrix analysis. The proposed method of network reconstruction from the STS data is tested successfully under susceptible-infected-susceptible (SIS) diffusion dynamics on real-world and computer-generated benchmark networks. High accuracy and efficiency of the proposed reconstruction procedure from the status-time-series data define the novelty of the method. Our proposed method outperforms compressed sensing theory (CST) based method of network reconstruction using STS data. Further, the same procedure of network reconstruction is applied to the weighted networks. The ordering of the edges in the weighted networks is identified with high accuracy.
Efficient Online Learning Algorithms Based on LSTM Neural Networks.
Ergen, Tolga; Kozat, Suleyman Serdar
2017-09-13
We investigate online nonlinear regression and introduce novel regression structures based on the long short term memory (LSTM) networks. For the introduced structures, we also provide highly efficient and effective online training methods. To train these novel LSTM-based structures, we put the underlying architecture in a state space form and introduce highly efficient and effective particle filtering (PF)-based updates. We also provide stochastic gradient descent and extended Kalman filter-based updates. Our PF-based training method guarantees convergence to the optimal parameter estimation in the mean square error sense provided that we have a sufficient number of particles and satisfy certain technical conditions. More importantly, we achieve this performance with a computational complexity in the order of the first-order gradient-based methods by controlling the number of particles. Since our approach is generic, we also introduce a gated recurrent unit (GRU)-based approach by directly replacing the LSTM architecture with the GRU architecture, where we demonstrate the superiority of our LSTM-based approach in the sequential prediction task via different real life data sets. In addition, the experimental results illustrate significant performance improvements achieved by the introduced algorithms with respect to the conventional methods over several different benchmark real life data sets.
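As a point of reference for the gradient-based baseline mentioned above, a minimal online LSTM regressor with per-sample stochastic gradient descent updates can be sketched in PyTorch as follows; this illustrates only the plain SGD variant, not the particle-filter or extended Kalman filter updates proposed in the paper, and the data stream is synthetic.

```python
# Minimal online LSTM regression with per-sample SGD updates (PyTorch).
# Sketches only the plain gradient-based baseline, not the particle-filter or
# extended-Kalman-filter training proposed in the paper.
import torch
import torch.nn as nn

class OnlineLSTMRegressor(nn.Module):
    def __init__(self, n_features, hidden=16):
        super().__init__()
        self.cell = nn.LSTMCell(n_features, hidden)
        self.readout = nn.Linear(hidden, 1)

    def forward(self, x, state):
        h, c = self.cell(x, state)
        return self.readout(h), (h, c)

torch.manual_seed(0)
model = OnlineLSTMRegressor(n_features=3)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
state = (torch.zeros(1, 16), torch.zeros(1, 16))

for t in range(100):
    x_t = torch.randn(1, 3)                  # incoming feature vector
    d_t = x_t.sum(dim=1, keepdim=True)       # synthetic regression target
    y_t, state = model(x_t, state)
    loss = (y_t - d_t).pow(2).mean()         # instantaneous squared error
    opt.zero_grad()
    loss.backward()
    opt.step()
    state = (state[0].detach(), state[1].detach())   # truncate the gradient history
```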
Stochastic metallic-glass cellular structures exhibiting benchmark strength.
Demetriou, Marios D; Veazey, Chris; Harmon, John S; Schramm, Joseph P; Johnson, William L
2008-10-03
By identifying the key characteristic "structural scales" that dictate the resistance of a porous metallic glass against buckling and fracture, stochastic highly porous metallic-glass structures are designed capable of yielding plastically and inheriting the high plastic yield strength of the amorphous metal. The strengths attainable by the present foams appear to equal or exceed those by highly engineered metal foams such as Ti-6Al-4V or ferrous-metal foams at comparable levels of porosity, placing the present metallic-glass foams among the strongest foams known to date.
A Review of Flood Loss Models as Basis for Harmonization and Benchmarking
Gerl, Tina; Kreibich, Heidi; Franco, Guillermo; Marechal, David; Schröter, Kai
2016-01-01
Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models as it affects prioritization and investment decision in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Due to a lack of detailed and reliable flood loss data, first order validations are difficult to accomplish, so that model comparisons in terms of benchmarking are essential. It is checked if the models are informed by existing data and knowledge and if the assumptions made in the models are aligned with the existing knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before these benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss–or flood vulnerability–relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date containing nearly a thousand vulnerability functions. These functions are highly heterogeneous and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal. This paper exemplarily presents an approach for a quantitative comparison of disparate models via the reduction to the joint input variables of all models. Harmonization of models for benchmarking and comparison requires profound insight into the model structures, mechanisms and underlying assumptions. Possibilities and challenges are discussed that exist in model harmonization and the application of the inventory in a benchmarking framework. PMID:27454604
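Most of the vulnerability functions surveyed reduce to a relationship between a hazard variable (typically inundation depth) and a relative loss; the sketch below compares two invented depth-damage curves on a common input grid, in the spirit of the reduction to joint input variables described above. The curves are illustrative and are not functions from the surveyed models.

```python
# Comparing two invented depth-damage (vulnerability) curves on a common set of
# input values, in the spirit of reducing disparate models to joint input variables.
import numpy as np

def model_a(depth_m):
    # Piecewise-linear relative-loss curve that saturates at total loss
    return np.clip(0.4 * depth_m, 0.0, 1.0)

def model_b(depth_m):
    # Smooth exponential approach to total loss
    return 1.0 - np.exp(-0.5 * depth_m)

depths = np.linspace(0.0, 4.0, 9)          # common input variable: inundation depth [m]
for d, a, b in zip(depths, model_a(depths), model_b(depths)):
    print(f"depth {d:.1f} m: model A loss {a:.2f}, model B loss {b:.2f}, diff {abs(a - b):.2f}")
```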
NASA Astrophysics Data System (ADS)
Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.
2015-06-01
A new anisotropic hr-adaptive mesh technique, based on a discontinuous Galerkin/control volume discretization on unstructured meshes, has been applied to the modelling of multiscale transport phenomena. Compared with existing air quality models, which are typically based on static structured grids with a local nesting technique, the anisotropic hr-adaptive model can adapt the mesh to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and effectively. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) transport phenomena. Comparisons have been made between the results obtained using uniform-resolution meshes and anisotropic adaptive-resolution meshes.
Nuclear spin noise in the central spin model
NASA Astrophysics Data System (ADS)
Fröhling, Nina; Anders, Frithjof B.; Glazov, Mikhail
2018-05-01
We study theoretically the fluctuations of the nuclear spins in quantum dots employing the central spin model which accounts for the hyperfine interaction of the nuclei with the electron spin. These fluctuations are calculated both with an analytical approach using homogeneous hyperfine couplings (box model) and with a numerical simulation using a distribution of hyperfine coupling constants. The approaches are in good agreement. The box model serves as a benchmark with low computational cost that explains the basic features of the nuclear spin noise well. We also demonstrate that the nuclear spin noise spectra comprise a two-peak structure centered at the nuclear Zeeman frequency in high magnetic fields with the shape of the spectrum controlled by the distribution of the hyperfine constants. This allows for direct access to this distribution function through nuclear spin noise spectroscopy.
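In the homogeneous-coupling (box-model) limit referred to above, all hyperfine constants are equal, so the electron spin couples to the collective nuclear spin. A schematic form of such a Hamiltonian is sketched below; the notation is generic and not necessarily the paper's.

```latex
% Schematic central-spin Hamiltonian in the homogeneous-coupling (box-model) limit:
% the electron spin S couples with a single constant A to the collective nuclear spin
% J = \sum_k I_k; Zeeman terms for electron and nuclei are written generically.
H_{\mathrm{box}} \;=\; A\, \mathbf{S}\cdot\sum_{k=1}^{N}\mathbf{I}_k
 \;+\; \hbar\,\omega_{e}\, S_{z}
 \;+\; \hbar\,\omega_{n}\sum_{k=1}^{N} I_{k,z},
\qquad \mathbf{J} \equiv \sum_{k=1}^{N}\mathbf{I}_k .
```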
Improve strategic supplier performance using DMAIC to develop a Quality Improvement Plan
NASA Astrophysics Data System (ADS)
Jardim, Kevin P.
Supplier performance that meets the requirements of the customer has long plagued quality professionals. Despite the vast efforts by organizations to improve supplier performance, little has been done to standardize the plan to improve performance. This project presents a guideline and problem-solving strategy using a Define, Measure, Analyze, Improve, and Control (DMAIC) structured tool that will assist in the management and improvement of supplier performance. An analysis of benchmarked Quality Improvement Plans indicated that this topic needs more focus on how to accomplish improved supplier performance. This project is part of a growing body of supplier continuous improvement efforts. With the input of Zodiac Aerospace quality professionals this project's results provide a solution to Quality Improvement Plans and show objective evidence of its benefits. This project contributes to the future research on similar topics.
Ibrahim, Tamer M; Bauer, Matthias R; Boeckler, Frank M
2015-01-01
Structure-based virtual screening techniques can help to identify new lead structures and complement other screening approaches in drug discovery. Prior to docking, the data (protein crystal structures and ligands) should be prepared with great attention to molecular and chemical details. Using a subset of 18 diverse targets from the recently introduced DEKOIS 2.0 benchmark set library, we found differences in the virtual screening performance of two popular docking tools (GOLD and Glide) when employing two different commercial packages (e.g. MOE and Maestro) for preparing input data. We systematically investigated the possible factors that can be responsible for the observed differences in selected sets. For the Angiotensin-I-converting enzyme dataset, preparation of the bioactive molecules clearly exerted the highest influence on VS performance compared to preparation of the decoys or the target structure. The major contributing factors were different protonation states, molecular flexibility, and differences in the input conformation (particularly for cyclic moieties) of bioactives. In addition, score normalization strategies eliminated the biased docking scores shown by GOLD (ChemPLP) for the larger bioactives and produced a better performance. Generalizing these normalization strategies to the 18 DEKOIS 2.0 sets improved the performance for the majority of GOLD (ChemPLP) docking runs, while it showed detrimental performance for the majority of Glide (SP) docking runs. In conclusion, we exemplify herein possible issues particularly during the preparation stage of molecular data and demonstrate to what extent these issues can cause perturbations in the virtual screening performance. We provide insights into what problems can occur and should be avoided when generating benchmarks to characterize the virtual screening performance. In particular, careful selection of an appropriate molecular preparation setup for the bioactive set and the use of score normalization for docking with GOLD (ChemPLP) appear to be of great importance for the screening performance. For virtual screening campaigns, we recommend investing time and effort into including alternative preparation workflows in the generation of the master library, even at the cost of including multiple representations of each molecule. Graphical Abstract: Using DEKOIS 2.0 benchmark sets in structure-based virtual screening to probe the impact of molecular preparation and score normalization.
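One simple size-normalization of the kind discussed above rescales each docking score by a power of the ligand's heavy-atom count so that large bioactives are not favoured merely by size; the cube-root variant sketched below is a common choice in the literature and is not necessarily the exact scheme evaluated in the paper.

```python
# Size-normalization of docking scores by heavy-atom count (cube-root variant).
# One common normalization from the literature, shown for illustration; it is not
# necessarily the exact scheme evaluated in the paper.
def normalize_score(raw_score, n_heavy_atoms, exponent=1.0 / 3.0):
    """Rescale a docking score so larger ligands are not favoured by size alone."""
    return raw_score / (n_heavy_atoms ** exponent)

ligands = [("small_fragment", 62.0, 14), ("large_bioactive", 95.0, 42)]
for name, score, n_heavy in ligands:
    print(f"{name}: raw = {score:.1f}, normalized = {normalize_score(score, n_heavy):.1f}")
```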
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W. II; Mabrey, J.B.
1994-07-01
This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility.
Benchmarking Model Variants in Development of a Hardware-in-the-Loop Simulation System
NASA Technical Reports Server (NTRS)
Aretskin-Hariton, Eliot D.; Zinnecker, Alicia M.; Kratz, Jonathan L.; Culley, Dennis E.; Thomas, George L.
2016-01-01
Distributed engine control architecture presents a significant increase in complexity over traditional implementations when viewed from the perspective of system simulation and hardware design and test. Even if the overall function of the control scheme remains the same, the hardware implementation can have a significant effect on the overall system performance due to differences in the creation and flow of data between control elements. A Hardware-in-the-Loop (HIL) simulation system is under development at NASA Glenn Research Center that enables the exploration of these hardware dependent issues. The system is based on, but not limited to, the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k). This paper describes the step-by-step conversion from the self-contained baseline model to the hardware in the loop model, and the validation of each step. As the control model hardware fidelity was improved during HIL system development, benchmarking simulations were performed to verify that engine system performance characteristics remained the same. The results demonstrate the goal of the effort; the new HIL configurations have similar functionality and performance compared to the baseline C-MAPSS40k system.
Raising Quality and Achievement. A College Guide to Benchmarking.
ERIC Educational Resources Information Center
Owen, Jane
This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…
Benchmarking in Education: Tech Prep, a Case in Point. IEE Brief Number 8.
ERIC Educational Resources Information Center
Inger, Morton
Benchmarking is a process by which organizations compare their practices, processes, and outcomes to standards of excellence in a systematic way. The benchmarking process entails the following essential steps: determining what to benchmark and establishing internal baseline data; identifying the benchmark; determining how that standard has been…
Benchmarks: The Development of a New Approach to Student Evaluation.
ERIC Educational Resources Information Center
Larter, Sylvia
The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…
Structural weights analysis of advanced aerospace vehicles using finite element analysis
NASA Technical Reports Server (NTRS)
Bush, Lance B.; Lentz, Christopher A.; Rehder, John J.; Naftel, J. Chris; Cerro, Jeffrey A.
1989-01-01
A conceptual/preliminary level structural design system has been developed for structural integrity analysis and weight estimation of advanced space transportation vehicles. The system includes a three-dimensional interactive geometry modeler, a finite element pre- and post-processor, a finite element analyzer, and a structural sizing program. Inputs to the system include the geometry, surface temperature, material constants, construction methods, and aerodynamic and inertial loads. The results are a sized vehicle structure capable of withstanding the static loads incurred during assembly, transportation, operations, and missions, and a corresponding structural weight. An analysis of the Space Shuttle external tank is included in this paper as a validation and benchmark case of the system.
Identifying key genes in glaucoma based on a benchmarked dataset and the gene regulatory network.
Chen, Xi; Wang, Qiao-Ling; Zhang, Meng-Hui
2017-10-01
The current study aimed to identify key genes in glaucoma based on a benchmarked dataset and gene regulatory network (GRN). Local and global noise was added to the gene expression dataset to produce a benchmarked dataset. Differentially-expressed genes (DEGs) between patients with glaucoma and normal controls were identified utilizing the Linear Models for Microarray Data (Limma) package based on the benchmarked dataset. A total of 5 GRN inference methods, including Zscore, GeneNet, the context likelihood of relatedness (CLR) algorithm, Partial Correlation coefficient with Information Theory (PCIT) and GEne Network Inference with Ensemble of Trees (Genie3), were evaluated using receiver operating characteristic (ROC) and precision-recall (PR) curves. The inference method with the best performance was selected to construct the GRN. Subsequently, topological centrality analysis (degree, closeness and betweenness) was conducted to identify key genes in the GRN of glaucoma. Finally, the key genes were validated by performing reverse transcription-quantitative polymerase chain reaction (RT-qPCR). A total of 176 DEGs were detected from the benchmarked dataset. The ROC and PR curves of the 5 methods were analyzed and it was determined that Genie3 had a clear advantage over the other methods; thus, Genie3 was used to construct the GRN. Following topological centrality analysis, 14 key genes for glaucoma were identified, including IL6, EPHA2 and GSTT1, and 5 of these 14 key genes were validated by RT-qPCR. Therefore, the current study identified 14 key genes in glaucoma, which may be potential biomarkers for use in the diagnosis of glaucoma and aid in identifying the molecular mechanism of this disease.
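The centrality step above can be illustrated with a short sketch (not the authors' code): rank nodes of an inferred GRN by degree, closeness and betweenness centrality. The edge list and weights are hypothetical.

```python
# Illustrative sketch: rank genes in an inferred GRN by three centrality
# measures, as described in the abstract above.
import networkx as nx

# Hypothetical weighted edges from a GRN inference method such as Genie3.
edges = [("IL6", "EPHA2", 0.9), ("IL6", "GSTT1", 0.7), ("EPHA2", "GSTT1", 0.4),
         ("IL6", "GENE4", 0.6), ("GENE4", "GENE5", 0.3)]

g = nx.Graph()
g.add_weighted_edges_from(edges)

centralities = {
    "degree": nx.degree_centrality(g),
    "closeness": nx.closeness_centrality(g),
    "betweenness": nx.betweenness_centrality(g),
}

# A gene is a "key gene" candidate if it ranks highly on all three measures.
for name, scores in centralities.items():
    top = sorted(scores, key=scores.get, reverse=True)[:3]
    print(name, top)
```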
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.
2014-03-01
PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess
2013-03-01
PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.
Benchmarking by HbA1c in a national diabetes quality register--does measurement bias matter?
Carlsen, Siri; Thue, Geir; Cooper, John Graham; Røraas, Thomas; Gøransson, Lasse Gunnar; Løvaas, Karianne; Sandberg, Sverre
2015-08-01
Bias in HbA1c measurement could give a wrong impression of the standard of care when benchmarking diabetes care. The aim of this study was to evaluate how measurement bias in HbA1c results may influence the benchmarking process performed by a national diabetes register. Using data from 2012 from the Norwegian Diabetes Register for Adults, we included HbA1c results from 3584 patients with type 1 diabetes attending 13 hospital clinics, and 1366 patients with type 2 diabetes attending 18 GP offices. Correction factors for HbA1c were obtained by comparing the results of the hospital laboratories'/GP offices' external quality assurance scheme with the target value from a reference method. Compared with the uncorrected yearly median HbA1c values for hospital clinics and GP offices, EQA corrected HbA1c values were within ±0.2% (2 mmol/mol) for all but one hospital clinic whose value was reduced by 0.4% (4 mmol/mol). Three hospital clinics reduced the proportion of patients with poor glycemic control, one by 9% and two by 4%. For most participants in our study, correcting for measurement bias had little effect on the yearly median HbA1c value or the percentage of patients achieving glycemic goals. However, at three hospital clinics correcting for measurement bias had an important effect on HbA1c benchmarking results especially with regard to percentages of patients achieving glycemic targets. The analytical quality of HbA1c should be taken into account when comparing benchmarking results.
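The correction described above can be sketched in a few lines: apply an EQA-derived bias correction to a clinic's HbA1c results, then recompute the median and the share of patients above a glycemic threshold. All numbers below are invented for illustration; they are not from the register.

```python
# Minimal sketch, assuming per-laboratory correction factors derived from an
# EQA scheme; values are placeholders, not data from the Norwegian register.
import statistics

def corrected(values, factor):
    """Apply an additive EQA-derived bias correction (in %-units of HbA1c)."""
    return [v + factor for v in values]

clinic_hba1c = [8.1, 7.4, 9.2, 6.8, 7.9]   # uncorrected yearly results (%)
bias_correction = -0.4                      # lab measured 0.4% too high vs reference

adjusted = corrected(clinic_hba1c, bias_correction)
median_before = statistics.median(clinic_hba1c)
median_after = statistics.median(adjusted)
poor_before = sum(v > 9.0 for v in clinic_hba1c) / len(clinic_hba1c)
poor_after = sum(v > 9.0 for v in adjusted) / len(adjusted)
print(median_before, median_after, poor_before, poor_after)
```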
Observation time scale, free-energy landscapes, and molecular symmetry
Wales, David J.; Salamon, Peter
2014-01-01
When structures that interconvert on a given time scale are lumped together, the corresponding free-energy surface becomes a function of the observation time. This view is equivalent to grouping structures that are connected by free-energy barriers below a certain threshold. We illustrate this time dependence for some benchmark systems, namely atomic clusters and alanine dipeptide, highlighting the connections to broken ergodicity, local equilibrium, and “feasible” symmetry operations of the molecular Hamiltonian. PMID:24374625
NASA Astrophysics Data System (ADS)
Franke, R.
2016-11-01
In many networks discovered in biology, medicine, neuroscience and other disciplines, special properties like a certain degree distribution and a hierarchical cluster structure (also called communities) can be observed as general organizing principles. Detecting the cluster structure of an unknown network promises to identify functional subdivisions, hierarchy and interactions on a mesoscale. Choosing an appropriate detection algorithm is not trivial because there are multiple network, cluster and algorithmic properties to be considered. Edges can be weighted and/or directed, and clusters can overlap or build a hierarchy in several ways. Algorithms differ not only in runtime and memory requirements but also in the network and cluster properties they allow, and each is based on a specific definition of what a cluster is. On the one hand, a comprehensive network creation model is needed to build a large variety of benchmark networks with different reasonable structures to compare algorithms. On the other hand, if a cluster structure is already known, it is desirable to separate effects of this structure from other network properties. This can be done with null model networks that mimic an observed cluster structure to improve statistics on other network features. A third important application is the general study of properties in networks with different cluster structures, possibly evolving over time. Good benchmark and creation models are currently available, but what is still missing is a precise sandbox model to build hierarchical, overlapping and directed clusters for undirected or directed, binary or weighted complex random networks on the basis of a sophisticated blueprint. This gap shall be closed by the model CHIMERA (Cluster Hierarchy Interconnection Model for Evaluation, Research and Analysis), which is introduced and described here for the first time.
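CHIMERA itself is not shown here; as a much simpler stand-in, the sketch below plants a known cluster structure with a stochastic block model, which illustrates the "sandbox" idea of benchmarking detection algorithms against ground truth. The sizes and probabilities are arbitrary.

```python
# Not CHIMERA: a simple planted-partition benchmark using a stochastic block
# model, so a detection algorithm can be scored against known cluster labels.
import networkx as nx

sizes = [40, 30, 30]                 # three planted clusters
p_in, p_out = 0.25, 0.02             # dense within clusters, sparse between
probs = [[p_in if i == j else p_out for j in range(len(sizes))]
         for i in range(len(sizes))]

g = nx.stochastic_block_model(sizes, probs, seed=42)
true_labels = [k for k, n in enumerate(sizes) for _ in range(n)]  # ground truth by node order

# Any community detection algorithm can now be scored against the ground truth.
communities = nx.algorithms.community.greedy_modularity_communities(g)
print(len(communities), "communities detected for", len(sizes), "planted blocks")
```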
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savi, Daniel, E-mail: d.savi@umweltchemie.ch; Kasser, Ueli; Ott, Thomas
Highlights: • We’ve analysed data on the dismantling of electronic and electrical appliances. • Ten years of mass balance data of more than 30 recycling companies have been considered. • Percentages of dismantled batteries, capacitors and PWB have been studied. • Threshold values and benchmarks for batteries and capacitors have been identified. • No benchmark for the dismantling of printed wiring boards should be set. - Abstract: The article compiles and analyses sample data for toxic components removed from waste electronic and electrical equipment (WEEE) from more than 30 recycling companies in Switzerland over the past ten years. According to European and Swiss legislation, toxic components like batteries, capacitors and printed wiring boards have to be removed from WEEE. The control bodies of the Swiss take back schemes have been monitoring the activities of WEEE recyclers in Switzerland for about 15 years. All recyclers have to provide annual mass balance data for every year of operation. From this data, percentage shares of removed batteries and capacitors are calculated in relation to the amount of each respective WEEE category treated. A rationale is developed for why such an indicator should not be calculated for printed wiring boards. The distributions of these de-pollution indicators are analysed and their suitability for defining lower threshold values and benchmarks for the depollution of WEEE is discussed. Recommendations for benchmarks and threshold values for the removal of capacitors and batteries are given.
Bailey, Tessa S; Dollard, Maureen F; Richards, Penny A M
2015-01-01
Despite decades of research from around the world now permeating occupational health and safety (OHS) legislation and guidelines, there remains a lack of tools to guide practice. Our main goal was to establish benchmark levels of psychosocial safety climate (PSC) that would signify risk of job strain (jobs with high demands and low control) and depression in organizations. First, to justify our focus on PSC, using interview data from Australian employees matched at 2 time points 12 months apart (n = 1081), we verified PSC as a significant leading predictor of job strain and in turn depression. Next, using 2 additional data sets (n = 2097 and n = 1043) we determined benchmarks of organizational PSC (range 12-60) for low-risk (PSC at 41 or above) and high-risk (PSC at 37 or below) of employee job strain and depressive symptoms. Finally, using the newly created benchmarks we estimated the population attributable risk (PAR) and found that improving PSC in organizations to above 37 could reduce 14% of job strain and 16% of depressive symptoms in the working population. The results provide national standards that organizations and regulatory agencies can utilize to promote safer working environments and lower the risk of harm to employee mental health. PsycINFO Database Record (c) 2014 APA, all rights reserved.
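The abstract reports population attributable risk (PAR) estimates without giving the formula; the sketch below uses the standard PAR expression from exposure prevalence and relative risk, which is an assumption about how such a figure could be computed. The prevalence and relative-risk numbers are invented placeholders.

```python
# Hedged sketch: standard population attributable risk formula, with "exposure"
# taken as working in an organization below the PSC benchmark. Numbers are
# placeholders, not estimates from the study.
def population_attributable_risk(prevalence_exposed: float, relative_risk: float) -> float:
    excess = prevalence_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

p_low_psc = 0.30      # hypothetical share of employees in high-risk (PSC <= 37) organizations
rr_job_strain = 1.6   # hypothetical relative risk of job strain given low PSC

par = population_attributable_risk(p_low_psc, rr_job_strain)
print(f"Estimated PAR: {par:.0%} of job strain attributable to low PSC")
```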
Performance Monitoring of Distributed Data Processing Systems
NASA Technical Reports Server (NTRS)
Ojha, Anand K.
2000-01-01
Test and checkout systems are essential components in ensuring safety and reliability of aircraft and related systems for space missions. A variety of systems, developed over several years, are in use at the NASA/KSC. Many of these systems are configured as distributed data processing systems with the functionality spread over several multiprocessor nodes interconnected through networks. To be cost-effective, a system should use the fewest resources and perform a given testing task in the least amount of time. There are two aspects of performance evaluation: monitoring and benchmarking. While monitoring is valuable to system administrators in operating and maintaining systems, benchmarking is important in designing and upgrading computer-based systems. These two aspects of performance evaluation are the foci of this project. This paper first discusses various issues related to software, hardware, and hybrid performance monitoring as applicable to distributed systems, and specifically to the TCMS (Test Control and Monitoring System). Next, several probing instructions are compared to show that the hybrid monitoring technique developed by NIST (National Institute of Standards and Technology) is the least intrusive and takes only one-fourth of the time taken by software monitoring probes. In the rest of the paper, issues related to benchmarking a distributed system are discussed, and finally a prescription for developing a micro-benchmark for the TCMS is provided.
NASA Astrophysics Data System (ADS)
Liu, Lei; Li, Zhi-Guo; Dai, Jia-Yu; Chen, Qi-Feng; Chen, Xiang-Rong
2018-06-01
Comprehensive knowledge of physical properties such as equation of state (EOS), proton exchange, dynamic structures, diffusion coefficients, and viscosities of hydrogen-deuterium mixtures with densities from 0.1 to 5 g/cm3 and temperatures from 1 to 50 kK has been presented via quantum molecular dynamics (QMD) simulations. The existing multi-shock experimental EOS provides an important benchmark to evaluate exchange-correlation functionals. The comparison of simulations with experiments indicates that a nonlocal van der Waals density functional (vdW-DF1) produces excellent results. Fraction analysis of molecules using a weighted integral over pair distribution functions was performed. A dissociation diagram together with a boundary where the proton exchange (H2+D2⇌2HD) occurs was generated, which shows evidence that the HD molecules form as the H2 and D2 molecules are almost 50% dissociated. The mechanism of proton exchange can be interpreted as a process of dissociation followed by recombination. The ionic structures at extreme conditions were analyzed by the effective coordination number model. High-order cluster, circle, and chain structures can be found in the strongly coupled warm dense regime. The present QMD diffusion coefficient and viscosity can be used to benchmark two analytical one-component plasma (OCP) models: the Coulomb and Yukawa OCP models.
Sub-Saharan Africa Report, No. 2830
1983-08-12
proceeds abroad and earn income. This scheme would require sufficient forex reserves. It would provide a counter revenue which could be set...also assisted our credit rating. Forex controls. Your Money: Is there a benchmark gold price for the lifting of foreign exchange controls? De...first and test it before taking the next step. Your Money: The lifting of forex controls could lead to a volatile exchange rate. De Loor
Performance effects of irregular communications patterns on massively parallel multiprocessors
NASA Technical Reports Server (NTRS)
Saltz, Joel; Petiton, Serge; Berryman, Harry; Rifkin, Adam
1991-01-01
A detailed study of the performance effects of irregular communications patterns on the CM-2 was conducted. The communications capabilities of the CM-2 were characterized under a variety of controlled conditions. In the process of carrying out the performance evaluation, extensive use was made of a parameterized synthetic mesh. In addition, timings were performed with unstructured meshes generated for aerodynamic codes and with a set of sparse matrices having banded patterns of non-zeroes. This benchmarking suite stresses the communications capabilities of the CM-2 in a range of different ways. Benchmark results demonstrate that it is possible to make effective use of much of the massive concurrency available in the communications network.
HS06 Benchmark for an ARM Server
NASA Astrophysics Data System (ADS)
Kluth, Stefan
2014-06-01
We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.
PMLB: a large benchmark suite for machine learning evaluation and comparison.
Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H
2017-01-01
The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
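PMLB ships with a Python package; a minimal sketch of running one classifier over a few of its datasets is shown below. The `fetch_data` interface is from the `pmlb` package as commonly documented, but the exact signature may differ by version, so treat it as an assumption.

```python
# Sketch of benchmarking a classifier across PMLB datasets; assumes the `pmlb`
# Python package (pip install pmlb), whose fetch_data interface may vary.
from pmlb import fetch_data, classification_dataset_names
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

for name in classification_dataset_names[:3]:      # a small sample of the suite
    X, y = fetch_data(name, return_X_y=True)
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```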
The General Concept of Benchmarking and Its Application in Higher Education in Europe
ERIC Educational Resources Information Center
Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna
2009-01-01
The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…
Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback
Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie
2017-01-01
An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as photosensitive-resistance or infrared sensor-based systems, the imaging sensor can realize a finer perception of the environmental light and thus guide more precise lighting control. Before the system is deployed, a large set of typical imaging lighting data for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets. The cluster benchmarks of the objective LEEMs can then be obtained. Third, both a single LEEM-based control and a multiple LEEMs-based control are developed to realize optimal luminance tuning. When the system operates, it first captures the lighting image using a wearable camera. It then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single LEEM-based or multiple LEEMs-based control is applied to obtain an optimal lighting effect. Extensive experimental results show that the proposed system can tune the LED lamp automatically according to environmental luminance changes. PMID:28208781
Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback.
Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie
2017-02-09
An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as photosensitive-resistance or infrared sensor-based systems, the imaging sensor can realize a finer perception of the environmental light and thus guide more precise lighting control. Before the system is deployed, a large set of typical imaging lighting data for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets. The cluster benchmarks of the objective LEEMs can then be obtained. Third, both a single LEEM-based control and a multiple LEEMs-based control are developed to realize optimal luminance tuning. When the system operates, it first captures the lighting image using a wearable camera. It then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single LEEM-based or multiple LEEMs-based control is applied to obtain an optimal lighting effect. Extensive experimental results show that the proposed system can tune the LED lamp automatically according to environmental luminance changes.
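The single-metric control loop described in the two records above can be sketched as follows. The luminance metric, benchmark band, and step size are stand-ins for the actual LEEMs and cluster benchmarks, which are not specified in the abstracts.

```python
# Illustrative sketch only: compare a simple objective luminance metric of the
# captured image with benchmark bounds and nudge the LED level accordingly.
import numpy as np

def mean_luminance(image: np.ndarray) -> float:
    """Objective metric: mean pixel luminance of an 8-bit grayscale image."""
    return float(image.mean()) / 255.0

def adjust_led(level: float, luminance: float, low: float = 0.45, high: float = 0.60,
               step: float = 0.05) -> float:
    """Single-metric control: move the LED duty cycle toward the benchmark band."""
    if luminance < low:
        level = min(1.0, level + step)
    elif luminance > high:
        level = max(0.0, level - step)
    return level

frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)  # stand-in camera frame
led_level = adjust_led(level=0.5, luminance=mean_luminance(frame))
print(led_level)
```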
Potential of mean force for electrical conductivity of dense plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starrett, C. E.
The electrical conductivity in dense plasmas can be calculated with the relaxation-time approximation provided that the interaction potential between the scattering electron and the ion is known. To date there has been considerable uncertainty as to the best way to define this interaction potential so that it correctly includes the effects of ionic structure, screening by electrons and partial ionization. The current approximations lead to significantly different results with varying levels of agreement when compared to benchmark calculations and experiments. Here, we present a new way to define this potential, drawing on ideas from classical fluid theory to define a potential of mean force. This new potential results in significantly improved agreement with experiments and benchmark calculations, and includes all the aforementioned physics self-consistently.
Conceptual Soundness, Metric Development, Benchmarking, and Targeting for PATH Subprogram Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosey. G.; Doris, E.; Coggeshall, C.
The objective of this study is to evaluate the conceptual soundness of the U.S. Department of Housing and Urban Development (HUD) Partnership for Advancing Technology in Housing (PATH) program's revised goals and establish and apply a framework to identify and recommend metrics that are the most useful for measuring PATH's progress. This report provides an evaluative review of PATH's revised goals, outlines a structured method for identifying and selecting metrics, proposes metrics and benchmarks for a sampling of individual PATH programs, and discusses other metrics that potentially could be developed that may add value to the evaluation process. The framework and individual program metrics can be used for ongoing management improvement efforts and to inform broader program-level metrics for government reporting requirements.
Potential of mean force for electrical conductivity of dense plasmas
Starrett, C. E.
2017-09-28
The electrical conductivity in dense plasmas can be calculated with the relaxation-time approximation provided that the interaction potential between the scattering electron and the ion is known. To date there has been considerable uncertainty as to the best way to define this interaction potential so that it correctly includes the effects of ionic structure, screening by electrons and partial ionization. The current approximations lead to significantly different results with varying levels of agreement when compared to benchmark calculations and experiments. Here, we present a new way to define this potential, drawing on ideas from classical fluid theory to define a potential of mean force. This new potential results in significantly improved agreement with experiments and benchmark calculations, and includes all the aforementioned physics self-consistently.
Potential of mean force for electrical conductivity of dense plasmas
NASA Astrophysics Data System (ADS)
Starrett, C. E.
2017-12-01
The electrical conductivity in dense plasmas can be calculated with the relaxation-time approximation provided that the interaction potential between the scattering electron and the ion is known. To date there has been considerable uncertainty as to the best way to define this interaction potential so that it correctly includes the effects of ionic structure, screening by electrons and partial ionization. Current approximations lead to significantly different results with varying levels of agreement when compared to benchmark calculations and experiments. We present a new way to define this potential, drawing on ideas from classical fluid theory to define a potential of mean force. This new potential results in significantly improved agreement with experiments and benchmark calculations, and includes all the aforementioned physics self-consistently.
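For context on the classical fluid theory idea invoked in the three records above, the textbook definition of a potential of mean force in terms of the pair distribution function is given below; this is the standard relation, not necessarily the exact construction used in the paper.

```latex
% Textbook relation from classical fluid theory (not necessarily the paper's
% exact construction): the potential of mean force between the scattering
% electron and the ion follows from the corresponding pair distribution g(r).
\[
  V_{\mathrm{MF}}(r) \;=\; -\,k_{\mathrm{B}}T \,\ln g(r)
\]
```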
NASA Astrophysics Data System (ADS)
Job, Joshua; Wang, Zhihui; Rønnow, Troels; Troyer, Matthias; Lidar, Daniel
2014-03-01
We report on experimental work benchmarking the performance of the D-Wave Two programmable annealer on its native Ising problem, and a comparison to available classical algorithms. In this talk we will focus on the comparison with an algorithm originally proposed and implemented by Alex Selby. This algorithm uses dynamic programming to repeatedly optimize over randomly selected maximal induced trees of the problem graph starting from a random initial state. If one is looking for a quantum advantage over classical algorithms, one should compare to classical algorithms which are designed and optimized to maximally take advantage of the structure of the type of problem one is using for the comparison. In that light, this classical algorithm should serve as a good gauge for any potential quantum speedup for the D-Wave Two.
Data Intensive Systems (DIS) Benchmark Performance Summary
2003-08-01
models assumed by today’s conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture...radar (SAR) codes, large scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high speed...distributed interactive and data intensive simulations, data-oriented problems characterized by pointer-based and other highly irregular data structures
ERIC Educational Resources Information Center
Paradise, Andrew
2016-01-01
Building on the inaugural survey conducted three years prior, the 2015 CASE Community College Alumni Relations survey collected additional insightful data on staffing, structure, communications, engagement, and fundraising. This white paper features key data on alumni relations programs at community colleges across the United States. The paper…
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions
NASA Technical Reports Server (NTRS)
Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong
2016-01-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a large-sample approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.
NASA Astrophysics Data System (ADS)
Salmaso, Veronica; Sturlese, Mattia; Cuzzolin, Alberto; Moro, Stefano
2018-01-01
Molecular docking is a powerful tool in the field of computer-aided molecular design. In particular, it is the technique of choice for the prediction of a ligand pose within its target binding site. A multitude of docking methods is available nowadays, whose performance may vary depending on the data set. Therefore, some non-trivial choices should be made before starting a docking simulation. In the same framework, the selection of the target structure to use could be challenging, since the number of available experimental structures is increasing. Both issues have been explored within this work. The pose prediction of a pool of 36 compounds provided by D3R Grand Challenge 2 organizers was preceded by a pipeline to choose the best protein/docking-method couple for each blind ligand. An integrated benchmark approach including ligand shape comparison and cross-docking evaluations was implemented inside our DockBench software. The results are encouraging and show that bringing attention to the choice of the docking simulation fundamental components improves the results of the binding mode predictions.
Bias-Free Chemically Diverse Test Sets from Machine Learning.
Swann, Ellen T; Fernandez, Michael; Coote, Michelle L; Barnard, Amanda S
2017-08-14
Current benchmarking methods in quantum chemistry rely on databases that are built using a chemist's intuition. It is not fully understood how diverse or representative these databases truly are. Multivariate statistical techniques like archetypal analysis and K-means clustering have previously been used to summarize large sets of nanoparticles; however, molecules are more diverse and not as easily characterized by descriptors. In this work, we compare three sets of descriptors based on the one-, two-, and three-dimensional structure of a molecule. Using data from the NIST Computational Chemistry Comparison and Benchmark Database and machine learning techniques, we demonstrate the functional relationship between these structural descriptors and the electronic energy of molecules. Archetypes and prototypes found with topological or Coulomb matrix descriptors can be used to identify smaller, statistically significant test sets that better capture the diversity of chemical space. We apply this same method to find a diverse subset of organic molecules to demonstrate how the methods can easily be reapplied to individual research projects. Finally, we use our bias-free test sets to assess the performance of density functional theory and quantum Monte Carlo methods.
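One concrete way to realize the clustering idea above is to cluster molecular descriptors and keep the molecule closest to each centroid as a representative. The sketch below uses K-means on random placeholder descriptors; it is an illustration of the general approach, not the archetypal analysis workflow of the paper.

```python
# Sketch: pick a diverse, representative subset of molecules by clustering
# their descriptors and keeping the molecule nearest each cluster center.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(500, 32))       # placeholder for e.g. Coulomb-matrix features
n_representatives = 20

km = KMeans(n_clusters=n_representatives, n_init=10, random_state=0).fit(descriptors)

# Index of the molecule nearest to each cluster centroid forms the test set.
test_set = []
for k in range(n_representatives):
    members = np.where(km.labels_ == k)[0]
    dists = np.linalg.norm(descriptors[members] - km.cluster_centers_[k], axis=1)
    test_set.append(int(members[dists.argmin()]))
print(sorted(test_set))
```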
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions
Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong
2018-01-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a “large-sample” approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances. PMID:29697706
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions.
Nearing, Grey S; Mocko, David M; Peters-Lidard, Christa D; Kumar, Sujay V; Xia, Youlong
2016-03-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a "large-sample" approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.
NASA Astrophysics Data System (ADS)
Hasegawa, Hiroshi; Sonnerup, Bengt U. Ö.; Nakamura, Takuma K. M.
2010-11-01
First results are presented of a method, developed by Sonnerup and Hasegawa (2010), for analyzing time evolution of magnetohydrostatic Grad-Shafranov (GS) equilibria, using data recorded by an observing probe as it traverses a quasi-static, two-dimensional (2D), magnetic-field/plasma structure. The method recovers spatial initial values used in the classical GS reconstruction for an interval before and after the time of actual measurements, by advancing them backward and forward in time based on a set of equations for an incompressible plasma; the consequence is generation of multiple GS maps or a movie of the 2D field structure. The method is successfully benchmarked by use of a 2D magnetohydrodynamic simulation of time-dependent magnetic reconnection, and then is applied to a flux transfer event (FTE) seen by the Cluster spacecraft at the dayside high-latitude magnetopause. The application shows that the field lines constituting the FTE flux rope were contracting toward its center as a result of modest convective flow in the region around the core of the flux rope.
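For reference, the two-dimensional magnetohydrostatic Grad-Shafranov equation that such reconstructions solve is commonly written as below; the exact conventions (coordinates, sign, definition of the transverse pressure) are as usually stated in the GS reconstruction literature and may differ in detail from the paper.

```latex
% Commonly quoted form of the 2-D Grad--Shafranov equation used in
% reconstruction work: A(x,z) is the partial magnetic vector potential and
% P_t(A) = p + B_z^2/(2\mu_0) is the transverse pressure (conventions assumed).
\[
  \frac{\partial^2 A}{\partial x^2} + \frac{\partial^2 A}{\partial z^2}
  \;=\; -\,\mu_0 \,\frac{\mathrm{d} P_t(A)}{\mathrm{d} A}
\]
```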
Benchmarking reference services: an introduction.
Marshall, J G; Buchanan, H S
1995-01-01
Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.
Benchmarking an unstructured grid sediment model in an energetic estuary
Lopez, Jesse E.; Baptista, António M.
2016-12-14
A sediment model coupled to the hydrodynamic model SELFE is validated against a benchmark combining a set of idealized tests and an application to a field-data rich energetic estuary. After sensitivity studies, model results for the idealized tests largely agree with previously reported results from other models in addition to analytical, semi-analytical, or laboratory results. Results of suspended sediment in an open channel test with fixed bottom are sensitive to turbulence closure and treatment for hydrodynamic bottom boundary. Results for the migration of a trench are very sensitive to critical stress and erosion rate, but largely insensitive to turbulence closure. The model is able to qualitatively represent sediment dynamics associated with estuarine turbidity maxima in an idealized estuary. Applied to the Columbia River estuary, the model qualitatively captures sediment dynamics observed by fixed stations and shipborne profiles. Representation of the vertical structure of suspended sediment degrades when stratification is underpredicted. Across all tests, skill metrics of suspended sediments lag those of hydrodynamics even when qualitatively representing dynamics. The benchmark is fully documented in an openly available repository to encourage unambiguous comparisons against other models.
Benchmarking Ligand-Based Virtual High-Throughput Screening with the PubChem Database
Butkiewicz, Mariusz; Lowe, Edward W.; Mueller, Ralf; Mendenhall, Jeffrey L.; Teixeira, Pedro L.; Weaver, C. David; Meiler, Jens
2013-01-01
With the rapidly increasing availability of High-Throughput Screening (HTS) data in the public domain, such as the PubChem database, methods for ligand-based computer-aided drug discovery (LB-CADD) have the potential to accelerate and reduce the cost of probe development and drug discovery efforts in academia. We assemble nine data sets from realistic HTS campaigns representing major families of drug target proteins for benchmarking LB-CADD methods. Each data set is public domain through PubChem and carefully collated through confirmation screens validating active compounds. These data sets provide the foundation for benchmarking a new cheminformatics framework BCL::ChemInfo, which is freely available for non-commercial use. Quantitative structure activity relationship (QSAR) models are built using Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), Decision Trees (DTs), and Kohonen networks (KNs). Problem-specific descriptor optimization protocols are assessed including Sequential Feature Forward Selection (SFFS) and various information content measures. Measures of predictive power and confidence are evaluated through cross-validation, and a consensus prediction scheme is tested that combines orthogonal machine learning algorithms into a single predictor. Enrichments ranging from 15 to 101 for a TPR cutoff of 25% are observed. PMID:23299552
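The enrichment figures quoted above can be illustrated with a small calculation. The exact definition used by BCL::ChemInfo is not given in the abstract; here enrichment is taken as the precision among top-ranked compounds, at the cutoff recovering 25% of actives, divided by the active fraction of the whole library, and the data are synthetic.

```python
# Sketch of an enrichment calculation at a fixed TPR cutoff (definition and
# data are assumptions for illustration, not the BCL::ChemInfo implementation).
import numpy as np

def enrichment_at_tpr(scores: np.ndarray, labels: np.ndarray, tpr_cutoff: float = 0.25) -> float:
    order = np.argsort(-scores)                      # best-scored compounds first
    ranked_labels = labels[order]
    n_actives_needed = int(np.ceil(tpr_cutoff * labels.sum()))
    cutoff = int(np.searchsorted(np.cumsum(ranked_labels), n_actives_needed)) + 1
    precision_at_cutoff = ranked_labels[:cutoff].mean()
    return precision_at_cutoff / labels.mean()

rng = np.random.default_rng(1)
labels = (rng.random(10000) < 0.01).astype(int)      # ~1% actives, as in HTS data
scores = rng.normal(size=10000) + 2.0 * labels       # a model that ranks actives higher
print(round(enrichment_at_tpr(scores, labels), 1))
```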
Achieving excellence in veterans healthcare--a balanced scorecard approach.
Biro, Lawrence A; Moreland, Michael E; Cowgill, David E
2003-01-01
This article provides healthcare administrators and managers with a framework and model for developing a balanced scorecard and demonstrates the remarkable success of this process, which brings focus to leadership decisions about the allocation of resources. This scorecard was developed as a top management tool designed to structure multiple priorities of a large, complex, integrated healthcare system and to establish benchmarks to measure success in achieving targets for performance in identified areas. Significant benefits and positive results were derived from the implementation of the balanced scorecard, based upon benchmarks considered to be critical success factors. The network's chief executive officer and top leadership team set and articulated the network's primary operating principles: quality and efficiency in the provision of comprehensive healthcare and support services. Under the weighted benchmarks of the balanced scorecard, the facilities in the network were mandated to adhere to one non-negotiable tenet: providing care that is second to none. The balanced scorecard approach to leadership continuously ensures that this is the primary goal and focal point for all activity within the network. To that end, systems are always in place to ensure that the network is fully successful on all performance measures relating to quality.
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Hopkins, Deborah; Datuin, Marvin; Warchol, Mark; Warchol, Lyudmila; Forsyth, David S.; Buynak, Charlie; Lindgren, Eric A.
2017-02-01
For model benchmark studies, the accuracy of the model is typically evaluated based on the change in response relative to a selected reference signal. The use of a side drilled hole (SDH) in a plate was investigated as a reference signal for angled beam shear wave inspection for aircraft structure inspections of fastener sites. Systematic studies were performed with varying SDH depth and size, and varying the ultrasonic probe frequency, focal depth, and probe height. Increased error was observed with the simulation of angled shear wave beams in the near-field. Even more significant, asymmetry in real probes and the inherent sensitivity of signals in the near-field to subtle test conditions were found to provide a greater challenge with achieving model agreement. To achieve quality model benchmark results for this problem, it is critical to carefully align the probe with the part geometry, to verify symmetry in probe response, and ideally avoid using reference signals from the near-field response. Suggested reference signals for angled beam shear wave inspections include using the 'through hole' corner specular reflection signal and the 'full skip' signal off of the far wall from the side drilled hole.
Adaptive firefly algorithm: parameter analysis and its application.
Cheung, Ngaam J; Ding, Xue-Ming; Shen, Hong-Bin
2014-01-01
As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate the parameter selection and adaptation strategies in a modified firefly algorithm, the adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa: (1) a distance-based light absorption coefficient; (2) a gray coefficient enabling fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising parameter selections in these strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated on widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and parameter selections affecting its performance. When applied to a real-world problem, protein tertiary structure prediction, the results demonstrated that the improved variants can rebuild the tertiary structure with an average root mean square deviation of less than 0.4Å and 1.5Å from the native constraints under noise-free conditions and with 10% Gaussian white noise, respectively.
Adaptive Firefly Algorithm: Parameter Analysis and its Application
Shen, Hong-Bin
2014-01-01
As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate the parameter selection and adaptation strategies in a modified firefly algorithm, the adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa: (1) a distance-based light absorption coefficient; (2) a gray coefficient enabling fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising parameter selections in these strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated on widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and parameter selections affecting its performance. When applied to a real-world problem, protein tertiary structure prediction, the results demonstrated that the improved variants can rebuild the tertiary structure with an average root mean square deviation of less than 0.4Å and 1.5Å from the native constraints under noise-free conditions and with 10% Gaussian white noise, respectively. PMID:25397812
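For orientation, the baseline firefly position update that AdaFa builds on (attractiveness decaying with distance via a light absorption coefficient, plus a random perturbation) is sketched below; this is the generic FA step, not the AdaFa adaptation strategies themselves.

```python
# Generic firefly-style position update on a simple benchmark function.
import numpy as np

def firefly_step(positions, objective, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    rng = rng or np.random.default_rng()
    fitness = np.array([objective(p) for p in positions])
    new_positions = positions.copy()
    for i in range(len(positions)):
        for j in range(len(positions)):
            if fitness[j] < fitness[i]:              # firefly j is brighter (minimization)
                r2 = np.sum((positions[i] - positions[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)   # distance-based attractiveness
                new_positions[i] += beta * (positions[j] - positions[i]) \
                                    + alpha * rng.normal(size=positions.shape[1])
    return new_positions

sphere = lambda x: float(np.sum(x ** 2))             # simple benchmark function
swarm = np.random.default_rng(0).uniform(-5, 5, size=(20, 2))
for _ in range(50):
    swarm = firefly_step(swarm, sphere)
print(min(sphere(p) for p in swarm))
```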
Taking the Battle Upstream: Towards a Benchmarking Role for NATO
2012-09-01
[List-of-figures and body-text residue] Figure 8: World Bank Benchmarking Work on Quality of Governance. Recoverable fragments: "...Search of a Benchmarking Theory for the Public Sector." ... "the Ministries of Defense in the countries in which it works. Another interesting innovation is that for comparison purposes, McKinsey categorized..."
ERIC Educational Resources Information Center
Kent State Univ., OH. Ohio Literacy Resource Center.
This document is intended to show the relationship between Ohio's Standards and Competencies, Equipped for the Future's (EFF's) Standards and Components of Performance, and Ohio's Revised Benchmarks. The document is divided into three parts, with Part 1 covering mathematics instruction, Part 2 covering reading instruction, and Part 3 covering…
BMDExpress Data Viewer: A Visualization Tool to Analyze BMDExpress Datasets(SoTC)
Background: Benchmark Dose (BMD) modelling is a mathematical approach used to determine where a dose-response change begins to take place relative to controls following chemical exposure. BMDs are being increasingly applied in regulatory toxicology to estimate acceptable exposure...
Sudden jump in VAP spurs QI to cut rate to near zero.
2005-07-01
New device allows staff to brush ventilated patients' teeth three times a day. Just-in-time training for teams includes reorientation to cause/effect diagrams, control charts. Benchmarking data helps earn trust, business support of high-volume payer.
BMDExpress Data Viewer: A Visualization Tool to Analyze BMDExpress Datasets (STC symposium)
Background: Benchmark Dose (BMD) modelling is a mathematical approach used to determine where a dose-response change begins to take place relative to controls following chemical exposure. BMDs are being increasingly applied in regulatory toxicology to estimate acceptable exposure...
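The BMD idea summarized in the two records above can be sketched in a few lines: fit a dose-response model and solve for the dose producing a chosen benchmark response relative to controls. The Hill model, the 10% benchmark response, and the data points below are illustrative assumptions, not BMDExpress itself.

```python
# Minimal BMD sketch: fit a Hill model and solve for the dose giving a 10%
# change from control. Data are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit, brentq

def hill(dose, bottom, top, ec50, n):
    return bottom + (top - bottom) * dose**n / (ec50**n + dose**n)

doses = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 10.0])
response = np.array([1.00, 1.02, 1.10, 1.35, 1.70, 1.85])   # fold change vs control

params, _ = curve_fit(hill, doses, response, p0=[1.0, 2.0, 1.0, 1.0], maxfev=10000)
control = hill(0.0, *params)
bmr_level = control * 1.10                                   # 10% increase over control

# Benchmark dose: smallest dose at which the fitted curve reaches the BMR level.
bmd = brentq(lambda d: hill(d, *params) - bmr_level, 1e-6, doses.max())
print(f"BMD (10% BMR): {bmd:.2f}")
```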
How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction
NASA Astrophysics Data System (ADS)
Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.
2015-03-01
The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can be best used to evaluate their probabilistic forecasts. In this study, it is identified that the forecast skill calculated can vary depending on the benchmark selected and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy the benchmark that has most utility for EFAS and avoids the most naïve skill across different hydrological situations is found to be meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system and the use of these produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. Recommendations for EFAS are to move to routine use of meteorological persistency, an advanced meteorological benchmark and a simple meteorological benchmark in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used in evaluation of skill in probabilistic hydrological forecasts and which benchmarks are most useful for skill discrimination and avoidance of naïve skill in a large scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to evaluate which benchmarks to employ; so forecasters can have trust in their skill evaluation and will have confidence that their forecasts are indeed better.
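The forecast-versus-benchmark comparison described above can be made concrete with the ensemble form of the CRPS and a skill score relative to a benchmark forecast. The sketch below uses the standard empirical CRPS estimator with synthetic numbers; it is not the EFAS evaluation code.

```python
# Sketch: compare a forecast with a benchmark using the ensemble CRPS
# (lower is better); skill > 0 means the forecast beats the benchmark.
import numpy as np

def crps_ensemble(members: np.ndarray, obs: float) -> float:
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

rng = np.random.default_rng(0)
obs = 120.0                                           # observed discharge (m^3/s)
forecast = rng.normal(115.0, 10.0, size=51)           # HEPS ensemble members
benchmark = rng.normal(100.0, 30.0, size=51)          # e.g. a persistence-driven benchmark

skill = 1.0 - crps_ensemble(forecast, obs) / crps_ensemble(benchmark, obs)
print(f"CRPS skill score vs benchmark: {skill:.2f}")
```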
Benchmarking and audit of breast units improves quality of care
van Dam, P.A.; Verkinderen, L.; Hauspy, J.; Vermeulen, P.; Dirix, L.; Huizing, M.; Altintas, S.; Papadimitriou, K.; Peeters, M.; Tjalma, W.
2013-01-01
Quality Indicators (QIs) are measures of health care quality that make use of readily available hospital inpatient administrative data. Assessment of quality of care can be performed on different levels: national, regional, on a hospital basis or on an individual basis. It can be a mandatory or voluntary system. In all cases, development of an adequate database for data extraction, and feedback of the findings, is of paramount importance. In the present paper we performed a Medline search on “QIs and breast cancer” and “benchmarking and breast cancer care”, and we have added some data from personal experience. The current data clearly show that the use of QIs for breast cancer care, regular internal and external audit of the performance of breast units, and benchmarking are effective in improving quality of care. Adherence to guidelines improves markedly (particularly regarding adjuvant treatment), and there are emerging data showing that this results in better outcomes. As quality assurance benefits patients, it will be a challenge for the medical and hospital community to develop affordable quality control systems that do not lead to excessive workload. PMID:24753926
Savi, Daniel; Kasser, Ueli; Ott, Thomas
2013-12-01
The article compiles and analyses sample data for toxic components removed from waste electrical and electronic equipment (WEEE) by more than 30 recycling companies in Switzerland over the past ten years. According to European and Swiss legislation, toxic components such as batteries, capacitors and printed wiring boards have to be removed from WEEE. The control bodies of the Swiss take-back schemes have been monitoring the activities of WEEE recyclers in Switzerland for about 15 years. All recyclers have to provide mass balance data for every year of operation. From these data, percentage shares of removed batteries and capacitors are calculated in relation to the amount of each respective WEEE category treated. A rationale is developed for why such an indicator should not be calculated for printed wiring boards. The distributions of these de-pollution indicators are analysed, and their suitability for defining lower threshold values and benchmarks for the de-pollution of WEEE is discussed. Recommendations for benchmarks and threshold values for the removal of capacitors and batteries are given. Copyright © 2013 Elsevier Ltd. All rights reserved.
A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithms (EDAs) are intelligent optimization algorithms based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters are estimated from the statistics of the best individuals using a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy maintains convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results demonstrate the capability of FEGEDA, especially on higher-dimensional problems, where it outperforms several other algorithms and EDAs. Finally, FEGEDA is used for PID controller optimization of a PMSM and compared with classical PID tuning and a GA.
A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithms (EDAs) are intelligent optimization algorithms based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters are estimated from the statistics of the best individuals using a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy maintains convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results demonstrate the capability of FEGEDA, especially on higher-dimensional problems, where it outperforms several other algorithms and EDAs. Finally, FEGEDA is used for PID controller optimization of a PMSM and compared with classical PID tuning and a GA. PMID:24892059
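The following is a minimal, generic sketch of a Gaussian estimation-of-distribution algorithm with elitism, run on the sphere benchmark. It is not the authors' FEGEDA (their fast learning rule and the PID/PMSM application are omitted), and `gaussian_eda`, its parameters and the iteration budget are illustrative assumptions.

```python
import numpy as np

def gaussian_eda(objective, dim, pop_size=100, elite_frac=0.3, iters=200, seed=0):
    """Minimal Gaussian EDA with elitism: sample from a Gaussian, keep the best
    individuals, refit mean/std, and carry the best-so-far solution forward."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim) * 5.0
    best_x, best_f = None, np.inf
    for _ in range(iters):
        pop = rng.normal(mean, std, size=(pop_size, dim))
        if best_x is not None:
            pop[0] = best_x                          # elitism: reinsert best-so-far
        fitness = np.apply_along_axis(objective, 1, pop)
        order = np.argsort(fitness)
        if fitness[order[0]] < best_f:
            best_f, best_x = fitness[order[0]], pop[order[0]].copy()
        elite = pop[order[: max(2, int(elite_frac * pop_size))]]
        mean = elite.mean(axis=0)                    # refit the Gaussian model
        std = elite.std(axis=0) + 1e-12              # guard against premature collapse
    return best_x, best_f

# benchmark run on the sphere function
sphere = lambda x: float(np.sum(x ** 2))
x_opt, f_opt = gaussian_eda(sphere, dim=10)
print(x_opt.round(3), f_opt)
```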
The Surge Capacity for People in Emergencies (SCOPE) study in Australasian hospitals.
Traub, Matthias; Bradt, David A; Joseph, Anthony P
2007-04-16
To measure physical assets in Australasian hospitals required for the management of mass casualties as a result of terrorism or natural disasters. A cross-sectional survey of Australian and New Zealand hospitals. All emergency department directors of Australasian College for Emergency Medicine (ACEM)-accredited hospitals, as well as private and non-ACEM accredited emergency departments staffed by ACEM Fellows in metropolitan Sydney. Numbers of operating theatres, intensive care unit (ICU) beds and x-ray machines; state of preparedness using benchmarks defined by the Centers for Disease Control and Prevention in the United States. We found that 61%-82% of critically injured patients would not have immediate access to operative care, 34%-70% would have delayed access to an ICU bed, and 42% of the less critically injured would have delayed access to x-ray facilities. Our study demonstrates that physical assets in Australasian public hospitals do not meet US hospital preparedness benchmarks for mass casualty incidents. We recommend national agreement on disaster preparedness benchmarks and periodic publication of hospital performance indicators to enhance disaster preparedness.
Mandic, D. P.; Ryan, K.; Basu, B.; Pakrashi, V.
2016-01-01
Although vibration monitoring is a popular method for monitoring and assessing dynamic structures, quantification of linearity or nonlinearity of the dynamic responses remains a challenging problem. We investigate the delay vector variance (DVV) method in this regard in a comprehensive manner to establish the degree to which a change in signal nonlinearity can be related to system nonlinearity and how a change in system parameters affects the nonlinearity in the dynamic response of the system. A wide range of theoretical situations are considered in this regard using a single degree of freedom (SDOF) system to obtain numerical benchmarks. A number of experiments are then carried out using a physical SDOF model in the laboratory. Finally, a composite wind turbine blade is tested for different excitations and the dynamic responses are measured at a number of points to extend the investigation to continuum structures. The dynamic responses were measured using accelerometers, strain gauges and a Laser Doppler vibrometer. This comprehensive study creates a numerical and experimental benchmark for structurally dynamical systems where output-only information is typically available, especially in the context of DVV. The study also allows for comparative analysis between different systems driven by similar input. PMID:26909175
Osborne, Candice L; Petersson, Christina; Graham, James E; Meyer, Walter J; Simeonsson, Rune J; Suman, Oscar E; Ottenbacher, Kenneth J
2016-11-01
To link, classify and describe the content of the Multicenter Benchmarking Study Burn Outcomes Questionnaires (BOQ) using the International Classification of Functioning, Disability and Health (ICF), in order to determine whether the information garnered provides researchers with the data necessary to develop a comprehensive understanding of life after burns. Two ICF linking experts used a standardized linking technique endorsed by the World Health Organization to link all BOQ concepts to the ICF. Linking results were analyzed to determine the comprehensiveness of each of the five measures. The activities and participation component was most frequently addressed, followed by the body functions component. Environmental factors are not extensively covered, and body structures are not addressed. ICF chapter and category distribution were skewed and varied between assessments. The majority of BOQ items reflect the health status perspective. BOQ item composition could be improved with a more even distribution of pertinent ICF topics. Assessment authors may consider addressing the impact of environmental factors on participation. Including body structure concepts would allow investigators to track structural deformation and/or developmental delay. Generally speaking, these data should not be used to examine quality-of-life outcomes. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.
Determinants of success in Shared Savings Programs: An analysis of ACO and market characteristics.
Ouayogodé, Mariétou H; Colla, Carrie H; Lewis, Valerie A
2017-03-01
Medicare's Accountable Care Organization (ACO) programs introduced shared savings to traditional Medicare, which allow providers who reduce health care costs for their patients to retain a percentage of the savings they generate. To examine ACO and market factors associated with superior financial performance in Medicare ACO programs. We obtained financial performance data from the Centers for Medicare and Medicaid Services (CMS); we derived market-level characteristics from Medicare claims; and we collected ACO characteristics from the National Survey of ACOs for 215 ACOs. We examined the association between ACO financial performance and ACO provider composition, leadership structure, beneficiary characteristics, risk bearing experience, quality and process improvement capabilities, physician performance management, market competition, CMS-assigned financial benchmark, and ACO contract start date. We examined two outcomes from Medicare ACOs' first performance year: savings per Medicare beneficiary and earning shared savings payments (a dichotomous variable). When modeling the ACO ability to save and earn shared savings payments, we estimated positive regression coefficients for a greater proportion of primary care providers in the ACO, more practicing physicians on the governing board, physician leadership, active engagement in reducing hospital re-admissions, a greater proportion of disabled Medicare beneficiaries assigned to the ACO, financial incentives offered to physicians, a larger financial benchmark, and greater ACO market penetration. No characteristic of organizational structure was significantly associated with both outcomes of savings per beneficiary and likelihood of achieving shared savings. ACO prior experience with risk-bearing contracts was positively correlated with savings and significantly increased the likelihood of receiving shared savings payments. In the first year, performance is quite heterogeneous, yet organizational structure does not consistently predict performance. Organizations with large financial benchmarks at baseline have greater opportunities to achieve savings. Findings on prior risk bearing suggest that ACOs learn over time under risk-bearing contracts. Given the lack of predictive power for organizational characteristics, CMS should continue to encourage diversity in organizational structures for ACO participants, and provide alternative funding and risk bearing mechanisms to continue to allow a diverse group of organizations to participate. III. Copyright © 2016 Elsevier Inc. All rights reserved.
Determinants of Success in Shared Savings Programs: An Analysis of ACO and Market Characteristics
Colla, Carrie H.; Lewis, Valerie A.
2016-01-01
Background Medicare’s Accountable Care Organization (ACO) programs introduced shared savings to traditional Medicare, which allow providers who reduce health care costs for their patients to retain a percentage of the savings they generate. Objective To examine ACO and market factors associated with superior financial performance in Medicare ACO programs. Methods We obtained financial performance data from the Centers for Medicare and Medicaid Services (CMS); we derived market-level characteristics from Medicare claims; and we collected ACO characteristics from the National Survey of ACOs for 215 ACOs. We examined the association between ACO financial performance and ACO provider composition, leadership structure, beneficiary characteristics, risk bearing experience, quality and process improvement capabilities, physician performance management, market competition, CMS-assigned financial benchmark, and ACO contract start date. We examined two outcomes from Medicare ACOs’ first performance year: savings per Medicare beneficiary and earning shared savings payments (a dichotomous variable). Results When modeling the ACO ability to save and earn shared savings payments, we estimated positive regression coefficients for a greater proportion of primary care providers in the ACO, more practicing physicians on the governing board, physician leadership, active engagement in reducing hospital re-admissions, a greater proportion of disabled Medicare beneficiaries assigned to the ACO, financial incentives offered to physicians, a larger financial benchmark, and greater ACO market penetration. No characteristic of organizational structure was significantly associated with both outcomes of savings per beneficiary and likelihood of achieving shared savings. ACO prior experience with risk-bearing contracts was positively correlated with savings and significantly increased the likelihood of receiving shared savings payments. Conclusions In the first year performance is quite heterogeneous, yet organizational structure does not consistently predict performance. Organizations with large financial benchmarks at baseline have greater opportunities to achieve savings. Findings on prior risk bearing suggest that ACOs learn over time under risk-bearing contracts. Implications Given the lack of predictive power for organizational characteristics, CMS should continue to encourage diversity in organizational structures for ACO participants, and provide alternative funding and risk bearing mechanisms to continue to allow a diverse group of organizations to participate. Level of evidence III PMID:27687917
Bess, John D.; Fujimoto, Nozomu
2014-10-09
Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
A health risk benchmark for the neurologic effects of styrene: comparison with NOAEL/LOAEL approach.
Rabovsky, J; Fowles, J; Hill, M D; Lewis, D C
2001-02-01
Benchmark dose (BMD) analysis was used to estimate an inhalation benchmark concentration for styrene neurotoxicity. Quantal data on neuropsychologic test results from styrene-exposed workers [Mutti et al. (1984). American Journal of Industrial Medicine, 5, 275-286] were used to quantify neurotoxicity, defined as the percent of tested workers who responded abnormally to ≥1, ≥2, or ≥3 out of a battery of eight tests. Exposure was based on previously published results on mean urinary mandelic- and phenylglyoxylic acid levels in the workers, converted to air styrene levels (15, 44, 74, or 115 ppm). Nonstyrene-exposed workers from the same region served as a control group. Maximum-likelihood estimates (MLEs) and BMDs at 5 and 10% response levels of the exposed population were obtained from log-normal analysis of the quantal data. The highest MLE was 9 ppm (BMD = 4 ppm) styrene and represents abnormal responses to ≥3 tests by 10% of the exposed population. The most health-protective MLE was 2 ppm styrene (BMD = 0.3 ppm) and represents abnormal responses to ≥1 test by 5% of the exposed population. A no observed adverse effect level/lowest observed adverse effect level (NOAEL/LOAEL) analysis of the same quantal data showed workers in all styrene exposure groups responded abnormally to ≥1, ≥2, or ≥3 tests, compared to controls, and the LOAEL was 15 ppm. A comparison of the BMD and NOAEL/LOAEL analyses suggests that at air styrene levels below the LOAEL, a segment of the worker population may be adversely affected. The benchmark approach will be useful for styrene noncancer risk assessment purposes by providing a more accurate estimate of potential risk that should, in turn, help to reduce the uncertainty that is a common problem in setting exposure levels.
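To make the benchmark-dose idea concrete, the sketch below fits a log-probit dose-response model to hypothetical quantal data by maximum likelihood and solves for the BMD at 10% extra risk. The dose groups, response counts, model form and starting values are illustrative assumptions; they do not reproduce the Mutti et al. data or the paper's log-normal analysis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# hypothetical quantal data: exposure (ppm), number tested, number responding abnormally
dose = np.array([0.0, 15.0, 44.0, 74.0, 115.0])
n    = np.array([50, 50, 50, 50, 50])
k    = np.array([3, 6, 10, 18, 27])

def neg_loglik(params):
    """Binomial negative log-likelihood for a log-probit model
    P(d) = c + (1 - c) * Phi(a + b*log(d)), with background rate c at d = 0."""
    a, b, c = params
    with np.errstate(divide="ignore"):
        p = np.where(dose > 0, c + (1 - c) * norm.cdf(a + b * np.log(dose)), c)
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

fit = minimize(neg_loglik, x0=[-3.0, 1.0, 0.05],
               bounds=[(-20, 20), (1e-6, 20), (1e-6, 0.5)], method="L-BFGS-B")
a, b, c = fit.x

bmr = 0.10                                 # benchmark response: 10% extra risk
bmd = np.exp((norm.ppf(bmr) - a) / b)      # solves extra risk Phi(a + b*log(BMD)) = BMR
print(f"MLE of BMD at {bmr:.0%} extra risk: {bmd:.1f} ppm")
```

A BMDL (the lower confidence bound reported in practice) would additionally require profiling or bootstrapping the likelihood, which is omitted here for brevity.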
A suite of benchmark and challenge problems for enhanced geothermal systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Mark; Fu, Pengcheng; McClure, Mark
A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study represented U.S. national laboratories, universities, and industry, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study: benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995, which involved two phases of research: stimulation, development, and circulation in two separate reservoirs. The challenge problems posed specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners. We present the suite of benchmark and challenge problems developed for the GTO-CCS, providing problem descriptions and sample solutions.
Benditz, Achim; Greimel, Felix; Auer, Patrick; Zeman, Florian; Göttermann, Antje; Grifka, Joachim; Meissner, Winfried; von Kunow, Frederik
2016-01-01
Background The number of total hip replacement surgeries has steadily increased over recent years. Reduction in postoperative pain increases patient satisfaction and enables better mobilization. Thus, pain management needs to be continuously improved. Problems are often caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. Methods All patients included in the study had undergone total hip arthroplasty (THA). Outcome parameters were analyzed 24 hours after surgery by means of the questionnaires from the German-wide project “Quality Improvement in Postoperative Pain Management” (QUIPS). A pain nurse interviewed patients and continuously assessed outcome quality parameters. A multidisciplinary team of anesthetists, orthopedic surgeons, and nurses implemented a regular procedure of data analysis and internal benchmarking. The health care team was informed of all results and suggested improvements. Every staff member involved in pain management participated in educational lessons, and a special pain nurse was trained in each ward. Results From 2014 to 2015, 367 patients were included. The mean maximal pain score 24 hours after surgery was 4.0 (±3.0) on an 11-point numeric rating scale, and patient satisfaction was 9.0 (±1.2). Over time, the maximum pain score decreased (mean 3.0, ±2.0), whereas patient satisfaction significantly increased (mean 9.8, ±0.4; p<0.05). Among 49 anonymized hospitals, our clinic remained in first place for lowest maximum pain and highest patient satisfaction over the period. Conclusion Results were already acceptable at the beginning of benchmarking a standardized pain management concept, but regular benchmarking, implementation of feedback mechanisms, and staff education made the pain management concept even more successful. Multidisciplinary teamwork and flexibility in adapting processes seem to be highly important for successful pain management. PMID:28031727
Benditz, Achim; Greimel, Felix; Auer, Patrick; Zeman, Florian; Göttermann, Antje; Grifka, Joachim; Meissner, Winfried; von Kunow, Frederik
2016-01-01
The number of total hip replacement surgeries has steadily increased over recent years. Reduction in postoperative pain increases patient satisfaction and enables better mobilization. Thus, pain management needs to be continuously improved. Problems are often caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total hip arthroplasty (THA). Outcome parameters were analyzed 24 hours after surgery by means of the questionnaires from the German-wide project "Quality Improvement in Postoperative Pain Management" (QUIPS). A pain nurse interviewed patients and continuously assessed outcome quality parameters. A multidisciplinary team of anesthetists, orthopedic surgeons, and nurses implemented a regular procedure of data analysis and internal benchmarking. The health care team was informed of all results and suggested improvements. Every staff member involved in pain management participated in educational lessons, and a special pain nurse was trained in each ward. From 2014 to 2015, 367 patients were included. The mean maximal pain score 24 hours after surgery was 4.0 (±3.0) on an 11-point numeric rating scale, and patient satisfaction was 9.0 (±1.2). Over time, the maximum pain score decreased (mean 3.0, ±2.0), whereas patient satisfaction significantly increased (mean 9.8, ±0.4; p<0.05). Among 49 anonymized hospitals, our clinic remained in first place for lowest maximum pain and highest patient satisfaction over the period. Results were already acceptable at the beginning of benchmarking a standardized pain management concept, but regular benchmarking, implementation of feedback mechanisms, and staff education made the pain management concept even more successful. Multidisciplinary teamwork and flexibility in adapting processes seem to be highly important for successful pain management.
Bertzbach, F; Franz, T; Möller, K
2012-01-01
This paper shows the results of performance improvement achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A large number of changes in operational practice, and also in achieved annual savings, can be shown, induced in particular by benchmarking at process level. Investigation of this question produces some general findings regarding the inclusion of performance improvement in a benchmarking project and the communication of its results. Thus, we elaborate on the concept of benchmarking at both utility and process level, which is still a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking, it should be made quite clear that this outcome depends, on the one hand, on a well-conducted benchmarking programme and, on the other, on the individual situation within each participating utility.
Arroll, Bruce; Bennett, Merran; Dalbeth, Nicola; Hettiarachchi, Dilanka; Ben, Cribben; Shelling, Ginnie
2009-12-01
To establish a benchmark for gout control using the proportion of patients with serum uric acid (SUA) < 0.36 mmol/L, to assess patients' understanding of their preventive medication, and to trial a mail and phone intervention to improve gout control. Patients clinically diagnosed with gout and with baseline SUAs were identified in two South Auckland practices. A mail and phone intervention aimed at improving the control of gout was introduced. Intervention #1 took place in one practice over three months. Intervention #2 occurred in the other practice four to 16 months following baseline. There was no significant change in SUA from intervention #1 after three months. The second intervention by mail and phone resulted in improved SUA levels, with a greater proportion of patients with SUA < 0.36 mmol/L, and the difference in means was statistically significant (p = 0.039, two-tailed paired t-test). The benchmark for usual care was established at 38-43% of patients with SUA < 0.36 mmol/L; it was possible to increase this from 38% to 50%. Issues relating to gout identified included a lack of understanding of the need for long-term allopurinol, and difficulties in diagnosis and management for patients for whom English is not their first language. 1. Community workers who speak Pacific languages may assist GPs in communicating with non-English-speaking patients. 2. Alternative diagnoses should be considered in symptomatic patients with prolonged normouricaemia. 3. GPs should gradually introduce allopurinol after acute gout attacks, emphasising the importance of prophylaxis. 4. A campaign to inform patients about the benefits of allopurinol should be considered. 5. A simple one-keystroke audit is needed for gout audit and benchmarking. 6. GP guidelines for gout diagnosis and management should be available.
Highly Enriched Uranium Metal Cylinders Surrounded by Various Reflector Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernard Jones; J. Blair Briggs; Leland Monteirth
A series of experiments was performed at Los Alamos Scientific Laboratory in 1958 to determine critical masses of cylinders of Oralloy (Oy) reflected by a number of materials. The experiments were all performed on the Comet Universal Critical Assembly Machine, and consisted of discs of highly enriched uranium (93.3 wt.% 235U) reflected by half-inch and one-inch-thick cylindrical shells of various reflector materials. The experiments were performed by members of Group N-2, particularly K. W. Gallup, G. E. Hansen, H. C. Paxton, and R. H. White. This experiment was intended to ascertain critical masses for criticality safety purposes, as well as to compare neutron transport cross sections to those obtained from danger coefficient measurements with the Topsy Oralloy-Tuballoy reflected and Godiva unreflected critical assemblies. The reflector materials examined in this series of experiments are as follows: magnesium, titanium, aluminum, graphite, mild steel, nickel, copper, cobalt, molybdenum, natural uranium, tungsten, beryllium, aluminum oxide, molybdenum carbide, and polythene (polyethylene). Also included are two special configurations of composite beryllium and iron reflectors. Analyses were performed in which uncertainty associated with six different parameters was evaluated; namely, extrapolation to the uranium critical mass, uranium density, 235U enrichment, reflector density, reflector thickness, and reflector impurities. In addition to the idealizations made by the experimenters (removal of the platen and diaphragm), two simplifications were also made to the benchmark models that resulted in a small bias and additional uncertainty. First, since impurities in core and reflector materials are only estimated, they are not included in the benchmark models. Second, the room, support structure, and other possible surrounding equipment were not included in the model. Bias values that result from these two simplifications were determined, and the associated uncertainty in the bias values was included in the overall uncertainty in benchmark keff values. Bias values were very small, ranging from 0.0004 Δk low to 0.0007 Δk low. Overall uncertainties range from ±0.0018 to ±0.0030. Major contributors to the overall uncertainty include uncertainty in the extrapolation to the uranium critical mass and the uranium density. Results are summarized in Figure 1 (Experimental, Benchmark-Model, and MCNP/KENO Calculated Results). The 32 configurations described and evaluated under ICSBEP Identifier HEU-MET-FAST-084 are judged to be acceptable for use as criticality safety benchmark experiments and should be valuable integral benchmarks for nuclear data testing of the various reflector materials. Details of the benchmark models, uncertainty analyses, and final results are given in this paper.
Benchmarking clinical photography services in the NHS.
Arbon, Giles
2015-01-01
Benchmarking is used in services across the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have a program in place and have had to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlighted valuable data and comparisons that can be used to benchmark and improve services throughout the profession.
A Seafloor Benchmark for 3-dimensional Geodesy
NASA Astrophysics Data System (ADS)
Chadwell, C. D.; Webb, S. C.; Nooner, S. L.
2014-12-01
We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone. Using a ROV to place and remove sensors on the benchmarks will significantly reduce the number of sensors required by the community to monitor offshore strain in subduction zones.
A Visual Evaluation Study of Graph Sampling Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fangyan; Zhang, Song; Wong, Pak C.
2017-01-29
We evaluate a dozen prevailing graph-sampling techniques with the ultimate goal of better visualizing and understanding big and complex graphs that exhibit different properties and structures. The evaluation uses eight benchmark datasets with four different graph types, collected from the Stanford Network Analysis Platform and NetworkX, to give a comprehensive comparison across various types of graphs. The study provides a practical guideline for visualizing big graphs of different sizes and structures. The paper discusses results and important observations from the study.
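As a hedged illustration of the kind of comparison such a study involves (not necessarily one of the twelve techniques evaluated), the sketch below applies simple induced-subgraph random node sampling with NetworkX and compares basic structural properties before and after sampling. The generated Barabási–Albert graph stands in for a benchmark dataset.

```python
import random
import networkx as nx

def random_node_sample(G, fraction=0.1, seed=0):
    """Induced-subgraph random node sampling: keep a random subset of nodes
    and all edges among them (one of the simplest sampling techniques)."""
    rng = random.Random(seed)
    nodes = rng.sample(list(G.nodes()), int(fraction * G.number_of_nodes()))
    return G.subgraph(nodes).copy()

# compare a couple of structural properties before and after sampling
G = nx.barabasi_albert_graph(5000, 3, seed=1)   # stand-in for a benchmark dataset
S = random_node_sample(G, fraction=0.1)
print("original:", G.number_of_nodes(), G.number_of_edges(),
      f"clustering={nx.average_clustering(G):.3f}")
print("sample:  ", S.number_of_nodes(), S.number_of_edges(),
      f"clustering={nx.average_clustering(S):.3f}")
```

Visual evaluation, as in the study, would additionally lay out both graphs and compare the drawings; the numeric comparison here only hints at how much structure a given sampler preserves.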
Linear Scaling Density Functional Calculations with Gaussian Orbitals
NASA Technical Reports Server (NTRS)
Scuseria, Gustavo E.
1999-01-01
Recent advances in linear scaling algorithms that circumvent the computational bottlenecks of large-scale electronic structure simulations make it possible to carry out density functional calculations with Gaussian orbitals on molecules containing more than 1000 atoms and 15000 basis functions using current workstations and personal computers. This paper discusses the recent theoretical developments that have led to these advances and demonstrates in a series of benchmark calculations the present capabilities of state-of-the-art computational quantum chemistry programs for the prediction of molecular structure and properties.
Resource requirements of inclusive urban development in India: insights from ten cities
NASA Astrophysics Data System (ADS)
Singh Nagpure, Ajay; Reiner, Mark; Ramaswami, Anu
2018-02-01
This paper develops a methodology to assess the resource requirements of inclusive urban development in India and compares those requirements to current community-wide material and energy flows. Methods include: (a) identifying minimum service level benchmarks for the provision of infrastructure services including housing, electricity and clean cooking fuels; (b) assessing the percentage of homes that lack access to infrastructure or that consume infrastructure services below the identified benchmarks; (c) quantifying the material requirements to provide basic infrastructure services using India-specific design data; and (d) computing material and energy requirements for inclusive development and comparing them with current community-wide material and energy flows. Applying the method to ten Indian cities, we find that: 1%-6% of households do not have electricity; 14%-71% use electricity below the benchmark of 25 kWh per capita per month; 4%-16% lack structurally sound housing; 50%-75% live in less than the benchmark of 8.75 m2 of floor area per capita; 10%-65% lack clean cooking fuel; and 6%-60% lack connection to a sewerage system. Across the ten cities examined, providing basic electricity (25 kWh per capita per month) to all would require an addition of only 1%-10% to current community-wide electricity use. Providing basic clean LPG fuel (1.2 kg per capita per month) to all would require an increase of 5%-40% in current community-wide LPG use. Providing permanent shelter (implemented over a ten-year period) to populations living in non-permanent housing in Delhi and Chandigarh would require a 6%-14% increase over current annual community-wide cement use. Conversely, providing permanent housing to all people living in structurally unsound housing and those living in overcrowded housing (<5 m2 per capita) would require 32%-115% of current community-wide cement flows. Except for the last scenario, these results suggest that social policies seeking to provide basic infrastructure for all residents would not dramatically increase current community-wide resource flows.
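A minimal sketch of the kind of gap calculation that step (d) implies for electricity, under entirely hypothetical city-level inputs (the share of under-served residents, their mean consumption, the population and the community-wide use are illustrative, not values from the paper):

```python
# Hypothetical city-level inputs; the paper's figures are city-specific and not reproduced here.
share_below = 0.25               # share of residents consuming below the benchmark
benchmark_kwh = 25.0             # benchmark: kWh per capita per month
mean_use_below = 20.0            # hypothetical mean use of under-served residents (kWh/cap-month)
population = 5_000_000
citywide_use_kwh = 95_000_000.0  # hypothetical current community-wide use (kWh/month)

# additional electricity needed to lift under-served residents to the benchmark
gap_per_capita = benchmark_kwh - mean_use_below
additional_kwh = share_below * population * gap_per_capita
print(f"Additional electricity: {additional_kwh / 1e6:.1f} GWh/month "
      f"({100 * additional_kwh / citywide_use_kwh:.1f}% of current community-wide use)")
```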
The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool
Provides quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model to produce benchmark dose and benchmark dose lower bound estimates.
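As a schematic of the model-averaging idea (the tool itself averages the dose-response models rather than simply averaging BMD point estimates, so treat this only as an illustration of information-criterion weighting), the sketch below combines hypothetical per-model AIC values and BMD estimates using Akaike weights:

```python
import numpy as np

# hypothetical results from fitting several quantal dose-response models to the same data
models = {"logistic": {"aic": 102.3, "bmd": 8.1},
          "probit":   {"aic": 101.7, "bmd": 7.6},
          "weibull":  {"aic": 104.9, "bmd": 9.4}}

aic = np.array([m["aic"] for m in models.values()])
bmd = np.array([m["bmd"] for m in models.values()])

# Akaike weights: exp(-0.5 * delta AIC), normalised to sum to one
w = np.exp(-0.5 * (aic - aic.min()))
w /= w.sum()

print("model weights:", dict(zip(models, np.round(w, 3))))
print(f"weighted-average BMD: {np.dot(w, bmd):.2f}")
```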
Benchmarking--Measuring and Comparing for Continuous Improvement.
ERIC Educational Resources Information Center
Henczel, Sue
2002-01-01
Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…
Pickwell, Andrew J; Dorey, Robert A; Mba, David
2011-09-01
Monitoring the condition of complex engineering structures is an important aspect of modern engineering, eliminating unnecessary work and enabling planned maintenance, preventing failure. Acoustic emissions (AE) testing is one method of implementing continuous nondestructive structural health monitoring. A novel thick-film (17.6 μm) AE sensor is presented. Lead zirconate titanate thick films were fabricated using a powder/sol composite ink deposition technique and mechanically patterned to form a discrete thick-film piezoelectric AE sensor. The thick-film sensor was benchmarked against a commercial AE device and was found to exhibit comparable responses to simulated acoustic emissions.
Fuzzy Structures Analysis of Aircraft Panels in NASTRAN
NASA Technical Reports Server (NTRS)
Sparrow, Victor W.; Buehrle, Ralph D.
2001-01-01
This paper concerns an application of the fuzzy structures analysis (FSA) procedures of Soize to prototypical aerospace panels in MSC/NASTRAN, a large commercial finite element program. A brief introduction to the FSA procedures is first provided. The implementation of the FSA methods is then disclosed, and the method is validated by comparison to published results for the forced vibrations of a fuzzy beam. The results of the new implementation show excellent agreement to the benchmark results. The ongoing effort at NASA Langley and Penn State to apply these fuzzy structures analysis procedures to real aircraft panels is then described.
Developing Benchmarks for Solar Radio Bursts
NASA Astrophysics Data System (ADS)
Biesecker, D. A.; White, S. M.; Gopalswamy, N.; Black, C.; Domm, P.; Love, J. J.; Pierson, J.
2016-12-01
Solar radio bursts can interfere with radar, communication, and tracking signals. In severe cases, radio bursts can inhibit the successful use of radio communications and disrupt a wide range of systems that rely on Position, Navigation, and Timing services on timescales ranging from minutes to hours across wide areas on the dayside of Earth. The White House's Space Weather Action Plan has asked for solar radio burst intensity benchmarks for an event occurrence frequency of 1 in 100 years and also for a theoretical maximum intensity benchmark. The solar radio benchmark team was also asked to define the wavelength/frequency bands of interest. The benchmark team developed preliminary (phase 1) benchmarks for the VHF (30-300 MHz), UHF (300-3000 MHz), GPS (1176-1602 MHz), F10.7 (2800 MHz), and microwave (4000-20000 MHz) bands. The preliminary benchmarks were derived from previously published work. Limitations in the published work will be addressed in phase 2 of the benchmark process. In addition, deriving theoretical maxima requires additional work, where this is even possible, in order to meet the Action Plan objectives. In this presentation, we will present the phase 1 benchmarks and the basis used to derive them. We will also present the work that needs to be done in order to complete the final, phase 2 benchmarks.
Benchmarking in national health service procurement in Scotland.
Walker, Scott; Masson, Ron; Telford, Ronnie; White, David
2007-11-01
The paper reports the results of a study on benchmarking activities undertaken by the procurement organization within the National Health Service (NHS) in Scotland, namely National Procurement (previously Scottish Healthcare Supplies Contracts Branch). NHS performance is of course politically important, and benchmarking is increasingly seen as a means to improve performance, so the study was carried out to determine if the current benchmarking approaches could be enhanced. A review of the benchmarking activities used by the private sector, local government and NHS organizations was carried out to establish a framework of the motivations, benefits, problems and costs associated with benchmarking. This framework was used to carry out the research through case studies and a questionnaire survey of NHS procurement organizations both in Scotland and other parts of the UK. Nine of the 16 Scottish Health Boards surveyed reported carrying out benchmarking during the last three years. The findings of the research were that there were similarities in approaches between local government and NHS Scotland Health, but differences between NHS Scotland and other UK NHS procurement organizations. Benefits were seen as significant and it was recommended that National Procurement should pursue the formation of a benchmarking group with members drawn from NHS Scotland and external benchmarking bodies to establish measures to be used in benchmarking across the whole of NHS Scotland.
Blecher, Evan
2010-08-01
To investigate the appropriateness of tax incidence (the percentage of the retail price occupied by taxes) benchmarking in low-income and middle-income countries (LMICs) with rapidly growing economies, and to explore the viability of an alternative tax policy rule based on the affordability of cigarettes. The paper outlines criticisms of tax incidence benchmarking, particularly in the context of LMICs. It then considers an affordability-based benchmark using relative income price (RIP) as a measure of affordability. The RIP measures the percentage of annual per capita GDP required to purchase 100 packs of cigarettes. Using South Africa as a case study of an LMIC, future consumption is simulated using both tax incidence benchmarks and affordability benchmarks. I show that a tax incidence benchmark is not an optimal policy tool in South Africa and that an affordability benchmark could be a more effective means of reducing tobacco consumption in the future. Although a tax incidence benchmark was successful in increasing prices and reducing tobacco consumption in South Africa in the past, this approach has drawbacks, particularly in the context of a rapidly growing LMIC economy. An affordability benchmark represents an appropriate alternative that would be more effective in reducing future cigarette consumption.
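A minimal sketch of the RIP calculation and of how an affordability benchmark translates income growth into a required price, using purely hypothetical price and GDP figures (the function name and the 8% growth assumption are illustrative, not values from the study):

```python
def relative_income_price(price_per_pack, gdp_per_capita):
    """RIP: share of annual per-capita GDP needed to buy 100 packs of cigarettes."""
    return 100.0 * price_per_pack / gdp_per_capita

# hypothetical figures, in local currency, for illustration only
rip_now = relative_income_price(price_per_pack=30.0, gdp_per_capita=90_000.0)
print(f"RIP = {rip_now:.1%} of annual per-capita GDP")

# an affordability benchmark holds RIP constant as incomes grow,
# so the price (via tax) must rise in step with per-capita GDP
gdp_next = 90_000.0 * 1.08                 # hypothetical 8% nominal income growth
price_needed = rip_now * gdp_next / 100.0  # price keeping cigarettes no more affordable
print(f"Price needed next year to hold RIP constant: {price_needed:.2f}")
```

Under a tax incidence rule, by contrast, the target is a fixed share of the retail price, which can leave cigarettes increasingly affordable when incomes grow faster than prices.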
Wallwiener, Markus; Brucker, Sara Y; Wallwiener, Diethelm
2012-06-01
This review summarizes the rationale for the creation of breast centres and discusses the studies conducted in Germany to obtain proof of principle for a voluntary, external benchmarking programme and proof of concept for third-party dual certification of breast centres and their mandatory quality management systems to the German Cancer Society (DKG) and German Society of Senology (DGS) Requirements of Breast Centres and ISO 9001 or similar. In addition, we report the most recent data on benchmarking and certification of breast centres in Germany. Review and summary of pertinent publications. Literature searches to identify additional relevant studies. Updates from the DKG/DGS programmes. Improvements in surrogate parameters as represented by structural and process quality indicators suggest that outcome quality is improving. The voluntary benchmarking programme has gained wide acceptance among DKG/DGS-certified breast centres. This is evidenced by early results from one of the largest studies in multidisciplinary cancer services research, initiated by the DKG and DGS to implement certified breast centres. The goal of establishing a nationwide network of certified breast centres in Germany can be considered largely achieved. Nonetheless the network still needs to be improved, and there is potential for optimization along the chain of care from mammography screening, interventional diagnosis and treatment through to follow-up. Specialization, guideline-concordant procedures as well as certification and recertification of breast centres remain essential to achieve further improvements in quality of breast cancer care and to stabilize and enhance the nationwide provision of high-quality breast cancer care.
Numerical simulation of magmatic hydrothermal systems
Ingebritsen, S.E.; Geiger, S.; Hurwitz, S.; Driesner, T.
2010-01-01
The dynamic behavior of magmatic hydrothermal systems entails coupled and nonlinear multiphase flow, heat and solute transport, and deformation in highly heterogeneous media. Thus, quantitative analysis of these systems depends mainly on numerical solution of coupled partial differential equations and complementary equations of state (EOS). The past 2 decades have seen steady growth of computational power and the development of numerical models that have eliminated or minimized the need for various simplifying assumptions. Considerable heuristic insight has been gained from process-oriented numerical modeling. Recent modeling efforts employing relatively complete EOS and accurate transport calculations have revealed dynamic behavior that was damped by linearized, less accurate models, including fluid property control of hydrothermal plume temperatures and three-dimensional geometries. Other recent modeling results have further elucidated the controlling role of permeability structure and revealed the potential for significant hydrothermally driven deformation. Key areas for future research include incorporation of accurate EOS for the complete H2O-NaCl-CO2 system, more realistic treatment of material heterogeneity in space and time, realistic description of large-scale relative permeability behavior, and intercode benchmarking comparisons. Copyright 2010 by the American Geophysical Union.
NASA Technical Reports Server (NTRS)
Probst, D.; Jensen, L.
1991-01-01
Delay-insensitive VLSI systems have a certain appeal on the ground due to difficulties with clocks; they are even more attractive in space. We answer the question, is it possible to control state explosion arising from various sources during automatic verification (model checking) of delay-insensitive systems? State explosion due to concurrency is handled by introducing a partial-order representation for systems, and defining system correctness as a simple relation between two partial orders on the same set of system events (a graph problem). State explosion due to nondeterminism (chiefly arbitration) is handled when the system to be verified has a clean, finite recurrence structure. Backwards branching is a further optimization. The heart of this approach is the ability, during model checking, to discover a compact finite presentation of the verified system without prior composition of system components. The fully-implemented POM verification system has polynomial space and time performance on traditional asynchronous-circuit benchmarks that are exponential in space and time for other verification systems. We also sketch the generalization of this approach to handle delay-constrained VLSI systems.
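As a toy illustration of treating correctness as a relation between two partial orders on the same set of system events (this is not the POM system's algorithm, and the event names and orderings are hypothetical), the sketch below checks that an implementation's event order preserves every ordering constraint of a specification, via transitive closure of the two order relations:

```python
import networkx as nx

def refines(impl_edges, spec_edges, events):
    """Check that the implementation's partial order on the given events preserves
    every ordering constraint of the specification: each pair ordered by the spec
    must also be ordered the same way by the implementation."""
    impl = nx.transitive_closure(nx.DiGraph(impl_edges))
    spec = nx.transitive_closure(nx.DiGraph(spec_edges))
    impl.add_nodes_from(events)
    spec.add_nodes_from(events)
    return all(impl.has_edge(a, b) for a, b in spec.edges())

events = ["req", "grant", "ack", "release"]          # hypothetical system events
spec = [("req", "grant"), ("grant", "release")]      # required orderings
impl = [("req", "grant"), ("grant", "ack"), ("ack", "release")]
print(refines(impl, spec, events))                   # True: impl orders release after grant
```

The appeal of such a formulation is exactly what the abstract notes: checking a relation between two partial orders is a graph problem, so it avoids enumerating the interleavings that cause state explosion.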
42 CFR 440.330 - Benchmark health benefits coverage.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...