Sample records for life structural benchmark

  1. Protein Models Docking Benchmark 2

    PubMed Central

    Anishchenko, Ivan; Kundrotas, Petras J.; Tuzikov, Alexander V.; Vakser, Ilya A.

    2015-01-01

    Structural characterization of protein-protein interactions is essential for our ability to understand life processes. However, only a fraction of known proteins have experimentally determined structures. Such structures provide templates for modeling of a large part of the proteome, where individual proteins can be docked by template-free or template-based techniques. Still, the sensitivity of the docking methods to the inherent inaccuracies of protein models, as opposed to the experimentally determined high-resolution structures, remains largely untested, primarily due to the absence of appropriate benchmark set(s). Structures in such a set should have pre-defined inaccuracy levels and, at the same time, resemble actual protein models in terms of structural motifs/packing. The set should also be large enough to ensure statistical reliability of the benchmarking results. We present a major update of the previously developed benchmark set of protein models. For each interactor, six models were generated with the model-to-native Cα RMSD in the 1 to 6 Å range. The models in the set were generated by a new approach, which corresponds to the actual modeling of new protein structures in the “real case scenario,” as opposed to the previous set, where a significant number of structures were model-like only. In addition, the larger number of complexes (165 vs. 63 in the previous set) increases the statistical reliability of the benchmarking. We estimated the highest accuracy of the predicted complexes (according to CAPRI criteria), which can be attained using the benchmark structures. The set is available at http://dockground.bioinformatics.ku.edu. PMID:25712716
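
    The set's defining parameter is the model-to-native Cα RMSD, which is conventionally computed after least-squares superposition. A minimal numpy sketch of that calculation (Kabsch alignment; the function is illustrative and not part of the DOCKGROUND distribution):

      import numpy as np

      def kabsch_rmsd(model_ca, native_ca):
          """Calpha RMSD after optimal superposition (Kabsch algorithm).

          model_ca, native_ca: (N, 3) arrays of matched Calpha coordinates.
          """
          P = model_ca - model_ca.mean(axis=0)     # center both coordinate sets
          Q = native_ca - native_ca.mean(axis=0)
          H = P.T @ Q                              # 3x3 covariance matrix
          U, S, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # optimal rotation
          diff = P @ R.T - Q
          return np.sqrt((diff ** 2).sum() / len(P))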

  2. Structural Benchmark Creep Testing for the Advanced Stirling Convertor Heater Head

    NASA Technical Reports Server (NTRS)

    Krause, David L.; Kalluri, Sreeramesh; Bowman, Randy R.; Shah, Ashwin R.

    2008-01-01

    The National Aeronautics and Space Administration (NASA) has identified the high efficiency Advanced Stirling Radioisotope Generator (ASRG) as a candidate power source for use on long-duration science missions such as lunar applications, Mars rovers, and deep space missions. For the inherently long lifetimes required, a structurally significant design limit for the heater head component of the ASRG Advanced Stirling Convertor (ASC) is creep deformation induced at low stress levels and high temperatures. Demonstrating proof of adequate margins on creep deformation and rupture for the operating conditions and the MarM-247 material of construction is a challenge that the NASA Glenn Research Center is addressing. The combined analytical and experimental program ensures integrity and high reliability of the heater head for its 17-year design life. The life assessment approach starts with an extensive series of uniaxial creep tests on thin MarM-247 specimens that comprise the same chemistry, microstructure, and heat treatment processing as the heater head itself. This effort addresses a scarcity of openly available creep properties for the material, as well as the virtual absence of understanding of the effects on creep properties of very thin walls, fine grains, low stress levels, and high-temperature fabrication steps. The approach continues with a considerable analytical effort, both deterministic, to evaluate the median creep life using nonlinear finite element analysis, and probabilistic, to calculate the heater head's reliability to a higher degree. Finally, the approach includes a substantial structural benchmark creep testing activity to calibrate and validate the analytical work. This last element provides high-fidelity testing of prototypical heater head test articles; the testing includes the relevant material issues and the essential multiaxial stress state, and applies prototypical and accelerated temperature profiles for timely results in a highly controlled laboratory environment. This paper focuses on the last element and presents a preliminary methodology for creep rate prediction, the experimental methods, test challenges, and results from benchmark testing of a trial MarM-247 heater head test article. The results compare favorably with the analytical strain predictions. A description of other test findings is provided, and recommendations for future test procedures are suggested. The manuscript concludes by describing the potential impact of the heater head creep life assessment and benchmark testing effort on the ASC program.

  3. A Field-Based Aquatic Life Benchmark for Conductivity in ...

    EPA Pesticide Factsheets

    EPA announced the availability of the final report, A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams. This report describes a method to characterize the relationship between the extirpation (the effective extinction) of invertebrate genera and salinity (measured as conductivity) and from that relationship derives a freshwater aquatic life benchmark. This benchmark of 300 µS/cm may be applied to waters in Appalachian streams that are dominated by calcium and magnesium salts of sulfate and bicarbonate at circum-neutral to mildly alkaline pH. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.
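
    In outline, the field-based method assigns each invertebrate genus an extirpation conductivity (an XC95 value) and takes the 5th percentile of those values across genera as the benchmark. A minimal sketch of that final step, with invented XC95 values and a plain percentile call standing in for the report's exact estimator:

      import numpy as np

      # Hypothetical extirpation conductivities (XC95, uS/cm), one per genus.
      xc95 = np.array([180.0, 220.0, 295.0, 310.0, 470.0, 800.0, 1500.0])

      # Benchmark = conductivity expected to extirpate no more than 5% of
      # genera (the 5th percentile of the XC95 distribution).
      benchmark = np.percentile(xc95, 5)
      print(f"HC05 benchmark: {benchmark:.0f} uS/cm")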

  4. Evaluation of Inelastic Constitutive Models for Nonlinear Structural Analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1983-01-01

    The influence of inelastic material models on computed stress-strain states, and therefore predicted lives, was studied for thermomechanically loaded structures. Nonlinear structural analyses were performed on a fatigue specimen which was subjected to thermal cycling in fluidized beds and on a mechanically load cycled benchmark notch specimen. Four incremental plasticity creep models (isotropic, kinematic, combined isotropic-kinematic, combined plus transient creep) were exercised. Of the plasticity models, kinematic hardening gave results most consistent with experimental observations. Life predictions using the computed strain histories at the critical location with a Strainrange Partitioning approach considerably overpredicted the crack initiation life of the thermal fatigue specimen.

  5. Aquatic Life Benchmarks and Ecological Risk Assessments for Registered Pesticides

    EPA Pesticide Factsheets

    Each Aquatic Life Benchmark is based on the most sensitive, scientifically acceptable toxicity endpoint, for a given taxon (for example, freshwater fish), among all scientifically acceptable toxicity data available to EPA.

  6. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    ERIC Educational Resources Information Center

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  7. Structural Life and Reliability Metrics: Benchmarking and Verification of Probabilistic Life Prediction Codes

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Soditus, Sherry; Hendricks, Robert C.; Zaretsky, Erwin V.

    2002-01-01

    Over the past two decades there has been considerable effort by NASA Glenn and others to develop probabilistic codes to predict, with reasonable engineering certainty, the life and reliability of critical components in rotating machinery and, more specifically, in the rotating sections of airbreathing and rocket engines. These codes have, to a very limited extent, been verified with relatively small bench-rig-type specimens under uniaxial loading. Because of this small and very narrow database, the acceptance of these codes within the aerospace community has been limited. An alternate approach to generating statistically significant data under complex loading and environments simulating aircraft and rocket engine conditions is to obtain, catalog and statistically analyze actual field data. End users of the engines, such as commercial airlines and the military, record and store operational and maintenance information. This presentation describes a cooperative program among NASA GRC, United Airlines, USAF Wright Laboratory, the U.S. Army Research Laboratory and the Australian Aeronautical & Maritime Research Laboratory to obtain and analyze these airline data for selected components such as blades, disks and combustors. These airline data will be used to benchmark and compare existing life prediction codes.

  8. The multicenter benchmarking study of burn injury: A content analysis of the outcome measures using the international classification of functioning, disability and health.

    PubMed

    Osborne, Candice L; Petersson, Christina; Graham, James E; Meyer, Walter J; Simeonsson, Rune J; Suman, Oscar E; Ottenbacher, Kenneth J

    2016-11-01

    To link, classify and describe the content of the Multicenter Benchmarking Study Burn Outcomes Questionnaires (BOQ) using the International Classification of Functioning, Disability and Health (ICF), to determine whether the information garnered provides researchers with the data necessary to develop a comprehensive understanding of life after burns. Two ICF linking experts used a standardized linking technique endorsed by the World Health Organization to link all BOQ concepts to the ICF. Linking results were analyzed to determine the comprehensiveness of each of the five measures. The activities and participation component was most frequently addressed, followed by the body functions component. Environmental factors are not extensively covered and body structures are not addressed. ICF chapter and category distribution were skewed and varied between assessments. The majority of BOQ items reflect the health status perspective. BOQ item composition could be improved with a more even distribution of pertinent ICF topics. Assessment authors may consider addressing the impact of environmental factors on participation. Including body structure concepts would allow investigators to track structural deformation and/or developmental delay. Generally speaking, these data should not be used to examine quality of life outcomes. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.

  9. A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams (Final Report)

    EPA Science Inventory

    EPA announced the availability of the final report, A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams. This report describes a method to characterize the relationship between the extirpation (the effective extinction) of invertebrate g...

  10. Issues to consider in the derivation of water quality benchmarks for the protection of aquatic life.

    PubMed

    Schneider, Uwe

    2014-01-01

    While water quality benchmarks for the protection of aquatic life have been in use in some jurisdictions for several decades (USA, Canada, several European countries), more and more countries are now setting up their own national water quality benchmark development programs. In doing so, they either adopt an existing method from another jurisdiction, update an existing approach, or develop their own new derivation method. Each approach has its own advantages and disadvantages, and many issues have to be addressed when setting up a water quality benchmark development program or when deriving a water quality benchmark. Each of these tasks requires special expertise. They may seem simple, but are complex in their details. The intention of this paper was to provide some guidance for this process of water quality benchmark development at the program level, in the development of the derivation methodology, and in the actual benchmark derivation step, as well as to point out some issues (notably the inclusion of adapted populations and cryptic species, and points to consider in the use of the species sensitivity distribution approach) and future opportunities (an international data repository and international collaboration in water quality benchmark development).
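
    One of the flagged issues is the species sensitivity distribution (SSD) approach. A minimal sketch of its usual parametric form, fitting a log-normal to per-species toxicity values and reading off the HC5; the data are invented, and real derivations add goodness-of-fit checks and confidence limits:

      import numpy as np
      from scipy import stats

      # Hypothetical chronic toxicity values (ug/L), one per species.
      tox = np.array([3.2, 5.8, 9.1, 14.0, 22.5, 40.0, 75.0, 120.0])

      # Fit a log-normal SSD: normal fit on log10-transformed data.
      mu, sigma = stats.norm.fit(np.log10(tox))

      # HC5 = concentration hazardous to 5% of species (5th percentile).
      hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
      print(f"HC5: {hc5:.2f} ug/L")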

  11. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1996 revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Tsao, C.L.

    1996-06-01

    This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. This report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This revision also updates benchmark values where appropriate, adds new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.

  12. Validation of mechanical models for reinforced concrete structures: Presentation of the French project ``Benchmark des Poutres de la Rance''

    NASA Astrophysics Data System (ADS)

    L'Hostis, V.; Brunet, C.; Poupard, O.; Petre-Lazar, I.

    2006-11-01

    Several ageing models are available for the prediction of the mechanical consequences of rebar corrosion. They are used for service life prediction of reinforced concrete structures. Concerning corrosion diagnosis of reinforced concrete, some Non Destructive Testing (NDT) tools have been developed, and have been in use for some years. However, these developments require validation on existing concrete structures. The French project “Benchmark des Poutres de la Rance” contributes to this aspect. It has two main objectives: (i) validation of mechanical models to estimate the influence of rebar corrosion on the load bearing capacity of a structure, (ii) qualification of the use of NDT results to collect information on steel corrosion within reinforced-concrete structures. Ten French and European institutions, from both academic research laboratories and industrial companies, contributed during the years 2004 and 2005. This paper presents the project, which was divided into several work packages: (i) the reinforced concrete beams were characterized with non-destructive testing tools, (ii) the mechanical behaviour of the beams was experimentally tested, (iii) complementary laboratory analyses were performed, and (iv) finally, numerical simulation results were compared to the experimental results obtained from the mechanical tests.

  13. Benchmarking in TESOL: A Study of the Malaysia Education Blueprint 2013

    ERIC Educational Resources Information Center

    Jawaid, Arif

    2014-01-01

    Benchmarking is a very common real-life function, occurring every moment unnoticed. It has travelled from industry to education like other quality disciplines. Initially benchmarking was used in higher education. Now it is diffusing into other areas including TESOL (Teaching English to Speakers of Other Languages), which has yet to devise a…

  14. Structural Benchmark Creep Testing for Microcast MarM-247 Advanced Stirling Convertor E2 Heater Head Test Article SN18

    NASA Technical Reports Server (NTRS)

    Krause, David L.; Brewer, Ethan J.; Pawlik, Ralph

    2013-01-01

    This report provides test methodology details and qualitative results for the first structural benchmark creep test of an Advanced Stirling Convertor (ASC) heater head of ASC-E2 design heritage. The test article was recovered from a flight-like Microcast MarM-247 heater head specimen previously used in helium permeability testing. The test article was utilized for benchmark creep test rig preparation, wall thickness and diametral laser scan hardware metrological developments, and induction heater custom coil experiments. In addition, a benchmark creep test was performed, terminated after one week when through-thickness cracks propagated at thermocouple weld locations. Following this, it was used to develop a unique temperature measurement methodology using contact thermocouples, thereby enabling future benchmark testing to be performed without the use of conventional welded thermocouples, proven problematic for the alloy. This report includes an overview of heater head structural benchmark creep testing, the origin of this particular test article, test configuration developments accomplished using the test article, creep predictions for its benchmark creep test, qualitative structural benchmark creep test results, and a short summary.

  15. MARC calculations for the second WIPP structural benchmark problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, H.S.

    1981-05-01

    This report describes calculations made with the MARC structural finite element code for the second WIPP structural benchmark problem. Specific aspects of problem implementation such as element choice, slip line modeling, creep law implementation, and thermal-mechanical coupling are discussed in detail. Also included are the computational results specified in the benchmark problem formulation.

  16. Benchmarking for Excellence and the Nursing Process

    NASA Technical Reports Server (NTRS)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  17. GoWeb: a semantic search engine for the life science web.

    PubMed

    Dietze, Heiko; Schroeder, Michael

    2009-10-01

    Current search engines are keyword-based. Semantic technologies promise a next generation of semantic search engines, which will be able to answer questions. Current approaches either apply natural language processing to unstructured text or they assume the existence of structured statements over which they can reason. Here, we introduce a third approach, GoWeb, which combines classical keyword-based Web search with text-mining and ontologies to navigate large result sets and facilitate question answering. We evaluate GoWeb on three benchmarks of questions on genes and functions, on symptoms and diseases, and on proteins and diseases. The first benchmark is based on the BioCreAtivE 1 Task 2 and links 457 gene names with 1352 functions. GoWeb finds 58% of the functional GeneOntology annotations. The second benchmark is based on 26 case reports and links symptoms with diseases. GoWeb achieves a 77% success rate, improving an existing approach by nearly 20%. The third benchmark is based on 28 questions in the TREC genomics challenge and links proteins to diseases. GoWeb achieves a success rate of 79%. GoWeb's combination of classical Web search with text-mining and ontologies is a first step towards answering questions in the biomedical domain. GoWeb is online at: http://www.gopubmed.org/goweb.

  18. Structural Benchmark Testing for Stirling Convertor Heater Heads

    NASA Technical Reports Server (NTRS)

    Krause, David L.; Kalluri, Sreeramesh; Bowman, Randy R.

    2007-01-01

    The National Aeronautics and Space Administration (NASA) has identified high efficiency Stirling technology for potential use on long duration Space Science missions such as Mars rovers, deep space missions, and lunar applications. For the long life times required, a structurally significant design limit for the Stirling convertor heater head is creep deformation induced even under relatively low stress levels at high material temperatures. Conventional investigations of creep behavior adequately rely on experimental results from uniaxial creep specimens, and much creep data is available for the proposed Inconel-718 (IN-718) and MarM-247 nickel-based superalloy materials of construction. However, very little experimental creep information is available that directly applies to the atypical thin walls, the specific microstructures, and the low stress levels. In addition, the geometry and loading conditions apply multiaxial stress states on the heater head components, far from the conditions of uniaxial testing. For these reasons, experimental benchmark testing is underway to aid in accurately assessing the durability of Stirling heater heads. The investigation supplements uniaxial creep testing with pneumatic testing of heater head test articles at elevated temperatures and with stress levels ranging from one to seven times design stresses. This paper presents experimental methods, results, post-test microstructural analyses, and conclusions for both accelerated and non-accelerated tests. The Stirling projects use the results to calibrate deterministic and probabilistic analytical creep models of the heater heads to predict their life times.

  19. Pesticides in U.S. streams and rivers: occurrence and trends during 1992-2011

    USGS Publications Warehouse

    Stone, Wesley W.; Gilliom, Robert J.; Ryberg, Karen R.

    2014-01-01

    During the 20 years from 1992 to 2011, pesticides were found at concentrations that exceeded aquatic-life benchmarks in many rivers and streams that drain agricultural, urban, and mixed-land use watersheds. Overall, the proportions of assessed streams with one or more pesticides that exceeded an aquatic-life benchmark were very similar between the two decades for agricultural (69% during 1992−2001 compared to 61% during 2002−2011) and mixed-land-use streams (45% compared to 46%). Urban streams, in contrast, increased from 53% during 1992−2001 to 90% during 2002−2011, largely because of fipronil and dichlorvos. The potential for adverse effects on aquatic life is likely greater than these results indicate because potentially important pesticide compounds were not included in the assessment. Human-health benchmarks were much less frequently exceeded, and during 2002−2011, only one agricultural stream and no urban or mixed-land-use streams exceeded human-health benchmarks for any of the measured pesticides. Widespread trends in pesticide concentrations, some downward and some upward, occurred in response to shifts in use patterns primarily driven by regulatory changes and introductions of new pesticides.
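
    The headline statistics are proportions of assessed streams with at least one aquatic-life benchmark exceedance, split by land use and decade. A minimal pandas sketch of that tabulation (the records and column names are hypothetical):

      import pandas as pd

      # Hypothetical per-stream records: land use, decade, and whether any
      # measured pesticide exceeded an aquatic-life benchmark in that decade.
      df = pd.DataFrame({
          "land_use": ["agricultural", "agricultural", "urban", "urban", "mixed"],
          "decade":   ["1992-2001", "2002-2011", "1992-2001", "2002-2011", "1992-2001"],
          "exceeded": [True, False, True, True, False],
      })

      # Proportion of streams with >= 1 exceedance, by land use and decade.
      summary = df.groupby(["land_use", "decade"])["exceeded"].mean()
      print(summary)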

  20. Experimental Creep Life Assessment for the Advanced Stirling Convertor Heater Head

    NASA Technical Reports Server (NTRS)

    Krause, David L.; Kalluri, Sreeramesh; Shah, Ashwin R.; Korovaichuk, Igor

    2010-01-01

    The United States Department of Energy is planning to develop the Advanced Stirling Radioisotope Generator (ASRG) for the National Aeronautics and Space Administration (NASA) for potential use on future space missions. The ASRG provides substantial efficiency and specific power improvements over radioisotope power systems of heritage designs. The ASRG would use General Purpose Heat Source modules as energy sources and the free-piston Advanced Stirling Convertor (ASC) to convert heat into electrical energy. Lockheed Martin Corporation of Valley Forge, Pennsylvania, is integrating the ASRG systems, and Sunpower, Inc., of Athens, Ohio, is designing and building the ASC. NASA Glenn Research Center of Cleveland, Ohio, manages the Sunpower contract and provides technology development in several areas for the ASC. One area is reliability assessment for the ASC heater head, a critical pressure vessel within which heat is converted into mechanical oscillation of a displacer piston. For high system efficiency, the ASC heater head operates at very high temperature (850 °C) and therefore is fabricated from an advanced heat-resistant nickel-based superalloy, Microcast MarM-247. Since use of MarM-247 in a thin-walled pressure vessel is atypical, much effort is required to assure that the system will operate reliably for its design life of 17 years. One life-limiting structural response for this application is creep; creep deformation is the accumulation of time-dependent inelastic strain under sustained loading. If allowed to progress, the deformation eventually results in creep rupture. Since creep material properties are not available in the open literature, a detailed creep life assessment effort for the ASC heater head is underway. This paper presents an overview of that creep life assessment approach, including the reliability-based creep criteria developed from coupon testing, and the associated heater head deterministic and probabilistic analyses. The approach also includes direct benchmark experimental creep assessment. This element provides high-fidelity creep testing of prototypical heater head test articles to investigate the relevant material issues and multiaxial stress state. Benchmark testing provides the data required to evaluate the complex life assessment methodology and to validate that analysis. Results from current benchmark heater head tests and newly developed experimental methods are presented. In the concluding remarks, the test results are shown to compare favorably with the creep strain predictions and are the first experimental evidence for a robust ASC heater head creep life.

  1. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction

    PubMed Central

    Puton, Tomasz; Kozlowski, Lukasz P.; Rother, Kristian M.; Bujnicki, Janusz M.

    2013-01-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks. PMID:23435231
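
    Benchmarks of this kind typically score a prediction by comparing its base pairs against the reference structure. A minimal sketch of sensitivity and positive predictive value (PPV) over base-pair sets; CompaRNA's exact scoring rules may differ:

      def score_structure(predicted_pairs, reference_pairs):
          """Sensitivity and PPV of predicted base pairs vs. a reference.

          Both arguments are sets of (i, j) index tuples with i < j.
          """
          tp = len(predicted_pairs & reference_pairs)  # correctly predicted
          sensitivity = tp / len(reference_pairs) if reference_pairs else 0.0
          ppv = tp / len(predicted_pairs) if predicted_pairs else 0.0
          return sensitivity, ppv

      # Toy example: reference has 3 pairs; prediction recovers 2, adds 1 wrong.
      ref = {(1, 20), (2, 19), (3, 18)}
      pred = {(1, 20), (2, 19), (5, 15)}
      print(score_structure(pred, ref))  # (0.666..., 0.666...)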

  2. Benchmarking specialty hospitals, a scoping review on theory and practice.

    PubMed

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design, and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.

  3. Benchmarking trial between France and Australia comparing management of primary rectal cancer beyond TME and locally recurrent rectal cancer (PelviCare Trial): rationale and design.

    PubMed

    Denost, Quentin; Saillour, Florence; Masya, Lindy; Martinaud, Helene Maillou; Guillon, Stephanie; Kret, Marion; Rullier, Eric; Quintard, Bruno; Solomon, Michael

    2016-04-04

    Among patients with rectal cancer, 5-10% have a primary rectal cancer beyond the total mesorectal excision plane (PRC-bTME) and 10% recur locally following primary surgery (LRRC). In both cases, patients' care remains challenging, with significant worldwide variation in practice regarding overall management and criteria for operative intervention. These variations in practice can be explained by structural and organizational differences, as well as cultural dissimilarities. However, surgical resection of PRC-bTME and LRRC provides the best chance of long-term survival after complete resection (R0). With regard to the organization of the healthcare system and the operative criteria for these patients, France and Australia seem to be highly different. A benchmarking-type analysis between French and Australian clinical practice, with regard to the care and management of PRC-bTME and LRRC, would allow understanding of patients' care and management structures as well as individual and collective mechanisms of operative decision-making, in order to ensure equitable practice and improve survival for these patients. The current study is an international benchmarking trial comparing two cohorts of 120 consecutive patients with non-metastatic PRC-bTME and LRRC. Patients with curative and palliative treatment intent are included. The study design has three main parts: (1) French and Australian cohorts including clinical, radiological and surgical data, quality of life (MOS SF36, FACT-C) and distress level (Distress Thermometer) at inclusion, 6 and 12 months; (2) experimental analyses consisting of a blinded inter-country reading of pelvic MRI to assess operative decisions; (3) qualitative analyses based on MDT meeting observation, semi-structured interviews and focus groups of health professional attendees, conducted by a research psychologist in both countries using the same guides. The primary endpoint will be the clinical resection rate. Secondary endpoints will be the concordance rate between French and Australian operative decisions based on the inter-country MRI reading, post-operative mortality and morbidity rates, oncological outcomes based on resection status, one-year overall and disease-free survival, and patients' quality of life and distress level. Qualitative analysis will compare obstacles and facilitators of operative decision-making between the two countries. Benchmarking can be defined as a comparison and learning process which, in the context of PRC-bTME and LRRC, will allow understanding and sharing of the whole process of managing these patients between France and Australia. NCT02551471 (date of registration: 09/14/2015).

  4. Clear, Complete, and Justified Problem Formulations for Aquatic Life Benchmark Values: Specifying the Dimensions

    EPA Science Inventory

    Nations that develop water quality benchmark values have relied primarily on standard data and methods. However, experience with chemicals such as Se, ammonia, and tributyltin has shown that standard methods do not adequately address some taxa, modes of exposure and effects. Deve...

  5. CLEAR, COMPLETE, AND JUSTIFIED PROBLEM FORMULATIONS FOR AQUATIC LIFE BENCHMARK VALUES: SPECIFYING THE DIMENSIONS

    EPA Science Inventory

    Nations that develop water quality benchmark values have relied primarily on standard data and methods. However, experience with chemicals such as Se, ammonia, and tributyltin has shown that standard methods do not adequately address some taxa, modes of exposure and effects. Deve...

  6. How Benchmarking and Higher Education Came Together

    ERIC Educational Resources Information Center

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  7. A Field-Based Aquatic Life Benchmark for Conductivity in ...

    EPA Pesticide Factsheets

    This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for dissolved salts as measured by conductivity in Central Appalachian streams using data from West Virginia and Kentucky. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.

  8. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    ERIC Educational Resources Information Center

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  9. A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams (2010) (External Review Draft)

    EPA Science Inventory

    This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for disso...

  10. Developing a Benchmark Tool for Sustainable Consumption: An Iterative Process

    ERIC Educational Resources Information Center

    Heiskanen, E.; Timonen, P.; Nissinen, A.; Gronroos, J.; Honkanen, A.; Katajajuuri, J. -M.; Kettunen, J.; Kurppa, S.; Makinen, T.; Seppala, J.; Silvenius, F.; Virtanen, Y.; Voutilainen, P.

    2007-01-01

    This article presents the development process of a consumer-oriented, illustrative benchmarking tool enabling consumers to use the results of environmental life cycle assessment (LCA) to make informed decisions. LCA provides a wealth of information on the environmental impacts of products, but its results are very difficult to present concisely…

  11. How to benchmark methods for structure-based virtual screening of large compound libraries.

    PubMed

    Christofferson, Andrew J; Huang, Niu

    2012-01-01

    Structure-based virtual screening is a useful computational technique for ligand discovery. To systematically evaluate different docking approaches, it is important to have a consistent benchmarking protocol that is both relevant and unbiased. Here, we describe the design of a benchmarking data set for docking screen assessment, a standard docking screening process, and the analysis and presentation of the enrichment of annotated ligands among a background decoy database.

  12. Review of life-cycle approaches coupled with data envelopment analysis: launching the CFP + DEA method for energy policy making.

    PubMed

    Vázquez-Rowe, Ian; Iribarren, Diego

    2015-01-01

    Life-cycle (LC) approaches play a significant role in energy policy making to determine the environmental impacts associated with the choice of energy source. Data envelopment analysis (DEA) can be combined with LC approaches to provide quantitative benchmarks that orientate the performance of energy systems towards environmental sustainability, with different implications depending on the selected LC + DEA method. The present paper examines currently available LC + DEA methods and develops a novel method combining carbon footprinting (CFP) and DEA. Thus, the CFP + DEA method is proposed, a five-step structure including data collection for multiple homogenous entities, calculation of target operating points, evaluation of current and target carbon footprints, and result interpretation. As the current context for energy policy implies an anthropocentric perspective with focus on the global warming impact of energy systems, the CFP + DEA method is foreseen to be the most consistent LC + DEA approach to provide benchmarks for energy policy making. The fact that this method relies on the definition of operating points with optimised resource intensity helps to moderate the concerns about the omission of other environmental impacts. Moreover, the CFP + DEA method benefits from CFP specifications in terms of flexibility, understanding, and reporting.
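
    The DEA step can be posed as one small linear program per entity. A minimal sketch of the input-oriented CCR model with scipy, using invented single-input data; the CFP + DEA method wraps its carbon-footprint calculations around a core like this:

      import numpy as np
      from scipy.optimize import linprog

      def ccr_efficiency(X, Y, j0):
          """Input-oriented CCR efficiency of unit j0.

          X: (m, n) inputs and Y: (s, n) outputs for n units.
          Decision variables: [theta, lambda_1 .. lambda_n].
          """
          m, n = X.shape
          s = Y.shape[0]
          c = np.r_[1.0, np.zeros(n)]            # minimize theta
          # Inputs:  sum_j lambda_j * x_ij <= theta * x_i,j0
          A_in = np.c_[-X[:, [j0]], X]
          # Outputs: sum_j lambda_j * y_rj >= y_r,j0
          A_out = np.c_[np.zeros((s, 1)), -Y]
          res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                        b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                        bounds=[(0, None)] * (n + 1))
          return res.fun  # theta = 1: efficient; < 1: inputs reducible

      X = np.array([[2.0, 4.0, 3.0]])  # one input (e.g., carbon footprint)
      Y = np.array([[1.0, 1.0, 1.0]])  # one output per unit
      print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])  # 1.0, 0.5, 0.667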

  13. Review of Life-Cycle Approaches Coupled with Data Envelopment Analysis: Launching the CFP + DEA Method for Energy Policy Making

    PubMed Central

    Vázquez-Rowe, Ian

    2015-01-01

    Life-cycle (LC) approaches play a significant role in energy policy making to determine the environmental impacts associated with the choice of energy source. Data envelopment analysis (DEA) can be combined with LC approaches to provide quantitative benchmarks that orientate the performance of energy systems towards environmental sustainability, with different implications depending on the selected LC + DEA method. The present paper examines currently available LC + DEA methods and develops a novel method combining carbon footprinting (CFP) and DEA. Thus, the CFP + DEA method is proposed, a five-step structure including data collection for multiple homogenous entities, calculation of target operating points, evaluation of current and target carbon footprints, and result interpretation. As the current context for energy policy implies an anthropocentric perspective with focus on the global warming impact of energy systems, the CFP + DEA method is foreseen to be the most consistent LC + DEA approach to provide benchmarks for energy policy making. The fact that this method relies on the definition of operating points with optimised resource intensity helps to moderate the concerns about the omission of other environmental impacts. Moreover, the CFP + DEA method benefits from CFP specifications in terms of flexibility, understanding, and reporting. PMID:25654136

  14. Creep Life of Ceramic Components Using a Finite-Element-Based Integrated Design Program (CARES/CREEP)

    NASA Technical Reports Server (NTRS)

    Powers, L. M.; Jadaan, O. M.; Gyekenyesi, J. P.

    1998-01-01

    The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as advanced turbine engine systems. Design lives for such systems can exceed 10,000 hours. The long life requirement necessitates subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this paper is to present a design methodology for predicting the lifetimes of structural components subjected to creep rupture conditions. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep strain distributions (stress relaxation). The creep life of a component is discretized into short time steps, during which the stress and strain distributions are assumed constant. The damage is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. Failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity. The corresponding time will be the creep rupture life for that component. Examples are chosen to demonstrate the Ceramics Analysis and Reliability Evaluation of Structures/CREEP (CARES/CREEP) integrated design program, which is written for the ANSYS finite element package. Depending on the component size and loading conditions, it was found that in real structures one of two competing failure modes (creep or slow crack growth) will dominate. Applications to benchmark problems and engine components are included.
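
    The life-prediction loop described here amounts to a time-fraction damage sum over a discretized stress history. A hedged sketch of that loop; the rupture-time correlation and its constants below are placeholders, not the modified Monkman-Grant formulation actually used by CARES/CREEP:

      import numpy as np

      def creep_rupture_life(stress_history, dt, C=1.0e12, n=4.0):
          """Time-step damage accumulation against a rupture-time law.

          stress_history: stress (MPa) at each step, assumed constant per step.
          dt: step length in hours. Rupture time modeled as t_r = C * sigma**-n,
          a stand-in for the paper's modified Monkman-Grant criterion.
          Failure when accumulated damage >= 1.
          """
          damage = 0.0
          for step, sigma in enumerate(stress_history):
              t_r = C * sigma ** (-n)   # rupture time at this stress level
              damage += dt / t_r        # time-fraction damage for the step
              if damage >= 1.0:
                  return (step + 1) * dt  # predicted rupture life (hours)
          return None                     # survives the analyzed history

      # Relaxing stress: damage accrues fastest early, then slows.
      history = np.linspace(120.0, 80.0, 2000)
      print(creep_rupture_life(history, dt=10.0))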

  15. Using chemical benchmarking to determine the persistence of chemicals in a Swedish lake.

    PubMed

    Zou, Hongyan; Radke, Michael; Kierkegaard, Amelie; MacLeod, Matthew; McLachlan, Michael S

    2015-02-03

    It is challenging to measure the persistence of chemicals under field conditions. In this work, two approaches for measuring persistence in the field were compared: the chemical mass balance approach, and a novel chemical benchmarking approach. Ten pharmaceuticals, an X-ray contrast agent, and an artificial sweetener were studied in a Swedish lake. Acesulfame K was selected as a benchmark to quantify persistence using the chemical benchmarking approach. The 95% confidence intervals of the half-life for transformation in the lake system ranged from 780-5700 days for carbamazepine to <1-2 days for ketoprofen. The persistence estimates obtained using the benchmarking approach agreed well with those from the mass balance approach (1-21% difference), indicating that chemical benchmarking can be a valid and useful method to measure the persistence of chemicals under field conditions. Compared to the mass balance approach, the benchmarking approach partially or completely eliminates the need to quantify mass flow of chemicals, so it is particularly advantageous when the quantification of mass flow of chemicals is difficult. Furthermore, the benchmarking approach allows for ready comparison and ranking of the persistence of different chemicals.
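
    In essence, the benchmarking approach infers a first-order loss rate from how much a test chemical is depleted relative to the conservative benchmark chemical. A one-box sketch under that first-order assumption; the study's actual mass-balance treatment of the lake is more involved:

      import math

      def benchmark_half_life(test_in, test_out, bench_in, bench_out,
                              residence_days):
          """Half-life of a test chemical from depletion vs. a benchmark.

          Inlet/outlet concentrations; the benchmark (e.g., acesulfame K) is
          assumed conservative, so extra depletion of the test chemical is
          attributed to transformation. One-box, first-order simplification.
          """
          # Test-to-benchmark concentration ratio, inlet vs. outlet.
          depletion = (test_in / bench_in) / (test_out / bench_out)
          k = math.log(depletion) / residence_days  # rate constant, 1/day
          return math.log(2) / k                    # half-life in days

      # Toy numbers: 30% loss relative to the benchmark over 100 days.
      print(benchmark_half_life(1.0, 0.7, 1.0, 1.0, residence_days=100.0))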

  16. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1994 Revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Mabrey, J.B.

    1994-07-01

    This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility.
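
    The screening logic described here reduces to an ordered comparison against the alternative benchmarks, with NAWQC exceedances forcing selection. A minimal sketch of that decision rule (the concentrations and benchmark values are placeholders):

      def screen_contaminant(concentration, benchmarks):
          """Screen one chemical against a dict of benchmarks (ug/L).

          NAWQC exceedances force selection (they are ARARs); otherwise the
          chemical is flagged by how many other benchmarks it exceeds.
          """
          exceeded = [name for name, value in benchmarks.items()
                      if value is not None and concentration > value]
          if any("NAWQC" in name for name in exceeded):
              return "contaminant of concern (NAWQC exceeded)", exceeded
          return f"{len(exceeded)} alternative benchmark(s) exceeded", exceeded

      benchmarks = {"acute NAWQC": 120.0, "chronic NAWQC": 14.0,
                    "SAV": 65.0, "SCV": 7.0, "fish chronic value": 30.0}
      print(screen_contaminant(20.0, benchmarks))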

  17. Memory-Intensive Benchmarks: IRAM vs. Cache-Based Machines

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Gaeke, Brian R.; Husbands, Parry; Li, Xiaoye S.; Oliker, Leonid; Yelick, Katherine A.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic control structures, and the ratio of computation to memory operation.

  18. Internal Benchmarking for Institutional Effectiveness

    ERIC Educational Resources Information Center

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  19. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres.

    PubMed

    van Lent, Wineke A M; de Beer, Relinde D; van Harten, Wim H

    2010-08-31

    Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCC), three chemotherapy day units (CDU) were involved in the second study and four radiotherapy departments were included in the final study. For each multiple case study a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to study the research objectives. We adapted and evaluated existing benchmarking processes through formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. Three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, success factors were found, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting. The improved benchmarking process and the success factors can produce relevant input to improve the operations management of specialty hospitals.

  20. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    PubMed Central

    2010-01-01

    Background Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Methods Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCC), three chemotherapy day units (CDU) were involved in the second study and four radiotherapy departments were included in the final study. For each multiple case study a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to study the research objectives. Results We adapted and evaluated existing benchmarking processes through formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. Three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, success factors were found, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting. Conclusions The improved benchmarking process and the success factors can produce relevant input to improve the operations management of specialty hospitals. PMID:20807408

  1. Organic contaminants, trace and major elements, and nutrients in water and sediment sampled in response to the Deepwater Horizon oil spill

    USGS Publications Warehouse

    Nowell, Lisa H.; Ludtke, Amy S.; Mueller, David K.; Scott, Jonathon C.

    2012-01-01

    Beach water and sediment samples were collected along the Gulf of Mexico coast to assess differences in contaminant concentrations before and after landfall of Macondo-1 well oil released into the Gulf of Mexico from the sinking of the British Petroleum Corporation's Deepwater Horizon drilling platform. Samples were collected at 70 coastal sites between May 7 and July 7, 2010, to document baseline, or "pre-landfall" conditions. A subset of 48 sites was resampled during October 4 to 14, 2010, after oil had made landfall on the Gulf of Mexico coast, called the "post-landfall" sampling period, to determine if actionable concentrations of oil were present along shorelines. Few organic contaminants were detected in water; their detection frequencies generally were low and similar in pre-landfall and post-landfall samples. Only one organic contaminant--toluene--had significantly higher concentrations in post-landfall than pre-landfall water samples. No water samples exceeded any human-health benchmarks, and only one post-landfall water sample exceeded an aquatic-life benchmark--the toxic-unit benchmark for polycyclic aromatic hydrocarbons (PAH) mixtures. In sediment, concentrations of 3 parent PAHs and 17 alkylated PAH groups were significantly higher in post-landfall samples than pre-landfall samples. One pre-landfall sample from Texas exceeded the sediment toxic-unit benchmark for PAH mixtures; this site was not sampled during the post-landfall period. Empirical upper screening-value benchmarks for PAHs in sediment were exceeded at 37 percent of post-landfall samples and 22 percent of pre-landfall samples, but there was no significant difference in the proportion of samples exceeding benchmarks between paired pre-landfall and post-landfall samples. Seven sites had the largest concentration differences between post-landfall and pre-landfall samples for 15 alkylated PAHs. Five of these seven sites, located in Louisiana, Mississippi, and Alabama, had diagnostic geochemical evidence of Macondo-1 oil in post-landfall sediments and tarballs. For trace and major elements in water, analytical reporting levels for several elements were high and variable. No human-health benchmarks were exceeded, although these were available for only two elements. Aquatic-life benchmarks for trace elements were exceeded in 47 percent of water samples overall. The elements responsible for the most exceedances in post-landfall samples were boron, copper, and manganese. Benchmark exceedances in water could be substantially underestimated because some samples had reporting levels higher than the applicable benchmarks (such as cobalt, copper, lead and zinc) and some elements (such as boron and vanadium) were analyzed in samples from only one sampling period. For trace elements in whole sediment, empirical upper screening-value benchmarks were exceeded in 57 percent of post-landfall samples and 40 percent of pre-landfall samples, but there was no significant difference in the proportion of samples exceeding benchmarks between paired pre-landfall and post-landfall samples. Benchmark exceedance frequencies could be conservatively high because they are based on measurements of total trace-element concentrations in sediment. In the less than 63-micrometer sediment fraction, one or more trace or major elements were anthropogenically enriched relative to national baseline values for U.S. streams for all sediment samples except one. 
Sixteen percent of sediment samples exceeded upper screening-value benchmarks for, and were enriched in, one or more of the following elements: barium, vanadium, aluminum, manganese, arsenic, chromium, and cobalt. These samples were evenly divided between the sampling periods. Aquatic-life benchmarks were frequently exceeded along the Gulf of Mexico coast by trace elements in both water and sediment and by PAHs in sediment. For the most part, however, significant differences between pre-landfall and post-landfall samples were limited to concentrations of PAHs in sediment. At five sites along the coast, the higher post-landfall concentrations of PAHs were associated with diagnostic geochemical evidence of Deepwater Horizon Macondo-1 oil.

  2. Cloud-Based Evaluation of Anatomical Structure Segmentation and Landmark Detection Algorithms: VISCERAL Anatomy Benchmarks.

    PubMed

    Jimenez-Del-Toro, Oscar; Muller, Henning; Krenn, Markus; Gruenberg, Katharina; Taha, Abdel Aziz; Winterstein, Marianne; Eggel, Ivan; Foncubierta-Rodriguez, Antonio; Goksel, Orcun; Jakab, Andras; Kontokotsios, Georgios; Langs, Georg; Menze, Bjoern H; Salas Fernandez, Tomas; Schaer, Roger; Walleyo, Anna; Weber, Marc-Andre; Dicente Cid, Yashin; Gass, Tobias; Heinrich, Mattias; Jia, Fucang; Kahl, Fredrik; Kechichian, Razmig; Mai, Dominic; Spanier, Assaf B; Vincent, Graham; Wang, Chunliang; Wyeth, Daniel; Hanbury, Allan

    2016-11-01

    Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease. Automatic tools can help automate parts of this manual process. A cloud-based evaluation framework is presented in this paper including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the cloud where participants can only access the training data and can be run privately by the benchmark administrators to objectively compare their performance in an unseen common test set. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed with automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and Silver Corpus generated with the fusion of the participant algorithms on a larger set of non-manually-annotated medical images are available to the research community.
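
    Segmentation entries in benchmarks like this are commonly scored by volume overlap. A minimal sketch of the Dice coefficient on binary masks; VISCERAL reports several metrics, of which this is only the most common:

      import numpy as np

      def dice(seg, ref):
          """Dice overlap between two binary segmentation masks."""
          seg, ref = seg.astype(bool), ref.astype(bool)
          intersection = np.logical_and(seg, ref).sum()
          total = seg.sum() + ref.sum()
          return 2.0 * intersection / total if total else 1.0  # empty==empty -> 1

      # Toy 1D "volumes": 3 voxels overlap out of 4 labeled in each mask.
      a = np.array([1, 1, 1, 1, 0, 0])
      b = np.array([0, 1, 1, 1, 1, 0])
      print(dice(a, b))  # 0.75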

  3. Adaptive unified continuum FEM modeling of a 3D FSI benchmark problem.

    PubMed

    Jansson, Johan; Degirmenci, Niyazi Cem; Hoffman, Johan

    2017-09-01

    In this paper, we address a 3D fluid-structure interaction benchmark problem that represents important characteristics of biomedical modeling. We present a goal-oriented adaptive finite element methodology for incompressible fluid-structure interaction based on a streamline diffusion-type stabilization of the balance equations for mass and momentum for the entire continuum in the domain, which is implemented in the Unicorn/FEniCS software framework. A phase marker function and its corresponding transport equation are introduced to select the constitutive law, where the mesh tracks the discontinuous fluid-structure interface. This results in a unified simulation method for fluids and structures. We present detailed results for the benchmark problem compared with experiments, together with a mesh convergence study. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Benchmarking Data Sets for the Evaluation of Virtual Ligand Screening Methods: Review and Perspectives.

    PubMed

    Lagarde, Nathalie; Zagury, Jean-François; Montes, Matthieu

    2015-07-27

    Virtual screening methods are now commonly used in drug discovery, but to ensure their reliability they must be carefully evaluated. These methods are usually evaluated retrospectively, notably by measuring the enrichment achieved on benchmarking data sets. For this purpose, numerous benchmarking data sets have been developed over the years, and the resulting improvements have made high-quality benchmarking data sets available. However, some points still have to be considered in the selection of the active compounds, decoys, and protein structures to obtain optimal benchmarking data sets.
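
    Retrospective enrichment of the kind discussed here is usually summarized with the ROC AUC and an early enrichment factor. A minimal sketch with synthetic scores and labels (not any particular benchmark's protocol):

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        y = np.array([1] * 50 + [0] * 1950)   # actives vs. decoys
        scores = np.where(y == 1, rng.normal(1.0, 1.0, y.size),
                          rng.normal(0.0, 1.0, y.size))

        def enrichment_factor(y, scores, frac=0.01):
            """Fraction of actives in the top `frac` of the ranking over the global fraction."""
            n_top = max(1, int(frac * len(y)))
            top = np.argsort(scores)[::-1][:n_top]
            return y[top].mean() / y.mean()

        print(f"AUC = {roc_auc_score(y, scores):.3f}, "
              f"EF1% = {enrichment_factor(y, scores):.1f}")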

  5. Benchmarking the FCI at Illinois State's Residential Life.

    ERIC Educational Resources Information Center

    Cain, David A.

    1998-01-01

    Describes how Office of Residential Life at one university met maintenance challenges facing its residential and food-service facilities. Discusses study conducted in 1992 to evaluate widespread management practices and addresses its findings, including six recommended practices. Examines development and implementation of facilities audit,…

  6. Benchmarks for effective primary care-based nursing services for adults with depression: a Delphi study.

    PubMed

    McIlrath, Carole; Keeney, Sinead; McKenna, Hugh; McLaughlin, Derek

    2010-02-01

    This paper is a report of a study conducted to identify and gain consensus on appropriate benchmarks for effective primary care-based nursing services for adults with depression. Worldwide evidence suggests that between 5% and 16% of the population have a diagnosis of depression. Most of their care and treatment takes place in primary care. In recent years, primary care nurses, including community mental health nurses, have become more involved in the identification and management of patients with depression; however, there are no appropriate benchmarks to guide, develop and support their practice. In 2006, a three-round electronic Delphi survey was completed by a United Kingdom multi-professional expert panel (n = 67). Round 1 generated 1216 statements relating to structures (such as training and protocols), processes (such as access and screening) and outcomes (such as patient satisfaction and treatments). Content analysis was used to collapse statements into 140 benchmarks. Seventy-three benchmarks achieved consensus during subsequent rounds. Of these, 45 (61%) were related to structures, 18 (25%) to processes and 10 (14%) to outcomes. Multi-professional primary care staff have similar views about the appropriate benchmarks for care of adults with depression. These benchmarks could serve as a foundation for depression improvement initiatives in primary care and ongoing research into depression management by nurses.

  7. Efficient Online Learning Algorithms Based on LSTM Neural Networks.

    PubMed

    Ergen, Tolga; Kozat, Suleyman Serdar

    2017-09-13

    We investigate online nonlinear regression and introduce novel regression structures based on long short-term memory (LSTM) networks. For the introduced structures, we also provide highly efficient and effective online training methods. To train these novel LSTM-based structures, we put the underlying architecture in a state-space form and introduce highly efficient and effective particle filtering (PF)-based updates. We also provide stochastic gradient descent and extended Kalman filter-based updates. Our PF-based training method guarantees convergence to the optimal parameter estimate in the mean-square-error sense, provided that we have a sufficient number of particles and satisfy certain technical conditions. More importantly, we achieve this performance with a computational complexity on the order of first-order gradient-based methods by controlling the number of particles. Since our approach is generic, we also introduce a gated recurrent unit (GRU)-based approach by directly replacing the LSTM architecture with the GRU architecture, and we demonstrate the superiority of our LSTM-based approach in the sequential prediction task on different real-life data sets. In addition, the experimental results illustrate significant performance improvements achieved by the introduced algorithms with respect to conventional methods over several different benchmark real-life data sets.
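
    The paper's PF- and EKF-based updates are more involved, but the first-order (SGD) online baseline they compare against can be sketched compactly in PyTorch; the layer sizes and the toy data stream below are placeholders, not the paper's setup.

        import torch
        import torch.nn as nn

        class OnlineLSTMRegressor(nn.Module):
            def __init__(self, n_features, hidden=16):
                super().__init__()
                self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)

            def forward(self, x, state=None):
                out, state = self.lstm(x, state)
                return self.head(out[:, -1]), state

        model = OnlineLSTMRegressor(n_features=3)
        opt = torch.optim.SGD(model.parameters(), lr=1e-2)
        state = None
        stream = [(torch.randn(1, 1, 3), torch.randn(1, 1)) for _ in range(100)]  # toy stream

        for x_t, y_t in stream:                       # one gradient step per arriving sample
            y_hat, state = model(x_t, state)
            loss = (y_hat - y_t).pow(2).mean()
            opt.zero_grad()
            loss.backward()
            state = tuple(s.detach() for s in state)  # truncate backprop at the step boundary
            opt.step()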

  8. Benchmark Design and Installation: A synthesis of Existing Information.

    DTIC Science & Technology

    1987-07-01

    Casings (15 ft deep) drilled to rock and filled with concrete. Disks: (1) set on vertically stable structures (e.g., dam monoliths); (2) set in rock ... Structural movement survey: (1) rock outcrops (first choice) -- chiseled square on high point; (2) massive concrete structure (second choice) -- cut square on ... bolt marker (type 2). Table C1, Recommended benchmarks: type of condition or terrain; type of marker; bedrock, rock outcrops ...

  9. Stochastic-Strength-Based Damage Simulation of Ceramic Matrix Composite Laminates

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Mital, Subodh K.; Murthy, Pappu L. N.; Bednarcyk, Brett A.; Pineda, Evan J.; Bhatt, Ramakrishna T.; Arnold, Steven M.

    2016-01-01

    The Finite Element Analysis-Micromechanics Analysis Code/Ceramics Analysis and Reliability Evaluation of Structures (FEAMAC/CARES) program was used to characterize and predict the progressive damage response of silicon-carbide-fiber-reinforced reaction-bonded silicon nitride matrix (SiC/RBSN) composite laminate tensile specimens. Studied were unidirectional laminates [0]_8, [10]_8, [45]_8, and [90]_8; cross-ply laminates [0_2/90_2]_s; angle-ply laminates [+45_2/-45_2]_s; double-edge-notched [0]_8 laminates; and central-hole laminates. Results correlated well with the experimental data. This work was performed as a validation and benchmarking exercise of the FEAMAC/CARES program. FEAMAC/CARES simulates stochastic-based, discrete-event progressive damage of ceramic matrix composite and polymer matrix composite material structures. It couples three software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC, and Abaqus is used to model the overall composite structure. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle-material strength results in random, discrete damage events that progress incrementally until ultimate structural failure.
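
    The stochastic-strength idea at the core of CARES-type simulations can be illustrated with a weakest-link Monte Carlo toy model. The Weibull modulus, scale, and element count below are illustrative placeholders, not SiC/RBSN properties, and this is a sketch of the general technique rather than the FEAMAC/CARES algorithm.

        import numpy as np

        rng = np.random.default_rng(1)
        m, sigma0 = 10.0, 400.0        # Weibull modulus and scale (MPa), illustrative
        n_elem, n_trials = 200, 5000   # elements per specimen, Monte Carlo trials

        def failure_probability(applied_stress):
            """Weakest-link failure: a specimen fails if any element's sampled
            strength falls below the applied stress."""
            strengths = sigma0 * rng.weibull(m, size=(n_trials, n_elem))
            return float(np.mean(strengths.min(axis=1) < applied_stress))

        for s in (200.0, 250.0, 300.0):
            print(f"P_f at {s:.0f} MPa ~ {failure_probability(s):.3f}")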

  10. Neutron Deep Penetration Calculations in Light Water with Monte Carlo TRIPOLI-4® Variance Reduction Techniques

    NASA Astrophysics Data System (ADS)

    Lee, Yi-Kang

    2017-09-01

    Nuclear decommissioning takes place in several stages due to the radioactivity in the reactor structure materials. A good estimate of the neutron activation products distributed in the reactor structure materials has an obvious impact on decommissioning planning and low-level radioactive waste management. The continuous-energy Monte Carlo radiation transport code TRIPOLI-4 has been applied to radiation protection and shielding analyses. To extend the application of TRIPOLI-4 to nuclear decommissioning activities, both experimental and computational benchmarks are being performed. To calculate the neutron activation of the shielding and structure materials of nuclear facilities, the 3D neutron flux map and energy spectra must first be determined. To perform this type of neutron deep penetration calculation with a Monte Carlo transport code, variance reduction techniques are necessary to reduce the uncertainty of the neutron activation estimates. In this study, variance reduction options of the TRIPOLI-4 code were used on the NAIADE 1 light water shielding benchmark. The benchmark documentation is available from the OECD/NEA SINBAD shielding benchmark database. From this benchmark, a simplified NAIADE 1 water shielding model was first proposed in this work to make code validation easier. Fission neutron transport was calculated in light water for penetration depths up to 50 cm for fast neutrons and up to about 180 cm for thermal neutrons. Measurement and calculation results were benchmarked against each other, and the variance reduction options and their performance were discussed and compared.
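
    Two of the simplest variance reduction devices used in deep penetration problems, implicit capture and Russian roulette, can be sketched for a toy monoenergetic 1D slab. The cross sections, slab depth, and weight cutoff below are illustrative, and this is not the TRIPOLI-4 algorithm itself.

        import numpy as np

        rng = np.random.default_rng(2)
        sigma_t, sigma_s = 1.0, 0.6    # total and scattering macroscopic XS (1/cm), illustrative
        depth = 50.0                   # slab thickness (cm)
        w_cut, n_hist = 1e-3, 100_000

        transmitted = 0.0
        for _ in range(n_hist):
            x, mu, w = 0.0, 1.0, 1.0                   # position, direction cosine, weight
            while True:
                x += mu * rng.exponential(1.0 / sigma_t)
                if x >= depth:
                    transmitted += w                   # score the surviving weight
                    break
                if x < 0.0:
                    break
                w *= sigma_s / sigma_t                 # implicit capture: no analog absorption
                if w < w_cut:                          # Russian roulette on low-weight histories
                    if rng.random() < 0.5:
                        break
                    w *= 2.0
                mu = 2.0 * rng.random() - 1.0          # isotropic scattering in 1D slab geometry
        print(f"transmission ~ {transmitted / n_hist:.2e}")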

  11. Benchmark Calibration Tests Completed for Stirling Convertor Heater Head Life Assessment

    NASA Technical Reports Server (NTRS)

    Krause, David L.; Halford, Gary R.; Bowman, Randy R.

    2005-01-01

    A major phase of benchmark testing has been completed at the NASA Glenn Research Center (http://www.nasa.gov/glenn/), where a critical component of the Stirling Radioisotope Generator (SRG) is undergoing extensive experimentation to aid the development of an analytical life-prediction methodology. Two special-purpose test rigs subjected SRG heater-head pressure-vessel test articles to accelerated creep conditions, using the standard design temperatures to stay within the wall material's operating creep-response regime, but increasing wall stresses up to 7 times over the design point. This resulted in well-controlled "ballooning" of the heater-head hot end. The test plan was developed to provide critical input to analytical parameters in a reasonable period of time.

  12. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  13. Evaluation and optimization of virtual screening workflows with DEKOIS 2.0--a public library of challenging docking benchmark sets.

    PubMed

    Bauer, Matthias R; Ibrahim, Tamer M; Vogel, Simon M; Boeckler, Frank M

    2013-06-24

    The application of molecular benchmarking sets helps to assess the actual performance of virtual screening (VS) workflows. To improve the efficiency of structure-based VS approaches, the selection and optimization of various parameters can be guided by benchmarking. With the DEKOIS 2.0 library, we aim to further extend and complement the collection of publicly available decoy sets. Based on BindingDB bioactivity data, we provide 81 new and structurally diverse benchmark sets for a wide variety of different target classes. To ensure a meaningful selection of ligands, we address several issues that can be found in bioactivity data. We have improved our previously introduced DEKOIS methodology with enhanced physicochemical matching, now including the consideration of molecular charges, as well as a more sophisticated elimination of latent actives in the decoy set (LADS). We evaluate the docking performance of Glide, GOLD, and AutoDock Vina with our data sets and highlight existing challenges for VS tools. All DEKOIS 2.0 benchmark sets will be made accessible at http://www.dekois.com.

  14. Benchmarking an Unstructured-Grid Model for Tsunami Current Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Yinglong J.; Priest, George; Allan, Jonathan; Stimely, Laura

    2016-12-01

    We present model results derived from a tsunami current benchmarking workshop held by the NTHMP (National Tsunami Hazard Mitigation Program) in February 2015. Modeling was undertaken using our own 3D unstructured-grid model that has been previously certified by the NTHMP for tsunami inundation. Results for two benchmark tests are described here: (1) vortex structure in the wake of a submerged shoal and (2) impact of tsunami waves on Hilo Harbor in the 2011 Tohoku event. The modeled current velocities are compared with available lab and field data. We demonstrate that the model is able to accurately capture the velocity field in the two benchmark tests; in particular, the 3D model gives a much more accurate wake structure than the 2D model for the first test, with the root-mean-square error and mean bias no more than 2 cm/s and 8 mm/s, respectively, for the modeled velocity.
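
    The error metrics quoted above are straightforward to reproduce for any paired set of modeled and observed currents; a minimal sketch with synthetic velocities (not the workshop data):

        import numpy as np

        def rmse(model, obs):
            d = np.asarray(model) - np.asarray(obs)
            return float(np.sqrt(np.mean(d ** 2)))

        def mean_bias(model, obs):
            return float(np.mean(np.asarray(model) - np.asarray(obs)))

        obs   = np.array([0.12, 0.30, 0.25, 0.08])  # observed speeds (m/s), synthetic
        model = np.array([0.10, 0.33, 0.24, 0.09])  # modeled speeds (m/s), synthetic
        print(f"RMSE = {100 * rmse(model, obs):.1f} cm/s, "
              f"bias = {1000 * mean_bias(model, obs):.1f} mm/s")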

  15. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The present paper fully decomposes the system into structural and control subsystem designs and produces an improved design. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  16. Toxicological Benchmarks for Screening of Potential Contaminants of Concern for Effects on Aquatic Biota on the Oak Ridge Reservation, Oak Ridge, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W., II

    1993-01-01

    One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
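
    The two-tier screening logic described here maps naturally onto a small decision function; the benchmark values below are placeholders (the report tabulates the actual NAWQC/SAV/SCV values):

        def screen(concentration, lower, upper):
            """Two-tier contaminant screening against lower/upper benchmarks."""
            if concentration >= upper:
                return "contaminant of concern (upper benchmark exceeded; remediation likely)"
            if concentration >= lower:
                return "of concern unless data are unreliable or the comparison inappropriate"
            return "not of concern (assuming adequate ambient data)"

        # Hypothetical chemical with lower/upper screening benchmarks in ug/L.
        print(screen(concentration=3.2, lower=1.0, upper=12.0))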

  17. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Soil and Litter Invertebrates and Heterotrophic Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Will, M.E.

    1994-01-01

    This report presents a standard method for deriving benchmarks for the purpose of "contaminant screening," performed by comparing measured ambient concentrations of chemicals with benchmark concentrations. The work was performed under Work Breakdown Structure 1.4.12.2.3.04.07.02 (Activity Data Sheet 8304). In addition, this report presents sets of data concerning the effects of chemicals in soil on invertebrates and soil microbial processes, benchmarks for chemicals potentially associated with United States Department of Energy sites, and literature describing the experiments from which data were drawn for benchmark derivation.

  18. Benchmarking methods and data sets for ligand enrichment assessment in virtual screening.

    PubMed

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2015-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. "analogue bias", "artificial enrichment" and "false negative". In addition, we introduce our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylases (HDACs) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The leave-one-out cross-validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased as measured by property matching, ROC curves and AUCs. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Benchmarking Methods and Data Sets for Ligand Enrichment Assessment in Virtual Screening

    PubMed Central

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2014-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. “analogue bias”, “artificial enrichment” and “false negative”. In addition, we introduced our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylase (HDAC) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The Leave-One-Out Cross-Validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased in terms of property matching, ROC curves and AUCs. PMID:25481478

  20. Benchmark notch test for life prediction

    NASA Technical Reports Server (NTRS)

    Domas, P. A.; Sharpe, W. N.; Ward, M.; Yau, J. F.

    1982-01-01

    The laser Interferometric Strain Displacement Gage (ISDG) was used to measure local strains in notched Inconel 718 test bars subjected to six different load histories at 649 C (1200 F), including the effects of tensile and compressive hold periods. The measurements were compared to simplified Neuber notch analysis predictions of notch root stress and strain. The actual strains incurred at the root of a discontinuity in cyclically loaded test samples subjected to inelastic deformation at high temperature, where creep deformations readily occur, were determined. The steady-state cyclic stress-strain response at the root of the discontinuity was analyzed. Flat, double-notched, uniaxially loaded fatigue specimens manufactured from the nickel-base superalloy Inconel 718 were used. The ISDG was used to obtain cycle-by-cycle recordings of notch root strain during continuous and hold-time cycling at 649 C. Comparisons to Neuber and finite element model analyses were made. The results provide a benchmark data set for high-technology design in which notch fatigue life is the predominant limitation on component service life.
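
    The simplified Neuber analysis referred to above equates the notch stress-strain product with its elastic estimate; combined with a Ramberg-Osgood curve it reduces to a one-dimensional root find. A sketch with illustrative constants (not Inconel 718 properties):

        from scipy.optimize import brentq

        E, K, n = 200e3, 1200.0, 0.1   # modulus (MPa), cyclic strength coeff., hardening exp. (illustrative)
        Kt, S = 2.5, 300.0             # elastic stress concentration factor, nominal stress (MPa)

        def ramberg_osgood_strain(sigma):
            return sigma / E + (sigma / K) ** (1.0 / n)

        def neuber_residual(sigma):
            # Neuber's rule: sigma * eps = (Kt * S)^2 / E for nominally elastic loading
            return sigma * ramberg_osgood_strain(sigma) - (Kt * S) ** 2 / E

        sigma_notch = brentq(neuber_residual, 1.0, Kt * S)
        print(f"notch stress ~ {sigma_notch:.0f} MPa, "
              f"strain ~ {ramberg_osgood_strain(sigma_notch):.4f}")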

  1. A benchmarking method to measure dietary absorption efficiency of chemicals by fish.

    PubMed

    Xiao, Ruiyang; Adolfsson-Erici, Margaretha; Åkerman, Gun; McLachlan, Michael S; MacLeod, Matthew

    2013-12-01

    Understanding the dietary absorption efficiency of chemicals in the gastrointestinal tract of fish is important from both a scientific and a regulatory point of view. However, reported fish absorption efficiencies for well-studied chemicals are highly variable. In the present study, the authors developed and exploited an internal chemical benchmarking method that has the potential to reduce uncertainty and variability and, thus, to improve the precision of measurements of fish absorption efficiency. The authors applied the benchmarking method to measure the gross absorption efficiency for 15 chemicals with a wide range of physicochemical properties and structures. They selected 2,2',5,6'-tetrachlorobiphenyl (PCB53) and decabromodiphenyl ethane as absorbable and nonabsorbable benchmarks, respectively. Quantities of chemicals determined in fish were benchmarked to the fraction of PCB53 recovered in fish, and quantities of chemicals determined in feces were benchmarked to the fraction of decabromodiphenyl ethane recovered in feces. The performance of the benchmarking procedure was evaluated based on the recovery of the test chemicals and precision of absorption efficiency from repeated tests. Benchmarking did not improve the precision of the measurements; after benchmarking, however, the median recovery for 15 chemicals was 106%, and variability of recoveries was reduced compared with before benchmarking, suggesting that benchmarking could account for incomplete extraction of chemical in fish and incomplete collection of feces from different tests. © 2013 SETAC.
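
    A minimal sketch of the benchmarking arithmetic as we read it (the paper's exact estimator may differ, and the masses and recoveries below are invented): amounts measured in fish are scaled by the PCB53 recovery, amounts in feces by the decabromodiphenyl ethane recovery, and a gross absorption efficiency follows from the corrected mass balance.

        def absorption_efficiency(m_fish, m_feces, recovery_pcb53, recovery_dbdpe):
            """Benchmark-corrected gross absorption efficiency (one plausible formulation)."""
            fish_corr = m_fish / recovery_pcb53     # correct for incomplete extraction in fish
            feces_corr = m_feces / recovery_dbdpe   # correct for incomplete feces collection
            return fish_corr / (fish_corr + feces_corr)

        # Invented example: 40 ng in fish, 60 ng in feces, benchmark recoveries of 0.8 and 0.9.
        print(f"absorption efficiency ~ {absorption_efficiency(40.0, 60.0, 0.8, 0.9):.2f}")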

  2. Food for Life: evaluation of the impact of the Hospital Food Programme in England using a case study approach

    PubMed Central

    Orme, Judy; Pitt, Hannah; Jones, Matthew

    2017-01-01

    Objectives To evaluate the impact and challenges of implementing a Food for Life approach within three pilot NHS sites in 2014/2015 in England. Food for Life is an initiative led by the Soil Association, a non-governmental organisation in the UK that aims to encourage a healthy, sustainable food culture across communities. Design A case-study approach was undertaken using semi-structured interviews with staff and key stakeholders, together with analysis of relevant documents such as meeting minutes, strategic plans and reports. Setting Three NHS Trusts in England. Participants Staff and key stakeholders. Main outcome measures Synthesis of key findings from semi-structured interviews and analysis of relevant documents. Results Key themes included the potential to influence contracting processes; measuring quality; food for staff and visitors; the role of food in hospitals; and longer term sustainability and impact. Participants reported that adopting the Food for Life approach had provided enormous scope to improve the quality of food in hospital settings and had provided levers and external benchmarks for use in contracting to help drive up standards of the food provided by external contractors for patients and staff. This was demonstrated by the achievement of a Food for Life Catering Mark (FFLCM) for staff and visitor catering in all three organisations. Conclusions Participants all felt that the importance of food in hospitals is not always recognised. Engagement with Food for Life can produce a significant change in the focus on food within hospitals and help to improve the quality of food and mealtime experience for staff, visitors and patients.

  3. Accumulo/Hadoop, MongoDB, and Elasticsearch Performance for Semi Structured Intrusion Detection (IDS) Data

    DTIC Science & Technology

    2016-11-01

    Yahoo! Cloud Serving Benchmark (YCSB): Data Loading and Performance Testing Framework. ... When originally setting out to perform the ... a data loading and performance testing framework, the Yahoo! Cloud Serving Benchmark (YCSB). This framework is freely available and ...

  4. Society of Critical Care Medicine

    MedlinePlus


  5. Using a health promotion model to promote benchmarking.

    PubMed

    Welby, Jane

    2006-07-01

    The North East (England) Neonatal Benchmarking Group has been established for almost a decade and has researched and developed a substantial number of evidence-based benchmarks. With no firm evidence that these were being used or that there was any standardisation of neonatal care throughout the region, the group embarked on a programme to review the benchmarks and determine what evidence-based guidelines were needed to support standardisation. A health promotion planning model was used by one subgroup to structure the programme; it enabled all members of the subgroup to engage in the review process and provided the motivation and supporting documentation for implementation of changes in practice. The need for a regional guideline development group to complement the activity of the benchmarking group is being addressed.

  6. Kohn-Sham Band Structure Benchmark Including Spin-Orbit Coupling for 2D and 3D Solids

    NASA Astrophysics Data System (ADS)

    Huhn, William; Blum, Volker

    2015-03-01

    Accurate electronic band structures serve as a primary indicator of the suitability of a material for a given application, e.g., as an electronic or catalytic material. Computed band structures, however, are subject to a host of approximations, some of which are more obvious (e.g., the treatment of exchange-correlation or the self-energy) and others less obvious (e.g., the treatment of core, semicore, or valence electrons, the handling of relativistic effects, or the accuracy of the underlying basis set). We here provide a set of accurate Kohn-Sham band structure benchmarks, using the numeric atom-centered all-electron electronic structure code FHI-aims combined with the "traditional" PBE functional and the hybrid HSE functional, to calculate core, valence, and low-lying conduction bands of a set of 2D and 3D materials. Benchmarks are provided with and without the effects of spin-orbit coupling, using quasi-degenerate perturbation theory to predict spin-orbit splittings. This work is funded by Fritz-Haber-Institut der Max-Planck-Gesellschaft.

  7. Evolutionary Optimization of a Geometrically Refined Truss

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem, predominantly by applying traditional optimization theory, with the cross-sectional area of each member optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has previously been applied to compliant mechanism design. The technique combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation (genetic algorithms and differential evolution) to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, a multilevel geometric smoothing operation, and an elastostatic structural analysis, and is wrapped in an evolutionary computing optimization toolset.
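
    As a flavor of the evolutionary side of such studies, the sketch below uses SciPy's differential evolution to size the members of a statically determinate two-bar truss for minimum mass under stress and tip-deflection limits. The geometry, loads, and material constants are illustrative, and this is a generic demonstration, not the TP's compliant-mechanism formulation.

        import numpy as np
        from scipy.optimize import differential_evolution

        rho, E_mod = 2.7e-6, 70e3          # density (kg/mm^3), modulus (MPa), illustrative
        L = np.array([1000.0, 1414.0])     # member lengths (mm)
        F = np.array([50e3, -70.7e3])      # member forces from statics (N); for a
                                           # determinate truss these are area-independent
        sigma_allow, delta_max = 150.0, 5.0

        def objective(A):
            mass = rho * np.sum(A * L)
            stress = np.abs(F) / A
            # unit-load (Castigliano) tip deflection under the 50 kN applied load
            delta = np.sum(F ** 2 * L / (E_mod * A)) / 50e3
            penalty = 1e3 * (np.maximum(stress / sigma_allow - 1, 0).sum()
                             + max(delta / delta_max - 1, 0))
            return mass + penalty

        res = differential_evolution(objective, bounds=[(10.0, 2000.0)] * 2, seed=3, tol=1e-8)
        print(f"areas (mm^2) = {np.round(res.x, 1)}, mass ~ {res.fun:.2f} kg")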

  8. Benchmark matrix and guide: Part II.

    PubMed

    1991-01-01

    In the last issue of the Journal of Quality Assurance (September/October 1991, Volume 13, Number 5, pp. 14-19), the benchmark matrix developed by Headquarters Air Force Logistics Command was published. Five horizontal levels on the matrix delineate progress in TQM: business as usual, initiation, implementation, expansion, and integration. The six vertical categories that are critical to the success of TQM are leadership, structure, training, recognition, process improvement, and customer focus. In this issue, "Benchmark Matrix and Guide: Part II" will show specifically how to apply the categories of leadership, structure, and training to the benchmark matrix progress levels. At the intersection of each category and level, specific behavior objectives are listed with supporting behaviors and guidelines. Some categories will have objectives that are relatively easy to accomplish, allowing quick progress from one level to the next. Other categories will take considerable time and effort to complete. In the next issue, Part III of this series will focus on recognition, process improvement, and customer focus.

  9. Mapping transiently formed and sparsely populated conformations on a complex energy landscape

    PubMed Central

    Wang, Yong; Papaleo, Elena; Lindorff-Larsen, Kresten

    2016-01-01

    Determining the structures, kinetics, thermodynamics and mechanisms that underlie conformational exchange processes in proteins remains extremely difficult. Only in favourable cases is it possible to provide atomic-level descriptions of sparsely populated and transiently formed alternative conformations. Here we benchmark the ability of enhanced-sampling molecular dynamics simulations to determine the free energy landscape of the L99A cavity mutant of T4 lysozyme. We find that the simulations capture key properties previously measured by NMR relaxation dispersion methods including the structure of a minor conformation, the kinetics and thermodynamics of conformational exchange, and the effect of mutations. We discover a new tunnel that involves the transient exposure towards the solvent of an internal cavity, and show it to be relevant for ligand escape. Together, our results provide a comprehensive view of the structural landscape of a protein, and point forward to studies of conformational exchange in systems that are less characterized experimentally. DOI: http://dx.doi.org/10.7554/eLife.17505.001 PMID:27552057

  10. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations for moving the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Quality of Work-Life Programs in U.S. Medical Schools: Review and Case Studies

    ERIC Educational Resources Information Center

    Otto, Ann; Bourguet, Claire

    2006-01-01

    Quality of work life is being recognized more and more as a driving factor in the recruitment and retention of highly qualified employees. Before Northeastern Ohio Universities College of Medicine began development of its QWL initiative, it surveyed other medical schools across the U.S. to determine benchmarks of best practices in these programs.…

  12. Can Middle-School Science Textbooks Help Students Learn Important Ideas? Findings from Project 2061's Curriculum Evaluation Study: Life Science

    ERIC Educational Resources Information Center

    Stern, Luli; Roseman, Jo Ellen

    2004-01-01

    The transfer of matter and energy from one organism to another and between organisms and their physical setting is a fundamental concept in life science. Not surprisingly, this concept is common to the "Benchmarks for Science Literacy" (American Association for the Advancement of Science, [1993]), the "National Science Education Standards"…

  13. Issues in Benchmarking and Assessing Institutional Engagement

    ERIC Educational Resources Information Center

    Furco, Andrew; Miller, William

    2009-01-01

    The process of assessing and benchmarking community engagement can take many forms. To date, more than two dozen assessment tools for measuring community engagement institutionalization have been published. These tools vary substantially in purpose, level of complexity, scope, process, structure, and focus. While some instruments are designed to…

  14. Global-local methodologies and their application to nonlinear analysis. [for structural postbuckling study

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1986-01-01

    An assessment is made of the potential of different global-local analysis strategies for predicting the nonlinear and postbuckling responses of structures. Two postbuckling problems of composite panels are used as benchmarks and the application of different global-local methodologies to these benchmarks is outlined. The key elements of each of the global-local strategies are discussed and future research areas needed to realize the full potential of global-local methodologies are identified.

  15. XWeB: The XML Warehouse Benchmark

    NASA Astrophysics Data System (ADS)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure the feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse, based on a unified reference model for XML warehouses and featuring XML-specific structures, together with its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  16. A benchmark testing ground for integrating homology modeling and protein docking.

    PubMed

    Bohnuud, Tanggis; Luo, Lingqi; Wodak, Shoshana J; Bonvin, Alexandre M J J; Weng, Zhiping; Vajda, Sandor; Schueler-Furman, Ora; Kozakov, Dima

    2017-01-01

    Protein docking procedures carry out the task of predicting the structure of a protein-protein complex starting from the known structures of the individual protein components. More often than not, however, the structure of one or both components is not known, but can be derived by homology modeling on the basis of known structures of related proteins deposited in the Protein Data Bank (PDB). Thus, the problem is to develop methods that optimally integrate homology modeling and docking with the goal of predicting the structure of a complex directly from the amino acid sequences of its component proteins. One possibility is to use the best available homology modeling and docking methods. However, the models built for the individual subunits often differ to a significant degree from the bound conformation in the complex, often much more so than the differences observed between free and bound structures of the same protein, and therefore additional conformational adjustments, at both the backbone and side chain levels, need to be modeled to achieve an accurate docking prediction. In particular, even homology models of overall good accuracy frequently include localized errors that unfavorably impact docking results. The predicted reliability of the different regions in the model can also serve as a useful input for the docking calculations. Here we present a benchmark dataset that should help to explore and solve combined modeling and docking problems. This dataset comprises a subset of the experimentally solved 'target' complexes from the widely used Docking Benchmark from the Weng Lab (excluding antibody-antigen complexes). This subset is extended to include the structures from the PDB related to those of the individual components of each complex, which hence represent potential templates for investigating and benchmarking integrated homology modeling and docking approaches. Template sets can be dynamically customized by specifying ranges in sequence similarity and in PDB release dates, or using other filtering options, such as excluding sets of specific structures from the template list. Multiple sequence alignments, as well as structural alignments of the templates to their corresponding subunits in the target, are also provided. The resource is accessible online or can be downloaded at http://cluspro.org/benchmark, and is updated on a weekly basis in synchrony with new PDB releases. Proteins 2016; 85:10-16. © 2016 Wiley Periodicals, Inc.
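
    The dynamic customization described above amounts to filtering a template table by sequence identity and release date. A sketch of that logic over a hypothetical record list (the field names and records are invented for illustration, not the cluspro.org schema):

        from datetime import date

        # Hypothetical template records for one target subunit.
        templates = [
            {"pdb": "1ABC", "seq_id": 0.92, "released": date(2004, 5, 1)},
            {"pdb": "2DEF", "seq_id": 0.41, "released": date(2011, 9, 3)},
            {"pdb": "3GHI", "seq_id": 0.28, "released": date(2015, 2, 7)},
        ]

        def select_templates(records, min_id=0.2, max_id=0.5, before=None, exclude=()):
            """Keep templates in a sequence-identity window, optionally capped by
            PDB release date, with specific entries excluded."""
            keep = []
            for r in records:
                if not (min_id <= r["seq_id"] <= max_id):
                    continue
                if before is not None and r["released"] >= before:
                    continue
                if r["pdb"] in exclude:
                    continue
                keep.append(r["pdb"])
            return keep

        print(select_templates(templates, before=date(2014, 1, 1), exclude={"1ABC"}))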

  17. Extensive site-directed mutagenesis reveals interconnected functional units in the alkaline phosphatase active site

    PubMed Central

    Sunden, Fanny; Peck, Ariana; Salzman, Julia; Ressl, Susanne; Herschlag, Daniel

    2015-01-01

    Enzymes enable life by accelerating reaction rates to biological timescales. Conventional studies have focused on identifying the residues that have a direct involvement in an enzymatic reaction, but these so-called ‘catalytic residues’ are embedded in extensive interaction networks. Although fundamental to our understanding of enzyme function, evolution, and engineering, the properties of these networks have yet to be quantitatively and systematically explored. We dissected an interaction network of five residues in the active site of Escherichia coli alkaline phosphatase. Analysis of the complex catalytic interdependence of specific residues identified three energetically independent but structurally interconnected functional units with distinct modes of cooperativity. From an evolutionary perspective, this network is orders of magnitude more probable to arise than a fully cooperative network. From a functional perspective, new catalytic insights emerge. Further, such comprehensive energetic characterization will be necessary to benchmark the algorithms required to rationally engineer highly efficient enzymes. DOI: http://dx.doi.org/10.7554/eLife.06181.001 PMID:25902402

  18. An Unbiased Method To Build Benchmarking Sets for Ligand-Based Virtual Screening and its Application To GPCRs

    PubMed Central

    2015-01-01

    Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus had been placed on the structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date these ready-to-apply data sets for LBVS are fairly limited, and the direct usage of benchmarking sets designed for SBVS could bring the biases to the evaluation of LBVS. Herein, we propose an unbiased method to build benchmarking sets for LBVS and validate it on a multitude of GPCRs targets. To be more specific, our methods can (1) ensure chemical diversity of ligands, (2) maintain the physicochemical similarity between ligands and decoys, (3) make the decoys dissimilar in chemical topology to all ligands to avoid false negatives, and (4) maximize spatial random distribution of ligands and decoys. We evaluated the quality of our Unbiased Ligand Set (ULS) and Unbiased Decoy Set (UDS) using three common LBVS approaches, with Leave-One-Out (LOO) Cross-Validation (CV) and a metric of average AUC of the ROC curves. Our method has greatly reduced the “artificial enrichment” and “analogue bias” of a published GPCRs benchmarking set, i.e., GPCR Ligand Library (GLL)/GPCR Decoy Database (GDD). In addition, we addressed an important issue about the ratio of decoys per ligand and found that for a range of 30 to 100 it does not affect the quality of the benchmarking set, so we kept the original ratio of 39 from the GLL/GDD. PMID:24749745

  19. Benchmarking in pathology: development of an activity-based costing model.

    PubMed

    Burnett, Leslie; Wilson, Roger; Pfeffer, Sally; Lowry, John

    2012-12-01

    Benchmarking in Pathology (BiP) allows pathology laboratories to determine the unit cost of all laboratory tests and procedures, and also provides organisational productivity indices allowing comparisons of performance with other BiP participants. We describe 14 years of progressive enhancement to a BiP program, including the implementation of 'avoidable costs' as the accounting basis for allocation of costs rather than previous approaches using 'total costs'. A hierarchical tree-structured activity-based costing model distributes 'avoidable costs' attributable to the pathology activities component of a pathology laboratory operation. The hierarchical tree model permits costs to be allocated across multiple laboratory sites and organisational structures. This has enabled benchmarking on a number of levels, including test profiles and non-testing related workload activities. The development of methods for dealing with variable cost inputs, allocation of indirect costs using imputation techniques, panels of tests, and blood-bank record keeping, have been successfully integrated into the costing model. A variety of laboratory management reports are produced, including the 'cost per test' of each pathology 'test' output. Benchmarking comparisons may be undertaken at any and all of the 'cost per test' and 'cost per Benchmarking Complexity Unit' level, 'discipline/department' (sub-specialty) level, or overall laboratory/site and organisational levels. We have completed development of a national BiP program. An activity-based costing methodology based on avoidable costs overcomes many problems of previous benchmarking studies based on total costs. The use of benchmarking complexity adjustment permits correction for varying test-mix and diagnostic complexity between laboratories. Use of iterative communication strategies with program participants can overcome many obstacles and lead to innovations.
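
    A hierarchical tree-structured allocation of the kind described can be sketched as a recursive cost roll-down: each node's avoidable cost is pooled with what it inherits and distributed to its children in proportion to an activity driver until it reaches the leaf-level tests. The tree structure, drivers, and figures below are invented for illustration, not BiP's actual model.

        def allocate(node, inherited=0.0, unit_costs=None):
            """Distribute avoidable costs down a tree; leaves accumulate cost per test."""
            if unit_costs is None:
                unit_costs = {}
            pool = inherited + node.get("avoidable_cost", 0.0)
            children = node.get("children", [])
            if not children:                       # leaf = a test; its driver is test volume
                unit_costs[node["name"]] = pool / node["volume"]
                return unit_costs
            total_driver = sum(c["driver"] for c in children)
            for c in children:
                allocate(c, pool * c["driver"] / total_driver, unit_costs)
            return unit_costs

        lab = {"name": "laboratory", "avoidable_cost": 90_000.0, "children": [
            {"name": "chemistry", "driver": 2.0, "avoidable_cost": 30_000.0, "children": [
                {"name": "glucose", "driver": 3.0, "volume": 40_000},
                {"name": "HbA1c",   "driver": 1.0, "volume": 5_000},
            ]},
            {"name": "haematology", "driver": 1.0, "avoidable_cost": 15_000.0, "children": [
                {"name": "FBC", "driver": 1.0, "volume": 25_000},
            ]},
        ]}
        print(allocate(lab))   # cost per test at each leaf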

  20. An unbiased method to build benchmarking sets for ligand-based virtual screening and its application to GPCRs.

    PubMed

    Xia, Jie; Jin, Hongwei; Liu, Zhenming; Zhang, Liangren; Wang, Xiang Simon

    2014-05-27

    Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus had been placed on the structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date these ready-to-apply data sets for LBVS are fairly limited, and the direct usage of benchmarking sets designed for SBVS could bring the biases to the evaluation of LBVS. Herein, we propose an unbiased method to build benchmarking sets for LBVS and validate it on a multitude of GPCRs targets. To be more specific, our methods can (1) ensure chemical diversity of ligands, (2) maintain the physicochemical similarity between ligands and decoys, (3) make the decoys dissimilar in chemical topology to all ligands to avoid false negatives, and (4) maximize spatial random distribution of ligands and decoys. We evaluated the quality of our Unbiased Ligand Set (ULS) and Unbiased Decoy Set (UDS) using three common LBVS approaches, with Leave-One-Out (LOO) Cross-Validation (CV) and a metric of average AUC of the ROC curves. Our method has greatly reduced the "artificial enrichment" and "analogue bias" of a published GPCRs benchmarking set, i.e., GPCR Ligand Library (GLL)/GPCR Decoy Database (GDD). In addition, we addressed an important issue about the ratio of decoys per ligand and found that for a range of 30 to 100 it does not affect the quality of the benchmarking set, so we kept the original ratio of 39 from the GLL/GDD.
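
    Criteria (2) and (3) above, physicochemical similarity paired with topological dissimilarity, can be expressed with standard cheminformatics tooling. A sketch using RDKit, where the SMILES, property window, and thresholds are illustrative rather than the ULS/UDS parameters:

        from rdkit import Chem
        from rdkit.Chem import AllChem, Descriptors, DataStructs

        def fingerprint(smiles):
            return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

        def is_valid_decoy(decoy_smiles, ligand_smiles_list, mw_tol=25.0, max_tanimoto=0.3):
            """Keep a decoy if its weight matches some ligand but its topology matches none."""
            d_mol = Chem.MolFromSmiles(decoy_smiles)
            d_fp, d_mw = fingerprint(decoy_smiles), Descriptors.MolWt(d_mol)
            mw_match = any(abs(d_mw - Descriptors.MolWt(Chem.MolFromSmiles(s))) <= mw_tol
                           for s in ligand_smiles_list)
            topo_clash = any(DataStructs.TanimotoSimilarity(d_fp, fingerprint(s)) > max_tanimoto
                             for s in ligand_smiles_list)
            return mw_match and not topo_clash

        ligands = ["CCOC(=O)c1ccccc1N", "CC(=O)Nc1ccc(O)cc1"]   # toy actives
        print(is_valid_decoy("CCCCCCCCC(=O)O", ligands))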

  1. Organic contaminants, trace and major elements, and nutrients in water and sediment sampled in response to the Deepwater Horizon oil spill

    USGS Publications Warehouse

    Nowell, Lisa H.; Ludtke, Amy S.; Mueller, David K.; Scott, Jonathon C.

    2011-01-01

    Considering all the information evaluated in this report, there were significant differences between pre-landfall and post-landfall samples for PAH concentrations in sediment. Pre-landfall and post-landfall samples did not differ significantly in concentrations or benchmark exceedances for most organics in water or trace elements in sediment. For trace elements in water, aquatic-life benchmarks were exceeded in almost 50 percent of samples, but the high and variable analytical reporting levels precluded statistical comparison of benchmark exceedances between sampling periods. Concentrations of several PAH compounds in sediment were significantly higher in post-landfall samples than pre-landfall samples, and five of seven sites with the largest differences in PAH concentrations also had diagnostic geochemical evidence of Deepwater Horizon Macondo-1 oil from Rosenbauer and others (2010).

  2. Automatic EEG artifact removal: a weighted support vector machine approach with error correction.

    PubMed

    Shao, Shi-Yun; Shen, Kai-Quan; Ong, Chong Jin; Wilder-Smith, Einar P V; Li, Xiao-Ping

    2009-02-01

    An automatic electroencephalogram (EEG) artifact removal method is presented in this paper. Compared to past methods, it has two unique features: (1) a weighted version of the support vector machine formulation that handles the inherently unbalanced nature of component classification, and (2) the ability to accommodate structural information typically found in component classification. The advantages of the proposed method are demonstrated on real-life EEG recordings, with comparisons made to several benchmark methods. Results show that the proposed method is preferable to the others in the context of artifact removal, achieving a better tradeoff between removing artifacts and preserving inherent brain activities. Qualitative evaluation of the reconstructed EEG epochs also demonstrates that inherent brain activities are largely preserved after artifact removal.
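
    Feature (1), the weighted SVM, corresponds to per-class misclassification penalties; in scikit-learn this is the class_weight parameter. The weights and toy features below are illustrative, not the paper's formulation or its EEG-derived features.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(4)
        # Toy, unbalanced component features: 190 "brain" vs. 10 "artifact" components.
        X = np.vstack([rng.normal(0, 1, (190, 5)), rng.normal(2, 1, (10, 5))])
        y = np.array([0] * 190 + [1] * 10)

        # Heavier penalty on misclassifying the rare artifact class.
        clf = SVC(kernel="rbf", C=1.0, class_weight={0: 1.0, 1: 19.0})
        clf.fit(X, y)
        print(f"training accuracy = {clf.score(X, y):.2f}")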

  3. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    PubMed

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  4. Delving into sensible measures to enhance the environmental performance of biohydrogen: A quantitative approach based on process simulation, life cycle assessment and data envelopment analysis.

    PubMed

    Martín-Gamboa, Mario; Iribarren, Diego; Susmozas, Ana; Dufour, Javier

    2016-08-01

    A novel approach is developed to evaluate quantitatively the influence of operational inefficiency in biomass production on the life-cycle performance of hydrogen from biomass gasification. Vine-growers and process simulation are used as key sources of inventory data. The life cycle assessment of biohydrogen according to current agricultural practices for biomass production is performed, as well as that of target biohydrogen according to agricultural practices optimised through data envelopment analysis. Only 20% of the vineyards assessed operate efficiently, and the benchmarked reduction percentages of operational inputs range from 45% to 73% in the average vineyard. The fulfilment of operational benchmarks avoiding irregular agricultural practices is concluded to improve significantly the environmental profile of biohydrogen (e.g., impact reductions above 40% for eco-toxicity and global warming). Finally, it is shown that this type of bioenergy system can be an excellent replacement for conventional hydrogen in terms of global warming and non-renewable energy demand. Copyright © 2016 Elsevier Ltd. All rights reserved.
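
    The data envelopment analysis step can be reproduced with a standard input-oriented CCR model solved as a linear program; a sketch with SciPy, where the vineyard input/output data are invented for illustration:

        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[50.0, 8.0], [40.0, 6.0], [70.0, 9.0], [60.0, 5.0]])  # inputs per DMU
        Y = np.array([[100.0], [90.0], [105.0], [110.0]])                   # outputs per DMU

        def ccr_efficiency(o, X, Y):
            """Input-oriented CCR envelopment model for DMU o:
            min theta s.t. X'lambda <= theta * x_o, Y'lambda >= y_o, lambda >= 0."""
            n, m = X.shape
            s = Y.shape[1]
            c = np.r_[1.0, np.zeros(n)]                 # minimize theta
            A_in = np.c_[-X[o].reshape(m, 1), X.T]      # inputs:  X'lam - theta*x_o <= 0
            A_out = np.c_[np.zeros((s, 1)), -Y.T]       # outputs: -Y'lam <= -y_o
            A_ub = np.vstack([A_in, A_out])
            b_ub = np.r_[np.zeros(m), -Y[o]]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(0, None)] * (n + 1), method="highs")
            return res.fun

        for o in range(len(X)):
            print(f"vineyard {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")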

  5. Encouraging Reactivity to Create Robust Machines

    DTIC Science & Technology

    2013-07-01

    Performance Evaluation and Benchmarking of Intelligent Systems, 113-137. Baldwin, J. (1896). A new factor in evolution. The American Naturalist, 30(355)... Once more unto the breach: Co-evolving a robot and its simulator. In Proceedings of the International Conference on Artificial Life (ALIFE9) (pp. 57... Pfeifer, R. (2003). Evolving complete agents using artificial ontogeny. In (pp. 237-258). Springer-Verlag. Brooks, R. (1994). Artificial life and

  6. Benchmarking on Tsunami Currents with ComMIT

    NASA Astrophysics Data System (ADS)

    Sharghi vand, N.; Kanoglu, U.

    2015-12-01

    There were no standards for the validation and verification of tsunami numerical models before the 2004 Indian Ocean tsunami; numerical models had been used for inundation mapping, evaluation of critical structures, and similar efforts without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which are used in the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental, and field benchmark problems aimed at estimating maximum runup, and these are widely accepted by the community. Recently, benchmark problems oriented toward the validation and verification of tsunami numerical models on tsunami currents were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents, held February 9-10, 2015, in Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). Three of the benchmark problems were: current measurements of the 2011 Japan tsunami in Hilo Harbor, Hawaii, USA, and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), a user-friendly interface to the validated and verified Method of Splitting Tsunami (MOST) model (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316) developed by NCTR. The modeling results are compared with the required benchmark data, showing good agreement, and the results are discussed. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe).

  7. A Benchmark Problem for Development of Autonomous Structural Modal Identification

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Woodard, Stanley E.; Juang, Jer-Nan

    1996-01-01

    This paper summarizes modal identification results obtained using an autonomous version of the Eigensystem Realization Algorithm on a dynamically complex, laboratory structure. The benchmark problem uses 48 of 768 free-decay responses measured in a complete modal survey test. The true modal parameters of the structure are well known from two previous, independent investigations. Without user involvement, the autonomous data analysis identified 24 to 33 structural modes with good to excellent accuracy in 62 seconds of CPU time (on a DEC Alpha 4000 computer). The modal identification technique described in the paper is the baseline algorithm for NASA's Autonomous Dynamics Determination (ADD) experiment scheduled to fly on International Space Station assembly flights in 1997-1999.
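
    For readers unfamiliar with the identification step, the sketch below is a minimal single-channel version of the Eigensystem Realization Algorithm applied to a synthetic two-mode free decay. The flight algorithm is autonomous and multi-channel, so this only illustrates the Hankel-matrix/SVD core; the signal and model order are invented.

```python
# Minimal single-output ERA sketch: build Hankel matrices from a free-decay
# response, realize a state matrix via SVD, and read modal parameters from
# its eigenvalues. Illustrative only; not NASA's autonomous implementation.
import numpy as np

def era_modes(y, dt, order):
    """Identify natural frequencies (Hz) and damping ratios from a free decay y."""
    n = len(y) // 2 - 1
    H0 = np.array([y[i:i + n] for i in range(n)])          # Hankel matrix
    H1 = np.array([y[i + 1:i + 1 + n] for i in range(n)])  # time-shifted Hankel
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order]
    S_inv_sqrt = np.diag(1.0 / np.sqrt(s))
    A = S_inv_sqrt @ U.T @ H1 @ Vt.T @ S_inv_sqrt          # realized state matrix
    z = np.linalg.eigvals(A)                               # discrete-time eigenvalues
    lam = np.log(z) / dt                                   # continuous-time poles
    keep = lam.imag > 1e-9                                 # one pole per conjugate pair
    freq = np.abs(lam[keep]) / (2 * np.pi)
    zeta = -lam[keep].real / np.abs(lam[keep])
    idx = np.argsort(freq)
    return freq[idx], zeta[idx]

# Synthetic two-mode free decay: 1.5 Hz at 1% damping + 4.0 Hz at 2% damping.
dt = 0.01
t = np.arange(0, 10, dt)
y = (np.exp(-0.01 * 2 * np.pi * 1.5 * t) * np.cos(2 * np.pi * 1.5 * t)
     + 0.5 * np.exp(-0.02 * 2 * np.pi * 4.0 * t) * np.cos(2 * np.pi * 4.0 * t))
for fr, ze in zip(*era_modes(y, dt, order=4)):
    print(f"mode: {fr:.2f} Hz, damping {ze:.3f}")
```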

  8. Recommendations for Benchmarking Web Site Usage among Academic Libraries.

    ERIC Educational Resources Information Center

    Hightower, Christy; Sih, Julie; Tilghman, Adam

    1998-01-01

    To help library directors and Web developers create a benchmarking program to compare statistics of academic Web sites, the authors analyzed the Web server log files of 14 university science and engineering libraries. Recommends a centralized voluntary reporting structure coordinated by the Association of Research Libraries (ARL) and a method for…

  9. Academic Achievement and Extracurricular School Activities of At-Risk High School Students

    ERIC Educational Resources Information Center

    Marchetti, Ryan; Wilson, Randal H.; Dunham, Mardis

    2016-01-01

    This study compared the employment, extracurricular participation, and family structure status of students from low socioeconomic families that achieved state-approved benchmarks on ACT reading and mathematics tests to those that did not achieve the benchmarks. Free and reduced lunch eligibility was used to determine SES. Participants included 211…

  10. Benchmarking Alumni Relations in Community Colleges: Findings from a 2015 CASE Survey

    ERIC Educational Resources Information Center

    Paradise, Andrew

    2016-01-01

    The Benchmarking Alumni Relations in Community Colleges white paper features key data on alumni relations programs at community colleges across the United States. The paper compares results from 2015 and 2012 across such areas as the structure, operations and budget for alumni relations, alumni data collection and management, alumni communications…

  11. Development of a strontium chronic effects benchmark for aquatic life in freshwater.

    PubMed

    McPherson, Cathy A; Lawrence, Gary S; Elphick, James R; Chapman, Peter M

    2014-11-01

    There are no national water-quality guidelines for strontium for the protection of freshwater aquatic life in North America or elsewhere. Available data on the acute and chronic toxicity of strontium to freshwater aquatic life were compiled and reviewed. Acute toxicity was reported to occur at concentrations ranging from 75 mg/L to 15 000 mg/L. The majority of chronic effects occurred at concentrations above 11 mg/L; however, calculation of a representative benchmark was confounded by results from 4 studies indicating that chronic effects occurred at lower concentrations than all other studies, in 2 cases below background concentrations reported for US and European streams. Two of these studies, including 1 reporting effects below background concentrations, were repeated and found not to be reproducible; chronic effects occurred at considerably higher strontium concentrations than in the original studies. Studies with narrow-mouthed toad and goldfish were not repeated; both studies reported chronic effects below background concentrations, and both studies had been conducted by the authors of 1 of the 2 studies that were repeated and shown to be nonreproducible. Studies by these authors (3 of the 4 confounding studies), conducted over 30 yr ago, lacked detail in reporting of methods and results. It is thus likely that repeating the toad and goldfish studies would also have resulted in a higher strontium effects concentration. A strontium chronic effects benchmark of 10.7 mg/L that incorporates the results of additional testing summarized in the present study is proposed for freshwater environments. © 2014 SETAC.

  12. Benchmarking initiatives in the water industry.

    PubMed

    Parena, R; Smeets, E

    2001-01-01

    Customer satisfaction and service care are pushing professionals in the water industry every day to seek to improve their performance, lowering costs and increasing the level of service provided. Process benchmarking is generally recognised as a systematic mechanism of comparing one's own utility with other utilities or businesses, with the intent of self-improvement by adopting structures or methods used elsewhere. The IWA Task Force on Benchmarking, operating inside the Statistics and Economics Committee, has been committed to developing a generally accepted concept of process benchmarking to support water decision-makers in addressing issues of efficiency. In a first step, the Task Force disseminated among the Committee members a questionnaire focused on the type, degree of evolution and main concepts of benchmarking adopted in the countries represented. A comparison of the guidelines adopted in The Netherlands and Scandinavia has recently spurred the Task Force to draft a methodology for worldwide process benchmarking in the water industry. The paper provides a framework of the most interesting benchmarking experiences in the water sector and describes in detail both the final results of the survey and the methodology focused on identification of possible improvement areas.

  13. QUASAR--scoring and ranking of sequence-structure alignments.

    PubMed

    Birzele, Fabian; Gewehr, Jan E; Zimmer, Ralf

    2005-12-15

    Sequence-structure alignments are a common means for protein structure prediction in the fields of fold recognition and homology modeling, and there is a broad variety of programs that provide such alignments based on sequence similarity, secondary structure or contact potentials. Nevertheless, finding the best sequence-structure alignment in a pool of alignments remains a difficult problem. QUASAR (quality of sequence-structure alignments ranking) provides a unifying framework for scoring sequence-structure alignments that aids finding well-performing combinations of well-known and custom-made scoring schemes. Those scoring functions can be benchmarked against widely accepted quality scores like MaxSub, TMScore, Touch and APDB, thus enabling users to test their own alignment scores against 'standard-of-truth' structure-based scores. Furthermore, individual score combinations can be optimized with respect to benchmark sets based on known structural relationships using QUASAR's in-built optimization routines.

  14. [Does implementation of benchmarking in quality circles improve the quality of care of patients with asthma and reduce drug interaction?].

    PubMed

    Kaufmann-Kolle, Petra; Szecsenyi, Joachim; Broge, Björn; Haefeli, Walter Emil; Schneider, Antonius

    2011-01-01

    The purpose of this cluster-randomised controlled trial was to evaluate the efficacy of quality circles (QCs) working either with general data-based feedback or with an open benchmark within the field of asthma care and drug-drug interactions. Twelve QCs, involving 96 general practitioners from 85 practices, were randomised. Six QCs worked with traditional anonymous feedback and six with an open benchmark. Two QC meetings supported with feedback reports were held, covering the topics "drug-drug interactions" and "asthma"; in both cases discussions were guided by a trained moderator. Outcome measures included health-related quality of life and patient satisfaction with treatment, asthma severity and number of potentially inappropriate drug combinations, as well as the general practitioners' satisfaction with the performance of the QC. A significant improvement in the treatment of asthma was observed in both trial arms. However, there was only a slight improvement regarding inappropriate drug combinations. There were no relevant differences between the open-benchmark group (B-QC) and the traditional quality circles (T-QC). The physicians' satisfaction with the QC performance was significantly higher in the T-QCs. General practitioners seem to take a critical perspective on open benchmarking in quality circles. Caution should be used when implementing benchmarking in a quality circle, as it did not improve healthcare when compared to the traditional procedure with anonymised comparisons. Copyright © 2011. Published by Elsevier GmbH.

  15. Ab initio calculations, structure, NBO and NCI analyses of X-H⋯π interactions

    NASA Astrophysics Data System (ADS)

    Wu, Qiyang; Su, He; Wang, Hongyan; Wang, Hui

    2018-02-01

    The performance of ab initio methods (MP2, DFT/B3LYP, the random-phase approximation (RPA), CCSD(T) and QCISD(T)) in predicting the interaction energies of X-H⋯π (X-H = HCCH, HCl, HF; π = C2H2, C2H4, C6H6) hydrogen-bonded complexes is assessed systematically. CCSD(T)/CBS benchmarks of the interaction energies are reported. It is found that RPA agrees well with the CCSD(T)/CBS benchmarks and experimental results. CCSD(T) and QCISD(T) perform best only when compared with the CCSD(T)/CBS benchmarks, MP2 performs well only against experimental data, and B3LYP provides the worst accuracy. Additionally, the equilibrium structures and interaction types of the X-H⋯π hydrogen-bonded complexes are investigated with natural bond orbital (NBO) analysis and the non-covalent interaction index (NCI).
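
    CCSD(T)/CBS reference values of this kind are usually obtained by basis-set extrapolation. A minimal sketch, assuming the common two-point X^-3 extrapolation of the correlation energy (whether this exact scheme was used in the paper is an assumption, and the energies below are placeholders):

```python
# Two-point complete-basis-set (CBS) extrapolation of the correlation energy
# (the Helgaker-style X^-3 form). The HF part converges faster and is often
# taken from the larger basis, so E(CBS) = E_HF(large) + E_corr(CBS).
def cbs_corr(e_x, e_y, X, Y):
    """Extrapolate correlation energies from cardinal numbers X < Y (e.g. 3, 4)."""
    return (Y**3 * e_y - X**3 * e_x) / (Y**3 - X**3)

# Hypothetical aug-cc-pVTZ / aug-cc-pVQZ correlation energies (hartree).
e_corr_tz, e_corr_qz = -0.3051, -0.3122
print(f"E_corr(CBS) = {cbs_corr(e_corr_tz, e_corr_qz, 3, 4):.4f} hartree")
```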

  16. Antibody-protein interactions: benchmark datasets and prediction tools evaluation

    PubMed Central

    Ponomarenko, Julia V; Bourne, Philip E

    2007-01-01

    Background The ability to predict antibody binding sites (aka antigenic determinants or B-cell epitopes) for a given protein is a precursor to new vaccine design and diagnostics. Among the various methods of B-cell epitope identification, X-ray crystallography is one of the most reliable. Using these experimental data, computational methods have been developed for B-cell epitope prediction. As the number of structures of antibody-protein complexes grows, further interest in prediction methods using 3D structure is anticipated. This work aims to establish a benchmark for 3D structure-based epitope prediction methods. Results Two B-cell epitope benchmark datasets inferred from the 3D structures of antibody-protein complexes were defined. The first is a dataset of 62 representative 3D structures of protein antigens with inferred structural epitopes. The second is a dataset of 82 structures of antibody-protein complexes containing different structural epitopes. Using these datasets, eight web servers developed for the prediction of antibody and protein binding sites were evaluated. No method exceeded 40% precision and 46% recall. The values of the area under the receiver operating characteristic curve for the evaluated methods were about 0.6 for ConSurf, DiscoTope, and PPI-PRED, and above 0.65 but not exceeding 0.70 for protein-protein docking methods when the best of the top ten models for the bound docking were considered; the remaining methods performed close to random. The benchmark datasets are included as a supplement to this paper. Conclusion It may be possible to improve epitope prediction methods through training on datasets which include only immune epitopes and through utilizing more features characterizing epitopes, for example, the evolutionary conservation score. Notwithstanding, the overall poor performance may reflect the generality of antigenicity and hence the inability to decipher B-cell epitopes as an intrinsic feature of the protein. It is an open question whether ultimately discriminatory features can be found. PMID:17910770
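
    The precision, recall and AUC figures quoted above can be reproduced for any per-residue predictor with a few lines of code. A minimal sketch with invented labels and scores (the benchmark's actual scoring pipeline is in the paper's supplement):

```python
# Per-residue evaluation metrics for epitope prediction: precision, recall,
# and ROC AUC computed from binary labels and continuous prediction scores.
import numpy as np

def precision_recall(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return tp / (tp + fp), tp / (tp + fn)

def roc_auc(y_true, scores):
    # Probability that a random epitope residue outranks a random non-epitope one.
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0, 1, 0])            # 1 = epitope residue
scores = np.array([.9, .4, .3, .8, .7, .2, .1, .5, .6, .35])  # predictor output
y_pred = (scores >= 0.5).astype(int)
p, r = precision_recall(y_true, y_pred)
print(f"precision {p:.2f}, recall {r:.2f}, AUC {roc_auc(y_true, scores):.2f}")
```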

  17. Maximal Unbiased Benchmarking Data Sets for Human Chemokine Receptors and Comparative Analysis.

    PubMed

    Xia, Jie; Reid, Terry-Elinor; Wu, Song; Zhang, Liangren; Wang, Xiang Simon

    2018-05-29

    Chemokine receptors (CRs) have long been druggable targets for the treatment of inflammatory diseases and HIV-1 infection. As a powerful technique, virtual screening (VS) has been widely applied to identifying small-molecule leads for modern drug targets including CRs. For rational selection among the wide variety of VS approaches, ligand enrichment assessment based on a benchmarking data set has become an indispensable practice. However, the lack of versatile benchmarking sets for the whole CR family that can unbiasedly evaluate every single approach, including both structure- and ligand-based VS, somewhat hinders modern drug discovery efforts. To address this issue, we constructed Maximal Unbiased Benchmarking Data sets for human Chemokine Receptors (MUBD-hCRs) using our recently developed tool, MUBD-DecoyMaker. The MUBD-hCRs encompass 13 of the 20 chemokine receptor subtypes, comprising 404 ligands and 15,756 decoys so far, and are readily expandable in the future. We thoroughly validated that MUBD-hCRs ligands are chemically diverse while the decoys are maximally unbiased in terms of "artificial enrichment" and "analogue bias". In addition, we studied the performance of MUBD-hCRs, in particular the CXCR4 and CCR5 data sets, in ligand enrichment assessments of both structure- and ligand-based VS approaches in comparison with other benchmarking data sets available in the public domain, and demonstrated that MUBD-hCRs is very capable of designating the optimal VS approach. MUBD-hCRs is a unique and maximally unbiased benchmarking set that covers the major CR subtypes.
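
    Ligand enrichment, the quantity such benchmarking sets are built to measure, is often summarized by an enrichment factor at a small fraction of the ranked database. A minimal sketch with synthetic scores; the exact enrichment metrics reported for MUBD-hCRs may differ.

```python
# Enrichment factor EF(x%): how many more actives a VS method recovers in the
# top x% of its ranking than random selection would. Scores here are synthetic.
import numpy as np

def enrichment_factor(is_ligand, scores, fraction=0.01):
    """EF(x%) = (actives found in top x%) / (actives expected at random)."""
    order = np.argsort(-scores)                 # best-scored first
    n_top = max(1, int(len(scores) * fraction))
    hits = is_ligand[order][:n_top].sum()
    return (hits / n_top) / (is_ligand.sum() / len(is_ligand))

rng = np.random.default_rng(0)
is_ligand = np.r_[np.ones(40), np.zeros(1560)].astype(bool)   # 40 ligands, 1560 decoys
scores = rng.normal(0, 1, is_ligand.size) + 1.5 * is_ligand   # ligands score higher
print(f"EF(1%) = {enrichment_factor(is_ligand, scores, 0.01):.1f}")
```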

  18. A novel hybrid meta-heuristic technique applied to the well-known benchmark optimization problems

    NASA Astrophysics Data System (ADS)

    Abtahi, Amir-Reza; Bijari, Afsane

    2017-03-01

    In this paper, a hybrid meta-heuristic algorithm based on the imperialist competitive algorithm (ICA), harmony search (HS), and simulated annealing (SA) is presented. The body of the proposed hybrid algorithm is based on ICA. The hybrid inherits the advantages of the harmony-creation process of HS to improve the exploitation phase of ICA, and it uses SA to balance the exploration and exploitation phases. The proposed hybrid algorithm is compared with several meta-heuristic methods, including the genetic algorithm (GA), HS, and ICA, on several well-known benchmark instances. Comprehensive experiments and statistical analysis on standard benchmark functions confirm the superiority of the proposed method over the other algorithms. The efficacy of the proposed hybrid algorithm is promising, and it can be used in several real-life engineering and management problems.
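
    To make the hybrid structure concrete, the sketch below combines the three ingredients on the sphere benchmark function: ICA-style assimilation toward imperialists, HS-style memory consideration with pitch adjustment, and SA acceptance of worse moves. This is deliberately simplified and is NOT the authors' exact algorithm.

```python
# Simplified ICA/HS/SA hybrid on the sphere benchmark function, illustrating
# the loop structure the abstract describes; parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sum(x**2)                 # sphere benchmark function
dim, n_pop, n_imp, iters = 5, 30, 3, 300
pop = rng.uniform(-5, 5, (n_pop, dim))
cost = np.array([f(x) for x in pop])
T = 1.0                                    # SA temperature

for it in range(iters):
    imp = np.argsort(cost)[:n_imp]         # best solutions act as imperialists
    for i in range(n_pop):
        if i in imp:
            continue
        target = pop[imp[i % n_imp]]
        # ICA assimilation step: move the colony toward its imperialist.
        cand = pop[i] + 2.0 * rng.random(dim) * (target - pop[i])
        # HS-style refinement: draw some coordinates from the population
        # memory, then pitch-adjust them slightly.
        mask = rng.random(dim) < 0.3
        cand[mask] = pop[rng.integers(n_pop), mask] + rng.normal(0, 0.1, mask.sum())
        c = f(cand)
        # SA acceptance: always keep improvements, sometimes keep worse moves.
        if c < cost[i] or rng.random() < np.exp((cost[i] - c) / T):
            pop[i], cost[i] = cand, c
    T *= 0.99                              # cooling schedule

print(f"best cost after {iters} iterations: {cost.min():.2e}")
```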

  19. Evaluation of Life Events in Major Depression: Assessing Negative Emotional Bias.

    PubMed

    Girz, Laura; Driver-Linn, Erin; Miller, Gregory A; Deldin, Patricia J

    2017-05-01

    Overly negative appraisals of negative life events characterize depression, but patterns of emotion bias associated with life events in depression are not well understood. The goal of this paper is to determine under which situations emotional responses are stronger than expected given life events, and which emotions are biased. Depressed (n = 16) and non-depressed (n = 14) participants (mean age = 41.4 years) wrote about negative life events involving their own actions and inactions, and rated the current emotion elicited by those events. They also rated emotions elicited by someone else's actions and inactions. These ratings were compared with evaluations provided by a second, 'benchmark' group of non-depressed individuals (n = 20) in order to assess the magnitude and direction of possible biased emotional reactions in the two groups. Participants with depression reported greater anger and disgust than expected in response to both actions and inactions, whereas they reported greater guilt, shame, sadness, responsibility and fear than expected in response to inactions. Relative to non-depressed and benchmark participants, depressed participants were overly negative in the evaluation of their own life events, but not the life events of others. A standardized method for establishing emotional bias reveals a pattern of overly negative emotion only in depressed individuals' self-evaluations, in particular with respect to anger and disgust, lending support to claims that major depressives' evaluations represent negative emotional bias and to clinical interventions that address this bias. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Benchmark Analysis of Career and Technical Education in Lenawee County. Final Report.

    ERIC Educational Resources Information Center

    Hollenbeck, Kevin

    The career and technical education (CTE) provided in grades K-12 in the county's vocational-technical center and 12 local public school districts of Lenawee County, Michigan, was benchmarked with respect to its attention to career development. Data were collected from the following sources: structured interviews with a number of key respondents…

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackillop, William J., E-mail: william.mackillop@krcc.on.ca; Department of Public Health Sciences, Queen's University, Kingston, Ontario; Department of Oncology, Queen's University, Kingston, Ontario

    Purpose: Palliative radiation therapy (PRT) benefits many patients with incurable cancer, but the overall need for PRT is unknown. Our primary objective was to estimate the appropriate rate of use of PRT in Ontario. Methods and Materials: The Ontario Cancer Registry identified patients who died of cancer in Ontario between 2006 and 2010. Comprehensive RT records were linked to the registry. Multivariate analysis identified social and health system-related factors affecting the use of PRT, enabling us to define a benchmark population of patients with unimpeded access to PRT. The proportion of cases treated at any time (PRT_lifetime), the proportion of cases treated in the last 2 years of life (PRT_2y), and the number of courses of PRT per thousand cancer deaths were measured in the benchmark population. These benchmarks were standardized to the characteristics of the overall population, and province-wide PRT rates were then compared to the benchmarks. Results: Cases diagnosed at hospitals with no RT on-site, residents of poorer communities, and those who lived farther from an RT center were significantly less likely than others to receive PRT. However, availability of RT at the diagnosing hospital was the dominant factor. Neither socioeconomic status nor distance from home to the nearest RT center had a significant effect on the use of PRT in patients diagnosed at a hospital with RT facilities. The benchmark population therefore consisted of patients diagnosed at a hospital with RT facilities. The standardized benchmark for PRT_lifetime was 33.9%, and the corresponding province-wide rate was 28.5%. The standardized benchmark for PRT_2y was 32.4%, and the corresponding province-wide rate was 27.0%. The standardized benchmark for the number of courses of PRT per thousand cancer deaths was 652, and the corresponding province-wide rate was 542. Conclusions: Approximately one-third of patients who die of cancer in Ontario need PRT, but many of them are never treated.
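
    The standardization step above amounts to applying the benchmark population's stratum-specific rates to the case mix of the whole province. A minimal sketch, with invented strata and rates rather than the study's data:

```python
# Direct standardization of a benchmark rate: weight stratum-specific rates
# observed in the benchmark population by the overall population's case mix.
def standardized_rate(benchmark_rates, population_weights):
    """Weighted average of stratum-specific benchmark rates."""
    assert abs(sum(population_weights.values()) - 1.0) < 1e-9
    return sum(benchmark_rates[s] * w for s, w in population_weights.items())

# Hypothetical PRT_lifetime rates by cancer site and province-wide case mix.
benchmark_rates = {"lung": 0.45, "breast": 0.30, "prostate": 0.25, "other": 0.30}
population_weights = {"lung": 0.25, "breast": 0.15, "prostate": 0.10, "other": 0.50}
rate = standardized_rate(benchmark_rates, population_weights)
print(f"standardized PRT_lifetime benchmark: {rate:.1%}")
```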

  2. U.S. Solar Photovoltaic System Cost Benchmark: Q1 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Ran; Feldman, David; Margolis, Robert

    This report benchmarks U.S. solar photovoltaic (PV) system installed costs as of the first quarter of 2017 (Q1 2017). We use a bottom-up methodology, accounting for all system and project-development costs incurred during installation, to model the costs for residential, commercial, and utility-scale systems. In general, we attempt to model the typical installation techniques and business operations from an installed-cost perspective. Costs are represented from the perspective of the developer/installer; thus, all hardware costs represent the price at which components are purchased by the developer/installer, not accounting for preexisting supply agreements or other contracts. Importantly, the benchmark also represents the sales price paid to the installer; therefore, it includes profit in the cost of the hardware, along with the profit the installer/developer receives, as a separate cost category. However, it does not include any additional net profit, such as a developer fee or price gross-up, which is common in the marketplace. We adopt this approach owing to the wide variation in developer profits in all three sectors, where project pricing is highly dependent on region and project specifics such as local retail electricity rate structures, local rebate and incentive structures, competitive environment, and overall project or deal structures. Finally, our benchmarks are national averages weighted by state installed capacities.
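
    The bottom-up accounting described above reduces, at its core, to summing per-watt cost categories from hardware through installer profit. A minimal sketch; the category names are loosely modeled on the report's structure and every dollar value is an illustrative placeholder, not an NREL figure:

```python
# Toy bottom-up installed-cost benchmark: sum per-watt cost categories and
# report each category's share. All numbers are invented placeholders.
cost_per_watt = {
    "module": 0.35,
    "inverter": 0.10,
    "structural/electrical BOS": 0.30,
    "installation labor": 0.30,
    "permitting/interconnection": 0.10,
    "sales & overhead": 0.80,
    "installer/developer profit": 0.35,
}
total = sum(cost_per_watt.values())
print(f"benchmark system price: ${total:.2f}/W")
for category, cost in sorted(cost_per_watt.items(), key=lambda kv: -kv[1]):
    print(f"  {category:30s} {cost / total:5.1%}")
```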

  3. Benchmark Tests for Stirling Convertor Heater Head Life Assessment Conducted

    NASA Technical Reports Server (NTRS)

    Krause, David L.; Halford, Gary R.; Bowman, Randy R.

    2004-01-01

    A new in-house test capability has been developed at the NASA Glenn Research Center, where a critical component of the Stirling Radioisotope Generator (SRG) is undergoing extensive testing to aid the development of analytical life prediction methodology and to experimentally aid in verification of the flight-design component's life. The new facility includes two test rigs that are performing creep testing of the SRG heater head pressure vessel test articles at design temperature and with wall stresses ranging from the operating level to seven times that level.

  4. IRaPPA: Information retrieval based integration of biophysical models for protein assembly selection

    PubMed Central

    Moal, Iain H.; Barradas-Bautista, Didier; Jiménez-García, Brian; Torchala, Mieczyslaw; van der Velde, Arjan; Vreven, Thom; Weng, Zhiping; Bates, Paul A.; Fernández-Recio, Juan

    2018-01-01

    Motivation In order to function, proteins frequently bind to one another and form 3D assemblies. Knowledge of the atomic details of these structures helps our understanding of how proteins work together, how mutations can lead to disease, and facilitates the designing of drugs which prevent or mimic the interaction. Results Atomic modeling of protein-protein interactions requires the selection of near-native structures from a set of docked poses based on their calculable properties. By considering this as an information retrieval problem, we have adapted methods developed for Internet search ranking and electoral voting into IRaPPA, a pipeline integrating biophysical properties. The approach enhances the identification of near-native structures when applied to four docking methods, resulting in a near-native appearing in the top 10 solutions for up to 50% of complexes benchmarked, and up to 70% in the top 100. Availability IRaPPA has been implemented in the SwarmDock server (http://bmm.crick.ac.uk/~SwarmDock/), pyDock server (http://life.bsc.es/pid/pydockrescoring/) and ZDOCK server (http://zdock.umassmed.edu/), with code available on request. PMID:28200016
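
    One of the electoral-voting ideas the abstract mentions can be illustrated with a Borda count over the rankings produced by several scoring functions. The sketch below is only that one ingredient; the actual IRaPPA pipeline integrates many more biophysical properties, and the scores here are invented.

```python
# Borda-count rank aggregation for docking-pose re-ranking: each scoring
# function ranks all poses, and a pose collects points by rank position.
def borda_rerank(score_tables):
    """score_tables: list of {pose_id: score}, higher score = better pose."""
    points = {}
    for table in score_tables:
        ranked = sorted(table, key=table.get, reverse=True)
        for position, pose in enumerate(ranked):
            points[pose] = points.get(pose, 0) + (len(ranked) - position)
    return sorted(points, key=points.get, reverse=True)

# Three hypothetical normalized scoring functions over three poses.
sf1 = {"pose1": 0.8, "pose2": 0.9, "pose3": 0.3}
sf2 = {"pose1": 0.6, "pose2": 0.4, "pose3": 0.7}
sf3 = {"pose1": 0.5, "pose2": 0.9, "pose3": 0.2}
print(borda_rerank([sf1, sf2, sf3]))   # consensus ranking of the poses
```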

  5. Adsorption structures and energetics of molecules on metal surfaces: Bridging experiment and theory

    NASA Astrophysics Data System (ADS)

    Maurer, Reinhard J.; Ruiz, Victor G.; Camarillo-Cisneros, Javier; Liu, Wei; Ferri, Nicola; Reuter, Karsten; Tkatchenko, Alexandre

    2016-05-01

    Adsorption geometry and stability of organic molecules on surfaces are key parameters that determine the observable properties and functions of hybrid inorganic/organic systems (HIOSs). Despite many recent advances in precise experimental characterization and improvements in first-principles electronic structure methods, reliable databases of structures and energetics for large adsorbed molecules are largely amiss. In this review, we present such a database for a range of molecules adsorbed on metal single-crystal surfaces. The systems we analyze include noble-gas atoms, conjugated aromatic molecules, carbon nanostructures, and heteroaromatic compounds adsorbed on five different metal surfaces. The overall objective is to establish a diverse benchmark dataset that enables an assessment of current and future electronic structure methods, and motivates further experimental studies that provide ever more reliable data. Specifically, the benchmark structures and energetics from experiment are here compared with the recently developed van der Waals (vdW) inclusive density-functional theory (DFT) method, DFT + vdWsurf. In comparison to 23 adsorption heights and 17 adsorption energies from experiment we find a mean average deviation of 0.06 Å and 0.16 eV, respectively. This confirms the DFT + vdWsurf method as an accurate and efficient approach to treat HIOSs. A detailed discussion identifies remaining challenges to be addressed in future development of electronic structure methods, for which the here presented benchmark database may serve as an important reference.

  6. Classification and assessment tools for structural motif discovery algorithms.

    PubMed

    Badr, Ghada; Al-Turaiki, Isra; Mathkour, Hassan

    2013-01-01

    Motif discovery is the problem of finding recurring patterns in biological data. Patterns can be sequential, mainly when discovered in DNA sequences. They can also be structural (e.g. when discovering RNA motifs). Finding common structural patterns helps to gain a better understanding of the mechanism of action (e.g. post-transcriptional regulation). Unlike DNA motifs, which are sequentially conserved, RNA motifs exhibit conservation in structure, which may be common even if the sequences are different. Over the past few years, hundreds of algorithms have been developed to solve the sequential motif discovery problem, while less work has been done for the structural case. In this paper, we survey, classify, and compare different algorithms that solve the structural motif discovery problem, where the underlying sequences may be different. We highlight their strengths and weaknesses. We start by proposing a benchmark dataset and a measurement tool that can be used to evaluate different motif discovery approaches. Then, we proceed by proposing our experimental setup. Finally, results are obtained using the proposed benchmark to compare available tools. To the best of our knowledge, this is the first attempt to compare tools solely designed for structural motif discovery. Results show that the accuracy of discovered motifs is relatively low. The results also suggest a complementary behavior among tools where some tools perform well on simple structures, while other tools are better for complex structures. We have classified and evaluated the performance of available structural motif discovery tools. In addition, we have proposed a benchmark dataset with tools that can be used to evaluate newly developed tools.

  7. Pharmaceutical Market Access: current state of affairs and key challenges - results of the Market Access Launch Excellence Inventory (MALEI).

    PubMed

    Koch, Marcus A

    2015-01-01

    To take inventory of the current state of affairs of Market Access Launch Excellence in the life sciences industry. To identify key gaps and challenges for Market Access (MA) and discuss how they can be addressed. To generate a baseline for benchmarking MA launch excellence. An online survey was conducted with pharmaceutical executives primarily working in MA, marketing, or general management. The survey aimed to evaluate MA excellence prerequisites across the product life cycle (rated by importance and level of implementation) and to describe MA activity models in the respective companies. Composite scores were calculated from respondents' ratings and answers. Implementation levels of MA excellence prerequisites generally lagged behind their perceived importance. Item importance and the respective level of implementation correlated well, which can be interpreted as proof of the validity of the questionnaire. The following areas were shown to be particularly underimplemented: 1) early integration of MA and health economic considerations in research and development decision making, 2) developing true partnerships with payers, including the development of services 'beyond the pill', and 3) consideration of human resource and talent management. The concept of importance-adjusted implementation levels as a hybrid parameter was introduced and shown to be a viable tool for benchmarking purposes. More than 70% of respondents indicated that their companies will invest broadly in MA in terms of capital and headcount within the next 3 years. MA (launch) excellence needs to be further developed in order to close implementation gaps across the entire product life cycle. As MA is a comparatively young pharmaceutical discipline in a complex and dynamic environment, this effort will require strategic focus and dedication. The Market Access Launch Excellence Inventory benchmarking tool may help guide decision makers to prioritize their endeavors.

  8. U.S. Solar Photovoltaic System Cost Benchmark: Q1 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Ran; Feldman, David J.; Margolis, Robert M.

    NREL has been modeling U.S. photovoltaic (PV) system costs since 2009. This year, our report benchmarks costs of U.S. solar PV for residential, commercial, and utility-scale systems built in the first quarter of 2017 (Q1 2017). Costs are represented from the perspective of the developer/installer; thus, all hardware costs represent the price at which components are purchased by the developer/installer, not accounting for preexisting supply agreements or other contracts. Importantly, the benchmark this year (2017) also represents the sales price paid to the installer; therefore, it includes profit in the cost of the hardware, along with the profit the installer/developer receives, as a separate cost category. However, it does not include any additional net profit, such as a developer fee or price gross-up, which are common in the marketplace. We adopt this approach owing to the wide variation in developer profits in all three sectors, where project pricing is highly dependent on region and project specifics such as local retail electricity rate structures, local rebate and incentive structures, competitive environment, and overall project or deal structures.

  9. TRUST. I. A 3D externally illuminated slab benchmark for dust radiative transfer

    NASA Astrophysics Data System (ADS)

    Gordon, K. D.; Baes, M.; Bianchi, S.; Camps, P.; Juvela, M.; Kuiper, R.; Lunttila, T.; Misselt, K. A.; Natale, G.; Robitaille, T.; Steinacker, J.

    2017-07-01

    Context. The radiative transport of photons through arbitrary three-dimensional (3D) structures of dust is a challenging problem due to the anisotropic scattering of dust grains and strong coupling between different spatial regions. The radiative transfer problem in 3D is solved using Monte Carlo or Ray Tracing techniques as no full analytic solution exists for the true 3D structures. Aims: We provide the first 3D dust radiative transfer benchmark composed of a slab of dust with uniform density externally illuminated by a star. This simple 3D benchmark is explicitly formulated to provide tests of the different components of the radiative transfer problem including dust absorption, scattering, and emission. Methods: The details of the external star, the slab itself, and the dust properties are provided. This benchmark includes models with a range of dust optical depths fully probing cases that are optically thin at all wavelengths to optically thick at most wavelengths. The dust properties adopted are characteristic of the diffuse Milky Way interstellar medium. This benchmark includes solutions for the full dust emission including single photon (stochastic) heating as well as two simplifying approximations: One where all grains are considered in equilibrium with the radiation field and one where the emission is from a single effective grain with size-distribution-averaged properties. A total of six Monte Carlo codes and one Ray Tracing code provide solutions to this benchmark. Results: The solution to this benchmark is given as global spectral energy distributions (SEDs) and images at select diagnostic wavelengths from the ultraviolet through the infrared. Comparison of the results revealed that the global SEDs are consistent on average to a few percent for all but the scattered stellar flux at very high optical depths. The image results are consistent within 10%, again except for the stellar scattered flux at very high optical depths. The lack of agreement between different codes of the scattered flux at high optical depths is quantified for the first time. Convergence tests using one of the Monte Carlo codes illustrate the sensitivity of the solutions to various model parameters. Conclusions: We provide the first 3D dust radiative transfer benchmark and validate the accuracy of this benchmark through comparisons between multiple independent codes and detailed convergence tests.
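
    The core mechanism this benchmark exercises, photon packets propagating through an absorbing and scattering slab, can be illustrated with a toy Monte Carlo loop. The TRUST setup uses realistic dust (anisotropic scattering, wavelength-dependent albedo, dust emission), so the isotropic, grey sketch below is only a conceptual illustration:

```python
# Toy Monte Carlo photon transport through a uniform slab: sample free paths
# in optical depth, scatter isotropically with probability = albedo, and
# count the photons transmitted through the far face.
import numpy as np

def slab_transmitted_fraction(tau_slab, albedo, n_photons=20_000, seed=0):
    rng = np.random.default_rng(seed)
    transmitted = 0
    for _ in range(n_photons):
        tau_pos, mu = 0.0, 1.0            # depth into slab, direction cosine
        while True:
            tau_pos += mu * (-np.log(1.0 - rng.random()))  # sample free path
            if tau_pos >= tau_slab:       # escaped through the far face
                transmitted += 1
                break
            if tau_pos < 0.0:             # escaped back out the illuminated face
                break
            if rng.random() > albedo:     # interaction was an absorption
                break
            mu = 2.0 * rng.random() - 1.0 # isotropic scattering direction
    return transmitted / n_photons

for tau in (0.1, 1.0, 10.0):
    frac = slab_transmitted_fraction(tau, albedo=0.6)
    print(f"tau={tau:5.1f}: transmitted fraction {frac:.3f}")
```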

  10. The demographic impact and development benefits of meeting demand for family planning with modern contraceptive methods.

    PubMed

    Goodkind, Daniel; Lollock, Lisa; Choi, Yoonjoung; McDevitt, Thomas; West, Loraine

    2018-01-01

    Meeting demand for family planning can facilitate progress towards all major themes of the United Nations Sustainable Development Goals (SDGs): people, planet, prosperity, peace, and partnership. Many policymakers have embraced a benchmark goal that at least 75% of the demand for family planning in all countries be satisfied with modern contraceptive methods by the year 2030. This study examines the demographic impact (and development implications) of achieving the 75% benchmark in 13 developing countries that are expected to be the furthest from achieving that benchmark. Estimation of the demographic impact of achieving the 75% benchmark requires three steps in each country: 1) translate contraceptive prevalence assumptions (with and without intervention) into future fertility levels based on biometric models, 2) incorporate each pair of fertility assumptions into separate population projections, and 3) compare the demographic differences between the two population projections. Data are drawn from the United Nations, the US Census Bureau, and Demographic and Health Surveys. The demographic impact of meeting the 75% benchmark is examined via projected differences in fertility rates (average expected births per woman's reproductive lifetime), total population, growth rates, age structure, and youth dependency. On average, meeting the benchmark would imply a 16 percentage point increase in modern contraceptive prevalence by 2030 and a 20% decline in youth dependency, which portends a potential demographic dividend to spur economic growth. Improvements in meeting the demand for family planning with modern contraceptive methods can bring substantial benefits to developing countries. To our knowledge, this is the first study to show formally how such improvements can alter population size and age structure. Declines in youth dependency portend a demographic dividend, an added bonus to the already well-known benefits of meeting existing demands for family planning.
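
    The paper's three-step logic, fertility assumptions feeding separate projections that are then compared, can be caricatured with a tiny cohort-component model. Real projections use full life tables and fine age groups; every rate below is invented purely to show the mechanics:

```python
# Toy Leslie-matrix projection comparing two fertility scenarios and their
# effect on youth dependency. Age groups, survival, and fertility rates are
# illustrative placeholders, not the study's inputs.
import numpy as np

# Three 15-year age groups: 0-14, 15-29, 30-44 (a deliberately tiny model).
survival = np.array([0.95, 0.93])          # survival into the next age group

def project(fertility, steps=2):
    """fertility: births per person in each age group, per 15-year step."""
    L = np.zeros((3, 3))
    L[0, :] = fertility                    # births contributed by each group
    L[1, 0], L[2, 1] = survival            # aging with survival
    pop = np.array([100.0, 90.0, 80.0])    # initial population (thousands)
    for _ in range(steps):
        pop = L @ pop
    return pop

for label, fert in [("status quo", [0.0, 1.4, 0.7]),
                    ("75% benchmark met", [0.0, 1.1, 0.5])]:
    pop = project(np.array(fert))
    youth_dependency = pop[0] / pop[1:].sum()
    print(f"{label:18s}: pop {pop.sum():6.1f}k, youth dependency {youth_dependency:.2f}")
```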

  11. ICSBEP Benchmarks For Nuclear Data Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briggs, J. Blair

    2005-05-24

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) -- Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  12. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    deWit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty-eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  14. BACT Simulation User Guide (Version 7.0)

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1997-01-01

    This report documents the structure and operation of a simulation model of the Benchmark Active Control Technology (BACT) Wind-Tunnel Model. The BACT system was designed, built, and tested at NASA Langley Research Center as part of the Benchmark Models Program and was developed to perform wind-tunnel experiments to obtain benchmark quality data to validate computational fluid dynamics and computational aeroelasticity codes, to verify the accuracy of current aeroservoelasticity design and analysis tools, and to provide an active controls testbed for evaluating new and innovative control algorithms for flutter suppression and gust load alleviation. The BACT system has been especially valuable as a control system testbed.

  15. Management characteristics of beef cattle production in the western United States

    USDA-ARS?s Scientific Manuscript database

    A comprehensive life cycle assessment (LCA) of beef in the United States is being conducted to provide benchmarks and identify opportunities for improvement of the beef value chain. Region-specific data are being collected to accurately characterize cattle production practices. This study reports pr...

  16. Dual linear structured support vector machine tracking method via scale correlation filter

    NASA Astrophysics Data System (ADS)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

    Adaptive tracking-by-detection methods based on structured support vector machines (SVMs) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark of 100 challenging video sequences, the average precision of the proposed method is 82.8%.
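
    The FFT-based detection step common to correlation-filter trackers can be shown in a few lines: the filter's response map over a search window is computed in the frequency domain. The full DLSSVM tracker adds the structured-SVM position model and a learned scale filter, so this is only the correlation core, with random stand-in data:

```python
# FFT-based circular cross-correlation, the detection primitive behind
# correlation-filter tracking: the response peak marks the object location.
import numpy as np

def correlation_response(search_window, template):
    """Circular cross-correlation of template with search window via FFT."""
    F_search = np.fft.fft2(search_window)
    F_templ = np.fft.fft2(template, s=search_window.shape)  # zero-padded template
    return np.real(np.fft.ifft2(F_search * np.conj(F_templ)))

rng = np.random.default_rng(0)
search = rng.random((64, 64))
template = search[10:26, 20:36].copy()     # the "object" lives at (10, 20)
resp = correlation_response(search, template)
print("peak at", np.unravel_index(resp.argmax(), resp.shape))  # ~(10, 20)
```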

  17. Developing and Trialling an independent, scalable and repeatable IT-benchmarking procedure for healthcare organisations.

    PubMed

    Liebe, J D; Hübner, U

    2013-01-01

    Continuous improvement of IT performance in healthcare organisations requires actionable performance indicators; regularly conducted, independent measurements; and meaningful and scalable reference groups. Existing IT-benchmarking initiatives have focussed on the development of reliable and valid indicators, but less on the question of how to implement an environment for conducting easily repeatable and scalable IT-benchmarks. This study aims at developing and trialling a procedure that meets the aforementioned requirements. We chose a well-established, regularly conducted (inter-)national IT survey of healthcare organisations (IT-Report Healthcare) as the environment and offered the participants of the 2011 survey (CIOs of hospitals) the opportunity to enter a benchmark. The 61 structural and functional performance indicators covered, among others, the implementation status and integration of IT systems and functions, global user satisfaction, and the resources of the IT department. Healthcare organisations were grouped by size and ownership. The benchmark results were made available electronically, and feedback on the use of these results was requested after several months. Fifty-nine hospitals participated in the benchmarking. Reference groups consisted of up to 141 members, depending on the number of beds (size) and the ownership (public vs. private). A total of 122 charts showing single-indicator frequency views were sent to each participant. The evaluation showed that 94.1% of the CIOs who participated in the evaluation considered this benchmarking beneficial and reported that they would enter again. Based on the feedback of the participants we developed two additional views that provide a more consolidated picture. The results demonstrate that establishing an independent, easily repeatable and scalable IT-benchmarking procedure is possible and was deemed desirable. Based on these encouraging results a new benchmarking round which includes process indicators is currently being conducted.

  18. A Simulation Environment for Benchmarking Sensor Fusion-Based Pose Estimators.

    PubMed

    Ligorio, Gabriele; Sabatini, Angelo Maria

    2015-12-19

    In-depth analysis and performance evaluation of sensor fusion-based estimators may be critical when performed using real-world sensor data. For this reason, simulation is widely recognized as one of the most powerful tools for algorithm benchmarking. In this paper, we present a simulation framework suitable for assessing the performance of sensor fusion-based pose estimators. The systems used for implementing the framework were magnetic/inertial measurement units (MIMUs) and a camera, although the addition of further sensing modalities is straightforward. Typical nuisance factors were also included for each sensor. The proposed simulation environment was validated using real-life sensor data employed for motion tracking. The highest mismatch between real and simulated sensors was about 5% of the measured quantity (for the camera simulation), whereas a lower correlation was found for one axis of the gyroscope (0.90). In addition, a real benchmarking example of an extended Kalman filter for pose estimation from MIMU and camera data is presented.
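
    The kind of validation figure quoted above (correlation and relative mismatch between real and simulated signals) can be illustrated with a one-axis gyroscope simulation that includes simple nuisance factors. Parameter values below are invented, not those of the paper's MIMU model:

```python
# Toy simulation of one gyroscope axis with bias and noise nuisance factors,
# scored against a "real" measurement by correlation and relative mismatch.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 10, 0.01)
true_rate = 0.5 * np.sin(2 * np.pi * 0.3 * t)               # rad/s, true motion
real = true_rate + rng.normal(0, 0.02, t.size)              # measured signal
simulated = true_rate + 0.01 + rng.normal(0, 0.03, t.size)  # simulated: bias + noise

corr = np.corrcoef(real, simulated)[0, 1]
mismatch = np.mean(np.abs(real - simulated)) / np.ptp(real)
print(f"correlation {corr:.2f}, mean mismatch {mismatch:.1%} of signal range")
```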

  19. Optimal type 2 diabetes mellitus management: the randomised controlled OPTIMISE benchmarking study: baseline results from six European countries.

    PubMed

    Hermans, Michel P; Brotons, Carlos; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank

    2013-12-01

    Micro- and macrovascular complications of type 2 diabetes have an adverse impact on survival, quality of life and healthcare costs. The OPTIMISE (OPtimal Type 2 dIabetes Management Including benchmarking and Standard trEatment) trial comparing physicians' individual performances with a peer group evaluates the hypothesis that benchmarking, using assessments of change in three critical quality indicators of vascular risk: glycated haemoglobin (HbA1c), low-density lipoprotein-cholesterol (LDL-C) and systolic blood pressure (SBP), may improve quality of care in type 2 diabetes in the primary care setting. This was a randomised, controlled study of 3980 patients with type 2 diabetes. Six European countries participated in the OPTIMISE study (NCT00681850). Quality of care was assessed by the percentage of patients achieving pre-set targets for the three critical quality indicators over 12 months. Physicians were randomly assigned to receive either benchmarked or non-benchmarked feedback. All physicians received feedback on six of their patients' modifiable outcome indicators (HbA1c, fasting glycaemia, total cholesterol, high-density lipoprotein-cholesterol (HDL-C), LDL-C and triglycerides). Physicians in the benchmarking group additionally received information on levels of control achieved for the three critical quality indicators compared with colleagues. At baseline, the percentage of evaluable patients (N = 3980) achieving pre-set targets was 51.2% (HbA1c; n = 2028/3964); 34.9% (LDL-C; n = 1350/3865); 27.3% (systolic blood pressure; n = 911/3337). OPTIMISE confirms that target achievement in the primary care setting is suboptimal for all three critical quality indicators. This represents an unmet but modifiable need to revisit the mechanisms and management of improving care in type 2 diabetes. OPTIMISE will help to assess whether benchmarking is a useful clinical tool for improving outcomes in type 2 diabetes.

  20. The national hydrologic bench-mark network

    USGS Publications Warehouse

    Cobb, Ernest D.; Biesecker, J.E.

    1971-01-01

    The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.

  1. Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems

    NASA Astrophysics Data System (ADS)

    Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald

    A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no possibility for a fair measurement of the quality of such optimizations. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today's benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction, access patterns are constantly shifting. We present a benchmark that simulates a web information system with interaction of large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark considers the temporal dependency of user interaction. Its main focus is to measure the adaptability of a database management system according to shifting workloads. We will give details on our design approach, which uses sophisticated pattern analysis and data mining techniques.
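
    The central idea, a request mix whose composition drifts over time instead of a fixed homogeneous stream, can be sketched with a generator whose per-type probabilities follow slow temporal patterns. The request types and drift laws below are illustrative, not the benchmark's actual user model:

```python
# Toy shifting-workload generator: the probability of each request type
# varies over a simulated day, so the served mix is never homogeneous.
import numpy as np

def query_mix(hour):
    """Time-varying weights for three request types over a 24-hour cycle."""
    browse = 1.0 + 0.8 * np.sin(2 * np.pi * hour / 24)         # evening-heavy
    search = 1.0 + 0.8 * np.sin(2 * np.pi * (hour - 9) / 24)   # daytime-heavy
    submit = 0.3 + 0.6 * np.exp(-(((hour - 23) % 24) ** 2) / 8)  # deadline spike
    w = np.array([browse, search, submit])
    return w / w.sum()

rng = np.random.default_rng(0)
types = np.array(["browse", "search", "submit"])
for hour in (3, 12, 22):
    sample = rng.choice(types, size=1000, p=query_mix(hour))
    counts = {t: int((sample == t).sum()) for t in types}
    print(f"{hour:02d}:00 -> {counts}")
```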

  2. Benchmarking facilities providing care: An international overview of initiatives

    PubMed Central

    Thonon, Frédérique; Watson, Jonathan; Saghatchian, Mahasti

    2015-01-01

    We performed a literature review of existing benchmarking projects of health facilities to explore (1) the rationales for those projects, (2) the motivation for health facilities to participate, (3) the indicators used and (4) the success and threat factors linked to those projects. We studied both peer-reviewed and grey literature. We examined 23 benchmarking projects of different medical specialities. The majority of projects used a mix of structure, process and outcome indicators. For some projects, participants had a direct or indirect financial incentive to participate (such as reimbursement by Medicaid/Medicare or litigation costs related to quality of care). A positive impact was reported for most projects, mainly in terms of improvement of practice and adoption of guidelines and, to a lesser extent, improvement in communication. Only 1 project reported positive impact in terms of clinical outcomes. Success factors and threats are linked to both the benchmarking process (such as organisation of meetings, link with existing projects) and indicators used (such as adjustment for diagnostic-related groups). The results of this review will help coordinators of a benchmarking project to set it up successfully. PMID:26770800

  3. Benchmarking forensic mental health organizations.

    PubMed

    Coombs, Tim; Taylor, Monica; Pirkis, Jane

    2011-04-01

    This paper describes the forensic mental health forums that were conducted as part of the National Mental Health Benchmarking Project (NMHBP). These forums encouraged participating organizations to compare their performance on a range of key performance indicators (KPIs) with that of their peers. Four forensic mental health organizations took part in the NMHBP. Representatives from these organizations attended eight benchmarking forums at which they documented their performance against previously agreed KPIs. They also undertook three special projects which explored some of the factors that might explain inter-organizational variation in performance. The inter-organizational range for many of the indicators was substantial. Observing this led participants to conduct the special projects to explore three factors which might help explain the variability - seclusion practices, delivery of community mental health services, and provision of court liaison services. The process of conducting the special projects gave participants insights into the practices and structures employed by their counterparts, and provided them with some important lessons for quality improvement. The forensic mental health benchmarking forums have demonstrated that benchmarking is feasible and likely to be useful in improving service performance and quality.

  4. Revisiting the PLUMBER Experiments from a Process-Diagnostics Perspective

    NASA Astrophysics Data System (ADS)

    Nearing, G. S.; Ruddell, B. L.; Clark, M. P.; Nijssen, B.; Peters-Lidard, C. D.

    2017-12-01

    The PLUMBER benchmarking experiments [1] showed that some of the most sophisticated land models (CABLE, CH-TESSEL, COLA-SSiB, ISBA-SURFEX, JULES, Mosaic, Noah, ORCHIDEE) were outperformed - in simulations of half-hourly surface energy fluxes - by instantaneous, out-of-sample, and globally-stationary regressions with no state memory. One criticism of PLUMBER is that the benchmarking methodology was not derived formally, so that applying a similar methodology with different performance metrics can result in qualitatively different results. Another common criticism of model intercomparison projects in general is that they offer little insight into process-level deficiencies in the models, and therefore are of marginal value for helping to improve the models. We address both of these issues by proposing a formal benchmarking methodology that also yields a formal and quantitative method for process-level diagnostics. We apply this to the PLUMBER experiments to show that (1) the PLUMBER conclusions were generally correct - the models use only a fraction of the information available to them from met forcing data (<50% by our analysis), and (2) all of the land models investigated by PLUMBER have similar process-level error structures, and therefore together do not represent a meaningful sample of structural or epistemic uncertainty. We conclude by suggesting two ways to improve the experimental design of model intercomparison and/or model benchmarking studies like PLUMBER. First, PLUMBER did not report model parameter values, and it is necessary to know these values to separate parameter uncertainty from structural uncertainty. This is a first order requirement if we want to use intercomparison studies to provide feedback to model development. Second, technical documentation of land models is inadequate. Future model intercomparison projects should begin with a collaborative effort by model developers to document specific differences between model structures. This could be done in a reproducible way using a unified, process-flexible system like SUMMA [2]. [1] Best, M.J. et al. (2015) 'The plumbing of land surface models: benchmarking model performance', J. Hydrometeor. [2] Clark, M.P. et al. (2015) 'A unified approach for process-based hydrologic modeling: 1. Modeling concept', Water Resour. Res.

  5. Implementation of BT, SP, LU, and FT of NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Schultz, Matthew; Frumkin, Michael; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of Java features make it an attractive but debatable choice for High Performance Computing. We have implemented the benchmarks that work on a single structured grid (BT, SP, LU and FT) in Java. The performance and scalability of the Java code show that significant improvements in Java compiler technology and in Java thread implementation are necessary for Java to compete with Fortran in HPC applications.

  6. PPI4DOCK: large scale assessment of the use of homology models in free docking over more than 1000 realistic targets.

    PubMed

    Yu, Jinchao; Guerois, Raphaël

    2016-12-15

    Protein-protein docking methods are of great importance for understanding interactomes at the structural level. It has become increasingly appealing to use not only experimental structures but also homology models of unbound subunits as input for docking simulations. So far we have been missing a large-scale assessment of the success of rigid-body free docking methods on homology models. We explored how we could benefit from comparative modelling of unbound subunits to expand docking benchmark datasets. Starting from a collection of 3157 non-redundant, high X-ray resolution heterodimers, we developed the PPI4DOCK benchmark containing 1417 docking targets based on unbound homology models. Rigid-body docking by Zdock showed that for 1208 cases (85.2%), at least one correct decoy was generated, emphasizing the efficiency of rigid-body docking in generating correct assemblies. Overall, the PPI4DOCK benchmark contains a large set of realistic cases and provides new ground for assessing docking and scoring methodologies. Benchmark sets can be downloaded from http://biodev.cea.fr/interevol/ppi4dock/. Contact: guerois@cea.fr. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Promoting Effective Program Leadership in Psychology: A Benchmarking Strategy

    ERIC Educational Resources Information Center

    Halonen, Jane S.

    2013-01-01

    Although scholars have scrutinized many aspects of academic life in psychology, the topic of leadership for psychology programs has remained elusive. This article describes the importance of high-quality leadership in the development of thriving psychology programs. The author offers a strategy for evaluating leaders to help provide developmental…

  8. 75 FR 81268 - Science Advisory Board Staff Office; Notification of Two Public Quality Review Teleconferences of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-27

    ... Two Public Quality Review Teleconferences of the Chartered Science Advisory Board AGENCY... Office announces two public teleconferences of the chartered SAB to conduct quality reviews of three SAB... Appalachian Coalfields'' and ``Review of Field-Based Aquatic Life Benchmark for Conductivity in Central...

  9. Information Technology Budgets and Costs: Do You Know What Your Information Technology Costs Each Year?

    ERIC Educational Resources Information Center

    Dugan, Robert E.

    2002-01-01

    Discusses yearly information technology costs for academic libraries. Topics include transformation and modernization activities that affect prices and budgeting; a cost model for information technologies; life cycle costs, including initial costs and recurring costs; cost benchmarks; and examples of pressures concerning cost accountability. (LRW)

  10. 75 FR 29339 - Science Advisory Board Staff Office; Notification of a Public Meeting of the SAB Panel for the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-25

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9154-7] Science Advisory Board Staff Office; Notification of... Aquatic Ecosystems and Aquatic Life Benchmark for Conductivity AGENCY: Environmental Protection Agency (EPA). ACTION: Notice. SUMMARY: The EPA Science Advisory Board (SAB) Staff Office announces a public...

  11. Food-System Botany

    ERIC Educational Resources Information Center

    Rop, Charles J.

    2011-01-01

    This set of inquiry lessons is adaptable for middle school through high school life science or biology classrooms and will help meet the NSTA scientific inquiry position statement (2004), the AAAS benchmarks (1993), and the NRC standards (1996; 2000) related to health and food literacy. The standards require adolescents to examine their own diet and…

  12. A CPU benchmark for protein crystallographic refinement.

    PubMed

    Bourne, P E; Hendrickson, W A

    1990-01-01

    The CPU time required to complete a cycle of restrained least-squares refinement of a protein structure from X-ray crystallographic data using the FORTRAN codes PROTIN and PROLSQ is reported for 48 different processors, ranging from single-user workstations to supercomputers. Sequential, vector, VLIW, multiprocessor, and RISC hardware architectures are compared using both a small and a large protein structure. Representative compile times for each hardware type are also given, and the improvement in run time when coding for a specific hardware architecture is considered. The benchmarks involve scalar integer and vector floating-point arithmetic and are representative of the calculations performed in many scientific disciplines.
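
    A modern analogue of this kind of measurement is a small CPU-time harness wrapped around the workload being compared. The sketch below is generic; the workload callable is a hypothetical stand-in for one refinement cycle.

        # Generic harness for collecting per-cycle CPU timings, in the spirit
        # of the PROTIN/PROLSQ comparisons. `workload` is a hypothetical
        # stand-in for the computation being benchmarked.
        import time
        import statistics

        def cpu_benchmark(workload, n_repeats=5):
            timings = []
            for _ in range(n_repeats):
                start = time.process_time()          # CPU time, not wall-clock
                workload()
                timings.append(time.process_time() - start)
            return min(timings), statistics.median(timings)

        # Example: cpu_benchmark(lambda: sum(i * i for i in range(10**6)))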

  13. ForceGen 3D structure and conformer generation: from small lead-like molecules to macrocyclic drugs

    NASA Astrophysics Data System (ADS)

    Cleves, Ann E.; Jain, Ajay N.

    2017-05-01

    We introduce the ForceGen method for 3D structure generation and conformer elaboration of drug-like small molecules. ForceGen is novel in avoiding distance geometry, molecular templates, and simulation-oriented stochastic sampling. The method is primarily driven by the molecular force field, implemented using an extension of MMFF94s and a partial-charge estimator based on electronegativity equalization. The force field is coupled to algorithms for direct sampling of realistic physical movements made by small molecules. Results are presented on a standard benchmark of 480 drug-like small molecules from the Cambridge Structural Database, including full structure generation from SMILES strings. Reproduction of protein-bound crystallographic ligand poses is demonstrated on four carefully curated data sets: the ConfGen set (667 ligands), the PINC cross-docking benchmark (1062 ligands), a large set of macrocyclic ligands (182 total, with typical ring sizes of 12-23 atoms), and a commonly used benchmark for evaluating macrocycle conformer generation (30 ligands total). Results compare favorably to alternative methods, and performance on macrocyclic compounds approaches that observed on non-macrocycles while yielding a roughly 100-fold speed improvement over MD-based methods of comparable accuracy.
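
    Pose reproduction of this kind is conventionally quantified by the RMSD between generated and crystallographic coordinates after optimal rigid superposition. A minimal sketch of the standard Kabsch procedure, assuming Nx3, atom-matched coordinate arrays:

        # Sketch of the Kabsch algorithm for optimal-superposition RMSD, the
        # usual score for reproduction of crystallographic ligand poses.
        import numpy as np

        def kabsch_rmsd(P, Q):
            """RMSD between matched Nx3 coordinate sets after optimal rigid
            superposition (Kabsch algorithm)."""
            P = P - P.mean(axis=0)                   # center both point sets
            Q = Q - Q.mean(axis=0)
            U, S, Vt = np.linalg.svd(P.T @ Q)        # SVD of the covariance matrix
            d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # optimal rotation
            return float(np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1))))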

  14. Implementing a benchmarking and feedback concept decreases postoperative pain after total knee arthroplasty: A prospective study including 256 patients.

    PubMed

    Benditz, A; Drescher, J; Greimel, F; Zeman, F; Grifka, J; Meißner, W; Völlner, F

    2016-12-05

    Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analysis and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in terms of activity-related pain and 9th in patient satisfaction among 47 anonymized hospitals participating in the benchmarking project. At the end of the study, we had improved to 1st in activity-related pain and to 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and for increasing patient satisfaction after TKA.

  16. Basecalling with LifeTrace

    PubMed Central

    Walther, Dirk; Bartha, Gábor; Morris, Macdonald

    2001-01-01

    A pivotal step in electrophoresis sequencing is the conversion of the raw, continuous chromatogram data into the actual sequence of discrete nucleotides, a process referred to as basecalling. We describe a novel algorithm for basecalling implemented in the program LifeTrace. Like Phred, currently the most widely used basecalling software program, LifeTrace takes processed trace data as input. It was designed to be tolerant to variable peak spacing by means of an improved peak-detection algorithm that emphasizes local chromatogram information over global properties. LifeTrace is shown to generate high-quality basecalls and reliable quality scores. It proved particularly effective when applied to MegaBACE capillary sequencing machines. In a benchmark test of 8372 dye-primer MegaBACE chromatograms, LifeTrace generated 17% fewer substitution errors, 16% fewer insertion/deletion errors, and 2.4% more aligned bases to the finished sequence than did Phred. For two sets totaling 6624 dye-terminator chromatograms, the performance improvement was 15% fewer substitution errors, 10% fewer insertion/deletion errors, and 2.1% more aligned bases. The processing time required by LifeTrace is comparable to that of Phred. The predicted quality scores were in line with observed quality scores, permitting direct use for quality clipping and in silico single nucleotide polymorphism (SNP) detection. Furthermore, we introduce a new type of quality score associated with every basecall: the gap-quality. It estimates the probability of a deletion error between the current and the following basecall. This additional quality score improves detection of single basepair deletions when used for locating potential basecalling errors during the alignment. We also describe a new protocol for benchmarking that we believe better discerns basecaller performance differences than methods previously published. PMID:11337481
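
    Both the per-base quality and the new gap-quality are probability-calibrated scores; in the standard Phred convention, an error probability p maps to a quality score Q = -10 log10(p). A minimal sketch of that convention (the convention itself, not LifeTrace's internal calibration):

        # Phred-style quality scores encode an error probability p as
        # Q = -10 * log10(p).
        import math

        def phred(p_error):
            """Quality score for an error probability, Q = -10 * log10(p)."""
            return -10.0 * math.log10(p_error)

        def error_prob(q):
            """Inverse mapping: error probability implied by a quality score."""
            return 10.0 ** (-q / 10.0)

        # phred(0.001) -> 30.0, i.e. one expected error per 1000 calls; a
        # gap-quality applies the same scale to the probability of a deletion
        # between two adjacent basecalls.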

  17. Experimental flutter boundaries with unsteady pressure distributions for the NACA 0012 Benchmark Model

    NASA Technical Reports Server (NTRS)

    Rivera, Jose A., Jr.; Dansberry, Bryan E.; Farmer, Moses G.; Eckstrom, Clinton V.; Seidel, David A.; Bennett, Robert M.

    1991-01-01

    The Structural Dynamics Division at NASA Langley has started a wind-tunnel activity referred to as the Benchmark Models Program. The objective is to acquire test data that will be useful for developing and evaluating aeroelastic-type Computational Fluid Dynamics codes currently in use or under development. The progress achieved in testing the first model in the Benchmark Models Program is described. Experimental flutter boundaries are presented for a rigid semispan model (NACA 0012 airfoil section) mounted on a flexible mount system. Steady and unsteady pressure measurements taken at the flutter condition are also presented. The pressure data were acquired over the entire model chord at the 60-percent-span station.

  18. A benchmark for subduction zone modeling

    NASA Astrophysics Data System (ADS)

    van Keken, P.; King, S.; Peacock, S.

    2003-04-01

    Our understanding of subduction zones hinges critically on the ability to discern their thermal structure and dynamics. Computational modeling has become an essential complement to observational and experimental studies. Accurate modeling of subduction zones is challenging due to the unique geometry, complicated rheological description, and influence of fluid and melt formation. The complicated physics makes accurate numerical solution of the governing equations difficult. As a consequence, it is essential for the subduction zone community to be able to evaluate the abilities and limitations of various modeling approaches. The participants of a workshop on the modeling of subduction zones, held at the University of Michigan at Ann Arbor, MI, USA in 2002, formulated a number of case studies to be developed into a benchmark similar to previous mantle convection benchmarks (Blankenbach et al., 1989; Busse et al., 1991; Van Keken et al., 1997). Our initial benchmark focuses on the dynamics of the mantle wedge and investigates three different rheologies: constant viscosity, diffusion creep, and dislocation creep. In addition, we investigate the ability of codes to accurately model dynamic pressure and advection-dominated flows. Proceedings of the workshop and the formulation of the benchmark are available at www.geo.lsa.umich.edu/~keken/subduction02.html We strongly encourage interested research groups to participate in this benchmark. At Nice 2003 we will provide an update and a first set of benchmark results. Interested researchers are encouraged to contact one of the authors for further details.

  19. Accelerated Life Structural Benchmark Testing for a Stirling Convertor Heater Head

    NASA Technical Reports Server (NTRS)

    Krause, David L.; Kantzos, Pete T.

    2006-01-01

    For proposed long-duration NASA Space Science missions, the Department of Energy, Lockheed Martin, Infinia Corporation, and NASA Glenn Research Center are developing a high-efficiency, 110 W Stirling Radioisotope Generator (SRG110). A structurally significant limit state for the SRG110 heater head component is creep deformation induced at high material temperature and low stress level. Conventional investigations of creep behavior can rely adequately on experimental results from uniaxial creep specimens, and a wealth of creep data is available for the Inconel 718 material of construction. However, the specified heater head material is atypically thin and fine-grained, with a heat treatment that limits precipitate growth, and little creep-property data for this microstructure is available in the literature. In addition, the geometry and loading conditions impose a multiaxial stress state on the component, far from the conditions of uniaxial testing. For these reasons, an extensive experimental investigation is ongoing to aid in accurately assessing the durability of the SRG110 heater head. This investigation supplements uniaxial creep testing with pneumatic testing of heater head-like pressure vessels at design temperature, with stress levels ranging from approximately the design stress to several times that value. This paper presents experimental results, post-test microstructural analyses, and conclusions for four higher-stress, accelerated life tests. Analysts are using these results to calibrate deterministic and probabilistic analytical creep models of the SRG110 heater head.

  20. Global-local methodologies and their application to nonlinear analysis

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1989-01-01

    An assessment is made of the potential of different global-local analysis strategies for predicting the nonlinear and postbuckling responses of structures. Two postbuckling problems of composite panels are used as benchmarks and the application of different global-local methodologies to these benchmarks is outlined. The key elements of each of the global-local strategies are discussed and future research areas needed to realize the full potential of global-local methodologies are identified.

  1. Where the Road Ends, Yaws Begins? The Cost-effectiveness of Eradication versus More Roads

    PubMed Central

    Fitzpatrick, Christopher; Asiedu, Kingsley; Jannin, Jean

    2014-01-01

    Introduction A disabling and disfiguring disease that “begins where the road ends”, yaws is targeted by WHO for eradication by the year 2020. The global campaign is not yet financed. To evaluate yaws eradication within the context of the post-2015 development agenda, we perform a somewhat allegorical cost-effectiveness analysis of eradication, comparing it to a counterfactual in which we simply wait for more roads (the end of poverty). Methods We use evidence from four yaws eradication pilot sites and other mass treatment campaigns to set benchmarks for the cost of eradication in 12 known endemic countries. We construct a compartmental model of long-term health effects to 2050. Conservatively, we attribute zero cost to the counterfactual and allow for gradual exit of the susceptible (at risk) population by road (poverty reduction). We report mean, 5th and 95th centile estimates to reflect uncertainty about costs and effects. Results Our benchmark for the economic cost of yaws eradication is uncertain but not high: US$ 362 (75–1073) million in 12 countries. Eradication would cost US$ 26 (4.2–78) for each year of life lived without disability or disfigurement due to yaws, or US$ 324 (47–936) per disability-adjusted life year (DALY). Excluding drugs, existing staff and assets, the financial cost benchmark is US$ 213 (74–522) million. The real cost of waiting for more roads (poverty reduction) would be 13 (7.3–20) million years of life affected by early-stage yaws and 2.3 (1.1–4.2) million years of life affected by late-stage yaws. Discussion Endemic countries need financing to begin implementing and adapting the global strategy to local conditions. Donations of drugs and diagnostics could reduce the cost to the public sector and catalyze financing. Resources may be harnessed from the extractive industries. Yaws eradication should be seen as complementary to universal health coverage and shared prosperity on the post-2015 development agenda. PMID:25255131
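
    The mean and 5th/95th centile reporting suggests Monte Carlo propagation of cost and effect uncertainty. A schematic sketch of such a calculation follows; the distributions and their parameters are illustrative assumptions, not the paper's model:

        # Schematic Monte Carlo for a cost-effectiveness ratio with centile
        # reporting. The lognormal parameters are illustrative placeholders.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        cost = rng.lognormal(mean=np.log(362e6), sigma=0.7, size=n)   # US$
        dalys = rng.lognormal(mean=np.log(1.1e6), sigma=0.6, size=n)  # DALYs averted
        ratio = cost / dalys
        print(f"mean US$ {ratio.mean():,.0f} per DALY")
        print(f"5th  US$ {np.percentile(ratio, 5):,.0f}")
        print(f"95th US$ {np.percentile(ratio, 95):,.0f}")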

  2. A clustering algorithm for determining community structure in complex networks

    NASA Astrophysics Data System (ADS)

    Jin, Hong; Yu, Wei; Li, ShiJun

    2018-02-01

    Clustering algorithms are attractive for the task of community detection in complex networks. DENCLUE is a representative density-based clustering algorithm with a firm mathematical basis and good clustering properties, allowing for arbitrarily shaped clusters in high-dimensional datasets. However, it cannot be applied directly to community discovery because it cannot handle network data, and it requires a careful selection of the density parameter and the noise threshold. To solve these issues, a new community detection method is proposed in this paper. First, we use a spectral analysis technique to map the network data into a low-dimensional Euclidean space that preserves node structural characteristics. Then, DENCLUE is applied to detect the communities in the network. A mathematical method named the Sheather-Jones plug-in is chosen to select the density parameter, which can describe the intrinsic clustering structure accurately. Moreover, every node in the network is meaningful, so there are no noise nodes and the noise threshold can be ignored. We test our algorithm on both benchmark and real-life networks, and the results demonstrate the effectiveness of our algorithm over other popular density-based clustering algorithms adapted to community detection.
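
    A sketch of the two-stage scheme: embed the graph with the leading nontrivial eigenvectors of its normalized Laplacian, then density-cluster the embedding. DBSCAN stands in for DENCLUE below, since DENCLUE has no standard library implementation, and the parameters are illustrative:

        # Spectral embedding of the graph followed by density-based
        # clustering. DBSCAN is a stand-in for DENCLUE here.
        import numpy as np
        import networkx as nx
        from sklearn.cluster import DBSCAN

        def spectral_density_communities(G, dim=3, eps=0.5, min_samples=5):
            """Embed the graph spectrally, then density-cluster the embedding."""
            L = nx.normalized_laplacian_matrix(G).toarray()
            eigenvalues, eigenvectors = np.linalg.eigh(L)   # ascending order
            embedding = eigenvectors[:, 1:dim + 1]          # skip trivial eigenvector
            labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(embedding)
            return dict(zip(G.nodes(), labels))

        # e.g. spectral_density_communities(nx.karate_club_graph(), dim=2, eps=0.1)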

  3. International benchmarking and best practice management: in search of health care and hospital excellence.

    PubMed

    von Eiff, Wilfried

    2015-01-01

    Hospitals worldwide are facing the same opportunities and threats: the demographics of an aging population; steady increases in chronic diseases and severe illnesses; and a steadily increasing demand for medical services with more intensive treatment for multi-morbid patients. Additionally, patients are becoming more demanding. They expect high-quality medicine within a dignity-driven and painless healing environment. The severe financial pressures that these developments entail oblige care providers to pursue more and more cost containment and to apply process reengineering, as well as continuous performance improvement measures, so as to achieve future financial sustainability. At the same time, regulators are calling for improved patient outcomes. Benchmarking and best-practice management are proven performance-improvement tools that enable hospitals to achieve a higher level of clinical output quality, enhanced patient satisfaction, and care delivery capability, while simultaneously containing and reducing costs. This chapter aims to clarify what benchmarking is and what it is not. Furthermore, it argues that benchmarking is a powerful managerial tool for improving decision-making processes that can contribute to the above-mentioned improvement measures in health care delivery. The benchmarking approach described in this chapter is oriented toward the philosophy of an input-output model and is explained based on practical international examples from different industries in various countries. Benchmarking is not a project with a defined start and end point, but a continuous initiative of comparing key performance indicators, process structures, and best practices from best-in-class companies inside and outside the industry. Benchmarking is an ongoing process of measuring and searching for best-in-class performance: measure yourself against yourself over time with key performance indicators; measure yourself against others; identify best practices; equal or exceed this best practice in your institution; focus on simple and effective ways to implement solutions. Comparing only figures, such as average length of stay, costs of procedures, infection rates, or out-of-stock rates, can easily lead to wrong conclusions and decisions, with often disastrous consequences. Just looking at figures and ratios is not the basis for detecting potential excellence. It is necessary to look beyond the numbers to understand how processes work and contribute to best-in-class results. Best practices from even quite different industries can enable hospitals to achieve leapfrog results in patient orientation, clinical excellence, and cost-effectiveness. In contrast to common benchmarking approaches, it is pointed out that a comparison that does not "look behind the figures" (that is, one not grounded in familiarity with the process structure, process dynamics and drivers, process institutions/rules, and process-related incentive components) yields findings of very limited reliability and quality. In order to demonstrate the transferability of benchmarking results between different industries, practical examples from health care, automotive, and hotel service have been selected. Additionally, it is shown that international comparisons between hospitals providing medical services in different health care systems have great potential for achieving leapfrog results in medical quality, organization of service provision, effective work structures, purchasing and logistics processes, and management.

  4. Opportunities and Problems of Comparative Higher Education Research: The Daily Life of Research

    ERIC Educational Resources Information Center

    Teichler, Ulrich

    2014-01-01

    Higher education research had a predominantly national and institutional focus for a long time. In Europe, supra-national political activities played a major role in increasing the interest in comparative research. Comparative perspectives are important in order to deconstruct the often national perspective of causal reasoning, for providing benchmarks, for…

  5. Teacher Control over Interagency Collaboration: A Roadblock for Effective Transitioning of Youth with Disabilities

    ERIC Educational Resources Information Center

    Meadows, Denis; Davies, Michael; Beamish, Wendi

    2014-01-01

    Poor post-school outcomes for youth with disabilities have consistently been reported internationally. Interagency collaboration between school systems and post-school services is critical and key to improving post-school life for these youth. An initial Queensland study that benchmarked the teacher practice of 104 transition teachers and…

  6. Characteristics of beef cattle operations in the Midwest (Illinois, Indiana, Iowa, Michigan, Minnesota, Missouri, and Wisconsin)

    USDA-ARS?s Scientific Manuscript database

    Following the launch of the Beef Checkoff’s U.S. Beef Industry Sustainability Assessment in 2011, region-specific collection of beef production information is underway to provide data for a benchmark national life cycle assessment. The aim of this factsheet is to summarize data gathered from online ...

  7. 75 FR 51242 - Fisheries of the South Atlantic; Southeast Data, Assessment, and Review (SEDAR); Public Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-19

    ... benchmarks, projects future population conditions, and recommends research and monitoring needs. Participants....--4 p.m. Assessment panelists will discuss data inputs to the stock assessment model and make recommendations for additional years of data to be updated in the model. New information on black sea bass life...

  8. Rock type discrimination and structural analysis with LANDSAT and Seasat data: San Rafael swell, Utah

    NASA Technical Reports Server (NTRS)

    Stewart, H. E.; Blom, R.; Abrams, M.; Daily, M.

    1980-01-01

    Satellite synthetic aperture radar (SAR) imagery is evaluated in terms of its geologic applications. The benchmark to which the SAR images are compared is LANDSAT, used for both structural and lithologic interpretations.

  9. Imidazole derivatives as angiotensin II AT1 receptor blockers: Benchmarks, drug-like calculations and quantitative structure-activity relationships modeling

    NASA Astrophysics Data System (ADS)

    Alloui, Mebarka; Belaidi, Salah; Othmani, Hasna; Jaidane, Nejm-Eddine; Hochlaf, Majdi

    2018-03-01

    We performed benchmark studies on the molecular geometry, electronic properties, and vibrational analysis of imidazole using semi-empirical, density functional theory, and post-Hartree-Fock methods. These studies validated the use of AM1 for the treatment of larger systems. We then studied the structural, physical, and chemical relationships for a series of imidazole derivatives acting as angiotensin II AT1 receptor blockers using AM1. QSAR studies were carried out for these imidazole derivatives using a combination of various physicochemical descriptors. A multiple linear regression procedure was used to model the relationship between the molecular descriptors and the activity of the imidazole derivatives. The results validate the derived QSAR model.
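
    A minimal sketch of the multiple-linear-regression step, with hypothetical descriptors and activities standing in for the paper's AM1-derived data:

        # QSAR by multiple linear regression: activity regressed on
        # physicochemical descriptors. All values are hypothetical.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        # rows: compounds; columns: hypothetical descriptors
        # (e.g. logP, polarizability, dipole moment)
        X = np.array([[2.1, 24.3, 4.1],
                      [1.7, 22.8, 3.6],
                      [3.0, 26.1, 4.9],
                      [2.5, 25.0, 4.4],
                      [1.9, 23.5, 3.9],
                      [2.8, 25.7, 4.7]])
        y = np.array([6.2, 5.8, 7.1, 6.6, 6.0, 6.9])   # hypothetical pIC50 values

        model = LinearRegression().fit(X, y)
        print("coefficients:", model.coef_, "intercept:", model.intercept_)
        print("R^2 on training data:", model.score(X, y))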

  10. Use of benchmarking and public reporting for infection control in four high-income countries.

    PubMed

    Haustein, Thomas; Gastmeier, Petra; Holmes, Alison; Lucet, Jean-Christophe; Shannon, Richard P; Pittet, Didier; Harbarth, Stephan

    2011-06-01

    Benchmarking of surveillance data for health-care-associated infection (HCAI) has been used for more than three decades to inform prevention strategies and improve patients' safety. In recent years, public reporting of HCAI indicators has been mandated in several countries because of an increasing demand for transparency, although many methodological issues surrounding benchmarking remain unresolved and are highly debated. In this Review, we describe developments in benchmarking and public reporting of HCAI indicators in England, France, Germany, and the USA. Although benchmarking networks in these countries are derived from a common model and use similar methods, approaches to public reporting have been more diverse. The USA and England have predominantly focused on reporting of infection rates, whereas France has put emphasis on process and structure indicators. In Germany, HCAI indicators of individual institutions are treated confidentially and are not disseminated publicly. Although evidence for a direct effect of public reporting of indicators alone on incidence of HCAIs is weak at present, it has been associated with substantial organisational change. An opportunity now exists to learn from the different strategies that have been adopted.

  11. Benchmarking Brain-Computer Interfaces Outside the Laboratory: The Cybathlon 2016

    PubMed Central

    Novak, Domen; Sigrist, Roland; Gerig, Nicolas J.; Wyss, Dario; Bauer, René; Götz, Ulrich; Riener, Robert

    2018-01-01

    This paper presents a new approach to benchmarking brain-computer interfaces (BCIs) outside the lab. A computer game was created that mimics a real-world application of assistive BCIs, with the main outcome metric being the time needed to complete the game. This approach was used at the Cybathlon 2016, a competition for people with disabilities who use assistive technology to achieve tasks. The paper summarizes the technical challenges of BCIs, describes the design of the benchmarking game, then describes the rules for acceptable hardware, software and inclusion of human pilots in the BCI competition at the Cybathlon. The 11 participating teams, their approaches, and their results at the Cybathlon are presented. Though the benchmarking procedure has some limitations (for instance, we were unable to identify any factors that clearly contribute to BCI performance), it can be successfully used to analyze BCI performance in realistic, less structured conditions. In the future, the parameters of the benchmarking game could be modified to better mimic different applications (e.g., the need to use some commands more frequently than others). Furthermore, the Cybathlon has the potential to showcase such devices to the general public. PMID:29375294

  12. Precision and accuracy in smFRET based structural studies—A benchmark study of the Fast-Nano-Positioning System

    NASA Astrophysics Data System (ADS)

    Nagy, Julia; Eilert, Tobias; Michaelis, Jens

    2018-03-01

    Modern hybrid structural analysis methods have opened new possibilities for analyzing and resolving flexible protein complexes where conventional crystallographic methods have reached their limits. The Fast-Nano-Positioning System (Fast-NPS), a Bayesian parameter-estimation-based analysis method and software package, is of particular interest here because it allows unknown positions of fluorescent dye molecules attached to macromolecular complexes to be localized from single-molecule Förster resonance energy transfer (smFRET) measurements. However, the precision, accuracy, and reliability of structural models derived from results based on such complex calculation schemes are often difficult to evaluate. We therefore present two proof-of-principle benchmark studies in which we use smFRET data to localize supposedly unknown positions on a DNA as well as on a protein-nucleic acid complex. Since we use complexes for which structural information is available, we can compare Fast-NPS localization to the existing structural data. In particular, we compare different dye models and discuss how both accuracy and precision can be optimized.
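
    The distance information underlying smFRET localization comes from the Förster relation E = 1/(1 + (r/R0)^6). A minimal sketch of the conversion between measured transfer efficiency and dye-dye distance, where R0 is the dye-pair Förster radius and the values are illustrative:

        # Foerster relation E = 1 / (1 + (r/R0)**6) and its inverse, which
        # turns a measured transfer efficiency into a dye-dye distance.
        def fret_efficiency(r, r0):
            return 1.0 / (1.0 + (r / r0) ** 6)

        def fret_distance(E, r0):
            return r0 * ((1.0 - E) / E) ** (1.0 / 6.0)

        # e.g. with R0 = 54 angstroms, E = 0.5 implies r = 54 angstroms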

  13. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thrower, A.W.; Patric, J.; Keister, M.

    2008-07-01

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in safely and efficiently shipping spent nuclear fuel and other radioactive materials. Additional business processes may be examined in this phase. The findings of these benchmarking efforts will help determine the organizational structure and requirements of the national transportation system. (authors)

  14. Pharmaceutical Market Access: current state of affairs and key challenges – results of the Market Access Launch Excellence Inventory (MALEI)

    PubMed Central

    Koch, Marcus A.

    2015-01-01

    Objectives To take inventory of the current state of affairs of Market Access Launch Excellence in the life sciences industry. To identify key gaps and challenges for Market Access (MA) and discuss how they can be addressed. To generate a baseline for benchmarking MA launch excellence. Methodology An online survey was conducted with pharmaceutical executives primarily working in MA, marketing, or general management. The survey aimed to evaluate MA excellence prerequisites across the product life cycle (rated by importance and level of implementation) and to describe MA activity models in the respective companies. Composite scores were calculated from respondents’ ratings and answers. Results Implementation levels of MA excellence prerequisites generally lagged behind their perceived importance. Item importance and the respective level of implementation correlated well, which can be interpreted as proof of the validity of the questionnaire. The following areas were shown to be particularly underimplemented: 1) early integration of MA and health economic considerations in research and development decision making, 2) developing true partnerships with payers, including the development of services ‘beyond the pill’, and 3) consideration of human resource and talent management. The concept of importance-adjusted implementation levels as a hybrid parameter was introduced and shown to be a viable tool for benchmarking purposes. More than 70% of respondents indicated that their companies will invest broadly in MA in terms of capital and headcount within the next 3 years. Conclusions MA (launch) excellence needs to be further developed in order to close implementation gaps across the entire product life cycle. As MA is a comparatively young pharmaceutical discipline in a complex and dynamic environment, this effort will require strategic focus and dedication. The Market Access Launch Excellence Inventory benchmarking tool may help guide decision makers to prioritize their endeavors. PMID:29785250

  15. Advanced Stirling Convertor Heater Head Durability and Reliability Quantification

    NASA Technical Reports Server (NTRS)

    Krause, David L.; Shah, Ashwin R.; Korovaichuk, Igor; Kalluri, Sreeramesh

    2008-01-01

    The National Aeronautics and Space Administration (NASA) has identified the high-efficiency Advanced Stirling Radioisotope Generator (ASRG) as a candidate power source for long-duration Science missions, such as lunar applications, Mars rovers, and deep space missions, that require reliable design lifetimes of up to 17 years. Resistance to creep deformation of the MarM-247 heater head (HH), a structurally critical component of the ASRG Advanced Stirling Convertor (ASC), under high temperatures (up to 850 C) is a key design driver for durability. Inherent uncertainties in the creep behavior of the thin-walled HH and the variations in the wall thickness, control temperature, and working gas pressure need to be accounted for in the life and reliability prediction. Because only very limited test data are available, assuring the life and reliability of the HH is a challenging task. The NASA Glenn Research Center (GRC) has adopted an integrated approach combining available uniaxial MarM-247 material behavior testing, HH benchmark testing, and advanced analysis in order to demonstrate the integrity, life, and reliability of the HH under expected mission conditions. This paper describes analytical aspects of the deterministic and probabilistic approaches and results. The deterministic approach involves development of the creep constitutive model for the MarM-247 (akin to the Oak Ridge National Laboratory master curve model used previously for Inconel 718 (Special Metals Corporation)) and nonlinear finite element analysis to predict the mean life. The probabilistic approach includes evaluation of the effect of design variable uncertainties in material creep behavior, geometry, and operating conditions on life and reliability for the expected life. The sensitivity of HH reliability to the uncertainties in the design variables is also quantified, and guidelines to improve reliability are discussed.
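
    As a schematic of the probabilistic side, the sketch below samples the uncertain design variables named above (wall thickness, control temperature, working-gas pressure), pushes them through an assumed power-law creep-life model, and counts the fraction of samples meeting the 17-year design life. The life model and all distributions are illustrative assumptions, not GRC's calibrated models:

        # Schematic Monte Carlo for creep-life reliability. The power-law
        # life model and distribution parameters are illustrative only.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 200_000
        wall = rng.normal(1.0, 0.03, n)        # wall thickness (normalized)
        temp = rng.normal(850.0, 5.0, n)       # control temperature, deg C
        pressure = rng.normal(1.0, 0.02, n)    # working-gas pressure (normalized)

        stress = pressure / wall               # thin-shell hoop-stress scaling
        life_years = 40.0 * stress**-4 * np.exp(-(temp - 850.0) / 25.0)  # assumed

        reliability = np.mean(life_years >= 17.0)   # 17-year design life
        print(f"estimated reliability: {reliability:.4f}")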

  16. Development and Applications of Benchmark Examples for Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2013-01-01

    The development and application of benchmark examples for the assessment of quasi-static delamination propagation capabilities was demonstrated for ANSYS® and Abaqus/Standard®. The examples selected were based on finite element models of Double Cantilever Beam (DCB) and Mixed-Mode Bending (MMB) specimens. First, quasi-static benchmark results were created based on an approach developed previously. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in ANSYS® and Abaqus/Standard®. Input control parameters were varied to study the effect on the computed delamination propagation. Overall, the benchmarking procedure proved valuable by highlighting the issues associated with choosing appropriate input parameters for the VCCT implementations in ANSYS® and Abaqus/Standard®. However, further assessment of mixed-mode delamination fatigue onset and growth is required. Additional studies should include assessment of the propagation capabilities in more complex specimens and at a structural level.

  17. Time and frequency structure of causal correlation networks in the China bond market

    NASA Astrophysics Data System (ADS)

    Wang, Zhongxing; Yan, Yan; Chen, Xiaosong

    2017-07-01

    There are more than eight hundred interest rates published in the China bond market every day. Identifying the benchmark interest rates that have broad influences on most other interest rates is a major concern for economists. In this paper, a multi-variable Granger causality test is developed and applied to construct a directed network of interest rates, whose important nodes, regarded as key interest rates, are evaluated with CheiRank scores. The results indicate that repo rates are the benchmark of short-term rates, the central bank bill rates are in the core position of mid-term interest rates network, and treasury bond rates lead the long-term bond rates. The evolution of benchmark interest rates from 2008 to 2014 is also studied, and it is found that SHIBOR has generally become the benchmark interest rate in China. In the frequency domain we identify the properties of information flows between interest rates, and the result confirms the existence of market segmentation in the China bond market.
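
    A sketch of the construction: pairwise Granger tests define directed edges among rate series, and node importance is scored with PageRank on the link-reversed graph, which is how CheiRank is defined. The pairwise test below simplifies the paper's multi-variable test, and all parameters are illustrative:

        # Directed causality network among interest-rate series, with
        # CheiRank-style scoring via PageRank on the reversed graph.
        import numpy as np
        import networkx as nx
        from statsmodels.tsa.stattools import grangercausalitytests

        def granger_network(series, maxlag=5, alpha=0.01):
            """series: dict of name -> 1-D array of (stationary) rate changes."""
            G = nx.DiGraph()
            G.add_nodes_from(series)
            for src in series:
                for dst in series:
                    if src == dst:
                        continue
                    data = np.column_stack([series[dst], series[src]])
                    res = grangercausalitytests(data, maxlag=maxlag, verbose=False)
                    p = min(r[0]['ssr_ftest'][1] for r in res.values())
                    if p < alpha:
                        G.add_edge(src, dst)          # src Granger-causes dst
            cheirank = nx.pagerank(G.reverse())       # PageRank on inverted links
            return G, cheirank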

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunden, Fanny; Peck, Ariana; Salzman, Julia

    Enzymes enable life by accelerating reaction rates to biological timescales. Conventional studies have focused on identifying the residues that have a direct involvement in an enzymatic reaction, but these so-called ‘catalytic residues’ are embedded in extensive interaction networks. Although fundamental to our understanding of enzyme function, evolution, and engineering, the properties of these networks have yet to be quantitatively and systematically explored. We dissected an interaction network of five residues in the active site of Escherichia coli alkaline phosphatase. Analysis of the complex catalytic interdependence of specific residues identified three energetically independent but structurally interconnected functional units with distinct modes of cooperativity. From an evolutionary perspective, this network is orders of magnitude more probable to arise than a fully cooperative network. From a functional perspective, new catalytic insights emerge. Further, such comprehensive energetic characterization will be necessary to benchmark the algorithms required to rationally engineer highly efficient enzymes.

  19. Semi-active control of a cable-stayed bridge under multiple-support excitations.

    PubMed

    Dai, Ze-Bing; Huang, Jin-Zhi; Wang, Hong-Xia

    2004-03-01

    This paper presents a semi-active strategy for seismic protection of a benchmark cable-stayed bridge with consideration of multiple-support excitations. In this control strategy, magnetorheological (MR) dampers are proposed as control devices and an LQG clipped-optimal control algorithm is employed. An active control strategy, shown in previous studies to perform well at controlling the benchmark bridge when uniform earthquake motion was assumed, is also used here to control the benchmark bridge under multiple-support excitations. The performance of the active control system is compared to that of the presented semi-active control strategy. Because the MR fluid damper is a controllable energy-dissipation device that cannot add mechanical energy to the structural system, the proposed control strategy is fail-safe, in that bounded-input, bounded-output stability of the controlled structure is guaranteed. The numerical results demonstrate that the performance of the presented control design is nearly the same as that of the active control system, and that MR dampers can effectively be used to control seismically excited cable-stayed bridges with multiple-support excitations.
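
    The clipped-optimal scheme reduces to a simple command law: the LQG regulator computes a desired control force, and the damper voltage is switched to its maximum only when the measured damper force must grow toward that desired force. A minimal sketch of the commonly used form of that law (variable names are illustrative):

        # Clipped-optimal command law for an MR damper: full voltage only
        # when the measured force should grow toward the LQG-desired force.
        def clipped_optimal_voltage(f_desired, f_measured, v_max):
            """Heaviside clipping of the desired-vs-measured force mismatch."""
            if (f_desired - f_measured) * f_measured > 0.0:
                return v_max
            return 0.0

        # e.g. desired 10 kN, measured 4 kN acting in the same direction:
        # clipped_optimal_voltage(10e3, 4e3, 5.0) -> 5.0 volts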

  20. The Filament Sensor for Near Real-Time Detection of Cytoskeletal Fiber Structures

    PubMed Central

    Eltzner, Benjamin; Wollnik, Carina; Gottschlich, Carsten; Huckemann, Stephan; Rehfeldt, Florian

    2015-01-01

    A reliable extraction of filament data from microscopic images is of high interest in the analysis of acto-myosin structures as early morphological markers in mechanically guided differentiation of human mesenchymal stem cells and the understanding of the underlying fiber arrangement processes. In this paper, we propose the filament sensor (FS), a fast and robust processing sequence which detects and records location, orientation, length, and width for each single filament of an image, and thus allows for the above described analysis. The extraction of these features has previously not been possible with existing methods. We evaluate the performance of the proposed FS in terms of accuracy and speed in comparison to three existing methods with respect to their limited output. Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source. PMID:25996921

  1. Quest for Orthologs Entails Quest for Tree of Life: In Search of the Gene Stream

    PubMed Central

    Boeckmann, Brigitte; Marcet-Houben, Marina; Rees, Jonathan A.; Forslund, Kristoffer; Huerta-Cepas, Jaime; Muffato, Matthieu; Yilmaz, Pelin; Xenarios, Ioannis; Bork, Peer; Lewis, Suzanna E.; Gabaldón, Toni

    2015-01-01

    Quest for Orthologs (QfO) is a community effort with the goal to improve and benchmark orthology predictions. As quality assessment assumes prior knowledge on species phylogenies, we investigated the congruency between existing species trees by comparing the relationships of 147 QfO reference organisms from six Tree of Life (ToL)/species tree projects: The National Center for Biotechnology Information (NCBI) taxonomy, Open Tree of Life, the sequenced species/species ToL, the 16S ribosomal RNA (rRNA) database, and trees published by Ciccarelli et al. (Ciccarelli FD, et al. 2006. Toward automatic reconstruction of a highly resolved tree of life. Science 311:1283–1287) and by Huerta-Cepas et al. (Huerta-Cepas J, Marcet-Houben M, Gabaldon T. 2014. A nested phylogenetic reconstruction approach provides scalable resolution in the eukaryotic Tree Of Life. PeerJ PrePrints 2:223). Our study reveals that each species tree suggests a different phylogeny: 87 of the 146 (60%) possible splits of a dichotomous and rooted tree are congruent, while all other splits are incongruent in at least one of the species trees. Topological differences are observed not only at deep speciation events, but also within younger clades, such as Hominidae, Rodentia, Laurasiatheria, or rosids. The evolutionary relationships of 27 archaea and bacteria are highly inconsistent. By assessing 458,108 gene trees from 65 genomes, we show that consistent species topologies are more often supported by gene phylogenies than contradicting ones. The largest concordant species tree includes at most 77 of the QfO reference organisms. Results are summarized in the form of a consensus ToL (http://swisstree.vital-it.ch/species_tree) that can serve different benchmarking purposes. PMID:26133389

  2. Using relative survival measures for cross-sectional and longitudinal benchmarks of countries, states, and districts: the BenchRelSurv- and BenchRelSurvPlot-macros

    PubMed Central

    2013-01-01

    Background The objective of screening programs is to discover life-threatening diseases in as many patients as early as possible and to increase the chance of survival. To be able to compare aspects of health care quality, benchmarking methods are needed that allow comparisons at various health care levels (regional, national, and international). Objectives Applications and extensions of algorithms can be used to link information on disease phases with relative survival rates and to consolidate them into composite measures. Applying the developed SAS macros yields results for benchmarking health care quality. Data examples for breast cancer care are given. Methods A reference scale (expected, E) must be defined at a time point at which all benchmark objects (observed, O) are measured. All indices are defined as O/E, whereby the extended standardized screening index (eSSI), the standardized case-mix index (SCI), the work-up index (SWI), and the treatment index (STI) address different health care aspects. The composite measures, called overall performance evaluation (OPE) and relative overall performance indices (ROPI), link the individual indices differently for cross-sectional or longitudinal analyses. Results The algorithms allow both time-point and time-interval comparisons of the benchmark objects in the indices eSSI, SCI, SWI, STI, OPE, and ROPI. Comparisons between countries, states, and districts are possible. Exemplary comparisons between two countries are made. The success of early detection and screening programs, as well as clinical health care quality for breast cancer, can be demonstrated while taking the population's background mortality into account. Conclusions If external quality assurance programs and benchmark objects are based on population-based and corresponding demographic data, information on disease phase and relative survival rates can be combined into indices that offer approaches for comparative analyses between benchmark objects. Conclusions on screening programs and health care quality are possible. The macros can be transferred to other diseases if a disease-specific phase scale of prognostic value (e.g., stage) exists. PMID:23316692
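
    All four indices share the same observed-over-expected form. A minimal sketch of that computation follows, with an illustrative equal-weight composite; the macros' exact aggregation for OPE/ROPI is not reproduced here:

        # Every index (eSSI, SCI, SWI, STI) is an observed/expected ratio
        # for a benchmark object; composite measures combine the indices.
        # The equal-weight mean below is an illustrative assumption, not
        # the exact OPE/ROPI formula from the macros.
        def oe_index(observed, expected):
            return observed / expected

        def composite(indices, weights=None):
            weights = weights or [1.0 / len(indices)] * len(indices)
            return sum(w * x for w, x in zip(weights, indices))

        essi = oe_index(observed=0.82, expected=0.90)   # illustrative figures
        sci = oe_index(observed=1.05, expected=1.00)
        print(composite([essi, sci]))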

  3. Learning How to Play Ball: Applying Sabermetric Thinking to Benchmarking in Higher Education

    ERIC Educational Resources Information Center

    Levy, Gary D.

    2012-01-01

    Although the notion is certainly cliched, baseball often serves as an excellent metaphor for life. Some of the methodologies currently being used to measure, evaluate, manage, and even play baseball may serve as references for ways that higher education may be measured, evaluated, managed, and played. This chapter proposes and presents…

  4. A Measure of Excellence of Young European Research Council Grantees

    ERIC Educational Resources Information Center

    Arevalo, Javier

    2017-01-01

    Bibliometric benchmarking can be an aid to researchers pondering whether to apply for competitive grants. In this paper, the highly prestigious grants offered by the European Research Council to young scientists of any nationality were scrutinized. The analysis of the 2014-2015 data indicates that over 75% of life science grantees in the starting…

  5. Advanced technology commercial fuselage structure

    NASA Technical Reports Server (NTRS)

    Ilcewicz, L. B.; Smith, P. J.; Walker, T. H.; Johnson, R. W.

    1991-01-01

    Boeing's program for Advanced Technology Composite Aircraft Structure (ATCAS) has focused on the manufacturing and performance issues associated with a wide body commercial transport fuselage. The primary goal of ATCAS is to demonstrate cost and weight savings over a 1995 aluminum benchmark. A 31 foot section of fuselage directly behind the wing to body intersection was selected for study purposes. This paper summarizes ATCAS contract plans and reviews progress to date. The six year ATCAS program will study technical issues for crown, side, and keel areas of the fuselage. All structural details in these areas will be included in design studies that incorporate a design build team (DBT) approach. Manufacturing technologies will be developed for concepts deemed by the DBT to have the greatest potential for cost and weight savings. Assembly issues for large, stiff, quadrant panels will receive special attention. Supporting technologies and mechanical tests will concentrate on the major issues identified for fuselage. These include damage tolerance, pressure containment, splices, load redistribution, post-buckled structure, and durability/life. Progress to date includes DBT selection of baseline fuselage concepts; cost and weight comparisons for crown panel designs; initial panel fabrication for manufacturing and structural mechanics research; and toughened material studies related to keel panels. Initial ATCAS studies have shown that NASA's Advanced Composite Technology program goals for cost and weight savings are attainable for composite fuselage.

  7. Hybrid cloud and cluster computing paradigms for life science applications.

    PubMed

    Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey

    2010-12-21

    Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing especially for parallel data intensive applications. However they have limited applicability to some areas such as data mining because MapReduce has poor performance on problems with an iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability comparisons in several important non iterative cases. These are linked to MPI applications for final stages of the data analysis. Further we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment while Twister promises a uniform programming environment for many Life Sciences applications. We used commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments.

  8. The associations between work-life balance behaviours, teamwork climate and safety climate: cross-sectional survey introducing the work-life climate scale, psychometric properties, benchmarking data and future directions.

    PubMed

    Sexton, J Bryan; Schwartz, Stephanie P; Chadwick, Whitney A; Rehder, Kyle J; Bae, Jonathan; Bokovoy, Joanna; Doram, Keith; Sotile, Wayne; Adair, Kathryn C; Profit, Jochen

    2017-08-01

    Improving the resiliency of healthcare workers is a national imperative, driven in part by healthcare workers having minimal exposure to the skills and culture to achieve work-life balance (WLB). Regardless of current policies, healthcare workers feel compelled to work more and take less time to recover from work. Satisfaction with WLB has been measured, as has work-life conflict, but how frequently healthcare workers engage in specific WLB behaviours is rarely assessed. Measurement of behaviours may have advantages over measurement of perceptions; behaviours more accurately reflect WLB and can be targeted by leaders for improvement. 1. To describe a novel survey scale for evaluating work-life climate based on specific behavioural frequencies in healthcare workers. 2. To evaluate the scale's psychometric properties and provide benchmarking data from a large healthcare system. 3. To investigate associations between work-life climate, teamwork climate and safety climate. Cross-sectional survey study of US healthcare workers within a large healthcare system. 7923 of 9199 eligible healthcare workers across 325 work settings within 16 hospitals completed the survey in 2009 (86% response rate). The overall work-life climate scale internal consistency was Cronbach α=0.790. t-Tests of top versus bottom quartile work settings revealed that positive work-life climate was associated with better teamwork climate, safety climate and increased participation in safety leadership WalkRounds with feedback (p<0.001). Univariate analyses of variance demonstrated that WLB varied significantly by healthcare worker role, hospital, and work setting. The work-life climate scale exhibits strong psychometric properties, elicits results that vary widely by work setting, discriminates between positive and negative workplace norms, and aligns well with other culture constructs that have been found to correlate with clinical outcomes.
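
    The reported α = 0.790 internal-consistency figure follows the standard Cronbach formula, α = k/(k-1) · (1 - Σ item variances / variance of the total score). A minimal sketch of that computation, assuming a respondents-by-items data layout:

        # Standard Cronbach's alpha for a k-item scale.
        import numpy as np

        def cronbach_alpha(item_scores):
            """item_scores: 2-D array, respondents in rows, scale items in columns."""
            items = np.asarray(item_scores, dtype=float)
            k = items.shape[1]
            item_variances = items.var(axis=0, ddof=1)
            total_variance = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)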

  9. Benchmarking wastewater treatment plants under an eco-efficiency perspective.

    PubMed

    Lorenzo-Toja, Yago; Vázquez-Rowe, Ian; Amores, María José; Termes-Rifé, Montserrat; Marín-Navarro, Desirée; Moreira, María Teresa; Feijoo, Gumersindo

    2016-10-01

    The new ISO 14045 framework is expected to slowly shift the definition of eco-efficiency toward a life-cycle perspective, using Life Cycle Assessment (LCA) as the environmental impact assessment method together with a system value assessment method for the economic analysis. In the present study, a set of 22 wastewater treatment plants (WWTPs) in Spain were analyzed on the basis of eco-efficiency criteria, using LCA and Life Cycle Costing (LCC) as the system value assessment method. The study is intended to be useful to decision-makers in the wastewater treatment sector, since the combined method provides an alternative scheme for analyzing the relationship between environmental impacts and costs. Two midpoint impact categories, global warming and eutrophication potential, as well as an endpoint single-score indicator were used for the environmental assessment, while LCC was used for value assessment. Results demonstrated that substantial differences can be observed between WWTPs depending on a wide range of factors, such as plant configuration, plant size or even legal discharge limits. Based on these results, the benchmarking of wastewater treatment facilities was performed by creating a specific classification and certification scheme. The proposed eco-label for rating WWTPs is based on the integration of the three environmental indicators and an economic indicator calculated within the study under the new eco-efficiency framework. Copyright © 2016 Elsevier B.V. All rights reserved.
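
    A minimal sketch of the ranking idea, assuming hypothetical plant costs and single-score impacts; the scoring function below is a stand-in for illustration and does not reproduce the article's actual indicator integration.

    ```python
    # Rank hypothetical WWTPs by a toy eco-efficiency score: lower life-cycle
    # cost and lower single-score impact per m3 treated are both better.
    plants = {
        # name: (life-cycle cost, EUR/m3), (endpoint single score, points/m3)
        "WWTP-A": (0.30, 12.0),
        "WWTP-B": (0.25, 18.0),
        "WWTP-C": (0.40, 9.0),
    }

    def eco_efficiency(cost, impact):
        # Invented composite: invert the cost-impact product.
        return 1.0 / (cost * impact)

    ranked = sorted(plants, key=lambda p: eco_efficiency(*plants[p]), reverse=True)
    for rank, name in enumerate(ranked, 1):
        print(rank, name, round(eco_efficiency(*plants[name]), 2))
    ```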

  10. Asthma-specific health-related quality of life of people in Great Britain: A national survey.

    PubMed

    Upton, Jane; Lewis, Carine; Humphreys, Emily; Price, David; Walker, Samantha

    2016-11-01

    Although the ultimate goal of asthma treatment is to improve asthma-specific Health-Related Quality of Life (HRQOL), this is insufficiently studied in the UK population. National asthma-specific HRQOL data are needed to inform strategies to address this condition. The aims were to benchmark asthma-specific HRQOL in a national survey of adults with asthma, and to explore differences in this measure within subsections of the population. We analysed answers to the Marks Asthma Quality-of-Life Questionnaire (AQLQ-M) from a representative sample of 658 adults with asthma. Respondents answered asthma-specific questions assessing control, previous hospital admissions, asthma attacks and an indicator of severity. Higher scores indicate poorer HRQOL (maximum = 60); the highest quintile formed a subgroup, 'Poor HRQOL'. Data were weighted to correct for any biases caused by differential non-response. Chi-square analyses were used to determine differences between good and poor quality of life, and regression analyses were performed to determine which factors are associated with poor HRQOL. The response rate was 49%. AQLQ-M median (IQR) scores were 5 (2-13) for the total sample (poor HRQOL = 21, good HRQOL = 3). Significant differences between good and poor HRQOL were observed in smoking status, SES, employment status and co-morbidities, but no differences were found between age groups. Those with poorly controlled asthma were significantly more likely to have poor HRQOL, ≥1 breathing-related hospital admission or ≥1 asthma attack. This article provides benchmarking data on asthma-specific HRQOL. Improved strategies are needed to target interventions towards people experiencing poor HRQOL.

  11. Benchmarking the minimum Electron Beam (eBeam) dose required for the sterilization of space foods

    NASA Astrophysics Data System (ADS)

    Bhatia, Sohini S.; Wall, Kayley R.; Kerth, Chris R.; Pillai, Suresh D.

    2018-02-01

    As manned space missions extend in length, the safety, nutrition, acceptability, and shelf life of space foods are of paramount importance to NASA. Since food and mealtimes play a key role in reducing the stress and boredom of prolonged missions, the quality of food in terms of appearance, flavor, texture, and aroma can have significant psychological ramifications for astronaut performance. The FDA, which oversees space foods, currently requires a minimum dose of 44 kGy for irradiated space foods. The underlying hypothesis was that commercial sterility of space foods could be achieved at a significantly lower dose, and that this lowered dose would positively affect the shelf life of the product. Electron beam processed beef fajitas were used as an example NASA space food to benchmark the minimum eBeam dose required for sterility. A 15 kGy dose achieved an approximately 10 log reduction in Shiga-toxin-producing Escherichia coli bacteria and a 5 log reduction in Clostridium sporogenes spores. Furthermore, accelerated shelf life testing (ASLT) was conducted to determine sensory and quality characteristics under various conditions. Using multidimensional gas-chromatography-olfactometry-mass spectrometry (MDGC-O-MS), numerous volatiles were shown to depend on the dose applied to the product, and concentrations of off-flavor aroma compounds such as dimethyl sulfide were decreased at the reduced 15 kGy dose. The results suggest that conventional cooking combined with eBeam processing (15 kGy) can achieve the safety and shelf-life objectives needed for long-duration space foods.
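
    The reported reductions imply decimal-reduction (D10) doses of roughly 15 kGy / 10 logs = 1.5 kGy for the E. coli strains and 15 kGy / 5 logs = 3 kGy for C. sporogenes spores. The short worked example below turns those implied values into dose requirements under first-order inactivation kinetics; it is purely illustrative, not a regulatory calculation.

    ```python
    # Under first-order kinetics, dose scales linearly with the desired
    # log10 reduction: dose = D10 * target_logs.
    def dose_for_log_reduction(d10_kgy, target_logs):
        return d10_kgy * target_logs

    d10_ecoli = 15.0 / 10   # ~1.5 kGy, implied by a 10-log kill at 15 kGy
    d10_spores = 15.0 / 5   # ~3.0 kGy, implied by a 5-log kill at 15 kGy
    print(dose_for_log_reduction(d10_spores, 12))  # dose for a 12-log spore kill
    ```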

  12. How to report and discuss ADME data in medicinal chemistry publications: in vitro data or in vivo extrapolations?

    PubMed

    Svennebring, Andreas M

    2015-01-01

    Early drug discovery projects often use raw data from ADME (absorption, distribution, metabolism, elimination) assays to benchmark compounds and guide discussion, rather than the predicted in vivo consequences of those data. Here, the two paradigms are compared, using evaluations of metabolic stability based either on microsomal clearance assay data or on the in vivo hepatic clearance and half-life predicted by combining the venous well-stirred model with Øie-Tozer's model. The need for a shift in paradigm is presented, and its implications discussed. It is suggested that discussions about ADME data should revolve around the potential clinical problems that are most likely to surface during the development phase, each benchmarked with a suitable variable derived from the assay data.
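
    A sketch of the venous well-stirred-model calculation referred to above. The parameter values and the assumed volume of distribution (which in the article would come from Øie-Tozer's model) are illustrative only.

    ```python
    # Well-stirred model: CLh = Qh * fu * CLint / (Qh + fu * CLint),
    # then half-life from t1/2 = ln2 * Vd / CL.
    import math

    def hepatic_clearance_well_stirred(q_h, fu, cl_int):
        return q_h * fu * cl_int / (q_h + fu * cl_int)

    q_h = 90.0      # human hepatic blood flow, L/h (approximate)
    fu = 0.1        # fraction unbound in plasma
    cl_int = 500.0  # scaled intrinsic clearance from microsomes, L/h
    v_d = 70.0      # volume of distribution, L (assumed; would come from Oie-Tozer)

    cl_h = hepatic_clearance_well_stirred(q_h, fu, cl_int)
    half_life = math.log(2) * v_d / cl_h
    print(round(cl_h, 1), "L/h;", round(half_life, 1), "h")
    ```

    The point of the paradigm shift argued above is that the team discusses the ~1.5 h predicted half-life, not the raw microsomal clearance number.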

  13. Evaluation of a novel electronic eigenvalue (EEVA) molecular descriptor for QSAR/QSPR studies: validation using a benchmark steroid data set.

    PubMed

    Tuppurainen, Kari; Viisas, Marja; Laatikainen, Reino; Peräkylä, Mikael

    2002-01-01

    A novel electronic eigenvalue (EEVA) descriptor of molecular structure for use in the derivation of predictive QSAR/QSPR models is described. Like other spectroscopic QSAR/QSPR descriptors, EEVA is also invariant as to the alignment of the structures concerned. Its performance was tested with respect to the CBG (corticosteroid binding globulin) affinity of 31 benchmark steroids. It appeared that the electronic structure of the steroids, i.e., the "spectra" derived from molecular orbital energies, is directly related to the CBG binding affinities. The predictive ability of EEVA is compared to other QSAR approaches, and its performance is discussed in the context of the Hammett equation. The good performance of EEVA is an indication of the essential quantum mechanical nature of QSAR. The EEVA method is a supplement to conventional 3D QSAR methods, which employ fields or surface properties derived from Coulombic and van der Waals interactions.
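
    A sketch of the spectrum construction the EEVA idea rests on: an alignment-free "spectrum" is built by placing a Gaussian on each molecular orbital energy and sampling it on a grid. The orbital energies and kernel width below are invented; the study's actual descriptor settings may differ.

    ```python
    # Build a pseudo-spectrum from molecular orbital energies (a.u.).
    import numpy as np

    def eeva_spectrum(mo_energies, grid, sigma=0.05):
        # Sum of Gaussians centred at the orbital energies.
        spec = np.zeros_like(grid)
        for e in mo_energies:
            spec += np.exp(-((grid - e) ** 2) / (2 * sigma ** 2))
        return spec / (sigma * np.sqrt(2 * np.pi))

    grid = np.linspace(-1.5, 0.5, 400)
    steroid_a = eeva_spectrum([-1.1, -0.8, -0.45, -0.20], grid)
    steroid_b = eeva_spectrum([-1.0, -0.82, -0.40, -0.18], grid)
    # Each sampled spectrum becomes one descriptor row for regression (e.g. PLS).
    print(np.corrcoef(steroid_a, steroid_b)[0, 1])
    ```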
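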

  14. Vegetation composition and structure of southern coastal plain pine forests: An ecological comparison

    USGS Publications Warehouse

    Hedman, C.W.; Grace, S.L.; King, S.E.

    2000-01-01

    Longleaf pine (Pinus palustris) ecosystems are characterized by a diverse community of native groundcover species. Critics of plantation forestry claim that loblolly (Pinus taeda) and slash pine (Pinus elliottii) forests are devoid of native groundcover due to associated management practices. As a result of these practices, some believe that ecosystem functions characteristic of longleaf pine are lost under loblolly and slash pine plantation management. Our objective was to quantify and compare vegetation composition and structure of longleaf, loblolly, and slash pine forests of differing ages, management strategies, and land-use histories. Information from this study will further our understanding and lead to inferences about functional differences among pine cover types. Vegetation and environmental data were collected in 49 overstory plots across Southlands Experiment Forest in Bainbridge, GA. Nested plots, i.e. midstory, understory, and herbaceous, were replicated four times within each overstory plot. Over 400 species were identified. Herbaceous species richness was variable for all three pine cover types. Herbaceous richness for longleaf, slash, and loblolly pine averaged 15, 13, and 12 species per m2, respectively. Longleaf pine plots had significantly more (p < 0.029) herbaceous species and greater herbaceous cover (p < 0.001) than loblolly or slash pine plots. Longleaf and slash pine plots were otherwise similar in species richness and stand structure, both having lower overstory density, midstory density, and midstory cover than loblolly pine plots. Multivariate analyses provided additional perspectives on vegetation patterns. Ordination and classification procedures consistently placed herbaceous plots into two groups which we refer to as longleaf pine benchmark (34 plots) and non-benchmark (15 plots). Benchmark plots typically contained numerous herbaceous species characteristic of relic longleaf pine/wiregrass communities found in the area. Conversely, non-benchmark plots contained fewer species characteristic of relic longleaf pine/wiregrass communities and more ruderal species common to highly disturbed sites. The benchmark group included 12 naturally regenerated longleaf plots and 22 loblolly, slash, and longleaf pine plantation plots encompassing a broad range of silvicultural disturbances. Non-benchmark plots included eight afforested old-field plantation plots and seven cutover plantation plots. Regardless of overstory species, all afforested old fields were low either in native species richness or in abundance. Varying degrees of this groundcover condition were also found in some cutover plantation plots that were classified as non-benchmark. Environmental variables strongly influencing vegetation patterns included agricultural history and fire frequency. Results suggest that land-use history, particularly related to agriculture, has a greater influence on groundcover composition and structure in southern pine forests than more recent forest management activities or pine cover type. Additional research is needed to identify the potential for afforested old fields to recover native herbaceous species. In the interim, high-yield plantation management should initially target old-field sites which already support reduced numbers of groundcover species. Sites which have not been farmed in the past 50-60 years should be considered for longleaf pine restoration and multiple-use objectives, since they have the greatest potential for supporting diverse native vegetation. © 2000 Elsevier Science B.V.

  15. Performance evaluation of structure based and ligand based virtual screening methods on ten selected anti-cancer targets.

    PubMed

    Ramasamy, Thilagavathi; Selvam, Chelliah

    2015-10-15

    Virtual screening has become an important tool in the drug discovery process. Structure-based and ligand-based approaches are generally used in virtual screening. To date, several benchmark sets for evaluating the performance of virtual screening tools are available. In this study, our aim was to compare the performance of structure-based and ligand-based virtual screening methods. Ten anti-cancer targets and their corresponding benchmark sets from the 'Demanding Evaluation Kits for Objective In silico Screening' (DEKOIS) library were selected. X-ray crystal structures of protein-ligand complexes were selected based on their resolution. OpenEye tools such as FRED and vROCS were used, and the results were carefully analyzed. At EF1%, vROCS produced better results, but at EF5% and EF10% both FRED and vROCS produced almost similar results. In many cases the enrichment factor values decreased when going from EF1% to EF5% and EF10%. Published by Elsevier Ltd.
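
    For reference, the enrichment factor compares the hit rate in the top-ranked fraction of a screened library with the hit rate overall. The sketch below computes EF1%, EF5% and EF10% on a synthetic ranked list; only the formula is standard.

    ```python
    # EF_x% = (actives in top x% / actives total) / (x / 100)
    def enrichment_factor(ranked_labels, fraction):
        n_top = max(1, int(len(ranked_labels) * fraction))
        hits_top = sum(ranked_labels[:n_top])
        hits_total = sum(ranked_labels)
        return (hits_top / hits_total) / fraction

    # 1 = active, 0 = decoy, ordered by decreasing docking/similarity score.
    ranked = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0] * 10
    for f in (0.01, 0.05, 0.10):
        print(f"EF{int(f * 100)}% = {enrichment_factor(ranked, f):.1f}")
    ```

    The decrease the authors note from EF1% to EF10% is expected: as the sampled fraction grows, the top-of-list advantage is diluted toward the library-wide hit rate.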

  16. Automatic Classification of Protein Structure Using the Maximum Contact Map Overlap Metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andonov, Rumen; Djidjev, Hristo Nikolov; Klau, Gunnar W.

    In this paper, we propose a new distance measure for comparing two protein structures based on their contact map representations. We show that our novel measure, which we refer to as the maximum contact map overlap (max-CMO) metric, satisfies all properties of a metric on the space of protein representations. Having a metric in that space allows one to avoid pairwise comparisons on the entire database and, thus, to significantly accelerate exploring the protein space compared to no-metric spaces. We show on a gold standard superfamily classification benchmark set of 6759 proteins that our exact k-nearest neighbor (k-NN) scheme classifies up to 224 out of 236 queries correctly, and on a larger, extended version of the benchmark with 60,850 additional structures, up to 1361 out of 1369 queries. Our k-NN classification thus provides a promising approach for the automatic classification of protein structures based on flexible contact map overlap alignments.
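
    A toy sketch of metric-based k-NN classification in the spirit described above. The stand-in distance below replaces the far more expensive max-CMO computation, and the data are invented; with a true metric, the triangle inequality additionally allows pruning most distance evaluations.

    ```python
    # k-NN superfamily assignment over an arbitrary distance function.
    from collections import Counter

    def knn_classify(query, database, distance, k=5):
        # database: list of (item, superfamily_label) pairs.
        neighbours = sorted(database, key=lambda rec: distance(query, rec[0]))[:k]
        votes = Counter(label for _, label in neighbours)
        return votes.most_common(1)[0][0]

    # Toy stand-in: proteins as feature tuples, Manhattan distance.
    db = [((1, 2), "globin"), ((1, 3), "globin"),
          ((8, 9), "kinase"), ((7, 9), "kinase")]
    dist = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
    print(knn_classify((2, 2), db, dist, k=3))
    ```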

  17. Fundamental constraints on the performance of broadband ultrasonic matching structures and absorbers.

    PubMed

    Acher, O; Bernard, J M L; Maréchal, P; Bardaine, A; Levassort, F

    2009-04-01

    Recent fundamental results concerning the ultimate performance of electromagnetic absorbers were adapted and extrapolated to the field of sound waves. It was possible to deduce some appropriate figures of merit indicating whether a particular structure was close to the best possible matching properties. These figures of merit had simple expressions and were easy to compute in practical cases. Numerical examples illustrated that conventional state-of-the-art matching structures had an overall efficiency of approximately 50% of the fundamental limit. However, if the bandwidth at -6 dB was retained as a benchmark, the achieved bandwidth would be, at most, 12% of the fundamental limit associated with the same mass for the matching structure. Consequently, both encouragement for future improvements and accurate estimates of the surface mass required to obtain certain desired broadband properties could be provided. The results presented here can be used to investigate the broadband sound absorption and to benchmark passive and active noise control systems.

  18. Automatic Classification of Protein Structure Using the Maximum Contact Map Overlap Metric

    DOE PAGES

    Andonov, Rumen; Djidjev, Hristo Nikolov; Klau, Gunnar W.; ...

    2015-10-09

    In this paper, we propose a new distance measure for comparing two protein structures based on their contact map representations. We show that our novel measure, which we refer to as the maximum contact map overlap (max-CMO) metric, satisfies all properties of a metric on the space of protein representations. Having a metric in that space allows one to avoid pairwise comparisons on the entire database and, thus, to significantly accelerate exploring the protein space compared to no-metric spaces. We show on a gold standard superfamily classification benchmark set of 6759 proteins that our exact k-nearest neighbor (k-NN) scheme classifies up to 224 out of 236 queries correctly, and on a larger, extended version of the benchmark with 60,850 additional structures, up to 1361 out of 1369 queries. Our k-NN classification thus provides a promising approach for the automatic classification of protein structures based on flexible contact map overlap alignments.

  19. Sintered Cathodes for All-Solid-State Structural Lithium-Ion Batteries

    NASA Technical Reports Server (NTRS)

    Huddleston, William; Dynys, Frederick; Sehirlioglu, Alp

    2017-01-01

    All-solid-state structural lithium ion batteries serve as both structural load-bearing components and as electrical energy storage devices to achieve system level weight savings in aerospace and other transportation applications. This multifunctional design goal is critical for the realization of next generation hybrid or all-electric propulsion systems. Additionally, transitioning to solid state technology improves upon battery safety from previous volatile architectures. This research established baseline solid state processing conditions and performance benchmarks for intercalation-type layered oxide materials for multifunctional application. Under consideration were lithium cobalt oxide and lithium nickel manganese cobalt oxide. Pertinent characteristics such as electrical conductivity, strength, chemical stability, and microstructure were characterized for future application in all-solid-state structural battery cathodes. The study includes characterization by XRD, ICP, SEM, ring-on-ring mechanical testing, and electrical impedance spectroscopy to elucidate optimal processing parameters, material characteristics, and multifunctional performance benchmarks. These findings provide initial conditions for implementing existing cathode materials in load bearing applications.

  20. Invitations to Life's Diversity. Teacher-Friendly Science Activities with Reproducible Handouts in English and Spanish. Grades 3-5. Living Things Science Series.

    ERIC Educational Resources Information Center

    Camp, Carole Ann, Ed.

    This booklet, one of six in the Living Things Science series, presents activities about diversity and classification of living things which address basic "Benchmarks" suggested by the American Association for the Advancement of Science for the Living Environment for grades 3-5. Contents include background information, vocabulary (in…

  1. The Stability of Rankings Derived from Composite Indicators: Analysis of the "IL Sole 24 Ore" Quality of Life Report

    ERIC Educational Resources Information Center

    Lun, G.; Holzer, D.; Tappeiner, G.; Tappeiner, U.

    2006-01-01

    The calculation of composite indicators and the derivation of the respective rankings is a common method used to benchmark countries or regions. However, although the statistical robustness of these rankings is often criticised, they still spark heated political debate. Here, we assess the sensitivity of the province ranking published by the…
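
    A sketch of the kind of robustness check such rankings invite: perturb the indicator weights repeatedly and count how often the ranking changes. Regions, values and weight ranges below are invented.

    ```python
    # Monte Carlo stability check for a composite-indicator ranking.
    import random

    regions = {"A": [0.9, 0.4, 0.7], "B": [0.6, 0.8, 0.5], "C": [0.7, 0.6, 0.6]}

    def rank_once(weights):
        scores = {r: sum(w * v for w, v in zip(weights, vals))
                  for r, vals in regions.items()}
        return sorted(scores, key=scores.get, reverse=True)

    base = rank_once([1 / 3, 1 / 3, 1 / 3])
    trials, changed = 1000, 0
    for _ in range(trials):
        w = [random.uniform(0.8, 1.2) / 3 for _ in range(3)]  # +/-20% jitter
        if rank_once(w) != base:
            changed += 1
    print(f"ranking changed in {changed / trials:.0%} of weight perturbations")
    ```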

  2. A model for evaluating the environmental benefits of elementary school facilities.

    PubMed

    Ji, Changyoon; Hong, Taehoon; Jeong, Kwangbok; Leigh, Seung-Bok

    2014-01-01

    In this study, a model that is capable of evaluating the environmental benefits of a new elementary school facility was developed. The model is composed of three steps: (i) retrieval of elementary school facilities having similar characteristics as the new elementary school facility using case-based reasoning; (ii) creation of energy consumption and material data for the benchmark elementary school facility using the retrieved similar elementary school facilities; and (iii) evaluation of the environmental benefits of the new elementary school facility by assessing and comparing the environmental impact of the new and created benchmark elementary school facility using life cycle assessment. The developed model can present the environmental benefits of a new elementary school facility in terms of monetary values using Environmental Priority Strategy 2000, a damage-oriented life cycle impact assessment method. The developed model can be used for the following: (i) as criteria for a green-building rating system; (ii) as criteria for setting the support plan and size, such as the government's incentives for promoting green-building projects; and (iii) as criteria for determining the feasibility of green building projects in key business sectors. Copyright © 2013 Elsevier Ltd. All rights reserved.
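
    A hedged sketch of step (i), the case-based-reasoning retrieval, reduced to a weighted nearest-neighbour lookup over made-up facility attributes; the attribute set and weights are placeholders, not the model's actual similarity function.

    ```python
    # Retrieve the most similar past school facility for benchmarking.
    def similarity(case_a, case_b, weights):
        # Normalised inverse weighted distance over numeric attributes.
        return 1.0 / (1.0 + sum(w * abs(a - b)
                                for w, a, b in zip(weights, case_a, case_b)))

    # (floor area m2, number of students, year built) - hypothetical cases
    past_cases = {"school-1": (4200, 600, 1998),
                  "school-2": (5100, 750, 2006),
                  "school-3": (3900, 520, 2010)}
    new_case = (5000, 700, 2013)
    weights = (0.001, 0.01, 0.5)  # attribute importance, illustrative

    best = max(past_cases, key=lambda k: similarity(past_cases[k], new_case, weights))
    print("benchmark source case:", best)
    ```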

  3. Novel Computational Approaches to Drug Discovery

    NASA Astrophysics Data System (ADS)

    Skolnick, Jeffrey; Brylinski, Michal

    2010-01-01

    New approaches to protein functional inference based on protein structure and evolution are described. First, FINDSITE, a threading based approach to protein function prediction, is summarized. Then, the results of large scale benchmarking of ligand binding site prediction, ligand screening, including applications to HIV protease, and GO molecular functional inference are presented. A key advantage of FINDSITE is its ability to use low resolution, predicted structures as well as high resolution experimental structures. Then, an extension of FINDSITE to ligand screening in GPCRs using predicted GPCR structures, FINDSITE/QDOCKX, is presented. This is a particularly difficult case as there are few experimentally solved GPCR structures. Thus, we first train on a subset of known binding ligands for a set of GPCRs; this is then followed by benchmarking against a large ligand library. For the virtual ligand screening of a number of Dopamine receptors, encouraging results are seen, with significant enrichment in identified ligands over those found in the training set. Thus, FINDSITE and its extensions represent a powerful approach to the successful prediction of a variety of molecular functions.

  4. The National Practice Benchmark for oncology, 2014 report on 2013 data.

    PubMed

    Towle, Elaine L; Barr, Thomas R; Senese, James L

    2014-11-01

    The National Practice Benchmark (NPB) is a unique tool to measure oncology practices against others across the country in a way that allows meaningful comparisons despite differences in practice size or setting. In today's economic environment every oncology practice, regardless of business structure or affiliation, should be able to produce, monitor, and benchmark basic metrics to meet current business pressures for increased efficiency and efficacy of care. Although we recognize that the NPB survey results do not capture the experience of all oncology practices, practices that can and do participate demonstrate exceptional managerial capability, and this year those practices are recognized for their participation. In this report, we continue to emphasize the methodology introduced last year in which we reported medical revenue net of the cost of the drugs as net medical revenue for the hematology/oncology product line. The effect of this is to capture only the gross margin attributable to drugs as revenue. New this year, we introduce six measures of clinical data density and expand the radiation oncology benchmarks. Copyright © 2014 by American Society of Clinical Oncology.

  5. Has Metal-On-Metal Resurfacing Been a Cost-Effective Intervention for Health Care Providers?-A Registry Based Study.

    PubMed

    Pulikottil-Jacob, Ruth; Connock, Martin; Kandala, Ngianga-Bakwin; Mistry, Hema; Grove, Amy; Freeman, Karoline; Costa, Matthew; Sutcliffe, Paul; Clarke, Aileen

    2016-01-01

    Total hip replacement for end stage arthritis of the hip is currently the most common elective surgical procedure. In 2007 about 7.5% of UK implants were metal-on-metal joint resurfacing (MoM RS) procedures. Due to poor revision performance and concerns about metal debris, the use of RS had declined by 2012 to about a 1% share of UK hip procedures. This study estimated the lifetime cost-effectiveness of metal-on-metal resurfacing (RS) procedures versus commonly employed total hip replacement (THR) methods. We performed a cost-utility analysis using a well-established multi-state semi-Markov model from an NHS and personal and social services perspective. We used individual patient data (IPD) from the National Joint Registry (NJR) for England and Wales on RS and THR surgery for osteoarthritis recorded from April 2003 to December 2012. We used flexible parametric modelling of NJR RS data to guide identification of patient subgroups and RS devices which delivered revision rates within the NICE 5% revision rate benchmark at 10 years. RS procedures overall have an estimated revision rate of 13% at 10 years, compared to <4% for most THR devices. New NICE guidance now recommends a revision rate benchmark of <5% at 10 years. 60% of RS implants in men and 2% in women were predicted to be within the revision benchmark. RS devices satisfying the 5% benchmark were unlikely to be cost-effective compared to THR at a standard UK willingness to pay of £20,000 per quality-adjusted life-year. However, the probability of cost effectiveness was sensitive to small changes in the costs of devices or in quality of life or revision rate estimates. Our results imply that in most cases RS has not been a cost-effective resource and should probably not be adopted by decision makers concerned with the cost effectiveness of hip replacement, or by patients concerned about the likelihood of revision, regardless of patient age or gender.
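
    The decision rule applied in the study can be illustrated with the incremental cost-effectiveness ratio (ICER) compared against the GBP 20,000-per-QALY willingness-to-pay threshold. All numbers below are invented.

    ```python
    # ICER = (incremental cost) / (incremental QALYs), compared to a threshold.
    def icer(cost_new, qaly_new, cost_old, qaly_old):
        return (cost_new - cost_old) / (qaly_new - qaly_old)

    threshold = 20_000.0  # GBP per QALY
    ratio = icer(cost_new=14_500, qaly_new=11.02, cost_old=13_000, qaly_old=10.97)
    print(f"ICER = {ratio:,.0f} GBP/QALY ->",
          "cost-effective" if ratio <= threshold else "not cost-effective")
    ```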

  6. Surface protection overview

    NASA Technical Reports Server (NTRS)

    Levine, S. R.

    1982-01-01

    A first-cut integrated environmental attack life prediction methodology for hot section components is addressed. The HOST program is concerned with oxidation and hot corrosion attack of metallic coatings as well as their degradation by interdiffusion with the substrate. The effects of the environment and coatings on creep/fatigue behavior are being addressed through a joint effort with the Fatigue sub-project. An initial effort will attempt to scope the problem of thermal barrier coating life prediction. Verification of models will be carried out through benchmark rig tests including a 4 atm. replaceable blade turbine and a 50 atm. pressurized burner rig.

  7. Engine dynamic analysis with general nonlinear finite element codes. Part 2: Bearing element implementation, overall numerical characteristics and benchmarking

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Adams, M.; Fertis, J.; Zeid, I.; Lam, P.

    1982-01-01

    Finite element codes are used to model the rotor-bearing-stator structures common in the turbine industry, and engine dynamic simulation is enabled by developing strategies that allow the use of available finite element codes. The bearing elements developed are benchmarked by incorporation into a general-purpose code (ADINA); the numerical characteristics of finite-element rotor-bearing-stator simulations are evaluated using various types of explicit/implicit numerical integration operators; and the overall numerical efficiency of the procedure is improved.

  8. IRaPPA: information retrieval based integration of biophysical models for protein assembly selection.

    PubMed

    Moal, Iain H; Barradas-Bautista, Didier; Jiménez-García, Brian; Torchala, Mieczyslaw; van der Velde, Arjan; Vreven, Thom; Weng, Zhiping; Bates, Paul A; Fernández-Recio, Juan

    2017-06-15

    In order to function, proteins frequently bind to one another and form 3D assemblies. Knowledge of the atomic details of these structures helps our understanding of how proteins work together, how mutations can lead to disease, and facilitates the design of drugs that prevent or mimic the interaction. Atomic modeling of protein-protein interactions requires the selection of near-native structures from a set of docked poses based on their calculable properties. By considering this as an information retrieval problem, we have adapted methods developed for Internet search ranking and electoral voting into IRaPPA, a pipeline integrating biophysical properties. The approach enhances the identification of near-native structures when applied to four docking methods, resulting in a near-native structure appearing in the top 10 solutions for up to 50% of complexes benchmarked, and in the top 100 for up to 70%. IRaPPA has been implemented in the SwarmDock server ( http://bmm.crick.ac.uk/∼SwarmDock/ ), pyDock server ( http://life.bsc.es/pid/pydockrescoring/ ) and ZDOCK server ( http://zdock.umassmed.edu/ ), with code available on request (moal@ebi.ac.uk). Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
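
    A sketch of rank aggregation in the electoral-voting spirit of the pipeline, using a Borda count as one standard choice; the article's exact aggregation scheme may differ, and the pose names and score types below are invented.

    ```python
    # Borda count: each biophysical score ranks the poses; positions are
    # converted to points and summed into a consensus ranking.
    def borda(rankings):
        # rankings: list of pose lists, each ordered best-first.
        n = len(rankings[0])
        points = {}
        for ranking in rankings:
            for position, pose in enumerate(ranking):
                points[pose] = points.get(pose, 0) + (n - position)
        return sorted(points, key=points.get, reverse=True)

    by_energy  = ["pose3", "pose1", "pose2", "pose4"]
    by_desolv  = ["pose1", "pose3", "pose4", "pose2"]
    by_electro = ["pose3", "pose4", "pose1", "pose2"]
    print(borda([by_energy, by_desolv, by_electro])[:3])  # consensus top poses
    ```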

  9. Surflex-Dock: Docking benchmarks and real-world application

    NASA Astrophysics Data System (ADS)

    Spitzer, Russell; Jain, Ajay N.

    2012-06-01

    Benchmarks for molecular docking have historically focused on re-docking the cognate ligand of a well-determined protein-ligand complex to measure geometric pose prediction accuracy, and measurement of virtual screening performance has been focused on increasingly large and diverse sets of target protein structures, cognate ligands, and various types of decoy sets. Here, pose prediction is reported on the Astex Diverse set of 85 protein-ligand complexes, and virtual screening performance is reported on the DUD set of 40 protein targets. In both cases, prepared structures of targets and ligands were provided by symposium organizers. The re-prepared data sets yielded results not significantly different from previous reports of Surflex-Dock on the two benchmarks. Minor changes to protein coordinates resulting from complex pre-optimization had large effects on observed performance, highlighting the limitations of cognate ligand re-docking for pose prediction assessment. Docking protocols developed for cross-docking, which address protein flexibility and produce discrete families of predicted poses, produced substantially better performance for pose prediction. Virtual screening performance was shown to benefit from employing and combining multiple screening methods: docking, 2D molecular similarity, and 3D molecular similarity. In addition, use of multiple protein conformations significantly improved screening enrichment.

  10. Modeling Blast Loading on Buried Reinforced Concrete Structures with Zapotec

    DOE PAGES

    Bessette, Greg C.

    2008-01-01

    A coupled Euler-Lagrange solution approach is used to model the response of a buried reinforced concrete structure subjected to a close-in detonation of a high explosive charge. The coupling algorithm is discussed along with a set of benchmark calculations involving detonations in clay and sand.

  11. Revel8or: Model Driven Capacity Planning Tool Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Liming; Liu, Yan; Bui, Ngoc B.

    2007-05-31

    Designing complex multi-tier applications that must meet strict performance requirements is a challenging software engineering problem. Ideally, the application architect could derive accurate performance predictions early in the project life-cycle, leveraging initial application design-level models and a description of the target software and hardware platforms. To this end, we have developed a capacity planning tool suite for component-based applications, called Revel8tor. The tool adheres to the model driven development paradigm and supports benchmarking and performance prediction for J2EE, .Net and Web services platforms. The suite is composed of three different tools: MDAPerf, MDABench and DSLBench. MDAPerf allows annotation of design diagrams and derives performance analysis models. MDABench allows a customized benchmark application to be modeled in the UML 2.0 Testing Profile and automatically generates a deployable application, with measurement automatically conducted. DSLBench allows the same benchmark modeling and generation to be conducted using a simple performance engineering Domain Specific Language (DSL) in Microsoft Visual Studio. DSLBench integrates with Visual Studio and reuses its load testing infrastructure. Together, the tool suite can assist capacity planning across platforms in an automated fashion.

  12. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    NASA Technical Reports Server (NTRS)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and their effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that such aeroelastic data sets often focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency, and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the Aeroelasticity Branch will examine other experimental efforts within the Subsonic Fixed Wing (SFW) program (such as testing of the NASA Common Research Model (CRM)) and other NASA programs and assess aeroelasticity issues and research topics.

  13. Theoretical Background and Prognostic Modeling for Benchmarking SHM Sensors for Composite Structures

    DTIC Science & Technology

    2010-10-01

    …minimum flaw size can be detected by the existing SHM-based monitoring methods. Whether it be hat stiffened, corrugated sandwich, honeycomb sandwich, or foam filled sandwich, all composite structures have one basic handicap in… Sandwich panels with foam, WebCore and honeycomb structures were considered for use in this study. Eigenmode frequency…

  14. How long will my mouse live? Machine learning approaches for prediction of mouse life span.

    PubMed

    Swindell, William R; Harper, James M; Miller, Richard A

    2008-09-01

    Prediction of individual life span based on characteristics evaluated at middle-age represents a challenging objective for aging research. In this study, we used machine learning algorithms to construct models that predict life span in a stock of genetically heterogeneous mice. Life-span prediction accuracy of 22 algorithms was evaluated using a cross-validation approach, in which models were trained and tested with distinct subsets of data. Using a combination of body weight and T-cell subset measures evaluated before 2 years of age, we show that the life-span quartile to which an individual mouse belongs can be predicted with an accuracy of 35.3% (+/-0.10%). This result provides a new benchmark for the development of life-span-predictive models, but improvement can be expected through identification of new predictor variables and development of computational approaches. Future work in this direction can provide tools for aging research and will shed light on associations between phenotypic traits and longevity.
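
    A sketch of the evaluation protocol described: cross-validated prediction of the life-span quartile from mid-life covariates. The data are simulated, and a single random forest stands in for the 22 algorithms the study evaluated.

    ```python
    # Cross-validated life-span-quartile prediction on simulated mice.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 400
    body_weight = rng.normal(30, 4, n)    # grams, hypothetical covariate
    t_cells = rng.normal(0.5, 0.1, n)     # T-cell subset fraction, hypothetical
    lifespan = 800 - 5 * body_weight + 200 * t_cells + rng.normal(0, 60, n)
    quartile = np.digitize(lifespan, np.quantile(lifespan, [0.25, 0.5, 0.75]))

    X = np.column_stack([body_weight, t_cells])
    acc = cross_val_score(RandomForestClassifier(random_state=0), X, quartile, cv=5)
    print(f"quartile accuracy: {acc.mean():.1%} (chance = 25%)")
    ```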

  15. Matt: local flexibility aids protein multiple structure alignment.

    PubMed

    Menke, Matthew; Berger, Bonnie; Cowen, Lenore

    2008-01-01

    Even when there is agreement on what measure a protein multiple structure alignment should be optimizing, finding the optimal alignment is computationally prohibitive. One approach used by many previous methods is aligned fragment pair chaining, where short structural fragments from all the proteins are aligned against each other optimally, and the final alignment chains these together in geometrically consistent ways. Ye and Godzik have recently suggested that adding geometric flexibility may help better model protein structures in a variety of contexts. We introduce the program Matt (Multiple Alignment with Translations and Twists), an aligned fragment pair chaining algorithm that, in intermediate steps, allows local flexibility between fragments: small translations and rotations are temporarily allowed to bring sets of aligned fragments closer, even if they are physically impossible under rigid body transformations. After a dynamic programming assembly guided by these "bent" alignments, geometric consistency is restored in the final step before the alignment is output. Matt is tested against other recent multiple protein structure alignment programs on the popular Homstrad and SABmark benchmark datasets. Matt's global performance is competitive with the other programs on Homstrad, but outperforms the other programs on SABmark, a benchmark of multiple structure alignments of proteins with more distant homology. On both datasets, Matt demonstrates an ability to better align the ends of alpha-helices and beta-strands, an important characteristic of any structure alignment program intended to help construct a structural template library for threading approaches to the inverse protein-folding problem. The related question of whether Matt alignments can be used to distinguish distantly homologous structure pairs from pairs of proteins that are not homologous is also considered. For this purpose, a p-value score based on the length of the common core and average root mean squared deviation (RMSD) of Matt alignments is shown to largely separate decoys from homologous protein structures in the SABmark benchmark dataset. We postulate that Matt's strong performance comes from its ability to model proteins in different conformational states and, perhaps even more important, its ability to model backbone distortions in more distantly related proteins.

  16. Self-growing neural network architecture using crisp and fuzzy entropy

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.

    1992-01-01

    The paper briefly describes the self-growing neural network algorithm, CID3, which makes decision trees equivalent to hidden layers of a neural network. The algorithm generates a feedforward architecture using crisp and fuzzy entropy measures. The results of a real-life recognition problem of distinguishing defects in a glass ribbon and of a benchmark problem of differentiating two spirals are shown and discussed.

  17. Invitations to Cells: Life's Building Blocks. Teacher-Friendly Science Activities with Reproducible Handouts in English and Spanish. Grades 3-5. Living Things Science Series.

    ERIC Educational Resources Information Center

    Camp, Carole Ann, Ed.

    This booklet, one of six in the Living Things Science series, presents activities about cells which address basic "Benchmarks" suggested by the American Association for the Advancement of Science for the Living Environment for grades 3-5. Contents include background information, vocabulary (in English and Spanish), materials, procedures,…

  18. Self-growing neural network architecture using crisp and fuzzy entropy

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.

    1992-01-01

    The paper briefly describes the self-growing neural network algorithm, CID3, which makes decision trees equivalent to hidden layers of a neural network. The algorithm generates a feedforward architecture using crisp and fuzzy entropy measures. The results for a real-life recognition problem of distinguishing defects in a glass ribbon, and for a benchmark problem of telling two spirals apart, are shown and discussed.

  19. Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures

    DTIC Science & Technology

    2016-06-01

    Keywords: inventory management improvement plan; mean of absolute scaled error; lead-time adjusted squared error; forecast accuracy; benchmarking; naïve method; …Manager. Abbreviations: JASA, Journal of the American Statistical Association; LASE, Lead-time Adjusted Squared Error; LCI, Life Cycle Indicator; MA, Moving Average; MAE, …; Mean Squared Error; NAVSUP, Naval Supply Systems Command; NDAA, National Defense Authorization Act; NIIN, National Individual Identification Number

  20. Towards Using Transformative Education as a Benchmark for Clarifying Differences and Similarities between Environmental Education and Education for Sustainable Development

    ERIC Educational Resources Information Center

    Pavlova, Margarita

    2013-01-01

    The UN Decade of Education for Sustainable Development (DESD) charges educators with a key role in developing and "securing sustainable life chances, aspirations and futures for young people". Environmental Education (EE) and ESD share a vision of quality education and a society that lives in balance with Earth's carrying capacity,…

  1. Protein binding hot spots prediction from sequence only by a new ensemble learning method.

    PubMed

    Hu, Shan-Shan; Chen, Peng; Wang, Bing; Li, Jinyan

    2017-10-01

    Hot spots are the interfacial core areas of binding proteins and have been used as targets in drug design. Experimental methods for locating hot spot areas are costly in both time and expense. Recently, in silico computational methods have been widely used for hot spot prediction through sequence or structure characterization. Because structural information is not available for many proteins, hot spot identification from amino acid sequences alone is more useful for real-life applications. This work proposes a new sequence-based model that combines physicochemical features with the relative accessible surface area of amino acid sequences for hot spot prediction. The model consists of 83 classifiers based on the IBk (instance-based k-nearest-neighbour) algorithm, where instances are encoded by important properties extracted from a total of 544 properties in the AAindex1 (Amino Acid Index) database. Top-performing classifiers are then selected to form an ensemble via a majority voting technique. The ensemble classifier outperforms state-of-the-art computational methods, yielding an F1 score of 0.80 on the benchmark binding interface database (BID) test set. http://www2.ahu.edu.cn/pchen/web/HotspotEC.htm .
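
    A sketch of the feature-subset ensemble with majority voting described above. The base learner, synthetic data, subset sizes and member count below are placeholders, not the article's actual configuration of 83 AAindex1-derived classifiers.

    ```python
    # Majority-vote ensemble of k-NN members, each trained on a random
    # subset of the property columns.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 20))            # residues x physicochemical properties
    y = (X[:, 0] + X[:, 3] > 0).astype(int)   # 1 = hot spot (synthetic label)

    members = []
    for _ in range(15):
        cols = rng.choice(20, size=5, replace=False)  # one property subset
        clf = KNeighborsClassifier(n_neighbors=3).fit(X[:200][:, cols], y[:200])
        members.append((cols, clf))

    votes = np.array([clf.predict(X[200:][:, cols]) for cols, clf in members])
    majority = (votes.mean(axis=0) >= 0.5).astype(int)
    print("test accuracy:", (majority == y[200:]).mean())
    ```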

  2. High gene flow in epiphytic ferns despite habitat loss and fragmentation.

    PubMed

    Winkler, Manuela; Koch, Marcus; Hietz, Peter

    2011-01-01

    Tropical montane forests suffer from increasing fragmentation and replacement by other types of land use, such as coffee plantations. These processes are known to affect gene flow and the genetic structure of plant populations. Epiphytes are particularly vulnerable because they depend on their supporting trees for their entire life cycle. We compared the population genetic structure and genetic diversity, derived from AFLP markers, of two epiphytic fern species differing in their ability to colonize secondary habitats. One species, Pleopeltis crassinervata, is a successful colonizer of shade trees and isolated trees, whereas the other, Polypodium rhodopleuron, is restricted to forests, where anthropogenic separation leads to significant isolation between populations. By far most genetic variation was distributed within rather than among populations in both species, and a genetic admixture analysis did not reveal any clustering. Gene flow exceeded by far the benchmark of one migrant per generation needed to prevent genetic divergence between populations in both species. Though populations are threatened by habitat loss, long-distance dispersal is likely to support gene flow even between distant populations, which efficiently delays genetic isolation. Consequently, populations may be threatened more by the ecological consequences of habitat loss and fragmentation.

  3. NACA0012 benchmark model experimental flutter results with unsteady pressure distributions

    NASA Technical Reports Server (NTRS)

    Rivera, Jose A., Jr.; Dansberry, Bryan E.; Bennett, Robert M.; Durham, Michael H.; Silva, Walter A.

    1992-01-01

    The Structural Dynamics Division at NASA Langley Research Center has started a wind tunnel activity referred to as the Benchmark Models Program. The primary objective of this program is to acquire measured dynamic instability and corresponding pressure data that will be useful for developing and evaluating aeroelastic type computational fluid dynamics codes currently in use or under development. The program is a multi-year activity that will involve testing of several different models to investigate various aeroelastic phenomena. This paper describes results obtained from a second wind tunnel test of the first model in the Benchmark Models Program. This first model consisted of a rigid semispan wing having a rectangular planform and a NACA 0012 airfoil shape which was mounted on a flexible two degree of freedom mount system. Experimental flutter boundaries and corresponding unsteady pressure distribution data acquired over two model chords located at the 60 and 95 percent span stations are presented.

  4. Anharmonic Vibrational Spectroscopy on Metal Transition Complexes

    NASA Astrophysics Data System (ADS)

    Latouche, Camille; Bloino, Julien; Barone, Vincenzo

    2014-06-01

    Advances in hardware performance and the availability of efficient and reliable computational models have made possible the application of computational spectroscopy to ever larger molecular systems, facilitating the systematic interpretation of experimental data and the full characterization of complex molecules. Focusing on vibrational spectroscopy, several approaches have been proposed to simulate spectra beyond the double harmonic approximation, so that more details become available. However, routine use of such tools requires the preliminary definition of a valid protocol with the most appropriate combination of electronic structure and nuclear calculation models. Several benchmarks of anharmonic frequency calculations have been performed on organic molecules. Nevertheless, benchmarks of organometallic or inorganic metal complexes at this level are strongly lacking, despite the interest in these systems due to their strong emission and vibrational properties. Herein we report a benchmark study of anharmonic calculations on simple metal complexes, along with some pilot applications on systems of direct technological or biological interest.

  5. Modeling the economic outcomes of immuno-oncology drugs: alternative model frameworks to capture clinical outcomes.

    PubMed

    Gibson, E J; Begum, N; Koblbauer, I; Dranitsaris, G; Liew, D; McEwan, P; Tahami Monfared, A A; Yuan, Y; Juarez-Garcia, A; Tyas, D; Lees, M

    2018-01-01

    Economic models in oncology are commonly based on the three-state partitioned survival model (PSM) distinguishing between progression-free and progressive states. However, the heterogeneity of responses observed in immuno-oncology (I-O) suggests that new approaches may be appropriate to reflect disease dynamics meaningfully. This study explored the impact of incorporating immune-specific health states into economic models of I-O therapy. Two variants of the PSM and a Markov model were populated with data from one clinical trial in metastatic melanoma patients. Short-term modeled outcomes were benchmarked to the clinical trial data and a lifetime model horizon provided estimates of life years and quality adjusted life years (QALYs). The PSM-based models produced short-term outcomes closely matching the trial outcomes. Adding health states generated increased QALYs while providing a more granular representation of outcomes for decision making. The Markov model gave the greatest level of detail on outcomes but gave short-term results which diverged from those of the trial (overstating year 1 progression-free survival by around 60%). Increased sophistication in the representation of disease dynamics in economic models is desirable when attempting to model treatment response in I-O. However, the assumptions underlying different model structures and the availability of data for health state mapping may be important limiting factors.
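
    A sketch of the three-state partitioned survival model (PSM) the paper takes as its baseline: state occupancy is read directly off the overall survival (OS) and progression-free survival (PFS) curves, and QALYs accumulate as utility-weighted time in state. Curves, utilities and the time horizon below are illustrative, not trial data.

    ```python
    # Three-state PSM: progression-free, progressed, dead.
    import numpy as np

    t = np.arange(0, 20, 1 / 12)             # 20-year horizon, monthly steps
    pfs = np.exp(-0.35 * t)                  # progression-free survival curve
    os_ = np.exp(-0.15 * t)                  # overall survival curve

    progression_free = pfs
    progressed = np.maximum(os_ - pfs, 0.0)  # partitioned middle state
    u_pf, u_pd = 0.80, 0.60                  # state utilities (assumed)

    dt = 1 / 12
    qalys = ((u_pf * progression_free + u_pd * progressed) * dt).sum()
    life_years = (os_ * dt).sum()
    print(round(life_years, 2), "LYs,", round(qalys, 2), "QALYs")
    ```

    Adding immune-specific health states, as the paper explores, amounts to splitting these occupancy curves further and attaching separate utilities to the new states.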

  6. Effect of Random Thermal Spikes on Stirling Convertor Heater Head Reliability

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Halford, Gary R.

    2004-01-01

    Onboard radioisotope power systems being developed to support future NASA exploration missions require reliable design lifetimes of up to 14 yr and beyond. The structurally critical heater head of the high-efficiency developmental Stirling power convertor has undergone extensive computational analysis of operating temperatures (up to 650 C), stresses, and creep resistance of the thin-walled Inconel 718 bill of material. Additionally, assessment of the effect of uncertainties in the creep behavior of the thin-walled heater head, the variation in the manufactured thickness, variation in control temperature, and variation in pressure on the durability and reliability were performed. However, it is possible for the heater head to experience rare incidences of random temperature spikes (excursions) of short duration. These incidences could occur randomly with random magnitude and duration during the desired mission life. These rare incidences could affect the creep strain rate and therefore the life. The paper accounts for these uncertainties and includes the effect of such rare incidences, random in nature, on the reliability. The sensitivities of variables affecting the reliability are quantified and guidelines developed to improve the reliability are outlined. Furthermore, the quantified reliability is being verified with test data from the accelerated benchmark tests being conducted at the NASA Glenn Research Center.

  7. Reliability-Based Life Assessment of Stirling Convertor Heater Head

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Halford, Gary R.; Korovaichuk, Igor

    2004-01-01

    Onboard radioisotope power systems being developed and planned for NASA's deep-space missions require reliable design lifetimes of up to 14 yr. The structurally critical heater head of the high-efficiency Stirling power convertor has undergone extensive computational analysis of operating temperatures, stresses, and creep resistance of the thin-walled Inconel 718 bill of material. A preliminary assessment of the effect of uncertainties in the material behavior was also performed. Creep failure resistance of the thin-walled heater head could show variation due to small deviations in the manufactured thickness and in uncertainties in operating temperature and pressure. Durability prediction and reliability of the heater head are affected by these deviations from nominal design conditions. Therefore, it is important to include the effects of these uncertainties in predicting the probability of survival of the heater head under mission loads. Furthermore, it may be possible for the heater head to experience rare incidences of small temperature excursions of short duration. These rare incidences would affect the creep strain rate and, therefore, the life. This paper addresses the effects of such rare incidences on the reliability. In addition, the sensitivities of variables affecting the reliability are quantified, and guidelines developed to improve the reliability are outlined. Heater head reliability is being quantified with data from NASA Glenn Research Center's accelerated benchmark testing program.

  8. Benchmark Testing of the Largest Titanium Aluminide Sheet Subelement Conducted

    NASA Technical Reports Server (NTRS)

    Bartolotta, Paul A.; Krause, David L.

    2000-01-01

    To evaluate wrought titanium aluminide (gamma TiAl) as a viable candidate material for the High-Speed Civil Transport (HSCT) exhaust nozzle, an international team led by the NASA Glenn Research Center at Lewis Field successfully fabricated and tested the largest gamma TiAl sheet structure ever manufactured. The gamma TiAl sheet structure, a 56-percent subscale divergent flap subelement, was fabricated for benchmark testing in three-point bending. Overall, the subelement was 84-cm (33-in.) long by 13-cm (5-in.) wide by 8-cm (3-in.) deep. Incorporated into the subelement were features that might be used in the fabrication of a full-scale divergent flap. These features include the use of: (1) gamma TiAl shear clips to join together sections of corrugations, (2) multiple gamma TiAl face sheets, (3) double hot-formed gamma TiAl corrugations, and (4) brazed joints. The structural integrity of the gamma TiAl sheet subelement was evaluated by conducting a room-temperature three-point static bend test.

  9. USDA-ARS?s Scientific Manuscript database

    Ecosystems that maximize soil organic matter and good soil structure maintain high soil biological functioning, soil health and plant growth. Natural ecosystems such as prairies are valuable benchmarks for developing sustainable crop and soil management practices. Soil biological properties critical...

  10. Online Object Tracking, Learning and Parsing with And-Or Graphs.

    PubMed

    Wu, Tianfu; Lu, Yang; Zhu, Song-Chun

    2017-12-01

    This paper presents a method, called AOGTracker, for simultaneous tracking, learning and parsing (TLP) of unknown objects in video sequences with a hierarchical and compositional And-Or graph (AOG) representation. The TLP method is formulated in the Bayesian framework, with spatial and temporal dynamic programming (DP) algorithms inferring object bounding boxes on-the-fly. During online learning, the AOG is discriminatively learned using latent SVM [1] to account for appearance (e.g., lighting and partial occlusion) and structural (e.g., different poses and viewpoints) variations of a tracked object, as well as distractors (e.g., similar objects) in the background. Three key issues in online inference and learning are addressed: (i) maintaining the purity of positive and negative examples collected online, (ii) controlling model complexity in latent structure learning, and (iii) identifying critical moments to re-learn the structure of the AOG based on its intrackability. Intrackability measures the uncertainty of an AOG based on its score maps in a frame. In experiments, AOGTracker is tested on two popular tracking benchmarks with the same parameter setting: the TB-100/50/CVPR2013 benchmarks [3] and the VOT benchmarks [4] (VOT 2013, 2014, 2015 and TIR2015, thermal imagery tracking). On the former, AOGTracker outperforms state-of-the-art tracking algorithms, including two trackers based on deep convolutional networks [5], [6]. On the latter, AOGTracker outperforms all other trackers in VOT2013 and is comparable to the state-of-the-art methods in VOT2014, 2015 and TIR2015.

  11. A phylogenetic transform enhances analysis of compositional microbiota data

    PubMed Central

    Silverman, Justin D; Washburne, Alex D; Mukherjee, Sayan; David, Lawrence A

    2017-01-01

    Surveys of microbial communities (microbiota), typically measured as relative abundance of species, have illustrated the importance of these communities in human health and disease. Yet, statistical artifacts commonly plague the analysis of relative abundance data. Here, we introduce the PhILR transform, which incorporates microbial evolutionary models with the isometric log-ratio transform to allow off-the-shelf statistical tools to be safely applied to microbiota surveys. We demonstrate that analyses of community-level structure can be applied to PhILR transformed data with performance on benchmarks rivaling or surpassing standard tools. Additionally, by decomposing distance in the PhILR transformed space, we identified neighboring clades that may have adapted to distinct human body sites. Decomposing variance revealed that covariation of bacterial clades within human body sites increases with phylogenetic relatedness. Together, these findings illustrate how the PhILR transform combines statistical and phylogenetic models to overcome compositional data challenges and enable evolutionary insights relevant to microbial communities. DOI: http://dx.doi.org/10.7554/eLife.21887.001 PMID:28198697
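
    A sketch of the balance computation underlying the transform: each internal node of the phylogenetic tree contributes one isometric log-ratio coordinate contrasting the geometric means of its two child clades. The toy composition and tree split below are invented; only the balance formula is standard.

    ```python
    # One ILR "balance" for a tree node splitting taxa into left/right clades:
    # b = sqrt(r*s/(r+s)) * ln( gmean(left) / gmean(right) )
    import numpy as np

    def balance(x, left_idx, right_idx):
        r, s = len(left_idx), len(right_idx)
        gm = lambda v: np.exp(np.log(v).mean())
        scale = np.sqrt(r * s / (r + s))
        return scale * np.log(gm(x[left_idx]) / gm(x[right_idx]))

    abundances = np.array([0.50, 0.20, 0.20, 0.10])  # relative abundances
    print(balance(abundances, left_idx=[0, 1], right_idx=[2, 3]))
    ```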

  12. Unsupported Pt-Ni Aerogels with Enhanced High Current Performance and Durability in Fuel Cell Cathodes.

    PubMed

    Henning, Sebastian; Ishikawa, Hiroshi; Kühn, Laura; Herranz, Juan; Müller, Elisabeth; Eychmüller, Alexander; Schmidt, Thomas J

    2017-08-28

    Highly active and durable oxygen reduction catalysts are needed to reduce the costs and enhance the service life of polymer electrolyte fuel cells (PEFCs). This can be accomplished by alloying Pt with a transition metal (for example, Ni) and by eliminating the corrodible, carbon-based catalyst support. However, materials combining both approaches have seldom been implemented in PEFC cathodes. In this work, an unsupported Pt-Ni alloy nanochain ensemble (aerogel) demonstrates high-current PEFC performance commensurate with that of a carbon-supported benchmark (Pt/C) following optimization of the aerogel's catalyst layer (CL) structure. The latter is accomplished using a soluble filler to shift the CL's pore size distribution towards larger pores, which improves reactant and product transport. Chiefly, the optimized PEFC aerogel cathodes display a circa 2.5-fold larger surface-specific ORR activity than Pt/C and maintain 90% of the initial activity after an accelerated stress test (vs. 40% for Pt/C). © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Extensive site-directed mutagenesis reveals interconnected functional units in the alkaline phosphatase active site

    DOE PAGES

    Sunden, Fanny; Peck, Ariana; Salzman, Julia; ...

    2015-04-22

    Enzymes enable life by accelerating reaction rates to biological timescales. Conventional studies have focused on identifying the residues that have a direct involvement in an enzymatic reaction, but these so-called 'catalytic residues' are embedded in extensive interaction networks. Although fundamental to our understanding of enzyme function, evolution, and engineering, the properties of these networks have yet to be quantitatively and systematically explored. We dissected an interaction network of five residues in the active site of Escherichia coli alkaline phosphatase. Analysis of the complex catalytic interdependence of specific residues identified three energetically independent but structurally interconnected functional units with distinct modes of cooperativity. From an evolutionary perspective, this network is orders of magnitude more likely to arise than a fully cooperative network. From a functional perspective, new catalytic insights emerge. Further, such comprehensive energetic characterization will be necessary to benchmark the algorithms required to rationally engineer highly efficient enzymes.

  14. Identifying protein complexes in PPI network using non-cooperative sequential game.

    PubMed

    Maulik, Ujjwal; Basu, Srinka; Ray, Sumanta

    2017-08-21

    Identifying protein complexes from a protein-protein interaction (PPI) network is an important and challenging task in computational biology, as it helps in better understanding cellular mechanisms in various organisms. In this paper we propose a non-cooperative sequential game based model for protein complex detection from PPI networks. The key hypothesis is that protein complex formation is driven by a mechanism that eventually optimizes the number of interactions within the complex, leading to a dense subgraph. The hypothesis is drawn from the observed small-world property of such networks. The proposed multi-player game model translates the hypothesis into game strategies. The Nash equilibrium of the game corresponds to a network partition in which each protein either belongs to a complex or forms a singleton cluster. We further propose an algorithm to find the Nash equilibrium of the sequential game. Exhaustive experiments on synthetic benchmarks and real-life yeast networks evaluate the structural as well as biological significance of the network partitions.
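
    The authors' game formulation and equilibrium-finding algorithm are more involved; the sketch below only illustrates the general flavor of best-response dynamics on a graph, where each node repeatedly adopts the cluster that maximizes its within-cluster interactions until no node wants to move (this is essentially label propagation, not the paper's algorithm; the karate club graph stands in for a PPI network):

      import networkx as nx
      from collections import Counter

      def best_response_partition(G, rounds=10):
          # Each node repeatedly "plays" the label held by most of its neighbors,
          # i.e. the move that maximizes its within-cluster interactions.
          label = {v: v for v in G}             # start: every protein is a singleton
          for _ in range(rounds):
              changed = False
              for v in G:
                  counts = Counter(label[u] for u in G.neighbors(v))
                  if not counts:
                      continue
                  best = counts.most_common(1)[0][0]
                  if best != label[v]:
                      label[v], changed = best, True
              if not changed:                   # no player can improve: equilibrium
                  break
          return label

      G = nx.karate_club_graph()                # stand-in for a yeast PPI network
      print(Counter(best_response_partition(G).values()))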

  15. High-Precision Half-Life Measurement for the Superallowed β+ Emitter 26mAl

    NASA Astrophysics Data System (ADS)

    Finlay, P.; Ettenauer, S.; Ball, G. C.; Leslie, J. R.; Svensson, C. E.; Andreoiu, C.; Austin, R. A. E.; Bandyopadhyay, D.; Cross, D. S.; Demand, G.; Djongolov, M.; Garrett, P. E.; Green, K. L.; Grinyer, G. F.; Hackman, G.; Leach, K. G.; Pearson, C. J.; Phillips, A. A.; Sumithrarachchi, C. S.; Triambak, S.; Williams, S. J.

    2011-01-01

    A high-precision half-life measurement for the superallowed β+ emitter 26mAl was performed at the TRIUMF-ISAC radioactive ion beam facility, yielding T1/2 = 6346.54 ± 0.46(stat) ± 0.60(syst) ms, consistent with, but 2.5 times more precise than, the previous world average. The 26mAl half-life and ft value, 3037.53(61) s, are now the most precisely determined for any superallowed β decay. Combined with recent theoretical corrections for isospin-symmetry-breaking and radiative effects, the corrected Ft value for 26mAl, 3073.0(12) s, sets a new benchmark for the high-precision superallowed Fermi β-decay studies used to test the conserved vector current hypothesis and determine the Vud element of the Cabibbo-Kobayashi-Maskawa quark mixing matrix.
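
    For context, the corrected Ft value quoted above is conventionally obtained from the experimental ft value through transition-dependent and nucleus-independent corrections; in the standard notation of the superallowed β-decay literature (a textbook relation, not taken from this record):

      \mathcal{F}t \;\equiv\; ft\,(1 + \delta_R')\,(1 + \delta_{NS} - \delta_C) \;=\; \frac{K}{2\,G_V^2\,(1 + \Delta_R^V)}

    Here δ'_R is the transition-dependent radiative correction, δ_NS and δ_C are the nuclear-structure-dependent and isospin-symmetry-breaking corrections, and Δ^V_R is the nucleus-independent radiative correction. The conserved vector current hypothesis predicts a single Ft for all superallowed 0+ → 0+ transitions, which is why a more precise 26mAl measurement sharpens the benchmark.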

  16. New Multi-group Transport Neutronics (PHISICS) Capabilities for RELAP5-3D and its Application to Phase I of the OECD/NEA MHTGR-350 MW Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Cristian Rabiti; Andrea Alfonsi

    2012-10-01

    PHISICS is a neutronics code system currently under development at the Idaho National Laboratory (INL). Its goal is to provide state-of-the-art simulation capability to reactor designers. The PHISICS modules currently under development are a nodal and semi-structured transport core solver (INSTANT), a depletion module (MRTAU), and a cross-section interpolation module (MIXER). INSTANT is the most developed of these: basic functionality is ready to use, but the code is still under continuous development to extend its capabilities. This paper reports on the effort of coupling the nodal kinetics code package PHISICS (INSTANT/MRTAU/MIXER) to the thermal-hydraulics system code RELAP5-3D to enable full core and system modeling. This makes it possible to model coupled (thermal-hydraulics and neutronics) problems with more options for 3D neutron kinetics than the existing diffusion theory neutron kinetics module in RELAP5-3D (NESTLE) offers. In the second part of the paper, an overview of the OECD/NEA MHTGR-350 MW benchmark is given. This benchmark has been approved by the OECD and is based on the General Atomics 350 MW Modular High Temperature Gas Reactor (MHTGR) design. The benchmark includes coupled neutronics/thermal-hydraulics exercises that require more capabilities than RELAP5-3D with NESTLE offers; the MHTGR benchmark therefore makes extensive use of the new PHISICS/RELAP5-3D coupling. The paper presents preliminary results of the three steady-state exercises specified in Phase I of the benchmark using PHISICS/RELAP5-3D.

  17. Establishing objective benchmarks in robotic virtual reality simulation at the level of a competent surgeon using the RobotiX Mentor simulator.

    PubMed

    Watkinson, William; Raison, Nicholas; Abe, Takashige; Harrison, Patrick; Khan, Shamim; Van der Poel, Henk; Dasgupta, Prokar; Ahmed, Kamran

    2018-05-01

    To establish objective benchmarks at the level of a competent robotic surgeon across different exercises and metrics for the RobotiX Mentor virtual reality (VR) simulator, suitable for use within a robotic surgical training curriculum. This retrospective observational study, conducted at King's College London, analysed results from multiple data sources, all of which used the RobotiX Mentor VR simulator. 123 participants with experience ranging from novice to expert completed the exercises: 84 novices, 26 beginner intermediates, 9 advanced intermediates, and 4 experts. Three basic skill exercises and two advanced skill exercises were used. Competency was defined as the 25th centile of the mean advanced-intermediate score. Objective benchmarks derived in this way provided suitably challenging yet achievable targets for training surgeons; the disparity in scores was greatest for the advanced exercises, and novice surgeons were able to achieve the benchmarks in the majority of metrics across all exercises. We have successfully created this proof-of-concept study, which requires validation in a larger cohort. Objective benchmarks obtained from the 25th centile of the mean scores of advanced intermediates provide clinically relevant benchmarks at the standard of a competent robotic surgeon that are challenging yet attainable. They can be used within a VR training curriculum, allowing participants to track and monitor their progress through five exercises in a structured, progressive manner, and providing clearly defined targets so that a universal training standard is achieved across training surgeons. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
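
    Deriving such a benchmark is a one-line percentile computation; a minimal sketch with synthetic scores (hypothetical exercise and numbers):

      import numpy as np

      # Hypothetical per-attempt scores of the advanced-intermediate group on one exercise.
      adv_int_scores = np.array([[78, 82, 75], [88, 90, 85], [70, 74, 73], [81, 79, 84]])
      mean_per_surgeon = adv_int_scores.mean(axis=1)   # mean score per surgeon
      benchmark = np.percentile(mean_per_surgeon, 25)  # 25th centile = pass mark
      print(f"competency benchmark: {benchmark:.1f}")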

  18. Increasing awareness among fluid milk processors of the economic feasibility of energy efficiency projects, and encouraging their adoption through access to benchmarking and other decision-support tools

    USDA-ARS?s Scientific Manuscript database

    Based on a study by Thoma et al. (2010), the energy used in fluid milk processing in the United States of America is responsible for approximately 2 million metric tons of greenhouse gas (GHG) emissions within the total life cycle of milk. These emissions come from electricity use (about 75 perc...

  19. USAR Recruiting Success Factors.

    DTIC Science & Technology

    1987-12-01

    scores, were used to predict production scores for each recruiter. Benchmark Achievement Scores (BAS) were computed by dividing total production by... performance compared to this average would be easier to compute. SAS correlated highly with BAS (r = .96), so the two scores were practically equivalent... asked to make a sales pitch to a prospective enlistee about the benefits of Army life. Presentations were scored by computing the ratio of the

  20. Agricultural Spray Drift Concentrations in Rainwater, Stemflow ...

    EPA Pesticide Factsheets

    In order to study spray drift contribution to non-targeted habitats, pesticide concentrations were measured in stemflow (water flowing down the trunk of a tree during a rain event), rainfall, and amphibians in an agriculturally impacted wetland area near Tifton, Georgia, USA. Agricultural fields and sampling locations were located on the University of Georgia's Gibbs research farm. Samples were analyzed for >150 pesticides, and over 20 different pesticides were detected in these matrices. Data indicated that herbicides (metolachlor and atrazine) and fungicides (tebuconazole) were present at the highest concentrations in stemflow, followed by rainfall and amphibian tissue samples. Metolachlor had the highest frequency of detection and the highest concentration in rainfall and stemflow samples. Higher concentrations of pesticides were observed in stemflow, and for a longer period, than in rainfall. Furthermore, rainfall and stemflow concentrations were compared against aquatic life benchmarks and environmental water screening values to determine whether adverse effects could potentially occur for non-targeted organisms. Of the pesticides detected, several had concentrations that exceeded the aquatic life benchmark value. Mixtures were present in the different matrices the majority of the time, making it difficult to determine the potential adverse effects these compounds will have on non-target species, due to unknown potentiating effects. These data help assess the

  1. Modified Mahalanobis Taguchi System for Imbalance Data Classification

    PubMed Central

    2017-01-01

    The Mahalanobis Taguchi System (MTS) is considered one of the most promising binary classification algorithms for handling imbalanced data. Unfortunately, MTS lacks a method for determining an efficient threshold for the binary classification. In this paper, a nonlinear optimization model is formulated that minimizes the distance between the MTS Receiver Operating Characteristic (ROC) curve and the theoretical optimal point; the resulting method is named the Modified Mahalanobis Taguchi System (MMTS). To validate the MMTS classification efficacy, it has been benchmarked against Support Vector Machines (SVMs), Naive Bayes (NB), Probabilistic Mahalanobis Taguchi Systems (PTM), the Synthetic Minority Oversampling Technique (SMOTE), Adaptive Conformal Transformation (ACT), Kernel Boundary Alignment (KBA), Hidden Naive Bayes (HNB), and other improved Naive Bayes algorithms. MMTS outperforms the benchmarked algorithms, especially when the imbalance ratio is greater than 400. A real-life case study in the manufacturing sector is used to demonstrate the applicability of the proposed model and to compare its performance with a Mahalanobis Genetic Algorithm (MGA). PMID:28811820
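
    MMTS's nonlinear optimization is specified in the paper; the underlying idea of picking the threshold whose ROC point lies closest to the theoretical optimum (FPR, TPR) = (0, 1) can be sketched with scikit-learn (toy labels and scores):

      import numpy as np
      from sklearn.metrics import roc_curve

      y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1])    # imbalanced labels
      scores = np.array([0.1, 0.3, 0.2, 0.4, 0.35, 0.5, 0.8, 0.45])  # e.g. distances
      fpr, tpr, thr = roc_curve(y_true, scores)
      dist = np.hypot(fpr, 1.0 - tpr)   # Euclidean distance to the optimal point (0, 1)
      best = thr[np.argmin(dist)]
      print(f"threshold closest to (0, 1): {best:.2f}")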

  2. A community detection algorithm using network topologies and rule-based hierarchical arc-merging strategies

    PubMed Central

    2017-01-01

    The authors use four criteria to examine a novel community detection algorithm: (a) effectiveness, in terms of producing high values of normalized mutual information (NMI) and modularity, using well-known social networks for testing; (b) the ability to mitigate resolution limit problems, examined using NMI values and synthetic networks; (c) correctness, meaning the ability to identify useful community structure results in terms of NMI values and Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks; and (d) scalability, or the ability to produce comparable modularity values with fast execution times when working with large-scale real-world networks. In addition to describing a simple hierarchical arc-merging (HAM) algorithm that uses network topology information, we introduce rule-based arc-merging strategies for identifying community structures. Five well-studied social network datasets and eight sets of LFR benchmark networks were employed to validate correctness against ground-truth communities, eight large-scale real-world complex networks were used to measure its efficiency, and two synthetic networks were used to determine its susceptibility to two resolution limit problems. Our experimental results indicate that the proposed HAM algorithm exhibited satisfactory performance efficiency, that HAM-identified and ground-truth communities were comparable across the social and LFR benchmark networks, and that resolution limit problems were mitigated. PMID:29121100
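
    NMI against ground-truth labels is the correctness measure used throughout; a minimal sketch with scikit-learn (toy label vectors, not LFR networks):

      from sklearn.metrics import normalized_mutual_info_score

      ground_truth = [0, 0, 0, 1, 1, 1, 2, 2]   # known community of each node
      detected     = [0, 0, 1, 1, 1, 1, 2, 2]   # labels produced by an algorithm
      print(normalized_mutual_info_score(ground_truth, detected))  # 1.0 = perfect recovery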

  3. Use the results of measurements on KBR facility for testing of neutron data of main structural materials for fast reactors

    NASA Astrophysics Data System (ADS)

    Koscheev, Vladimir; Manturov, Gennady; Pronyaev, Vladimir; Rozhikhin, Evgeny; Semenov, Mikhail; Tsibulya, Anatoly

    2017-09-01

    Several k∞ experiments were performed at the KBR critical facility of the Institute of Physics and Power Engineering (IPPE), Obninsk, Russia, during the 1970s and 80s to study the neutron absorption properties of Cr, Mn, Fe, Ni, Zr, and Mo. Calculations of these benchmarks with almost any modern evaluated nuclear data library show poor agreement with experiment. Neutron capture cross sections of the odd isotopes of Cr, Mn, Fe, and Ni in the ROSFOND-2010 library have been reevaluated, and another evaluation of the Zr nuclear data has been adopted. Use of the modified nuclear data for Cr, Mn, Fe, Ni, and Zr leads to significant improvement of the C/E ratios for the KBR assemblies. A significant improvement in agreement between calculated and evaluated values was also observed for benchmarks with Fe reflectors. C/E results obtained with the modified ROSFOND library for complex benchmark models that are highly sensitive to the cross sections of structural materials are no worse than results obtained with other major evaluated data libraries. A possible further improvement, decreasing the capture cross sections of Zr and Mo at energies above 1 keV, is indicated.

  4. Benefits of Structured After-School Literacy Tutoring by University Students for Struggling Elementary Readers

    ERIC Educational Resources Information Center

    Lindo, Endia J.; Weiser, Beverly; Cheatham, Jennifer P.; Allor, Jill H.

    2018-01-01

    This study examines the effectiveness of minimally trained tutors providing a highly structured tutoring intervention for struggling readers. We screened students in Grades K-6 for participation in an after-school tutoring program. We randomly assigned those students not meeting the benchmark on a reading screening measure to either a tutoring…

  5. Performance Evaluation of State of the Art Systems for Physical Activity Classification of Older Subjects Using Inertial Sensors in a Real Life Scenario: A Benchmark Study

    PubMed Central

    Awais, Muhammad; Palmerini, Luca; Bourke, Alan K.; Ihlen, Espen A. F.; Helbostad, Jorunn L.; Chiari, Lorenzo

    2016-01-01

    The popularity of using wearable inertial sensors for physical activity classification has dramatically increased in the last decade due to their versatility, low form factor, and low power requirements. Consequently, various systems have been developed to automatically classify daily life activities. However, the scope and implementation of such systems is limited to laboratory-based investigations. Furthermore, these systems are not directly comparable, due to the large diversity in their design (e.g., number of sensors, placement of sensors, data collection environments, data processing techniques, features set, classifiers, cross-validation methods). Hence, the aim of this study is to propose a fair and unbiased benchmark for the field-based validation of three existing systems, highlighting the gap between laboratory and real-life conditions. For this purpose, three representative state-of-the-art systems are chosen and implemented to classify the physical activities of twenty older subjects (76.4 ± 5.6 years). The performance in classifying four basic activities of daily life (sitting, standing, walking, and lying) is analyzed in controlled and free living conditions. To observe the performance of laboratory-based systems in field-based conditions, we trained the activity classification systems using data recorded in a laboratory environment and tested them in real-life conditions in the field. The findings show that the performance of all systems trained with data in the laboratory setting highly deteriorates when tested in real-life conditions, thus highlighting the need to train and test the classification systems in the real-life setting. Moreover, we tested the sensitivity of chosen systems to window size (from 1 s to 10 s) suggesting that overall accuracy decreases with increasing window size. Finally, to evaluate the impact of the number of sensors on the performance, chosen systems are modified considering only the sensing unit worn at the lower back. The results, similarly to the multi-sensor setup, indicate substantial degradation of the performance when laboratory-trained systems are tested in the real-life setting. This degradation is higher than in the multi-sensor setup. Still, the performance provided by the single-sensor approach, when trained and tested with real data, can be acceptable (with an accuracy above 80%). PMID:27973434
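
    The train-in-the-lab, test-in-the-field protocol the study examines can be sketched as follows: window the inertial signal, extract simple features, fit a classifier on laboratory windows, and evaluate on free-living windows. Everything below is synthetic and the feature set is hypothetical, so the printed accuracy is meaningless; only the protocol is the point:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(0)

      def windows(signal, labels, size=100):          # e.g. 1 s windows at 100 Hz
          feats, ys = [], []
          for i in range(0, len(signal) - size, size):
              w = signal[i:i + size]
              feats.append([w.mean(), w.std(), np.abs(np.diff(w)).mean()])
              ys.append(labels[i + size // 2])
          return np.array(feats), np.array(ys)

      lab_sig, lab_lab = rng.normal(0, 1, 10_000), rng.integers(0, 4, 10_000)
      field_sig, field_lab = rng.normal(0, 1.5, 10_000), rng.integers(0, 4, 10_000)
      Xtr, ytr = windows(lab_sig, lab_lab)            # train on laboratory recordings
      Xte, yte = windows(field_sig, field_lab)        # test on free-living recordings
      clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
      print("field accuracy:", accuracy_score(yte, clf.predict(Xte)))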

  6. In response to an open invitation for comments on AAAS project 2061's Benchmark books on science. Part 1: documentation of serious errors in cell biology.

    PubMed

    Ling, Gilbert

    2006-01-01

    Project 2061 was founded by the American Association for the Advancement of Science (AAAS) to improve secondary school science education. An in-depth study of ten 9th- to 12th-grade biology textbooks led to the verdict that none conveyed "Big Ideas" that would give coherence and meaning to the profusion of lavishly illustrated isolated details. However, neither the Project report itself nor the Benchmark books put out earlier by the Project carries what deserves the designation of "Big Ideas." Worse, in the two earliest-published Benchmark books, the basic unit of all life forms--the living cell--is described as a soup enclosed by a cell membrane that determines what can enter or leave the cell. This is astonishing, since extensive experimental evidence unequivocally disproved this idea 60 years ago. The "new" version of the membrane theory brought in to replace the discredited (sieve) version--the pump model, currently taught as established truth in all high-school and college biology textbooks--was also unequivocally disproved 40 years ago. This comment is written partly in response to Benchmark's gracious open invitation for ideas to improve the books and, through them, US secondary school science education.

  7. Information-Theoretic Benchmarking of Land Surface Models

    NASA Astrophysics Data System (ADS)

    Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong

    2016-04-01

    Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al. [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. Here we extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and boundary conditions, where the boundary conditions describe the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed about 40%. There was relatively little difference between the different models. [1] G. Abramowitz, R. Leuning, M. Clark, A. Pitman, Evaluating the performance of land surface models, Journal of Climate 21 (2008). [2] W. Gong, H. V. Gupta, D. Yang, K. Sricharan, A. O. Hero, Estimating epistemic and aleatory uncertainties during hydrologic modeling: an information theoretic approach, Water Resources Research 49, 2253-2273 (2013). [3] G. S. Nearing, H. V. Gupta, The quantity and quality of information in hydrologic models, Water Resources Research 51, 524-538 (2015). [4] H. V. Gupta, G. S. Nearing, Using models and data to learn: a systems theoretic perspective on the future of hydrological science, Water Resources Research 50(6), 5351-5359 (2014). [5] H. V. Gupta et al., Large-sample hydrology: a need to balance depth with breadth, Hydrology and Earth System Sciences Discussions 10, 9147-9189 (2013).

  8. Lessons Learned over Four Benchmark Exercises from the Community Structure-Activity Resource

    PubMed Central

    Carlson, Heather A.

    2016-01-01

    Preparing datasets and analyzing the results is difficult and time-consuming, and I hope the points raised here will help other scientists avoid some of the thorny issues we wrestled with. PMID:27345761

  9. High-Strength Composite Fabric Tested at Structural Benchmark Test Facility

    NASA Technical Reports Server (NTRS)

    Krause, David L.

    2002-01-01

    Large sheets of ultrahigh-strength fabric were put to the test at NASA Glenn Research Center's Structural Benchmark Test Facility. The material was stretched like a snare drum head until the last ounce of strength was reached, when it burst with a cacophonous release of tension. Along the way, the 3-ft square samples were also pulled, warped, tweaked, pinched, and yanked to predict the material's physical reactions to the many loads that it will experience during its proposed use. The material tested was a unique multi-ply composite fabric, reinforced with fibers that had a tensile strength eight times that of common carbon steel. The fiber plies were oriented at 0° and 90° to provide great membrane stiffness, as well as at 45° to provide an unusually high resistance to shear distortion. The fabric's heritage is in astronaut space suits and other NASA programs.

  10. Exploration of freely available web-interfaces for comparative homology modelling of microbial proteins.

    PubMed

    Nema, Vijay; Pal, Sudhir Kumar

    2013-01-01

    This study was conducted to find the best-suited freely available software for modelling of proteins, using a few sample proteins. The proteins used ranged from small to large in size, with available crystal structures, for the purpose of benchmarking. Key players like Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-V2, and Modweb were used for the comparison and model generation. The benchmarking process was carried out for four proteins, Icl, InhA, and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus, to identify the most suitable software. Parameters compared during the analysis gave relatively better values for Phyre2 and Swiss-Model. This comparative study showed that Phyre2 and Swiss-Model produce good models of small and large proteins compared with the other screened software, which also performed well but was often unable to provide full-length, properly folded structures.

  11. Interplate locking derived from seafloor geodetic measurement at the shallow subduction zone of the northernmost Suruga Trough, Japan

    NASA Astrophysics Data System (ADS)

    Yasuda, K.; Tadokoro, K.; Ikuta, R.; Watanabe, T.; Nagai, S.; Sayanagi, K.

    2013-12-01

    Observation of seafloor crustal deformation is crucial for megathrust earthquakes because most focal areas are located below the seafloor. Seafloor crustal deformation can be observed with the GPS/Acoustic technique, which has been applied at subduction margins in Japan, e.g., the Japan Trench, Suruga Trough, and Nankai Trough. At present, the accuracy of seafloor positioning is one to several centimeters for each epoch. Velocity vectors at seafloor sites are estimated through repeated observations, and co- and post-seismic slip distributions and interseismic deformation have been estimated from seafloor geodetic measurements (e.g., Iinuma et al., 2012; Tadokoro et al., 2012). We have repeatedly observed seafloor crustal deformation at two sites across the Suruga Trough since 2005 to investigate the interplate locking condition at the focal area of the anticipated megathrust (Tokai) earthquake. We observed 12 times at the east site (SNE) and 16 times at the west site (SNW). We reinstalled the seafloor benchmarks at both sites in 2012 because their batteries were exhausted, and we calculated and removed the bias between the old and new seafloor benchmarks. Furthermore, we evaluated two types of analysis. One is fixed-triangle analysis (FTA): when determining the seafloor benchmark position, we fix the triangular configuration of the seafloor units, averaging all measurements to mitigate the trade-off between benchmark position and sound speed structure, with the sound speed structure assumed to be horizontally layered. The other is fixed-triangle analysis with a gradient sound speed structure (FTGA), which fixes the triangular configuration as in FTA but assumes a laterally graded sound speed structure. Comparing the two, the RMS of the horizontal position from FTA is smaller than that from FTGA at SNE, whereas it is larger at SNW. We estimated displacement velocities relative to the Amurian plate from the repeated observations. The estimated velocity vectors at SNE and SNW are 42±8 mm/y toward N94°W and 46±13 mm/y toward N77°W, respectively. The directions are the same as those measured at the on-land GPS stations. The magnitudes of the velocity vectors indicate significant shortening of approximately 11 mm/y between SNW and the on-land GPS stations in the western part of the Suruga Trough. We also calculated the theoretical surface deformation pattern to depict the interplate locking condition. These results show that the plate interface at the shallow zone of the northernmost Suruga Trough is strongly locked.

  12. Inference of time-delayed gene regulatory networks based on dynamic Bayesian network hybrid learning method

    PubMed Central

    Yu, Bin; Xu, Jia-Meng; Li, Shan; Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Zhang, Yan; Wang, Ming-Hui

    2017-01-01

    Gene regulatory network (GRN) research reveals complex life phenomena from the perspective of gene interaction and is an important field in systems biology. Traditional Bayesian networks have a high computational complexity, and the network structure scoring model has a single feature. Information-based approaches cannot identify the direction of regulation. To make up for the shortcomings of these methods, this paper presents a novel hybrid learning method (DBNCS) based on dynamic Bayesian networks (DBN) to construct multiple time-delayed GRNs for the first time, combining the comprehensive score (CS) with the DBN model. The DBNCS algorithm first uses the CMI2NI (conditional mutual inclusive information-based network inference) algorithm to learn network structure profiles, i.e., to construct the search space. Redundant regulations are then removed using a recursive optimization (RO) algorithm, thereby reducing the false positive rate. Next, the network structure profiles are decomposed losslessly into a set of cliques, which significantly reduces the computational complexity. Finally, the DBN model is used to identify the direction of gene regulation within the cliques and to search for the optimal network structure. The performance of the DBNCS algorithm is evaluated on benchmark GRN datasets from the DREAM challenge as well as the SOS DNA repair network in Escherichia coli, and compared with other state-of-the-art methods. The experimental results show the rationality of the algorithm design and the strong performance of the reconstructed GRNs. PMID:29113310
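
    The CMI2NI step prunes the DBN search space by keeping only information-bearing candidate edges. A much simpler stand-in, plain mutual information between expression profiles with an arbitrary threshold, conveys the idea (synthetic data; this is not the CMI2NI algorithm):

      import numpy as np
      from sklearn.feature_selection import mutual_info_regression

      rng = np.random.default_rng(1)
      n_genes, n_samples = 5, 200
      expr = rng.normal(size=(n_samples, n_genes))
      expr[:, 1] += 0.8 * expr[:, 0]              # gene 0 regulates gene 1 (synthetic)

      # Score each candidate edge j -> i by mutual information between expression
      # profiles; edges below a threshold are pruned from the search space.
      for i in range(n_genes):
          mi = mutual_info_regression(expr, expr[:, i], random_state=0)
          parents = [j for j in range(n_genes) if j != i and mi[j] > 0.1]
          print(f"gene {i}: candidate regulators {parents}")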

  14. Markov Dynamics as a Zooming Lens for Multiscale Community Detection: Non Clique-Like Communities and the Field-of-View Limit

    PubMed Central

    Schaub, Michael T.; Delvenne, Jean-Charles; Yaliraki, Sophia N.; Barahona, Mauricio

    2012-01-01

    In recent years, there has been a surge of interest in community detection algorithms for complex networks. A variety of computational heuristics, some with a long history, have been proposed for the identification of communities or, alternatively, of good graph partitions. In most cases, the algorithms maximize a particular objective function, thereby finding the ‘right’ split into communities. Although a thorough comparison of algorithms is still lacking, there has been an effort to design benchmarks, i.e., random graph models with known community structure against which algorithms can be evaluated. However, popular community detection methods and benchmarks normally assume an implicit notion of community based on clique-like subgraphs, a form of community structure that is not always characteristic of real networks. Specifically, networks that emerge from geometric constraints can have natural non clique-like substructures with large effective diameters, which can be interpreted as long-range communities. In this work, we show that long-range communities escape detection by popular methods, which are blinded by a restricted ‘field-of-view’ limit, an intrinsic upper scale on the communities they can detect. The field-of-view limit means that long-range communities tend to be overpartitioned. We show how by adopting a dynamical perspective towards community detection [1], [2], in which the evolution of a Markov process on the graph is used as a zooming lens over the structure of the network at all scales, one can detect both clique- or non clique-like communities without imposing an upper scale to the detection. Consequently, the performance of algorithms on inherently low-diameter, clique-like benchmarks may not always be indicative of equally good results in real networks with local, sparser connectivity. We illustrate our ideas with constructive examples and through the analysis of real-world networks from imaging, protein structures and the power grid, where a multiscale structure of non clique-like communities is revealed. PMID:22384178
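
    The dynamical zooming lens can be sketched via Markov stability: the clustered autocovariance of a continuous-time random walk on the graph, where larger Markov times probe coarser, longer-range communities. A minimal version following the general construction in the works cited above (toy barbell graph; the simplifications are mine):

      import numpy as np
      import networkx as nx
      from scipy.linalg import expm

      def markov_stability(G, labels, t):
          # Stability r(t) = trace of the clustered autocovariance of a
          # continuous-time random walk: high values mean the walk tends to
          # stay inside communities over time t.
          A = nx.to_numpy_array(G)
          d = A.sum(axis=1)
          pi = d / d.sum()                            # stationary distribution
          L = np.diag(1.0 / d) @ A - np.eye(len(d))   # random-walk generator M - I
          P = expm(t * L)                             # transition matrix at time t
          H = np.zeros((len(d), max(labels) + 1))
          H[np.arange(len(d)), labels] = 1.0          # community indicator matrix
          R = H.T @ (np.diag(pi) @ P - np.outer(pi, pi)) @ H
          return np.trace(R)

      G = nx.barbell_graph(5, 0)                      # two cliques joined by one edge
      labels = [0] * 5 + [1] * 5
      for t in (0.1, 1.0, 10.0):
          print(t, round(markov_stability(G, labels, t), 3))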

  15. Benchmarking of protein descriptor sets in proteochemometric modeling (part 2): modeling performance of 13 amino acid descriptor sets

    PubMed Central

    2013-01-01

    Background While a large body of work exists on comparing and benchmarking descriptors of molecular structures, a similar comparison of protein descriptor sets is lacking. Hence, in the current work a total of 13 amino acid descriptor sets have been benchmarked with respect to their ability of establishing bioactivity models. The descriptor sets included in the study are Z-scales (3 variants), VHSE, T-scales, ST-scales, MS-WHIM, FASGAI, BLOSUM, a novel protein descriptor set (termed ProtFP (4 variants)), and in addition we created and benchmarked three pairs of descriptor combinations. Prediction performance was evaluated in seven structure-activity benchmarks which comprise Angiotensin Converting Enzyme (ACE) dipeptidic inhibitor data, and three proteochemometric data sets, namely (1) GPCR ligands modeled against a GPCR panel, (2) enzyme inhibitors (NNRTIs) with associated bioactivities against a set of HIV enzyme mutants, and (3) enzyme inhibitors (PIs) with associated bioactivities on a large set of HIV enzyme mutants. Results The amino acid descriptor sets compared here show similar performance (<0.1 log units RMSE difference and <0.1 difference in MCC), while errors for individual proteins were in some cases found to be larger than those resulting from descriptor set differences (>0.3 log units RMSE difference and >0.7 difference in MCC). Combining different descriptor sets generally leads to better modeling performance than utilizing individual sets. The best performers were Z-scales (3) combined with ProtFP (Feature), or Z-scales (3) combined with an average Z-scale value for each target, while ProtFP (PCA8), ST-scales, and ProtFP (Feature) rank last. Conclusions While amino acid descriptor sets capture different aspects of amino acids, their ability to be used for bioactivity modeling is still – on average – surprisingly similar. Still, combining sets describing complementary information consistently leads to a small but consistent improvement in modeling performance (average MCC 0.01 better, average RMSE 0.01 log units lower). Finally, performance differences exist between the targets compared, thereby underlining that choosing an appropriate descriptor set is of fundamental importance for bioactivity modeling, both from the ligand as well as the protein side. PMID:24059743
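
    Descriptor sets of this kind map each residue to a short numeric vector, so a peptide becomes a fixed-length feature vector for regression. A minimal sketch with made-up three-component values (placeholders for illustration, not the published Z-scales):

      import numpy as np

      # Placeholder 3-component descriptors per residue (NOT the published Z-scales).
      DESC = {"A": [0.24, -2.32, 0.60], "C": [0.84, -1.67, 3.71],
              "D": [3.98, 0.93, 1.93], "E": [3.11, 0.26, -0.11],
              "K": [2.29, 0.89, -2.49]}

      def encode(peptide):
          # Concatenate per-residue descriptors into one feature vector.
          return np.concatenate([DESC[aa] for aa in peptide])

      x = encode("ACDEK")     # 5 residues x 3 descriptors = 15 features
      print(x.shape, x[:3])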

  16. Benchmarking Inverse Statistical Approaches for Protein Structure and Design with Exactly Solvable Models.

    PubMed

    Jacquin, Hugo; Gilson, Amy; Shakhnovich, Eugene; Cocco, Simona; Monasson, Rémi

    2016-05-01

    Inverse statistical approaches to determine protein structure and function from multiple sequence alignments (MSAs) are emerging as powerful tools in computational biology. However, the underlying assumptions about the relationship between the inferred effective Potts Hamiltonian and real protein structure and energetics have so far remained untested. Here we use a lattice protein (LP) model to benchmark these inverse statistical approaches. We build MSAs of highly stable sequences in target LP structures and infer effective pairwise Potts Hamiltonians from those MSAs. We find that the inferred Potts Hamiltonians reproduce many important aspects of 'true' LP structures and energetics. Careful analysis reveals that effective pairwise couplings in inferred Potts Hamiltonians depend not only on the energetics of the native structure but also on competing folds; in particular, the coupling values reflect both positive design (stabilization of the native conformation) and negative design (destabilization of competing folds). In addition to providing detailed structural information, the inferred Potts models used as protein Hamiltonians for the design of new sequences are able to generate, with high probability, completely new sequences with the desired folds, which is not possible using independent-site models. These are remarkable results, as the effective LP Hamiltonians used to generate the MSAs are not simple pairwise models, due to the competition between the folds. Our findings elucidate the reasons for the success of inverse approaches to the modelling of proteins from sequence data, and their limitations.
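
    The inferred Potts model scores a sequence through single-site fields and pairwise couplings. A minimal sketch of evaluating such a Hamiltonian (random parameters and a two-letter alphabet, purely illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      L, q = 6, 2                              # sequence length, alphabet size
      h = rng.normal(size=(L, q))              # single-site fields h_i(a)
      J = rng.normal(size=(L, L, q, q)) * 0.1  # pairwise couplings J_ij(a, b)

      def potts_energy(seq):
          # E(s) = -sum_i h_i(s_i) - sum_{i<j} J_ij(s_i, s_j)
          e = -sum(h[i, seq[i]] for i in range(L))
          e -= sum(J[i, j, seq[i], seq[j]] for i in range(L) for j in range(i + 1, L))
          return e

      print(potts_energy(rng.integers(0, q, size=L)))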

  17. LYRA, a webserver for lymphocyte receptor structural modeling.

    PubMed

    Klausen, Michael Schantz; Anderson, Mads Valdemar; Jespersen, Martin Closter; Nielsen, Morten; Marcatili, Paolo

    2015-07-01

    The accurate structural modeling of B- and T-cell receptors is fundamental to gaining a detailed insight into the mechanisms underlying immunity and to developing new drugs and therapies. The LYRA (LYmphocyte Receptor Automated modeling) web server (http://www.cbs.dtu.dk/services/LYRA/) implements a complete and automated method for building B- and T-cell receptor structural models starting from their amino acid sequence alone. The webserver is freely available and easy to use for non-specialists. Upon submission, LYRA automatically generates alignments using ad hoc profiles, predicts the structural class of each hypervariable loop, selects the best templates in an automatic fashion, and provides within minutes a complete 3D model that can be downloaded or inspected online. Experienced users can manually select or exclude template structures according to case-specific information. LYRA is based on the canonical structure method, which in the last 30 years has been successfully used to generate antibody models of high accuracy, and in our benchmarks this approach proves to achieve similarly good results for TCR modeling, with a benchmarked average RMSD accuracy of 1.29 and 1.48 Å for B- and T-cell receptors, respectively. To the best of our knowledge, LYRA is the first automated server for the prediction of TCR structure. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. Some experiences in aircraft aeroelastic design using Preliminary Aeroelastic Design of Structures (PAD)

    NASA Technical Reports Server (NTRS)

    Radovcich, N. A.

    1984-01-01

    The design experience associated with a benchmark aeroelastic design of an out-of-production transport aircraft is discussed, along with current work on a high-aspect-ratio wing design. The Preliminary Aeroelastic Design of Structures (PADS) system is briefly summarized, and some operational aspects of generating the design in an automated aeroelastic design environment are discussed.

  19. Structural and Sequence Similarity Makes a Significant Impact on Machine-Learning-Based Scoring Functions for Protein-Ligand Interactions.

    PubMed

    Li, Yang; Yang, Jianyi

    2017-04-24

    The prediction of protein-ligand binding affinity has recently been improved remarkably by machine-learning-based scoring functions. For example, using a set of simple descriptors representing the atomic distance counts, RF-Score improves the Pearson correlation coefficient to about 0.8 on the core set of the PDBbind 2007 database, significantly higher than the performance of any conventional scoring function on the same benchmark. A few studies have discussed the performance of machine-learning-based methods, but the reason for this improvement remains unclear. In this study, by systematically controlling the structural and sequence similarity between the training and test proteins of the PDBbind benchmark, we demonstrate that protein structural and sequence similarity makes a significant impact on machine-learning-based methods. After removal of training proteins that are highly similar to the test proteins, as identified by structure alignment and sequence alignment, machine-learning-based methods trained on the new training sets no longer outperform the conventional scoring functions. On the contrary, the performance of conventional functions like X-Score is relatively stable no matter what training data are used to fit the weights of its energy terms.
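
    RF-Score-style descriptors simply count protein-ligand atom pairs by element type within a distance cutoff and feed the counts to a random forest. A minimal sketch on synthetic coordinates and fake affinities (toy data, not PDBbind; the published descriptor distinguishes more element types):

      import numpy as np
      from scipy.spatial.distance import cdist
      from sklearn.ensemble import RandomForestRegressor

      ELEMENTS = ["C", "N", "O", "S"]
      rng = np.random.default_rng(0)

      def pair_counts(prot_xyz, prot_el, lig_xyz, lig_el, cutoff=12.0):
          # Count protein-ligand atom pairs per element pair within the cutoff.
          d = cdist(prot_xyz, lig_xyz)
          feats = []
          for ep in ELEMENTS:
              for el in ELEMENTS:
                  mask = np.outer(np.array(prot_el) == ep, np.array(lig_el) == el)
                  feats.append(int((d[mask] < cutoff).sum()))
          return feats

      # Synthetic "complexes" with random coordinates and fake pKd labels.
      X, y = [], []
      for _ in range(50):
          p_xyz, l_xyz = rng.uniform(0, 20, (80, 3)), rng.uniform(5, 15, (20, 3))
          p_el, l_el = rng.choice(ELEMENTS, 80), rng.choice(ELEMENTS, 20)
          X.append(pair_counts(p_xyz, p_el, l_xyz, l_el))
          y.append(rng.uniform(2, 10))
      model = RandomForestRegressor(random_state=0).fit(X, y)
      print(model.predict([X[0]]))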

  20. Copper doped TiO2 nanoparticles characterized by X-ray absorption spectroscopy, total scattering, and powder diffraction--a benchmark structure-property study.

    PubMed

    Lock, Nina; Jensen, Ellen M L; Mi, Jianli; Mamakhel, Aref; Norén, Katarina; Qingbo, Meng; Iversen, Bo B

    2013-07-14

    Metal-functionalized nanoparticles potentially have improved properties, e.g., in catalytic applications, but their precise structures are often very challenging to determine. Here we report a structural benchmark study based on tetragonal anatase TiO2 nanoparticles containing 0-2 wt% copper. The particles were synthesized by continuous flow synthesis under supercritical water-isopropanol conditions. Size determination using synchrotron PXRD, TEM, and X-ray total scattering reveals 5-7 nm monodisperse particles. The precise dopant structure and thermal stability of the highly crystalline powders were characterized by X-ray absorption spectroscopy and multi-temperature synchrotron PXRD (300-1000 K). The combined evidence reveals that copper is present as a dopant on the particle surfaces, most likely in an amorphous oxide or hydroxide shell. UV-VIS spectroscopy shows that the presence of copper at concentrations above 0.3 wt% lowers the band gap energy. The particles are unaffected by heating to 600 K, while growth and partial transformation to rutile TiO2 occur at higher temperatures. Anisotropic unit cell behavior of anatase is observed as a consequence of the particle growth (a decreases and c increases).

  1. Design and Optimization of Composite Automotive Hatchback Using Integrated Material-Structure-Process-Performance Method

    NASA Astrophysics Data System (ADS)

    Yang, Xudong; Sun, Lingyu; Zhang, Cheng; Li, Lijun; Dai, Zongmiao; Xiong, Zhenkai

    2018-03-01

    The application of polymer composites as a substitute for metal is an effective approach to reducing vehicle weight. However, the final performance of composite structures is determined not only by the material types, structural designs, and manufacturing process, but also by their mutual constraints. Hence, an integrated "material-structure-process-performance" method is proposed for the conceptual and detailed design of composite components. Material selection is based on the principles of composite mechanics, such as the rule of mixtures for laminates. The design of component geometry, dimensions, and stacking sequence is determined by parametric modeling and size optimization. The selection of process parameters is based on multi-physical-field simulation. The stiffness and modal constraint conditions were obtained from numerical analysis of a metal benchmark under typical load conditions, and the optimal design was found by multi-disciplinary optimization. Finally, the proposed method was validated in an application case: an automotive hatchback using carbon-fiber-reinforced polymer. Compared with the metal benchmark, the weight of the composite hatchback is reduced by 38.8%, while its torsional and bending stiffness increase by 3.75% and 33.23%, respectively, and the first natural frequency increases by 44.78%.
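
    The rule of mixtures invoked above estimates effective laminate ply moduli from the fiber and matrix properties; in standard notation, with fiber volume fraction V_f (a textbook relation, not taken from this record):

      E_1 = V_f E_f + (1 - V_f) E_m, \qquad \frac{1}{E_2} = \frac{V_f}{E_f} + \frac{1 - V_f}{E_m}

    Here E_1 and E_2 are the longitudinal and transverse ply moduli, and E_f, E_m are the fiber and matrix moduli.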

  2. Packing Boxes into Multiple Containers Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Menghani, Deepak; Guha, Anirban

    2016-07-01

    Container loading problems have been studied extensively in the literature, and various analytical, heuristic, and metaheuristic methods have been proposed. This paper presents two variants of a genetic algorithm framework for the three-dimensional container loading problem: optimally loading boxes into multiple containers subject to constraints. The algorithms are designed so that the various constraints found in real-life problems are easy to incorporate. They are tested on standard test cases from the literature and compare well with benchmark algorithms in terms of container utilization. This, along with the ability to easily incorporate a wide range of practical constraints, makes them attractive for implementation in real-life scenarios.
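
    A minimal sketch of the encoding idea: chromosomes are box loading orders, decoding is greedy first-fit, and fitness is the number of containers used. This collapses the 3D geometry to a volume check, which the paper of course does not; it only illustrates the permutation-GA skeleton:

      import random
      random.seed(0)

      boxes = [4, 8, 1, 4, 2, 1, 7, 3, 6, 5]   # box volumes
      CAP = 10                                  # container volume capacity

      def decode(order):
          # Greedy first-fit: place each box into the first container with room.
          loads = []
          for b in (boxes[i] for i in order):
              for k, used in enumerate(loads):
                  if used + b <= CAP:
                      loads[k] += b
                      break
              else:
                  loads.append(b)
          return loads

      def fitness(order):                       # fewer containers is better
          return -len(decode(order))

      pop = [random.sample(range(len(boxes)), len(boxes)) for _ in range(30)]
      for _ in range(100):
          pop.sort(key=fitness, reverse=True)
          survivors = pop[:10]
          children = []
          for _ in range(20):                   # order crossover + swap mutation
              a, b = random.sample(survivors, 2)
              cut = random.randrange(len(boxes))
              child = a[:cut] + [g for g in b if g not in a[:cut]]
              i, j = random.sample(range(len(boxes)), 2)
              child[i], child[j] = child[j], child[i]
              children.append(child)
          pop = survivors + children
      print("containers used:", len(decode(pop[0])))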

  3. The National Practice Benchmark for Oncology: 2015 Report for 2014 Data

    PubMed Central

    Balch, Carla; Ogle, John D.

    2016-01-01

    The National Practice Benchmark (NPB) is a unique tool used to measure oncology practices against others across the country in a meaningful way despite variations in practice demographics, size, and setting. In today’s challenging economic environment, each practice positions service offerings and competitive advantages to attract patients. Although the data in the NPB report are primarily reported by community oncology practices, the business structure and arrangements with regional health care systems are also reflected in the benchmark report. The ability to produce detailed metrics is an accomplishment of excellence in business and clinical management. With these metrics, a practice should be able to measure and analyze its current business practices and make appropriate changes, if necessary. In this report, we build on the foundation initially established by Oncology Metrics (acquired by Flatiron Health in 2014) over years of data collection and refine definitions to deliver the NPB, which is uniquely meaningful in the oncology market. PMID:27006357

  4. Docking and scoring with ICM: the benchmarking results and strategies for improvement

    PubMed Central

    Neves, Marco A. C.; Totrov, Maxim; Abagyan, Ruben

    2012-01-01

    Flexible docking and scoring using the Internal Coordinate Mechanics software (ICM) was benchmarked for ligand binding mode prediction against the 85 co-crystal structures in the modified Astex data set. ICM virtual ligand screening was tested against the 40 DUD target benchmarks and the 11-target WOMBAT sets. Self-docking accuracy was evaluated for the top 1 and top 3 scoring poses at each ligand binding site, with near-native conformations below 2 Å RMSD found in 91% and 95% of the predictions, respectively. Virtual ligand screening using single rigid pocket conformations yielded a median area under the ROC curve of 69.4, with 22.0% of true positives recovered at a 2% false positive rate. Significant improvements, up to ROC AUC = 82.2 and ROC(2%) = 45.2, were achieved following our best practices for flexible pocket refinement and out-of-pocket binding rescoring. The virtual screening can be further improved by considering multiple conformations of the target. PMID:22569591

  5. Benchmarking a Soil Moisture Data Assimilation System for Agricultural Drought Monitoring

    NASA Technical Reports Server (NTRS)

    Hun, Eunjin; Crow, Wade T.; Holmes, Thomas; Bolten, John

    2014-01-01

    Despite considerable interest in the application of land surface data assimilation systems (LDAS) for agricultural drought applications, relatively little is known about the large-scale performance of such systems and, thus, the optimal methodological approach for implementing them. To address this need, this paper evaluates an LDAS for agricultural drought monitoring by benchmarking individual components of the system (i.e., a satellite soil moisture retrieval algorithm, a soil water balance model, and a sequential data assimilation filter) against a series of linear models which perform the same function (i.e., have the same basic input-output structure) as the full system component. Benchmarking is based on the calculation of the lagged rank cross-correlation between the normalized difference vegetation index (NDVI) and soil moisture estimates acquired for various components of the system. Lagged soil moisture-NDVI correlations obtained using individual LDAS components versus their linear analogs reveal the degree to which non-linearities and/or complexities contained within each component actually contribute to the performance of the LDAS system as a whole. Here, a particular system based on surface soil moisture retrievals from the Land Parameter Retrieval Model (LPRM), a two-layer Palmer soil water balance model, and an Ensemble Kalman filter (EnKF) is benchmarked. Results suggest significant room for improvement in each component of the system.
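
    The benchmark metric, a lagged rank cross-correlation between soil moisture and NDVI, is straightforward to compute with scipy; a sketch on synthetic series in which vegetation responds about 20 steps after soil moisture:

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(0)
      sm = rng.normal(size=300)                            # soil moisture anomalies
      ndvi = np.roll(sm, 20) + 0.5 * rng.normal(size=300)  # vegetation lags ~20 steps

      def lagged_rank_corr(x, y, lag):
          # Spearman rank correlation between x(t) and y(t + lag).
          if lag > 0:
              return spearmanr(x[:-lag], y[lag:]).correlation
          return spearmanr(x, y).correlation

      for lag in (0, 10, 20, 30):
          print(lag, round(lagged_rank_corr(sm, ndvi, lag), 2))  # peaks near lag 20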

  6. MECHANICAL DESIGN CRITERIA FOR INTERVERTEBRAL DISC TISSUE ENGINEERING

    PubMed Central

    Nerurkar, Nandan L.; Elliott, Dawn M.; Mauck, Robert L.

    2009-01-01

    Due to the inability of current clinical practices to restore function to degenerated intervertebral discs, the arena of disc tissue engineering has received substantial attention in recent years. Despite tremendous growth and progress in this field, translation to clinical implementation has been hindered by a lack of well-defined functional benchmarks. Because successful replacement of the disc is contingent upon replication of some or all of its complex mechanical behaviour, it is critically important that disc mechanics be well characterized in order to establish discrete functional goals for tissue engineering. In this review, the key functional signatures of the intervertebral disc are discussed and used to propose a series of native tissue benchmarks to guide the development of engineered replacement tissues. These benchmarks include measures of mechanical function under tensile, compressive and shear deformations for the disc and its substructures. In some cases, important functional measures are identified that have yet to be measured in the native tissue. Ultimately, native tissue benchmark values are compared to measurements that have been made on engineered disc tissues, identifying measures where functional equivalence was achieved, and others where there remain opportunities for advancement. Several excellent reviews exist regarding disc composition and structure, as well as recent tissue engineering strategies; therefore this review will remain focused on the functional aspects of disc tissue engineering. PMID:20080239

  7. Benchmarking and Self-Assessment in the Wine Industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galitsky, Christina; Radspieler, Anthony; Worrell, Ernst

    2005-12-01

    Not all industrial facilities have the staff or the opportunity to perform a detailed audit of their operations. The lack of knowledge of energy efficiency opportunities provides an important barrier to improving efficiency. Benchmarking programs in the U.S. and abroad have been shown to improve knowledge of the energy performance of industrial facilities and buildings and to fuel energy management practices. Benchmarking provides a fair way to compare the energy intensity of plants, while accounting for structural differences (e.g., the mix of products produced, climate conditions) between different facilities. In California, the winemaking industry is not only one of the economic pillars of the economy; it is also a large energy consumer, with a considerable potential for energy-efficiency improvement. Lawrence Berkeley National Laboratory and Fetzer Vineyards developed the first benchmarking tool for the California wine industry, called "BEST (Benchmarking and Energy and water Savings Tool) Winery". BEST Winery enables a winery to compare its energy efficiency to a best-practice reference winery. Besides overall performance, the tool enables the user to evaluate the impact of implementing efficiency measures. The tool facilitates strategic planning of efficiency measures, based on the estimated impact of the measures, their costs, and savings. The tool will raise awareness of current energy intensities and offer an efficient way to evaluate the impact of future efficiency measures.

  8. Benchmarking reference services: step by step.

    PubMed

    Buchanan, H S; Marshall, J G

    1996-01-01

    This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.

  9. Watershed Regressions for Pesticides (WARP) for Predicting Annual Maximum and Annual Maximum Moving-Average Concentrations of Atrazine in Streams

    USGS Publications Warehouse

    Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.

    2008-01-01

    Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize the probable levels of atrazine for comparison to specific water-quality benchmarks. Sites with a high probability of exceeding a benchmark for human health or aquatic life can be prioritized for monitoring.

  10. Magnesium-binding architectures in RNA crystal structures: validation, binding preferences, classification and motif detection

    PubMed Central

    Zheng, Heping; Shabalin, Ivan G.; Handing, Katarzyna B.; Bujnicki, Janusz M.; Minor, Wladek

    2015-01-01

    The ubiquitous presence of magnesium ions in RNA has long been recognized as a key factor governing RNA folding, and is crucial for many diverse functions of RNA molecules. In this work, Mg2+-binding architectures in RNA were systematically studied using a database of RNA crystal structures from the Protein Data Bank (PDB). Due to the abundance of poorly modeled or incorrectly identified Mg2+ ions, the set of all sites was comprehensively validated and filtered to identify a benchmark dataset of 15 334 ‘reliable’ RNA-bound Mg2+ sites. The normalized frequencies by which specific RNA atoms coordinate Mg2+ were derived for both the inner and outer coordination spheres. A hierarchical classification system of Mg2+ sites in RNA structures was designed and applied to the benchmark dataset, yielding a set of 41 types of inner-sphere and 95 types of outer-sphere coordinating patterns. This classification system has also been applied to describe six previously reported Mg2+-binding motifs and detect them in new RNA structures. Investigation of the most populous site types resulted in the identification of seven novel Mg2+-binding motifs, and all RNA structures in the PDB were screened for the presence of these motifs. PMID:25800744

  11. Global structure of forked DNA in solution revealed by high-resolution single-molecule FRET.

    PubMed

    Sabir, Tara; Schröder, Gunnar F; Toulmin, Anita; McGlynn, Peter; Magennis, Steven W

    2011-02-09

    Branched DNA structures play critical roles in DNA replication, repair, and recombination in addition to being key building blocks for DNA nanotechnology. Here we combine single-molecule multiparameter fluorescence detection and molecular dynamics simulations to give a general approach to global structure determination of branched DNA in solution. We reveal an open, planar structure of a forked DNA molecule with three duplex arms and demonstrate an ion-induced conformational change. This structure will serve as a benchmark for DNA-protein interaction studies.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Ran; Feldman, David; Margolis, Robert

    NREL has been modeling U.S. photovoltaic (PV) system costs since 2009. This year, our report benchmarks costs of U.S. solar PV for residential, commercial, and utility-scale systems built in the first quarter of 2017 (Q1 2017). Costs are represented from the perspective of the developer/installer, thus all hardware costs represent the price at which components are purchased by the developer/installer, not accounting for preexisting supply agreements or other contracts. Importantly, the benchmark this year (2017) also represents the sales price paid to the installer; therefore, it includes profit in the cost of the hardware, along with the profit the installer/developer receives, as a separate cost category. However, it does not include any additional net profit, such as a developer fee or price gross-up, which are common in the marketplace. We adopt this approach owing to the wide variation in developer profits in all three sectors, where project pricing is highly dependent on region and project specifics such as local retail electricity rate structures, local rebate and incentive structures, competitive environment, and overall project or deal structures.

  13. Systematic chemical-genetic and chemical-chemical interaction datasets for prediction of compound synergism

    PubMed Central

    Wildenhain, Jan; Spitzer, Michaela; Dolma, Sonam; Jarvik, Nick; White, Rachel; Roy, Marcia; Griffiths, Emma; Bellows, David S.; Wright, Gerard D.; Tyers, Mike

    2016-01-01

    The network structure of biological systems suggests that effective therapeutic intervention may require combinations of agents that act synergistically. However, a dearth of systematic chemical combination datasets has limited the development of predictive algorithms for chemical synergism. Here, we report two large datasets of linked chemical-genetic and chemical-chemical interactions in the budding yeast Saccharomyces cerevisiae. We screened 5,518 unique compounds against 242 diverse yeast gene deletion strains to generate an extended chemical-genetic matrix (CGM) of 492,126 chemical-gene interaction measurements. This CGM dataset contained 1,434 genotype-specific inhibitors, termed cryptagens. We selected 128 structurally diverse cryptagens and tested all pairwise combinations to generate a benchmark dataset of 8,128 pairwise chemical-chemical interaction tests for synergy prediction, termed the cryptagen matrix (CM). An accompanying database resource called ChemGRID was developed to enable analysis, visualisation and downloads of all data. The CGM and CM datasets will facilitate the benchmarking of computational approaches for synergy prediction, as well as chemical structure-activity relationship models for anti-fungal drug discovery. PMID:27874849

  14. Augmented neural networks and problem structure-based heuristics for the bin-packing problem

    NASA Astrophysics Data System (ADS)

    Kasap, Nihat; Agarwal, Anurag

    2012-08-01

    In this article, we report on a research project where we applied augmented-neural-networks (AugNNs) approach for solving the classical bin-packing problem (BPP). AugNN is a metaheuristic that combines a priority rule heuristic with the iterative search approach of neural networks to generate good solutions fast. This is the first time this approach has been applied to the BPP. We also propose a decomposition approach for solving harder BPP, in which subproblems are solved using a combination of AugNN approach and heuristics that exploit the problem structure. We discuss the characteristics of problems on which such problem structure-based heuristics could be applied. We empirically show the effectiveness of the AugNN and the decomposition approach on many benchmark problems in the literature. For the 1210 benchmark problems tested, 917 problems were solved to optimality and the average gap between the obtained solution and the upper bound for all the problems was reduced to under 0.66% and computation time averaged below 33 s per problem. We also discuss the computational complexity of our approach.
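    The gaps reported above are computed against bounds on the optimal bin count. As a point of reference (not the AugNN algorithm itself, which is the authors' own metaheuristic), the sketch below shows the classic first-fit-decreasing heuristic for the BPP and the standard continuous lower bound used to measure such gaps; the item list is illustrative.

      # First-fit-decreasing (FFD) baseline for bin packing, with the optimality
      # gap measured against the continuous lower bound ceil(sum(items)/capacity).
      import math

      def first_fit_decreasing(items, capacity):
          """Pack items greedily, largest first, into the first bin that fits."""
          bins = []  # remaining free space per open bin
          for item in sorted(items, reverse=True):
              for i, free in enumerate(bins):
                  if item <= free:
                      bins[i] = free - item
                      break
              else:
                  bins.append(capacity - item)  # no bin fits: open a new one
          return len(bins)

      items = [42, 63, 67, 57, 93, 90, 38, 36, 45, 42]
      capacity = 100
      used = first_fit_decreasing(items, capacity)
      lower_bound = math.ceil(sum(items) / capacity)
      gap = (used - lower_bound) / lower_bound
      print(f"FFD bins: {used}, lower bound: {lower_bound}, gap: {gap:.1%}")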

  15. AlloRep: A Repository of Sequence, Structural and Mutagenesis Data for the LacI/GalR Transcription Regulators.

    PubMed

    Sousa, Filipa L; Parente, Daniel J; Shis, David L; Hessman, Jacob A; Chazelle, Allen; Bennett, Matthew R; Teichmann, Sarah A; Swint-Kruse, Liskin

    2016-02-22

    Protein families evolve functional variation by accumulating point mutations at functionally important amino acid positions. Homologs in the LacI/GalR family of transcription regulators have evolved to bind diverse DNA sequences and allosteric regulatory molecules. In addition to playing key roles in bacterial metabolism, these proteins have been widely used as a model family for benchmarking structural and functional prediction algorithms. We have collected manually curated sequence alignments for >3000 sequences, in vivo phenotypic and biochemical data for >5750 LacI/GalR mutational variants, and noncovalent residue contact networks for 65 LacI/GalR homolog structures. Using this rich data resource, we compared the noncovalent residue contact networks of the LacI/GalR subfamilies to design and experimentally validate an allosteric mutant of a synthetic LacI/GalR repressor for use in biotechnology. The AlloRep database (freely available at www.AlloRep.org) is a key resource for future evolutionary studies of LacI/GalR homologs and for benchmarking computational predictions of functional change. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Effects of gamma irradiation on the shelf-life of a dairy-like product

    NASA Astrophysics Data System (ADS)

    Odueke, Oluwakemi B.; Chadd, Stephen A.; Baines, Richard N.; Farag, Karim W.; Jansson, Jonathan

    2018-02-01

    This study aimed to assess the effect of irradiation on the shelf-life of a pseudo-dairy food product consisting of different concentration levels of the structural and energy-giving caloric component macronutrients (protein, fat and carbohydrate). Gamma irradiated products (1 kGy, 3 kGy, 5 kGy and 10 kGy) were compared to the current procedure used by the industry for non-irradiated dairy products. The study looked at the impact of the different treatments on storage quality with respect to physicochemical properties (pH, acidity, macronutrients) and microbiological properties [total viable count (TVC)]. The products were aseptically packaged in plastic containers and analysed at regular weekly intervals up to 100 days during refrigerated storage at 4 ± 1 °C. The storage period did not bring about any significant change in the physicochemical properties of the products throughout the period of study, while the TVC displayed a linear regression for irradiated products stored at 4 ± 1 °C as well as for the control (non-irradiated). At the end of the shelf-life trial (benchmarked at log 4.3 CFU/g), the total viable count did not exceed log 3.94 CFU/g for samples treated at 10 kGy after 100 days of analysis. These observations indicated that the product could be safely stored aerobically for >100 days (10 and 5 kGy), 56 days (3 kGy), and 42 days (1 kGy) for the irradiated samples, and 14-28 days for the non-irradiated samples, without much change in physicochemical and microbiological properties under refrigerated storage.
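    Given the reported linear TVC growth and the log 4.3 CFU/g benchmark, the shelf-life estimate amounts to fitting a line to log counts over time and solving for the crossing day. The sketch below illustrates this with made-up counts, not the study's measurements.

      # Fit a straight line to log10 TVC over storage time and solve for the day
      # the log 4.3 CFU/g shelf-life benchmark would be crossed. Counts are
      # illustrative stand-ins for a 10 kGy sample, not the study's data.
      import numpy as np

      days = np.array([0, 14, 28, 42, 56, 70, 84, 100])
      log_tvc = np.array([2.1, 2.4, 2.6, 2.9, 3.1, 3.4, 3.6, 3.9])

      slope, intercept = np.polyfit(days, log_tvc, 1)
      benchmark = 4.3  # log10 CFU/g end-of-shelf-life criterion from the study
      shelf_life_days = (benchmark - intercept) / slope

      print(f"growth rate: {slope:.3f} log10 CFU/g per day")
      print(f"estimated shelf life: {shelf_life_days:.0f} days (>100 days, consistent "
            "with the reported result for the 10 kGy treatment)")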

  17. A Comparison of Coverage Restrictions for Biopharmaceuticals and Medical Procedures.

    PubMed

    Chambers, James; Pope, Elle; Bungay, Kathy; Cohen, Joshua; Ciarametaro, Michael; Dubois, Robert; Neumann, Peter J

    2018-04-01

    Differences in payer evaluation and coverage of pharmaceuticals and medical procedures suggest that coverage may differ for medications and procedures independent of their clinical benefit. We hypothesized that coverage for medications is more restricted than corresponding coverage for nonmedication interventions. We included top-selling medications and highly utilized procedures. For each intervention-indication pair, we classified value in terms of cost-effectiveness (incremental cost per quality-adjusted life-year), as reported by the Tufts Medical Center Cost-Effectiveness Analysis Registry. For each intervention-indication pair and for each of 10 large payers, we classified coverage, when available, as either "more restrictive" or as "not more restrictive," compared with a benchmark. The benchmark reflected the US Food and Drug Administration label information, when available, or pertinent clinical guidelines. We compared coverage policies and the benchmark in terms of step edits and clinical restrictions. Finally, we regressed coverage restrictiveness against intervention type (medication or nonmedication), controlling for value (cost-effectiveness more or less favorable than a designated threshold). We identified 392 medication and 185 procedure coverage decisions. A total of 26.3% of the medication coverage and 38.4% of the procedure coverage decisions were more restrictive than their corresponding benchmarks. After controlling for value, the odds of being more restrictive were 42% lower for medications than for procedures. Including unfavorable tier placement in the definition of "more restrictive" greatly increased the proportion of medication coverage decisions classified as "more restrictive" and reversed our findings. Therapy access depends on factors other than cost and clinical benefit, suggesting potential health care system inefficiency. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  18. Validation of molecular crystal structures from powder diffraction data with dispersion-corrected density functional theory (DFT-D).

    PubMed

    van de Streek, Jacco; Neumann, Marcus A

    2014-12-01

    In 2010 we energy-minimized 225 high-quality single-crystal (SX) structures with dispersion-corrected density functional theory (DFT-D) to establish a quantitative benchmark. For the current paper, 215 organic crystal structures determined from X-ray powder diffraction (XRPD) data and published in an IUCr journal were energy-minimized with DFT-D and compared to the SX benchmark. The on average slightly less accurate atomic coordinates of XRPD structures do lead to systematically higher root mean square Cartesian displacement (RMSCD) values upon energy minimization than for SX structures, but the RMSCD value is still a good indicator for the detection of structures that deserve a closer look. The upper RMSCD limit for a correct structure must be increased from 0.25 Å for SX structures to 0.35 Å for XRPD structures; the grey area must be extended from 0.30 to 0.40 Å. Based on the energy minimizations, three structures are re-refined to give more precise atomic coordinates. For six structures our calculations provide the missing positions for the H atoms, for five structures they provide corrected positions for some H atoms. Seven crystal structures showed a minor error for a non-H atom. For five structures the energy minimizations suggest a higher space-group symmetry. For the 225 SX structures, the only deviations observed upon energy minimization were three minor H-atom related issues. Preferred orientation is the most important cause of problems. A preferred-orientation correction is the only correction where the experimental data are modified to fit the model. We conclude that molecular crystal structures determined from powder diffraction data that are published in IUCr journals are of high quality, with less than 4% containing an error in a non-H atom.
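    The RMSCD statistic at the center of this validation is just the root mean square Cartesian displacement of matched atoms between the experimental structure and its DFT-D energy minimum, compared against the thresholds quoted above. A minimal sketch follows, assuming the two coordinate sets are already in the same frame and matched atom-by-atom; the coordinates are made up.

      # RMSCD check with the XRPD thresholds from the paper:
      # <= 0.35 A correct, 0.35-0.40 A grey area, above that a likely error.
      import numpy as np

      def rmscd(xyz_experimental, xyz_minimized):
          """RMS Cartesian displacement (Angstrom) between matched atoms."""
          d = np.asarray(xyz_experimental) - np.asarray(xyz_minimized)
          return float(np.sqrt((d ** 2).sum(axis=1).mean()))

      def classify_xrpd(value):
          if value <= 0.35:
              return "consistent with a correct structure"
          if value <= 0.40:
              return "grey area - deserves a closer look"
          return "likely error - re-examine the refinement"

      expt = np.array([[0.00, 0.00, 0.00], [1.45, 0.10, 0.00], [2.10, 1.30, 0.05]])
      mini = np.array([[0.05, -0.03, 0.02], [1.50, 0.02, -0.04], [2.02, 1.38, 0.00]])
      value = rmscd(expt, mini)
      print(f"RMSCD = {value:.3f} A: {classify_xrpd(value)}")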

  19. Validation of molecular crystal structures from powder diffraction data with dispersion-corrected density functional theory (DFT-D)

    PubMed Central

    van de Streek, Jacco; Neumann, Marcus A.

    2014-01-01

    In 2010 we energy-minimized 225 high-quality single-crystal (SX) structures with dispersion-corrected density functional theory (DFT-D) to establish a quantitative benchmark. For the current paper, 215 organic crystal structures determined from X-ray powder diffraction (XRPD) data and published in an IUCr journal were energy-minimized with DFT-D and compared to the SX benchmark. The on average slightly less accurate atomic coordinates of XRPD structures do lead to systematically higher root mean square Cartesian displacement (RMSCD) values upon energy minimization than for SX structures, but the RMSCD value is still a good indicator for the detection of structures that deserve a closer look. The upper RMSCD limit for a correct structure must be increased from 0.25 Å for SX structures to 0.35 Å for XRPD structures; the grey area must be extended from 0.30 to 0.40 Å. Based on the energy minimizations, three structures are re-refined to give more precise atomic coordinates. For six structures our calculations provide the missing positions for the H atoms, for five structures they provide corrected positions for some H atoms. Seven crystal structures showed a minor error for a non-H atom. For five structures the energy minimizations suggest a higher space-group symmetry. For the 225 SX structures, the only deviations observed upon energy minimization were three minor H-atom related issues. Preferred orientation is the most important cause of problems. A preferred-orientation correction is the only correction where the experimental data are modified to fit the model. We conclude that molecular crystal structures determined from powder diffraction data that are published in IUCr journals are of high quality, with less than 4% containing an error in a non-H atom. PMID:25449625

  20. Benchmarking protein-protein interface predictions: why you should care about protein size.

    PubMed

    Martin, Juliette

    2014-07-01

    A number of predictive methods have been developed to predict protein-protein binding sites. Each new method is traditionally benchmarked using sets of protein structures of various sizes, and global statistics are used to assess the quality of the prediction. Little attention has been paid to the potential bias due to protein size on these statistics. Indeed, small proteins involve proportionally more residues at interfaces than large ones. If a predictive method is biased toward small proteins, this can lead to an over-estimation of its performance. Here, we investigate the bias due to the size effect when benchmarking protein-protein interface prediction on the widely used docking benchmark 4.0. First, we simulate random scores that favor small proteins over large ones. Instead of the 0.5 AUC (Area Under the Curve) value expected by chance, these biased scores result in an AUC equal to 0.6 using hypergeometric distributions, and up to 0.65 using constant scores. We then use real prediction results to illustrate how to detect the size bias by shuffling, and subsequently correct it using a simple conversion of the scores into normalized ranks. In addition, we investigate the scores produced by eight published methods and show that they are all affected by the size effect, which can change their relative ranking. The size effect also has an impact on linear combination scores by modifying the relative contributions of each method. In the future, systematic corrections should be applied when benchmarking predictive methods using data sets with mixed protein sizes. © 2014 Wiley Periodicals, Inc.
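    The size-bias effect is easy to reproduce in miniature: give every residue of a protein an identical random score that favors small proteins, and the pooled AUC rises above the 0.5 expected by chance; converting scores to within-protein normalized ranks removes the artifact. The simulation below is a sketch under assumed protein sizes and interface fractions, not the paper's exact protocol.

      # Simulate the size bias: constant per-protein scores proportional to 1/size
      # inflate the pooled AUC; per-protein normalized ranks restore ~0.5.
      import numpy as np

      rng = np.random.default_rng(1)

      def auc(scores, labels):
          """Rank-based AUC: probability a positive outranks a negative."""
          order = np.argsort(scores)
          ranks = np.empty(len(scores)); ranks[order] = np.arange(1, len(scores) + 1)
          pos = labels == 1
          n_pos, n_neg = pos.sum(), (~pos).sum()
          return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

      all_scores, all_labels, all_ranks = [], [], []
      for _ in range(200):                          # 200 simulated proteins
          size = int(rng.integers(50, 500))
          frac_interface = 25 / size + 0.05         # small proteins: more interface
          y = (rng.random(size) < frac_interface).astype(int)
          s = np.full(size, 1.0 / size) + rng.normal(0, 1e-6, size)  # favors small
          r = np.argsort(np.argsort(s)) / max(size - 1, 1)  # per-protein rank in [0,1]
          all_scores.append(s); all_labels.append(y); all_ranks.append(r)

      scores = np.concatenate(all_scores)
      labels = np.concatenate(all_labels)
      ranks = np.concatenate(all_ranks)
      print(f"biased random scores: AUC = {auc(scores, labels):.2f}")   # > 0.5
      print(f"normalized ranks:     AUC = {auc(ranks, labels):.2f}")    # ~ 0.5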

  1. OrderRex: clinical order decision support and outcome predictions by data-mining electronic medical records

    PubMed Central

    Chen, Jonathan H; Podchiyska, Tanya

    2016-01-01

    Objective: To answer a “grand challenge” in clinical decision support, the authors produced a recommender system that automatically data-mines inpatient decision support from electronic medical records (EMR), analogous to Netflix or Amazon.com’s product recommender. Materials and Methods: EMR data were extracted from 1 year of hospitalizations (>18K patients with >5.4M structured items including clinical orders, lab results, and diagnosis codes). Association statistics were counted for the ∼1.5K most common items to drive an order recommender. The authors assessed the recommender’s ability to predict hospital admission orders and outcomes based on initial encounter data from separate validation patients. Results: Compared to a reference benchmark of using the overall most common orders, the recommender using temporal relationships improves precision at 10 recommendations from 33% to 38% (P < 10⁻¹⁰) for hospital admission orders. Relative risk-based association methods improve inverse frequency weighted recall from 4% to 16% (P < 10⁻¹⁶). The framework yields a prediction receiver operating characteristic area under curve (c-statistic) of 0.84 for 30-day mortality, 0.84 for 1-week need for ICU life support, 0.80 for 1-week hospital discharge, and 0.68 for 30-day readmission. Discussion: Recommender results quantitatively improve on reference benchmarks and qualitatively appear clinically reasonable. The method assumes that aggregate decision making converges appropriately, but ongoing evaluation is necessary to discern common behaviors from “correct” ones. Conclusions: Collaborative filtering recommender algorithms generate clinical decision support that is predictive of real practice patterns and clinical outcomes. Incorporating temporal relationships improves accuracy. Different evaluation metrics satisfy different goals (predicting likely events vs. “interesting” suggestions). PMID:26198303
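    The essence of such an association-count recommender, and of the precision-at-10 evaluation, can be sketched in a few lines. The encounter data, order codes, and counts below are toy values, not the authors' EMR-derived statistics or their exact association measures.

      # Co-occurrence order recommender sketch: count how often each candidate
      # order follows the orders already placed, recommend the top k, and score
      # precision@k against the orders that actually occurred.
      from collections import Counter, defaultdict

      encounters = [                                  # toy "training" encounters
          ["cbc", "bmp", "cxr", "ceftriaxone"],
          ["cbc", "bmp", "troponin", "ecg"],
          ["cbc", "cxr", "ceftriaxone", "blood_cx"],
          ["troponin", "ecg", "aspirin"],
      ]

      co_counts = defaultdict(Counter)
      for orders in encounters:
          for i, a in enumerate(orders):
              for b in orders[i + 1:]:                # b observed after a
                  co_counts[a][b] += 1

      def recommend(initial_orders, k=10):
          pooled = Counter()
          for a in initial_orders:
              pooled.update(co_counts[a])
          for a in initial_orders:                    # don't re-recommend existing orders
              pooled.pop(a, None)
          return [item for item, _ in pooled.most_common(k)]

      recs = recommend(["cbc", "bmp"], k=10)
      actual = {"cxr", "ceftriaxone", "troponin"}     # toy "what actually happened"
      precision = len(actual.intersection(recs)) / max(len(recs), 1)
      print(recs, f"precision@10 = {precision:.2f}")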

  2. Oncology practice trends from the national practice benchmark.

    PubMed

    Barr, Thomas R; Towle, Elaine L

    2012-09-01

    In 2011, we made predictions on the basis of data from the National Practice Benchmark (NPB) reports from 2005 through 2010. With the new 2011 data in hand, we have revised last year's predictions and projected for the next 3 years. In addition, we make some new predictions that will be tracked in future benchmarking surveys. We also outline a conceptual framework for contemplating these data based on an ecological model of the oncology delivery system. The 2011 NPB data are consistent with last year's prediction of a decrease in the operating margins necessary to sustain a community oncology practice. With the new data in, we now predict these reductions to occur more slowly than previously forecast. We note an ease to the squeeze observed in last year's trend analysis, which will allow more time for practices to adapt their business models for survival and offer the best of these practices an opportunity to invest earnings into operations to prepare for the inevitable shift away from historic payment methodology for clinical service. This year, survey respondents reported changes in business structure, first measured in the 2010 data, indicating an increase in the percentage of respondents who believe that change is coming soon, but the majority still have confidence in the viability of their existing business structure. Although oncology practices are in for a bumpy ride, things are looking less dire this year for practices participating in our survey.

  3. HDOCK: a web server for protein–protein and protein–DNA/RNA docking based on a hybrid strategy

    PubMed Central

    Yan, Yumeng; Zhang, Di; Zhou, Pei; Li, Botong

    2017-01-01

    Abstract Protein–protein and protein–DNA/RNA interactions play a fundamental role in a variety of biological processes. Determining the complex structures of these interactions is valuable, in which molecular docking has played an important role. To automatically make use of the binding information from the PDB in docking, here we have presented HDOCK, a novel web server of our hybrid docking algorithm of template-based modeling and free docking, in which cases with misleading templates can be rescued by the free docking protocol. The server supports protein–protein and protein–DNA/RNA docking and accepts both sequence and structure inputs for proteins. The docking process is fast and consumes about 10–20 min for a docking run. Tested on the cases with weakly homologous complexes of <30% sequence identity from five docking benchmarks, the HDOCK pipeline tied with template-based modeling on the protein–protein and protein–DNA benchmarks and performed better than template-based modeling on the three protein–RNA benchmarks when the top 10 predictions were considered. The performance of HDOCK became better when more predictions were considered. Combining the results of HDOCK and template-based modeling, with the template-based model ranked first, further improved the predictive power of the server. The HDOCK web server is available at http://hdock.phys.hust.edu.cn/. PMID:28521030

  4. Influencers on quality of life as reported by people living with dementia in long-term care: a descriptive exploratory approach.

    PubMed

    Moyle, Wendy; Fetherstonhaugh, Deirdre; Greben, Melissa; Beattie, Elizabeth

    2015-04-23

    Over half of the residents in long-term care have a diagnosis of dementia. Maintaining quality of life is important, as there is no cure for dementia. Quality of life may be used as a benchmark for caregiving, and can help to enhance respect for the person with dementia and to improve care provision. The purpose of this study was to describe quality of life as reported by people living with dementia in long-term care in terms of the influencers of, as well as the strategies needed, to improve quality of life. A descriptive exploratory approach. A subsample of twelve residents across two Australian states from a national quantitative study on quality of life was interviewed. Data were analysed thematically from a realist perspective. The approach to the thematic analysis was inductive and data-driven. Three themes emerged in relation to influencers and strategies related to quality of life: (a) maintaining independence, (b) having something to do, and (c) the importance of social interaction. The findings highlight the importance of understanding individual resident needs and consideration of the complexity of living in large group living situations, in particular in regard to resident decision-making.

  5. Designing and benchmarking the MULTICOM protein structure prediction system

    PubMed Central

    2013-01-01

    Background Predicting protein structure from sequence is one of the most significant and challenging problems in bioinformatics. Numerous bioinformatics techniques and tools have been developed to tackle almost every aspect of protein structure prediction ranging from structural feature prediction, template identification and query-template alignment to structure sampling, model quality assessment, and model refinement. How to synergistically select, integrate and improve the strengths of the complementary techniques at each prediction stage and build a high-performance system is becoming a critical issue for constructing a successful, competitive protein structure predictor. Results Over the past several years, we have constructed a standalone protein structure prediction system MULTICOM that combines multiple sources of information and complementary methods at all five stages of the protein structure prediction process including template identification, template combination, model generation, model assessment, and model refinement. The system was blindly tested during the ninth Critical Assessment of Techniques for Protein Structure Prediction (CASP9) in 2010 and yielded very good performance. In addition to studying the overall performance on the CASP9 benchmark, we thoroughly investigated the performance and contributions of each component at each stage of prediction. Conclusions Our comprehensive and comparative study not only provides useful and practical insights about how to select, improve, and integrate complementary methods to build a cutting-edge protein structure prediction system but also identifies a few new sources of information that may help improve the design of a protein structure prediction system. Several components used in the MULTICOM system are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/. PMID:23442819

  6. Benchmark of four popular virtual screening programs: construction of the active/decoy dataset remains a major determinant of measured performance.

    PubMed

    Chaput, Ludovic; Martinez-Sanz, Juan; Saettel, Nicolas; Mouawad, Liliane

    2016-01-01

    In structure-based virtual screening, the choice of the docking program is essential for the success of hit identification. Benchmarks are meant to help guide this choice, especially when undertaken on a large variety of protein targets. Here, the performance of four popular virtual screening programs, Gold, Glide, Surflex and FlexX, is compared using the Directory of Useful Decoys-Enhanced database (DUD-E), which includes 102 targets with an average of 224 ligands per target and 50 decoys per ligand, generated to avoid biases in the benchmarking. Then, a relationship between the programs' performance and the properties of the targets or the small molecules was investigated. The comparison was based on two metrics, with three different parameters each. The BEDROC scores with α = 80.5 indicated that, on the overall database, Glide succeeded (score > 0.5) for 30 targets, Gold for 27, FlexX for 14 and Surflex for 11. The performance depended neither on the hydrophobicity nor on the openness of the protein cavities, nor on the families to which the proteins belong. However, despite the care in the construction of the DUD-E database, the small differences that remain between the actives and the decoys likely explain the successes of Gold, Surflex and FlexX. Moreover, the similarity between the actives of a target and its crystal structure ligand seems to be at the basis of the good performance of Glide. When all targets with significant biases are removed from the benchmarking, a subset of 47 targets remains, for which Glide succeeded for only 5 targets, Gold for 4, and FlexX and Surflex for 2. The dramatic performance drop of all four programs when the biases are removed shows that we should beware of virtual screening benchmarks, because good performances may be due to the wrong reasons. Therefore, benchmarking can hardly provide guidelines for virtual screening experiments, despite the tendency that is maintained, i.e., Glide and Gold display better performance than FlexX and Surflex. We recommend always using several programs and combining their results. Graphical Abstract: Summary of the results obtained by virtual screening with the four programs, Glide, Gold, Surflex and FlexX, on the 102 targets of the DUD-E database. The percentage of targets with successful results, i.e., with BEDROC(α = 80.5) > 0.5, is shown in blue when the entire database is considered, and in red when targets with biased chemical libraries are removed.
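    The success criterion above is BEDROC(α = 80.5) > 0.5. Below is a minimal sketch of the BEDROC metric as defined by Truchon and Bayly (J Chem Inf Model, 2007), which weights early recognition of actives with an exponential decay in rank; with α = 80.5 most of the weight falls on roughly the top 2% of the ranked library. The function name and toy ranks are ours.

      # BEDROC from the 1-based ranks of the actives in a ranked library.
      import math

      def bedroc(active_ranks, n_total, alpha=80.5):
          n_act = len(active_ranks)
          ra = n_act / n_total
          # Exponentially weighted sum over active ranks, normalized by its
          # expectation for uniformly spread actives (this ratio is the RIE).
          s = sum(math.exp(-alpha * r / n_total) for r in active_ranks)
          rand = n_act / n_total * (1 - math.exp(-alpha)) / (math.exp(alpha / n_total) - 1)
          rie = s / rand
          # Map RIE onto [0, 1] so that 1 means all actives ranked first.
          return (rie * ra * math.sinh(alpha / 2)
                  / (math.cosh(alpha / 2) - math.cosh(alpha / 2 - alpha * ra))
                  + 1 / (1 - math.exp(alpha * (1 - ra))))

      # 10 actives in a library of 1000: early ranks score ~1, scattered ranks ~0.
      early = list(range(1, 11))
      spread = list(range(50, 1000, 100))
      print(f"early:  BEDROC = {bedroc(early, 1000):.2f}")
      print(f"spread: BEDROC = {bedroc(spread, 1000):.2f}")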

  7. Practical application of the benchmarking technique to increase reliability and efficiency of power installations and main heat-mechanic equipment of thermal power plants

    NASA Astrophysics Data System (ADS)

    Rimov, A. A.; Chukanova, T. I.; Trofimov, Yu. V.

    2016-12-01

    Data on the variants of comparative quality analysis of power installations (benchmarking) applied in the power industry are systematized. It is shown that the most efficient variant of the benchmarking technique is the analysis of statistical distributions of the indicators within a composed homogeneous group of uniform power installations. Building on this approach, a benchmarking technique is developed that is aimed at revealing the available reserves for improving the reliability and heat-efficiency indicators of the power installations of thermal power plants. The technique enables reliable comparison of the quality of power installations within a homogeneous group of limited size and supports well-founded decisions on improving particular technical characteristics of a given power installation. The technique structures the list of comparison indicators and the internal factors affecting them, represented according to the requirements of the sectoral standards and taking into account the price-formation characteristics of the Russian power industry. This structuring ensures traceability of the reasons for deviations of the internal influencing factors from their specified values. The starting point for further detailed analysis of a power installation's lag behind best practice, expressed in a specific monetary equivalent, is the positioning of that installation on the distribution of the key indicator, which is a convolution of the comparison indicators. The distribution of the key indicator is simulated by the Monte-Carlo method from the actual distributions of the comparison indicators: specific lost profit due to the short supply of electric energy and power, specific cost of losses due to non-optimal expenditures on repairs, and specific cost of excess fuel-equivalent consumption. Quality loss indicators are developed to facilitate the analysis of the benchmarking results; they represent the quality loss of a power installation as the difference between the actual value of the key indicator or a comparison indicator and the best quartile of the existing distribution. The uncertainty of the obtained quality loss values was evaluated by transforming the standard uncertainties of the input values into expanded uncertainties of the output values at a confidence level of 95%. The efficiency of the technique is demonstrated by benchmarking the main thermal and mechanical equipment of T-250 extraction power-generating units and power installations of thermal power plants with a main steam pressure of 130 atm.
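    The mechanics described above, Monte-Carlo convolution of comparison indicators into a key indicator and quality loss measured against the best quartile, can be sketched as follows. All distributions, spreads, and unit counts are illustrative assumptions, not plant data or the authors' exact simulation.

      # Monte-Carlo convolution of three comparison indicators into a key
      # indicator (here, a plain sum of specific costs), with quality loss
      # expressed as distance from the best quartile of the group distribution.
      import numpy as np

      rng = np.random.default_rng(7)
      n_units, n_draws = 30, 20_000

      # Per-unit mean values of the three comparison indicators (specific costs).
      lost_profit = rng.uniform(2.0, 6.0, n_units)   # short supply of energy/power
      repair_cost = rng.uniform(1.0, 3.0, n_units)   # non-optimal repair spending
      excess_fuel = rng.uniform(0.5, 2.0, n_units)   # excess fuel-equivalent use

      # Monte-Carlo draws of the key indicator (assumed 10% relative spread).
      draws = sum(m[:, None] * rng.normal(1.0, 0.1, (n_units, n_draws))
                  for m in (lost_profit, repair_cost, excess_fuel))
      key = draws.mean(axis=1)

      best_quartile = np.quantile(key, 0.25)
      unit = 0
      quality_loss = max(key[unit] - best_quartile, 0.0)
      print(f"unit {unit}: key indicator {key[unit]:.2f}, "
            f"best quartile {best_quartile:.2f}, quality loss {quality_loss:.2f}")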

  8. Noise Reduction Retrofit for a "New Look" Flexible Transit Bus Service Bulletin

    DOT National Transportation Integrated Search

    1980-09-01

    This document presents instructions on how to apply a noise treatment to a contemporary city transit bus without extensive structural alteration. Baseline bus configuration, noise ratings, and performance benchmarks are presented for a Flexible 111DC...

  9. High Cycle Fatigue Prediction for Mistuned Bladed Disks with Fully Coupled Fluid-Structural Interaction

    DTIC Science & Technology

    2006-06-01

    response (time domain) structural vibration model for a mistuned rotor bladed disk based on the efficient SNM model has been developed. The vibration ... airfoil and 3D wing, unsteady vortex shedding of a stationary cylinder, induced vibration of a cylinder, forced vibration of a pitching airfoil, induced vibration and flutter boundary of a 2D NACA 64A010 transonic airfoil, and 3D plate wing structural response. The predicted results agree well with benchmark

  10. Cavitation, Flow Structure and Turbulence in the Tip Region of a Rotor Blade

    NASA Technical Reports Server (NTRS)

    Wu, H.; Miorini, R.; Soranna, F.; Katz, J.; Michael, T.; Jessup, S.

    2010-01-01

    Objectives: Measure the flow structure and turbulence within a Naval, axial waterjet pump. Create a database for benchmarking and validation of parallel computational efforts. Address flow and turbulence modeling issues that are unique to this complex environment. Measure and model flow phenomena affecting cavitation within the pump and its effect on pump performance. This presentation focuses on cavitation phenomena and associated flow structure in the tip region of a rotor blade.

  11. Innately Split Model for Job-shop Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Ikeda, Kokolo; Kobayashi, Sigenobu

    The Job-shop Scheduling Problem (JSP) is one of the most difficult benchmark problems. GA approaches often fail to find the global optimum because of the deceptive UV-structure of JSPs. In this paper, we introduce a novel GA framework, the Innately Split Model (ISM), which prevents the UV-phenomenon, and discuss its strengths in particular. Next, we analyze the structure of JSPs with the help of the UV-structure hypothesis, and finally we show ISM's excellent performance on the JSP.

  12. Modeling the economic outcomes of immuno-oncology drugs: alternative model frameworks to capture clinical outcomes

    PubMed Central

    Gibson, EJ; Begum, N; Koblbauer, I; Dranitsaris, G; Liew, D; McEwan, P; Tahami Monfared, AA; Yuan, Y; Juarez-Garcia, A; Tyas, D; Lees, M

    2018-01-01

    Background Economic models in oncology are commonly based on the three-state partitioned survival model (PSM) distinguishing between progression-free and progressive states. However, the heterogeneity of responses observed in immuno-oncology (I-O) suggests that new approaches may be appropriate to reflect disease dynamics meaningfully. Materials and methods This study explored the impact of incorporating immune-specific health states into economic models of I-O therapy. Two variants of the PSM and a Markov model were populated with data from one clinical trial in metastatic melanoma patients. Short-term modeled outcomes were benchmarked to the clinical trial data and a lifetime model horizon provided estimates of life years and quality adjusted life years (QALYs). Results The PSM-based models produced short-term outcomes closely matching the trial outcomes. Adding health states generated increased QALYs while providing a more granular representation of outcomes for decision making. The Markov model gave the greatest level of detail on outcomes but gave short-term results which diverged from those of the trial (overstating year 1 progression-free survival by around 60%). Conclusion Increased sophistication in the representation of disease dynamics in economic models is desirable when attempting to model treatment response in I-O. However, the assumptions underlying different model structures and the availability of data for health state mapping may be important limiting factors. PMID:29563820
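    In the standard three-state PSM referenced above, state occupancy is read directly off the survival curves: progression-free = PFS(t), progressed = OS(t) − PFS(t), dead = 1 − OS(t), and QALYs come from utility-weighted occupancy integrated over the horizon. The sketch below uses hypothetical exponential curves and utilities, not the trial's estimates.

      # Three-state partitioned survival model: occupancy by curve subtraction,
      # then life years and QALYs by trapezoidal integration over the horizon.
      import numpy as np

      def trapz(y, x):
          """Trapezoidal integral (kept explicit for NumPy-version portability)."""
          return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

      t = np.linspace(0, 20, 2401)                 # years, lifetime horizon
      pfs = np.exp(-0.40 * t)                      # hypothetical PFS curve
      os_ = np.exp(-0.15 * t)                      # hypothetical OS curve

      progression_free = pfs
      progressed = np.clip(os_ - pfs, 0.0, None)   # PSM occupancy by subtraction
      dead = 1.0 - os_

      u_pf, u_pd = 0.80, 0.60                      # illustrative health-state utilities
      life_years = trapz(os_, t)
      qalys = trapz(u_pf * progression_free + u_pd * progressed, t)
      print(f"life years: {life_years:.2f}, QALYs: {qalys:.2f}")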

  13. First draft genome of an iconic clownfish species (Amphiprion frenatus).

    PubMed

    Marcionetti, Anna; Rossier, Victor; Bertrand, Joris A M; Litsios, Glenn; Salamin, Nicolas

    2018-02-17

    Clownfishes (or anemonefishes) form an iconic group of coral reef fishes, principally known for their mutualistic interaction with sea anemones. They are characterized by particular life history traits, such as a complex social structure and mating system involving sequential hermaphroditism, coupled with an exceptionally long lifespan. Additionally, clownfishes are considered to be one of the rare groups to have experienced an adaptive radiation in the marine environment. Here, we assembled and annotated the first genome of a clownfish species, the tomato clownfish (Amphiprion frenatus). We obtained 17,801 assembled scaffolds, containing a total of 26,917 genes. The completeness of the assembly and annotation was satisfying, with 96.5% of the Actinopterygii Benchmarking Universal Single-Copy Orthologs (BUSCOs) being retrieved in the A. frenatus assembly. The quality of the resulting assembly is comparable to other bony fish assemblies. This resource is valuable for advancing studies of the particular life history traits of clownfishes, as well as being useful for population genetic studies and the development of new phylogenetic markers. It will also open the way to comparative genomics. Indeed, future genomic comparison among closely related fishes may provide means to identify genes related to the unique adaptations to different sea anemone hosts, as well as better characterize the genomic signatures of an adaptive radiation. © 2018 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.

  14. CALiPER Exploratory Study. Recessed Troffer Lighting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, N. J.; Royer, M. P.; Poplawski, M. E.

    This CALiPER study examines the problems and benefits likely to be encountered with LED products intended to replace linear fluorescent lamps. LED dedicated troffers, replacement tubes, and non-tube retrofit kits were evaluated against fluorescent benchmark troffers in a simulated office space for photometric distribution, uniformity of light on the task surface, suitability of light output, flicker, dimming performance, color quality, power quality, safety and certification issues, ease of installation, energy efficiency, and life-cycle cost.

  15. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  16. Tribological and Wear Performance of Nanocomposite PVD Hard Coatings Deposited on Aluminum Die Casting Tool.

    PubMed

    Paiva, Jose Mario; Fox-Rabinovich, German; Locks Junior, Edinei; Stolf, Pietro; Seid Ahmed, Yassmin; Matos Martins, Marcelo; Bork, Carlos; Veldhuis, Stephen

    2018-02-28

    In the aluminum die casting process, erosion, corrosion, soldering, and die sticking have a significant influence on tool life and product quality. A number of coatings such as TiN, CrN, and (Cr,Al)N deposited by physical vapor deposition (PVD) have been employed to act as protective coatings due to their high hardness and chemical stability. In this study, the wear performance of two nanocomposite AlTiN and AlCrN coatings with different structures was evaluated. These coatings were deposited on aluminum die casting mold tool substrates (AISI H13 hot work steel) by PVD using pulsed cathodic arc evaporation, equipped with three lateral arc-rotating cathodes (LARC) and one central rotating cathode (CERC). The research was performed in two stages: in the first stage, the outlined coatings were characterized regarding their chemical composition, morphology, and structure using glow discharge optical emission spectroscopy (GDOES), scanning electron microscopy (SEM), and X-ray diffraction (XRD), respectively. Surface morphology and mechanical properties were evaluated by atomic force microscopy (AFM) and nanoindentation. Coating adhesion was studied using the Mercedes test and scratch testing. During the second stage, industrial tests were carried out for coated die casting molds. In parallel, tribological tests were also performed in order to determine if a correlation between laboratory and industrial tests can be drawn. All of the results were compared with a benchmark monolayer AlCrN coating. The data obtained show that the best performance was achieved for the AlCrN/Si₃N₄ nanocomposite coating that displays an optimum combination of hardness, adhesion, soldering behavior, oxidation resistance, and stress state. These characteristics are essential for improving the die mold service life. Therefore, this coating emerges as a novelty to be used to protect aluminum die casting molds.

  17. Treatment planning for spinal radiosurgery : A competitive multiplatform benchmark challenge.

    PubMed

    Moustakis, Christos; Chan, Mark K H; Kim, Jinkoo; Nilsson, Joakim; Bergman, Alanah; Bichay, Tewfik J; Palazon Cano, Isabel; Cilla, Savino; Deodato, Francesco; Doro, Raffaela; Dunst, Jürgen; Eich, Hans Theodor; Fau, Pierre; Fong, Ming; Haverkamp, Uwe; Heinze, Simon; Hildebrandt, Guido; Imhoff, Detlef; de Klerck, Erik; Köhn, Janett; Lambrecht, Ulrike; Loutfi-Krauss, Britta; Ebrahimi, Fatemeh; Masi, Laura; Mayville, Alan H; Mestrovic, Ante; Milder, Maaike; Morganti, Alessio G; Rades, Dirk; Ramm, Ulla; Rödel, Claus; Siebert, Frank-Andre; den Toom, Wilhelm; Wang, Lei; Wurster, Stefan; Schweikard, Achim; Soltys, Scott G; Ryu, Samuel; Blanck, Oliver

    2018-05-25

    To investigate the quality of treatment plans of spinal radiosurgery derived from different planning and delivery systems. The comparisons include robotic delivery and intensity modulated arc therapy (IMAT) approaches. Multiple centers with equal systems were used to reduce a bias based on individual's planning abilities. The study used a series of three complex spine lesions to maximize the difference in plan quality among the various approaches. Internationally recognized experts in the field of treatment planning and spinal radiosurgery from 12 centers with various treatment planning systems participated. For a complex spinal lesion, the results were compared against a previously published benchmark plan derived for CyberKnife radiosurgery (CKRS) using circular cones only. For two additional cases, one with multiple small lesions infiltrating three vertebrae and a single vertebra lesion treated with integrated boost, the results were compared against a benchmark plan generated using a best practice guideline for CKRS. All plans were rated based on a previously established ranking system. All 12 centers could reach equality (n = 4) or outperform (n = 8) the benchmark plan. For the multiple lesions and the single vertebra lesion plan only 5 and 3 of the 12 centers, respectively, reached equality or outperformed the best practice benchmark plan. However, the absolute differences in target and critical structure dosimetry were small and strongly planner-dependent rather than system-dependent. Overall, gantry-based IMAT with simple planning techniques (two coplanar arcs) produced faster treatments and significantly outperformed static gantry intensity modulated radiation therapy (IMRT) and multileaf collimator (MLC) or non-MLC CKRS treatment plan quality regardless of the system (mean rank out of 4 was 1.2 vs. 3.1, p = 0.002). High plan quality for complex spinal radiosurgery was achieved among all systems and all participating centers in this planning challenge. This study concludes that simple IMAT techniques can generate significantly better plan quality compared to previous established CKRS benchmarks.

  18. Estimating the Number of Organ Donors in Australian Hospitals—Implications for Monitoring Organ Donation Practices

    PubMed Central

    Pilcher, David; Gladkis, Laura; Arcia, Byron; Bailey, Michael; Cook, David; Cass, Yael; Opdam, Helen

    2015-01-01

    Background The Australian DonateLife Audit captures information on all deaths which occur in emergency departments, intensive care units and in those recently discharged from an intensive care unit. This information provides the opportunity to estimate the number of donors expected, given present consent rates and contemporary donation practices. This may then allow benchmarking of performance between hospitals and jurisdictions. Our aim was to develop a method to estimate the number of donors using data from the DonateLife Audit on the basis of baseline patient characteristics alone. Methods All intubated patient deaths at contributing hospitals were analyzed. Univariate comparisons of donors to nondonors were performed. A logistic regression model was developed to estimate expected donor numbers from data collected between July 2012 and December 2013. This was validated using data from January to April 2014. Results Between July 2012 and April 2014, 6861 intubated patient deaths at 68 hospitals were listed on the DonateLife Audit, of whom 553 (8.1%) were organ donors. Factors independently associated with organ donation included age, brain death, neurological diagnoses, chest x-ray findings, PaO2/FiO2, creatinine, alanine transaminase, cancer, cardiac arrest, chronic heart disease, and peripheral vascular disease. A highly discriminatory (area under the receiver operating characteristic, 0.940 [95% confidence interval, 0.924-0.957]) and well-calibrated prediction model was developed which accurately estimated donor numbers. Three hospitals appeared to have higher numbers of actual donors than expected. Conclusions It is possible to estimate the expected number of organ donors. This may assist benchmarking of donation outcomes and interpretation of changes in donation rates over time. PMID:25919766
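    The benchmarking step implied here is standard: a hospital's expected donor count is the sum of the model's predicted probabilities over its deaths, compared with the actual count. The sketch below uses synthetic stand-ins for the audit variables, not the DonateLife data or the published model.

      # Logistic model for donation, then expected-vs-actual donor counts for one
      # "hospital" (a subset of deaths). Features and outcomes are synthetic.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      n = 5000
      age = rng.uniform(16, 90, n)
      brain_death = rng.random(n) < 0.08
      neuro_dx = rng.random(n) < 0.25
      X = np.column_stack([age, brain_death, neuro_dx])

      # Synthetic "truth": younger, brain-dead, neurological patients donate more.
      logit = -4.0 - 0.02 * (age - 50) + 3.0 * brain_death + 1.0 * neuro_dx
      y = rng.random(n) < 1 / (1 + np.exp(-logit))

      model = LogisticRegression(max_iter=1000).fit(X, y)

      hosp = slice(0, 400)                           # one hospital's deaths
      expected = model.predict_proba(X[hosp])[:, 1].sum()
      actual = int(y[hosp].sum())
      print(f"expected donors: {expected:.1f}, actual donors: {actual} "
            f"({'above' if actual > expected else 'at or below'} expectation)")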

  19. Web Site Design Benchmarking within Industry Groups.

    ERIC Educational Resources Information Center

    Kim, Sung-Eon; Shaw, Thomas; Schneider, Helmut

    2003-01-01

    Discussion of electronic commerce focuses on Web site evaluation criteria and applies them to different industry groups in Korea. Defines six categories of Web site evaluation criteria: business function, corporate credibility, contents reliability, Web site attractiveness, systematic structure, and navigation; and discusses differences between…

  20. Organizational Alignment Supporting Distance Education in Post-Secondary Institutions.

    ERIC Educational Resources Information Center

    Prestera, Gustavo E.; Moller, Leslie A.

    2001-01-01

    Applies an established model of organizational alignment to distance education in postsecondary institutions and recommends performance-oriented approaches to support growth by analyzing goals, structure, and management practices across the organization. Presents performance improvement strategies such as benchmarking and documenting workflows,…

  1. Qualification of Commercial XIPS(R) Ion Thrusters for NASA Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Goebel, Dan M.; Polk, James E.; Wirz, Richard E.; Snyder, J.Steven; Mikellides, Ioannis G.; Katz, Ira; Anderson, John

    2008-01-01

    Electric propulsion systems based on commercial ion and Hall thrusters have the potential for significantly reducing the cost and schedule-risk of Ion Propulsion Systems (IPS) for deep space missions. The large fleet of geosynchronous communication satellites that use solar electric propulsion (SEP), which will approach 40 satellites by year-end, demonstrates the significant level of technical maturity and spaceflight heritage achieved by the commercial IPS systems. A program to delta-qualify XIPS(R) ion thrusters for deep space missions is underway at JPL. This program includes modeling of the thruster grid and cathode life, environmental testing of a 25-centimeter electromagnetic (EM) thruster over DAWN-like vibe and temperature profiles, and wear testing of the thruster cathodes to demonstrate the life and benchmark the model results. This paper will present the delta-qualification status of the XIPS thruster and discuss the life and reliability with respect to known failure mechanisms.

  2. High-Precision Half-Life and Branching Ratio Measurements for the Superallowed β+ Emitter 26Alm

    NASA Astrophysics Data System (ADS)

    Finlay, P.; Svensson, C. E.; Demand, G. A.; Garrett, P. E.; Green, K. L.; Leach, K. G.; Phillips, A. A.; Rand, E. T.; Ball, G.; Bandyopadhyay, D.; Djongolov, M.; Ettenauer, S.; Hackman, G.; Pearson, C. J.; Leslie, J. R.; Andreoiu, C.; Cross, D.; Austin, R. A. E.; Grinyer, G. F.; Sumithrarachchi, C. S.; Williams, S. J.; Triambak, S.

    2013-03-01

    High-precision half-life and branching-ratio measurements for the superallowed β+ emitter 26Alm were performed at the TRIUMF-ISAC radioactive ion beam facility. An upper limit of ≤15 ppm at 90% C.L. was determined for the sum of all possible non-analogue β+/EC decay branches of 26Alm, yielding a superallowed branching ratio of 100.0000(+0, −0.0015)%. A value of T1/2 = 6.34654(76) s was determined for the 26Alm half-life, which is consistent with, but 2.5 times more precise than, the previous world average. Combining these results with world-average measurements yields an ft value of 3037.58(60) s, the most precisely determined for any superallowed emitter to date. This high-precision ft value for 26Alm provides a new benchmark to refine theoretical models of isospin-symmetry-breaking effects in superallowed β decays.
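    Combinations like "consistent with, but 2.5 times more precise than, the previous world average" rest on inverse-variance weighted averaging. The sketch below shows that arithmetic; the new value is the one quoted above, while the "previous average" is illustrative, constructed only to be about 2.5 times less precise, not the actual world-average input.

      # Inverse-variance weighted average of independent measurements.
      def weighted_average(values, sigmas):
          weights = [1.0 / s**2 for s in sigmas]
          mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
          sigma = (1.0 / sum(weights)) ** 0.5
          return mean, sigma

      values = [6.34654, 6.3462]   # s: new TRIUMF-ISAC value, illustrative prior average
      sigmas = [0.00076, 0.0019]   # s: prior uncertainty ~2.5x larger (assumed)
      mean, sigma = weighted_average(values, sigmas)
      print(f"combined half-life: {mean:.5f} +/- {sigma:.5f} s")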

  3. High-Precision Half-Life Measurement for the Superallowed {beta}{sup +} Emitter {sup 26}Al{sup m}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finlay, P.; Svensson, C. E.; Green, K. L.

    2011-01-21

    A high-precision half-life measurement for the superallowed β+ emitter 26Alm was performed at the TRIUMF-ISAC radioactive ion beam facility, yielding T1/2 = 6346.54 ± 0.46(stat) ± 0.60(syst) ms, consistent with, but 2.5 times more precise than, the previous world average. The 26Alm half-life and ft value, 3037.53(61) s, are now the most precisely determined for any superallowed β decay. Combined with recent theoretical corrections for isospin-symmetry-breaking and radiative effects, the corrected Ft value for 26Alm, 3073.0(12) s, sets a new benchmark for the high-precision superallowed Fermi β-decay studies used to test the conserved vector current hypothesis and determine the Vud element of the Cabibbo-Kobayashi-Maskawa quark mixing matrix.

  4. Validation and Comparison of 2D and 3D Codes for Nearshore Motion of Long Waves Using Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey

    2016-04-01

    Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. The interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite difference computational method to solve 2D depth-averaged linear and nonlinear forms of shallow water equations (NSWE) in long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and the 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach. The experimental setup is a 1:400 scale model of Monai Valley, located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) annual meeting in Portland, USA; it is a field dataset recording the 2011 Japan tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by the Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in the Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT-Japan Joint Call and Istanbul Metropolitan Municipality are all acknowledged.
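    To make the depth-averaged approach concrete, here is a 1D linear illustration of the shallow-water idea that NAMI DANCE applies in its full 2D nonlinear form: long-wave equations on a staggered grid, stepped with a forward-backward finite-difference scheme. The grid, depth, and initial hump are illustrative, and this is not NAMI DANCE code.

      # 1D linear shallow-water (long-wave) solver on a staggered grid.
      import numpy as np

      g, depth = 9.81, 50.0                 # gravity (m/s^2), uniform depth (m)
      nx, dx = 400, 100.0                   # grid cells, spacing (m)
      dt = 0.5 * dx / np.sqrt(g * depth)    # CFL-limited time step

      eta = np.exp(-((np.arange(nx) - 200) * dx / 2000.0) ** 2)  # initial hump (m)
      u = np.zeros(nx + 1)                  # velocities on staggered cell faces

      for _ in range(500):
          # Momentum: du/dt = -g d(eta)/dx  (interior faces; walls at the ends)
          u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
          # Continuity: d(eta)/dt = -depth du/dx  (uses the updated velocities)
          eta -= dt * depth * (u[1:] - u[:-1]) / dx

      print(f"max surface elevation after {500 * dt:.0f} s: {eta.max():.3f} m")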

  5. SPH modeling of fluid-structure interaction

    NASA Astrophysics Data System (ADS)

    Han, Luhui; Hu, Xiangyu

    2018-02-01

    This work concerns numerical modeling of fluid-structure interaction (FSI) problems in a uniform smoothed particle hydrodynamics (SPH) framework. It combines a transport-velocity SPH scheme, advancing fluid motions, with a total Lagrangian SPH formulation dealing with the structure deformations. Since both fluid and solid governing equations are solved in SPH framework, while coupling becomes straightforward, the momentum conservation of the FSI system is satisfied strictly. A well-known FSI benchmark test case has been performed to validate the modeling and to demonstrate its potential.
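    The building block shared by the fluid and the total Lagrangian solid formulations mentioned above is kernel-weighted summation over neighboring particles. A minimal sketch follows, showing the SPH density estimate ρᵢ = Σⱼ mⱼ W(|xᵢ − xⱼ|, h) with a cubic spline kernel in 1D; this is not the paper's full FSI scheme, and the particle setup is illustrative.

      # SPH density summation with the standard cubic spline kernel (1D).
      import numpy as np

      def cubic_spline_w(r, h):
          """Cubic spline kernel; 1D normalization factor is 2/(3h)."""
          q = np.abs(r) / h
          w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
              np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
          return (2.0 / (3.0 * h)) * w

      # Uniform particle row: the summation should recover the true density.
      dx, rho_true = 0.01, 1000.0
      x = np.arange(0.0, 1.0, dx)
      mass = rho_true * dx                   # mass per particle
      h = 1.3 * dx                           # smoothing length

      # Density at each particle by direct summation (O(N^2), fine for a sketch).
      rho = np.array([np.sum(mass * cubic_spline_w(x - xi, h)) for xi in x])
      interior = rho[20:-20]                 # away from kernel truncation at edges
      print(f"relative density error in interior: "
            f"{np.abs(interior - rho_true).max() / rho_true:.2e}")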

  6. All inclusive benchmarking.

    PubMed

    Ellis, Judith

    2006-07-01

    The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement, and to promote its acceptance as an integral and effective part of benchmarking activity in health services. The Essence of Care, was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being effectively used by some frontline staff. However, use is inconsistent, with the value of the tool kit, or the support clinical practice benchmarking requires to be effective, not always recognized or provided by National Health Service managers, who are absorbed with the use of quantitative benchmarking approaches and measurability of comparative performance data. This review of published benchmarking literature, was obtained through an ever-narrowing search strategy commencing from benchmarking within quality improvement literature through to benchmarking activity in health services and including access to not only published examples of benchmarking approaches and models used but the actual consideration of web-based benchmarking data. This supported identification of how benchmarking approaches have developed and been used, remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative and specifically performance benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and then applied to the health service (Bullivant 1998). The literature is also in the main descriptive in its support of the effectiveness of benchmarking activity and although this does not seem to have restricted its popularity in quantitative activity, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks means that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach which needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.

  7. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Ediger, David; Jiang, Karl

    2009-02-15

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
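
    The kernel here is easiest to see in its serial form. Below is a minimal Python sketch of Brandes' algorithm, the standard sequential baseline that lock-free parallel implementations such as this one partition over source vertices; it is illustrative only and not the authors' code (for undirected graphs, final scores are conventionally halved).

    ```python
    from collections import deque

    def betweenness_centrality(adj):
        """Brandes' algorithm for unweighted graphs.

        adj: dict mapping each vertex to an iterable of neighbours.
        Returns a dict of betweenness scores."""
        bc = {v: 0.0 for v in adj}
        for s in adj:
            # BFS from s, accumulating shortest-path counts sigma.
            stack, pred = [], {v: [] for v in adj}
            sigma = {v: 0 for v in adj}; sigma[s] = 1
            dist = {v: -1 for v in adj}; dist[s] = 0
            q = deque([s])
            while q:
                v = q.popleft()
                stack.append(v)
                for w in adj[v]:
                    if dist[w] < 0:
                        dist[w] = dist[v] + 1
                        q.append(w)
                    if dist[w] == dist[v] + 1:
                        sigma[w] += sigma[v]
                        pred[w].append(v)
            # Back-propagate pair dependencies in reverse BFS order.
            delta = {v: 0.0 for v in adj}
            while stack:
                w = stack.pop()
                for v in pred[w]:
                    delta[v] += (sigma[v] / sigma[w]) * (1.0 + delta[w])
                if w != s:
                    bc[w] += delta[w]
        return bc

    print(betweenness_centrality({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))
    ```

    The parallel variants in the paper run the outer loop over sources concurrently, which is why the accumulation step must be made lock-free.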

  9. Development and testing of the VITAMIN-B7/BUGLE-B7 coupled neutron-gamma multigroup cross-section libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Risner, J.M.; Wiarda, D.; Miller, T.M.

    2011-07-01

    The U.S. Nuclear Regulatory Commission's Regulatory Guide 1.190 states that calculational methods used to estimate reactor pressure vessel (RPV) fluence should use the latest version of the evaluated nuclear data file (ENDF). The VITAMIN-B6 fine-group library and BUGLE-96 broad-group library, which are widely used for RPV fluence calculations, were generated using ENDF/B-VI.3 data, which was the most current data when Regulatory Guide 1.190 was issued. We have developed new fine-group (VITAMIN-B7) and broad-group (BUGLE-B7) libraries based on ENDF/B-VII.0. These new libraries, which were processed using the AMPX code system, maintain the same group structures as the VITAMIN-B6 and BUGLE-96 libraries. Verification and validation of the new libraries were accomplished using diagnostic checks in AMPX, 'unit tests' for each element in VITAMIN-B7, and a diverse set of benchmark experiments including critical evaluations for fast and thermal systems, a set of experimental benchmarks that are used for SCALE regression tests, and three RPV fluence benchmarks. The benchmark evaluation results demonstrate that VITAMIN-B7 and BUGLE-B7 are appropriate for use in RPV fluence calculations and meet the calculational uncertainty criterion in Regulatory Guide 1.190. (authors)

  11. Benchmarking density functionals for hydrogen-helium mixtures with quantum Monte Carlo: Energetics, pressures, and forces

    DOE PAGES

    Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.; ...

    2016-01-19

    An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT-based first-principles methods have the potential to provide the accuracy and computational efficiency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional, and a quantification of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures at thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC-based force estimators and use them to gain insights into how well the local liquid structure is captured by different density functionals. We find that TPSS, BLYP, and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative differences exhibited by the major classes of functionals, and estimate the magnitudes of these effects when possible.

  12. Mechanical design criteria for intervertebral disc tissue engineering.

    PubMed

    Nerurkar, Nandan L; Elliott, Dawn M; Mauck, Robert L

    2010-04-19

    Due to the inability of current clinical practices to restore function to degenerated intervertebral discs, the arena of disc tissue engineering has received substantial attention in recent years. Despite tremendous growth and progress in this field, translation to clinical implementation has been hindered by a lack of well-defined functional benchmarks. Because successful replacement of the disc is contingent upon replication of some or all of its complex mechanical behaviors, it is critically important that disc mechanics be well characterized in order to establish discrete functional goals for tissue engineering. In this review, the key functional signatures of the intervertebral disc are discussed and used to propose a series of native tissue benchmarks to guide the development of engineered replacement tissues. These benchmarks include measures of mechanical function under tensile, compressive, and shear deformations for the disc and its substructures. In some cases, important functional measures are identified that have yet to be measured in the native tissue. Ultimately, native tissue benchmark values are compared to measurements that have been made on engineered disc tissues, identifying where functional equivalence was achieved, and where there remain opportunities for advancement. Several excellent reviews exist regarding disc composition and structure, as well as recent tissue engineering strategies; therefore this review will remain focused on the functional aspects of disc tissue engineering. Copyright 2009 Elsevier Ltd. All rights reserved.

  13. Development and Testing of the VITAMIN-B7/BUGLE-B7 Coupled Neutron-Gamma Multigroup Cross-Section Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Risner, Joel M; Wiarda, Dorothea; Miller, Thomas Martin

    2011-01-01

    The U.S. Nuclear Regulatory Commission s Regulatory Guide 1.190 states that calculational methods used to estimate reactor pressure vessel (RPV) fluence should use the latest version of the Evaluated Nuclear Data File (ENDF). The VITAMIN-B6 fine-group library and BUGLE-96 broad-group library, which are widely used for RPV fluence calculations, were generated using ENDF/B-VI data, which was the most current data when Regulatory Guide 1.190 was issued. We have developed new fine-group (VITAMIN-B7) and broad-group (BUGLE-B7) libraries based on ENDF/B-VII. These new libraries, which were processed using the AMPX code system, maintain the same group structures as the VITAMIN-B6 and BUGLE-96more » libraries. Verification and validation of the new libraries was accomplished using diagnostic checks in AMPX, unit tests for each element in VITAMIN-B7, and a diverse set of benchmark experiments including critical evaluations for fast and thermal systems, a set of experimental benchmarks that are used for SCALE regression tests, and three RPV fluence benchmarks. The benchmark evaluation results demonstrate that VITAMIN-B7 and BUGLE-B7 are appropriate for use in LWR shielding applications, and meet the calculational uncertainty criterion in Regulatory Guide 1.190.« less

  14. Can Humans Fly? Action Understanding with Multiple Classes of Actors

    DTIC Science & Technology

    2015-06-08

    [Abstract not captured; the indexed text preserves only fragments of the paper's bibliography, citing work on action recognition from structure-from-motion point clouds, multitask learning, and the KITTI vision benchmark suite.]

  15. Lagrangian Descriptors: A Method for Revealing Phase Space Structures of General Time Dependent Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Mancho, Ana M.; Wiggins, Stephen; Curbelo, Jezabel; Mendoza, Carolina

    2013-11-01

    Lagrangian descriptors are a recent technique that reveals geometrical structures in phase space and is valid for aperiodically time-dependent dynamical systems. We discuss a general methodology for constructing them and a ``heuristic argument'' that explains why this method is successful. We support this argument by explicit calculations on a benchmark problem. Several other benchmark examples are considered that allow us to assess the performance of Lagrangian descriptors against both finite-time Lyapunov exponents (FTLEs) and finite-time averages of certain components of the vector field (``time averages''). In all cases Lagrangian descriptors are shown to be both more accurate and computationally efficient than these methods. We thank CESGA for computing facilities. This research was supported by MINECO grants: MTM2011-26696, I-Math C3-0104, ICMAT Severo Ochoa project SEV-2011-0087, and CSIC grant OCEANTECH. SW acknowledges the support of the ONR (Grant No. N00014-01-1-0769).
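
    The method itself is compact: the arc-length Lagrangian descriptor integrates the norm of the velocity field along each trajectory forward and backward in time. The sketch below is a minimal Python illustration with a forced Duffing oscillator as the benchmark field; the step sizes and forcing parameters are arbitrary choices, not those of the paper.

    ```python
    import numpy as np

    def lagrangian_descriptor(v, x0, t0, tau, dt=1e-3):
        """Arc-length Lagrangian descriptor M(x0, t0; tau): integrate
        ||v|| along the trajectory through x0 over [t0 - tau, t0 + tau].
        v(x, t) is the (possibly aperiodic) velocity field; forward and
        backward segments are accumulated and summed.  Plain Euler
        stepping; illustrative only."""
        M = 0.0
        for sign in (+1.0, -1.0):            # forward and backward in time
            x, t = np.asarray(x0, float).copy(), t0
            for _ in range(int(tau / dt)):
                vel = np.asarray(v(x, t))
                M += np.linalg.norm(vel) * dt
                x += sign * vel * dt
                t += sign * dt
        return M

    # Example field: a weakly forced Duffing oscillator.
    duffing = lambda x, t: np.array([x[1], x[0] - x[0]**3 + 0.1 * np.sin(t)])
    print(lagrangian_descriptor(duffing, [0.5, 0.0], t0=0.0, tau=10.0))
    ```

    Evaluating M on a grid of initial conditions and plotting it reveals the phase-space structures the abstract refers to: invariant manifolds show up as sharp changes in M.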

  16. Applying Quantum Monte Carlo to the Electronic Structure Problem

    NASA Astrophysics Data System (ADS)

    Powell, Andrew D.; Dawes, Richard

    2016-06-01

    Two distinct types of Quantum Monte Carlo (QMC) calculations are applied to electronic structure problems such as calculating potential energy curves and producing benchmark values for reaction barriers. First, Variational and Diffusion Monte Carlo (VMC and DMC) methods using a trial wavefunction subject to the fixed node approximation were tested using the CASINO code.[1] Next, Full Configuration Interaction Quantum Monte Carlo (FCIQMC), along with its initiator extension (i-FCIQMC) were tested using the NECI code.[2] FCIQMC seeks the FCI energy for a specific basis set. At a reduced cost, the efficient i-FCIQMC method can be applied to systems in which the standard FCIQMC approach proves to be too costly. Since all of these methods are statistical approaches, uncertainties (error-bars) are introduced for each calculated energy. This study tests the performance of the methods relative to traditional quantum chemistry for some benchmark systems. References: [1] R. J. Needs et al., J. Phys.: Condensed Matter 22, 023201 (2010). [2] G. H. Booth et al., J. Chem. Phys. 131, 054106 (2009).
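
    As a concrete illustration of why every QMC energy carries a statistical error bar, here is a toy variational Monte Carlo calculation for the hydrogen atom; the trial wavefunction exp(-alpha*r) and all parameter values are textbook choices, not anything from the CASINO or NECI codes.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def vmc_hydrogen(alpha=0.9, n_steps=200_000, step=0.4):
        """Toy variational Monte Carlo for the hydrogen atom (atomic units).

        Trial wavefunction psi = exp(-alpha r); Metropolis sampling of
        |psi|^2; the estimator is the mean local energy
        E_L = -alpha^2/2 + (alpha - 1)/r.  Exact ground state: -0.5 Ha
        at alpha = 1."""
        r = np.array([1.0, 0.0, 0.0])
        energies = []
        for i in range(n_steps):
            trial = r + step * rng.uniform(-1, 1, size=3)
            # Metropolis acceptance on |psi|^2 = exp(-2 alpha r):
            if rng.random() < np.exp(-2 * alpha * (np.linalg.norm(trial)
                                                   - np.linalg.norm(r))):
                r = trial
            if i > n_steps // 10:               # discard equilibration
                energies.append(-alpha**2 / 2
                                + (alpha - 1) / np.linalg.norm(r))
        e = np.array(energies)
        return e.mean(), e.std() / np.sqrt(len(e))  # energy, error bar

    print(vmc_hydrogen())   # about -0.495 Ha, with a statistical error bar
    ```

    Real VMC/DMC codes use far better trial wavefunctions and estimators, but the origin of the quoted uncertainties is the same sampling statistics seen here.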

  17. Exploration of freely available web-interfaces for comparative homology modelling of microbial proteins

    PubMed Central

    Nema, Vijay; Pal, Sudhir Kumar

    2013-01-01

    Aim: This study was conducted to find the best-suited freely available software for modelling of proteins, using a few sample proteins. The proteins used ranged from small to big in size, with available crystal structures for the purpose of benchmarking. Key players like Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-V2, and Modweb were used for the comparison and model generation. Results: The benchmarking process was done for four proteins, Icl, InhA, and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus, to get the most suited software. Parameters compared during the analysis gave relatively better values for Phyre2 and Swiss-Model. Conclusion: This comparative study gave the information that Phyre2 and Swiss-Model make good models of small and large proteins as compared to the other screened software. The other software packages were also good but were often not very efficient in providing full-length and properly folded structures. PMID:24023424
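
    Benchmarking like this typically reduces to superposing each model onto the crystal structure and measuring Cα RMSD. The sketch below (plain NumPy, Kabsch algorithm) illustrates that metric; it is not the authors' pipeline, and `P` and `Q` are assumed to be pre-matched coordinate arrays.

    ```python
    import numpy as np

    def rmsd_kabsch(P, Q):
        """C-alpha RMSD between a model and a reference structure after
        optimal superposition (Kabsch algorithm).  P, Q: (N, 3) arrays
        of matched C-alpha coordinates."""
        P = P - P.mean(axis=0)                       # centre both sets
        Q = Q - Q.mean(axis=0)
        V, S, Wt = np.linalg.svd(P.T @ Q)            # covariance SVD
        d = np.sign(np.linalg.det(V @ Wt))           # avoid reflections
        R = V @ np.diag([1.0, 1.0, d]) @ Wt          # optimal rotation
        return np.sqrt(((P @ R - Q) ** 2).sum() / len(P))
    ```

    A call such as rmsd_kabsch(model_ca, crystal_ca) then yields the model-to-native deviation that scores like those discussed above are built on.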

  18. Supply network configuration—A benchmarking problem

    NASA Astrophysics Data System (ADS)

    Brandenburg, Marcus

    2018-03-01

    Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.

  19. Development of new geomagnetic storm ground response scaling factors for utilization in hazard assessments

    NASA Astrophysics Data System (ADS)

    Pulkkinen, A. A.; Bernabeu, E.; Weigel, R. S.; Kelbert, A.; Rigler, E. J.; Bedrosian, P.; Love, J. J.

    2017-12-01

    Development of realistic storm scenarios that can be played through the exposed systems is one of the key requirements for carrying out quantitative space weather hazard assessments. In the geomagnetically induced currents (GIC) and power grids context, these scenarios have to quantify the spatiotemporal evolution of the geoelectric field that drives the potentially hazardous currents in the system. In response to the Federal Energy Regulatory Commission (FERC) Order 779, a team of scientists and engineers working under the auspices of the North American Electric Reliability Corporation (NERC) has developed extreme geomagnetic storm and geoelectric field benchmark(s) that use various scaling factors to account for the geomagnetic latitude and ground structure of the locations of interest. These benchmarks, together with the information generated in the National Space Weather Action Plan, are the foundation for the hazard assessments that the industry will be carrying out in response to the FERC order and under the auspices of the National Science and Technology Council. While the scaling factors developed in the past work were based on the best available information, there is now significant new information available for parts of the U.S. pertaining to the ground response to external geomagnetic field excitation. This new information includes the results of magnetotelluric surveys that have been conducted over the past few years across the contiguous U.S. and results from previous surveys that have been made available in a combined online database. In this paper, we distill this new information in the framework of the NERC benchmark and in terms of updated ground response scaling factors, thereby allowing straightforward utilization in the hazard assessments. We also outline the path forward for improving the overall extreme event benchmark scenario(s), including generalization of the storm waveforms and geoelectric field spatial patterns.
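
    For orientation, the NERC benchmark is commonly summarized as a reference peak geoelectric field scaled by a geomagnetic-latitude factor and a ground-response factor. The sketch below encodes that relation; the function name and the example factor values are hypothetical placeholders, with only the 8 V/km reference amplitude taken from the published benchmark.

    ```python
    def peak_geoelectric_field(alpha, beta, e_ref=8.0):
        """Peak geoelectric field (V/km) in the spirit of the NERC
        benchmark: a reference amplitude e_ref scaled by a
        geomagnetic-latitude factor alpha and a ground-response factor
        beta, both read from published scaling-factor tables."""
        return e_ref * alpha * beta

    # Hypothetical example: mid-latitude site, moderately resistive ground.
    print(peak_geoelectric_field(alpha=0.6, beta=0.81))   # about 3.9 V/km
    ```

    The updated magnetotelluric survey data discussed in the paper feed into revised beta values; the structure of the calculation stays the same.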

  20. Assessing rural small community water supply in Limpopo, South Africa: water service benchmarks and reliability.

    PubMed

    Majuru, Batsirai; Jagals, Paul; Hunter, Paul R

    2012-10-01

    Although a number of studies have reported on water supply improvements, few have simultaneously taken into account the reliability of the water services. The study aimed to assess whether upgrading water supply systems in small rural communities improved access, availability and potability of water by assessing the water services against selected benchmarks from the World Health Organisation and South African Department of Water Affairs, and to determine the impact of unreliability on the services. These benchmarks were applied in three rural communities in Limpopo, South Africa where rudimentary water supply services were being upgraded to basic services. Data were collected through structured interviews, observations and measurement, and multi-level linear regression models were used to assess the impact of water service upgrades on key outcome measures of distance to source, daily per capita water quantity and Escherichia coli count. When the basic system was operational, 72% of households met the minimum benchmarks for distance and water quantity, but only 8% met both enhanced benchmarks. During non-operational periods of the basic service, daily per capita water consumption decreased by 5.19 L (p < 0.001, 95% CI 4.06-6.31) and distances to water sources were 639 m further (p ≤ 0.001, 95% CI 560-718). Although both rudimentary and basic systems delivered water that met potability criteria at the sources, the quality of stored water sampled in the home was still unacceptable throughout the various service levels. These results show that basic water services can make substantial improvements to water access, availability, and potability, but only if such services are reliable. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Multi-Complementary Model for Long-Term Tracking

    PubMed Central

    Zhang, Deng; Zhang, Junchang; Xia, Chenyang

    2018-01-01

    In recent years, video target tracking algorithms have been widely used. However, many tracking algorithms do not achieve satisfactory performance, especially when dealing with problems such as object occlusions, background clutters, motion blur, low illumination color images, and sudden illumination changes in real scenes. In this paper, we incorporate an object model based on contour information into a Staple tracker that combines the correlation filter model and color model to greatly improve the tracking robustness. Since each model is responsible for tracking specific features, the three complementary models combine for more robust tracking. In addition, we propose an efficient object detection model with contour and color histogram features, which has good detection performance and better detection efficiency compared to the traditional target detection algorithm. Finally, we optimize the traditional scale calculation, which greatly improves the tracking execution speed. We evaluate our tracker on the Object Tracking Benchmark 2013 (OTB-13) and Object Tracking Benchmark 2015 (OTB-15) datasets. On the OTB-13 benchmark, our algorithm improves on the classic LCT (Long-term Correlation Tracking) algorithm by 4.8%, 9.6%, and 10.9% on the success plots of OPE, TRE, and SRE, respectively. On the OTB-15 benchmark, compared with the LCT algorithm, our algorithm achieves 10.4%, 12.5%, and 16.1% improvement on the success plots of OPE, TRE, and SRE, respectively. At the same time, it needs to be emphasized that, due to the high computational efficiency of the color model and of the object detection model with its efficient data structures, and the speed advantage of the correlation filters, our tracking algorithm still achieves good tracking speed. PMID:29425170
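
    The fusion of complementary models described above can be pictured as a weighted sum of per-pixel response maps. The following sketch is a schematic of that idea, not the paper's implementation; the function name and weights are assumptions.

    ```python
    import numpy as np

    def fuse_responses(cf_map, color_map, contour_map, w=(0.6, 0.25, 0.15)):
        """Linearly merge same-shaped response maps from complementary
        models (correlation filter, colour histogram, contour), in the
        spirit of Staple-style trackers.  Returns the (row, col) of the
        fused maximum, i.e. the predicted target centre."""
        fused = w[0] * cf_map + w[1] * color_map + w[2] * contour_map
        return np.unravel_index(np.argmax(fused), fused.shape)
    ```

    Because each map responds to different failure modes (blur, illumination change, occlusion), the weighted combination degrades more gracefully than any single model.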

  2. Surgeons' experiences of receiving peer benchmarked feedback using patient-reported outcome measures: a qualitative study.

    PubMed

    Boyce, Maria B; Browne, John P; Greenhalgh, Joanne

    2014-06-27

    The use of patient-reported outcome measures (PROMs) to provide healthcare professionals with peer benchmarked feedback is growing. However, there is little evidence on the opinions of professionals on the value of this information in practice. The purpose of this research is to explore surgeons' experiences of receiving peer benchmarked PROMs feedback and to examine whether this information led to changes in their practice. This qualitative research employed a Framework approach. Semi-structured interviews were undertaken with surgeons who received peer benchmarked PROMs feedback. The participants included eleven consultant orthopaedic surgeons in the Republic of Ireland. Five themes were identified: conceptual, methodological, practical, attitudinal, and impact. A typology was developed based on the attitudinal and impact themes, from which three distinct groups emerged. 'Advocates' had positive attitudes towards PROMs and confirmed that the information promoted a self-reflective process. 'Converts' were uncertain about the value of PROMs, which reduced their inclination to use the data. 'Sceptics' had negative attitudes towards PROMs and claimed that the information had no impact on their behaviour. The conceptual, methodological and practical factors were linked to the typology. Surgeons had mixed opinions on the value of peer benchmarked PROMs data. Many appreciated the feedback as it reassured them that their practice was similar to their peers'. However, PROMs information alone was considered insufficient to help identify opportunities for quality improvements. The reasons for the observed reluctance of participants to embrace PROMs can be categorised into conceptual, methodological, and practical factors. Policy makers and researchers need to increase professionals' awareness of the numerous purposes and benefits of using PROMs, challenge the current methods to measure performance using PROMs, and reduce the burden of data collection and information dissemination on routine practice.

  3. International health IT benchmarking: learning from cross-country comparisons.

    PubMed

    Zelmer, Jennifer; Ronchi, Elettra; Hyppönen, Hannele; Lupiáñez-Villanueva, Francisco; Codagnone, Cristiano; Nøhr, Christian; Huebner, Ursula; Fazzalari, Anne; Adler-Milstein, Julia

    2017-03-01

    To pilot benchmark measures of health information and communication technology (ICT) availability and use to facilitate cross-country learning. A prior Organization for Economic Cooperation and Development-led effort involving 30 countries selected and defined functionality-based measures for availability and use of electronic health records, health information exchange, personal health records, and telehealth. In this pilot, an Organization for Economic Cooperation and Development Working Group compiled results for 38 countries for a subset of measures with broad coverage using new and/or adapted country-specific or multinational surveys and other sources from 2012 to 2015. We also synthesized country learnings to inform future benchmarking. While electronic records are widely used to store and manage patient information at the point of care (all but 2 pilot countries reported use by at least half of primary care physicians, and many had rates above 75%), patient information exchange across organizations/settings is less common. Large variations in the availability and use of telehealth and personal health records also exist. Pilot participation demonstrated interest in cross-national benchmarking. Using the most comparable measures available to date, it showed substantial diversity in health ICT availability and use in all domains. The project also identified methodological considerations (e.g., structural and health systems issues that can affect measurement) important for future comparisons. While health policies and priorities differ, many nations aim to increase access, quality, and/or efficiency of care through effective ICT use. By identifying variations and describing key contextual factors, benchmarking offers the potential to facilitate cross-national learning and accelerate the progress of individual countries. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  4. Combining Rosetta with molecular dynamics (MD): A benchmark of the MD-based ensemble protein design.

    PubMed

    Ludwiczak, Jan; Jarmula, Adam; Dunin-Horkawicz, Stanislaw

    2018-07-01

    Computational protein design is a set of procedures for computing amino acid sequences that will fold into a specified structure. Rosetta Design, a commonly used software for protein design, allows for the effective identification of sequences compatible with a given backbone structure, while molecular dynamics (MD) simulations can thoroughly sample near-native conformations. We benchmarked a procedure in which Rosetta design is started on MD-derived structural ensembles and showed that such a combined approach generates 20-30% more diverse sequences than currently available methods with only a slight increase in computation time. Importantly, the increase in diversity is achieved without a loss in the quality of the designed sequences assessed by their resemblance to natural sequences. We demonstrate that the MD-based procedure is also applicable to de novo design tasks started from backbone structures without any sequence information. In addition, we implemented a protocol that can be used to assess the stability of designed models and to select the best candidates for experimental validation. In sum, our results demonstrate that the MD ensemble-based flexible backbone design can be a viable method for protein design, especially for tasks that require a large pool of diverse sequences. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Hierarchical kernel mixture models for the prediction of AIDS disease progression using HIV structural gp120 profiles

    PubMed Central

    2010-01-01

    Changes to the glycosylation profile on HIV gp120 can influence viral pathogenesis and alter AIDS disease progression. The characterization of glycosylation differences at the sequence level is inadequate as the placement of carbohydrates is structurally complex. However, no structural framework is available to date for the study of HIV disease progression. In this study, we propose a novel machine-learning based framework for the prediction of AIDS disease progression in three stages (RP: rapid progression; SP: slow progression; LTNP: long-term non-progression) using the HIV structural gp120 profile. This new intelligent framework proves to be accurate and provides an important benchmark for predicting AIDS disease progression computationally. The model is trained using a novel HIV gp120 glycosylation structural profile to detect possible stages of AIDS disease progression for the target sequences of HIV+ individuals. The performance of the proposed model was compared to seven existing different machine-learning models on the newly proposed gp120-Benchmark_1 dataset in terms of error-rate (MSE), accuracy (CCI), stability (STD), and complexity (TBM). The novel framework showed better predictive performance with 67.82% CCI, 30.21 MSE, 0.8 STD, and 2.62 TBM on the three stages of AIDS disease progression of 50 HIV+ individuals. This framework is an invaluable bioinformatics tool that will be useful to the clinical assessment of viral pathogenesis. PMID:21143806

  6. Does standardised structured reporting contribute to quality in diagnostic pathology? The importance of evidence-based datasets.

    PubMed

    Ellis, D W; Srigley, J

    2016-01-01

    Key quality parameters in diagnostic pathology include timeliness, accuracy, completeness, conformance with current agreed standards, consistency and clarity in communication. In this review, we argue that with worldwide developments in eHealth and big data, generally, there are two further, often overlooked, parameters if our reports are to be fit for purpose. Firstly, population-level studies have clearly demonstrated the value of providing timely structured reporting data in standardised electronic format as part of system-wide quality improvement programmes. Moreover, when combined with multiple health data sources through eHealth and data linkage, structured pathology reports become central to population-level quality monitoring, benchmarking, interventions and benefit analyses in public health management. Secondly, population-level studies, particularly for benchmarking, require a single agreed international and evidence-based standard to ensure interoperability and comparability. This has been taken for granted in tumour classification and staging for many years, yet international standardisation of cancer datasets is only now underway through the International Collaboration on Cancer Reporting (ICCR). In this review, we present evidence supporting the role of structured pathology reporting in quality improvement for both clinical care and population-level health management. Although this review of available evidence largely relates to structured reporting of cancer, it is clear that the same principles can be applied throughout anatomical pathology generally, as they are elsewhere in the health system.

  7. An empiric estimate of the value of life: updating the renal dialysis cost-effectiveness standard.

    PubMed

    Lee, Chris P; Chertow, Glenn M; Zenios, Stefanos A

    2009-01-01

    Proposals to make decisions about coverage of new technology by comparing the technology's incremental cost-effectiveness with the traditional benchmark of dialysis imply that the incremental cost-effectiveness ratio of dialysis is seen as a proxy for the value of a statistical year of life. The frequently used ratio for dialysis has, however, not been updated to reflect more recently available data on dialysis. We developed a computer simulation model for the end-stage renal disease population and compared cost, life expectancy, and quality-adjusted life expectancy of current dialysis practice relative to three less costly alternatives and to no dialysis. We estimated incremental cost-effectiveness ratios for these alternatives relative to the next least costly alternative and to no dialysis, and analyzed the population distribution of the ratios. Model parameters and costs were estimated using data from the Medicare population and a large integrated health-care delivery system between 1996 and 2003. The sensitivity of results to model assumptions was tested using 38 scenarios of one-way sensitivity analysis, in which the parameters informing the cost, utility, mortality and morbidity, etc., components of the model were perturbed by +/-50%. The incremental cost-effectiveness ratio of current dialysis practice relative to the next least costly alternative is on average $129,090 per quality-adjusted life-year (QALY) ($61,294 per year), but its distribution within the population is wide; the interquartile range is $71,890 per QALY, while the 1st and 99th percentiles are $65,496 and $488,360 per QALY, respectively. Higher incremental cost-effectiveness ratios were associated with older age and more comorbid conditions. Sensitivity to model parameters was comparatively small, with most of the scenarios leading to a change of less than 10% in the ratio. The value of a statistical year of life implied by dialysis practice currently averages $129,090 per QALY ($61,294 per year), but is distributed widely within the dialysis population. The spread suggests that coverage decisions using dialysis as the benchmark may need to incorporate percentile values (which are higher than the average) to be consistent with the Rawlsian principles of justice of preserving the rights and interests of society's most vulnerable patient groups.
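
    For reference, the ratios quoted above follow the standard definition of an incremental cost-effectiveness ratio, computed here for current dialysis practice against the next least costly alternative (a generic formula, not a quantity introduced by this paper):

    ```latex
    % Incremental cost-effectiveness ratio (standard definition):
    \mathrm{ICER}
      = \frac{C_{\text{current}} - C_{\text{alternative}}}
             {E_{\text{current}} - E_{\text{alternative}}}
      \quad \text{[\$ per QALY gained]},
    ```

    where C denotes discounted lifetime cost and E effectiveness in QALYs; the $129,090 per QALY figure is the population average of this quantity over the simulated cohort.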

  8. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts a foundation and infrastructure have been built which allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  9. Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator

    NASA Astrophysics Data System (ADS)

    Chitarin, G.; Agostinetti, P.; Gallo, A.; Marconato, N.; Nakano, H.; Serianni, G.; Takeiri, Y.; Tsumori, K.

    2011-09-01

    For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.

  10. Radiation Coupling with the FUN3D Unstructured-Grid CFD Code

    NASA Technical Reports Server (NTRS)

    Wood, William A.

    2012-01-01

    The HARA radiation code is fully coupled to the FUN3D unstructured-grid CFD code for the purpose of simulating high-energy hypersonic flows. The radiation energy source terms and surface heat transfer, under the tangent slab approximation, are included within the fluid dynamic flow solver. The Fire II flight test, at the Mach-31 1643-second trajectory point, is used as a demonstration case. Comparisons are made with an existing structured-grid capability, the LAURA/HARA coupling. The radiative surface heat transfer rates from the present approach match the benchmark values within 6%. Although radiation coupling is the focus of the present work, convective surface heat transfer rates are also reported, and are seen to vary depending upon the choice of mesh connectivity and FUN3D flux reconstruction algorithm. On a tetrahedral-element mesh the convective heating matches the benchmark at the stagnation point, but under-predicts by 15% on the Fire II shoulder. Conversely, on a mixed-element mesh the convective heating over-predicts at the stagnation point by 20%, but matches the benchmark away from the stagnation region.

  11. A determination of the external forces required to move the benchmark active controls testing model in pure plunge and pure pitch

    NASA Technical Reports Server (NTRS)

    Dcruz, Jonathan

    1993-01-01

    In view of the strong need for a well-documented set of experimental data which is suitable for the validation and/or calibration of modern Computational Fluid Dynamics codes, the Benchmark Models Program was initiated by the Structural Dynamics Division of the NASA Langley Research Center. One of the models in the program, the Benchmark Active Controls Testing Model, consists of a rigid wing of rectangular planform with a NACA 0012 profile and three control surfaces (a trailing-edge control surface, a lower-surface spoiler, and an upper-surface spoiler). The model is affixed to a flexible mount system which allows only plunging and/or pitching motion. An approximate analytical determination of the forces required to move this model, with its control surfaces fixed, in pure plunge and pure pitch at a number of test conditions is included. This provides a good indication of the type of actuator system required to generate the aerodynamic data resulting from pure plunging and pure pitching motion, in which much interest was expressed. The analysis makes use of previously obtained numerical results.

  12. Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Jin, Haoqiang; anMey, Dieter; Hatay, Ferhat F.

    2003-01-01

    With the advent of parallel hardware and software technologies users are faced with the challenge to choose a programming paradigm best suited for the underlying computer architecture. With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors (SMP), parallel programming techniques have evolved to support parallelism beyond a single level. Which programming paradigm is the best will depend on the nature of the given problem, the hardware architecture, and the available software. In this study we will compare different programming paradigms for the parallelization of a selected benchmark application on a cluster of SMP nodes. We compare the timings of different implementations of the same CFD benchmark application employing the same numerical algorithm on a cluster of Sun Fire SMP nodes. The rest of the paper is structured as follows: In section 2 we briefly discuss the programming models under consideration. We describe our compute platform in section 3. The different implementations of our benchmark code are described in section 4 and the performance results are presented in section 5. We conclude our study in section 6.

  14. Additive Manufacturing of Thermoplastic Matrix Composites Using Ultrasonics

    NASA Astrophysics Data System (ADS)

    Olson, Meghan

    Advanced composite materials have great potential for facilitating energy-efficient product design and manufacture if improvements are made to current composite manufacturing processes. This thesis focuses on the development of a novel manufacturing process for thermoplastic composite structures entitled Laser-Ultrasonic Additive Manufacturing ('LUAM'), which is intended to combine the benefits of laser processing technology, developed by Automated Dynamics Inc., with ultrasonic bonding technology that is used commercially for unreinforced polymers. Used together, these technologies have the potential to significantly reduce the energy consumption and void content of thermoplastic composites made using Automated Fiber Placement (AFP). To develop LUAM in a methodical manner with minimal risk, a staged approach was devised based on coupon-level mechanical testing and prototyping with existing equipment. Four key tasks have been identified for this effort: Benchmarking, Ultrasonic Compaction, Laser-Assisted Ultrasonic Compaction, and Demonstration and Characterization of LUAM. This thesis specifically addresses Tasks 1 and 2, i.e., Benchmarking and Ultrasonic Compaction. Task 1, fabricating test specimens using two traditional processes (autoclave and thermal press) and testing structural performance and dimensional accuracy, provides the results of a benchmarking study against which the performance of all future phases will be gauged. Task 2, fabricating test specimens using a non-traditional process (ultrasonic compaction) and evaluating them in a similar fashion, explores the role of ultrasonic processing parameters using three different thermoplastic composite materials. Further development of LUAM, although beyond the scope of this thesis, will combine laser and ultrasonic technology and eventually demonstrate a working system.

  15. Oncology Practice Trends From the National Practice Benchmark

    PubMed Central

    Barr, Thomas R.; Towle, Elaine L.

    2012-01-01

    In 2011, we made predictions on the basis of data from the National Practice Benchmark (NPB) reports from 2005 through 2010. With the new 2011 data in hand, we have revised last year's predictions and projected for the next 3 years. In addition, we make some new predictions that will be tracked in future benchmarking surveys. We also outline a conceptual framework for contemplating these data based on an ecological model of the oncology delivery system. The 2011 NPB data are consistent with last year's prediction of a decrease in the operating margins necessary to sustain a community oncology practice. With the new data in, we now predict these reductions to occur more slowly than previously forecast. We note an ease to the squeeze observed in last year's trend analysis, which will allow more time for practices to adapt their business models for survival and offer the best of these practices an opportunity to invest earnings into operations to prepare for the inevitable shift away from historic payment methodology for clinical service. This year, survey respondents reported changes in business structure, first measured in the 2010 data, indicating an increase in the percentage of respondents who believe that change is coming soon, but the majority still have confidence in the viability of their existing business structure. Although oncology practices are in for a bumpy ride, things are looking less dire this year for practices participating in our survey. PMID:23277766

  16. HDOCK: a web server for protein-protein and protein-DNA/RNA docking based on a hybrid strategy.

    PubMed

    Yan, Yumeng; Zhang, Di; Zhou, Pei; Li, Botong; Huang, Sheng-You

    2017-07-03

    Protein-protein and protein-DNA/RNA interactions play a fundamental role in a variety of biological processes. Determining the complex structures of these interactions is valuable, in which molecular docking has played an important role. To automatically make use of the binding information from the PDB in docking, here we have presented HDOCK, a novel web server of our hybrid docking algorithm of template-based modeling and free docking, in which cases with misleading templates can be rescued by the free docking protocol. The server supports protein-protein and protein-DNA/RNA docking and accepts both sequence and structure inputs for proteins. The docking process is fast and consumes about 10-20 min for a docking run. Tested on the cases with weakly homologous complexes of <30% sequence identity from five docking benchmarks, the HDOCK pipeline tied with template-based modeling on the protein-protein and protein-DNA benchmarks and performed better than template-based modeling on the three protein-RNA benchmarks when the top 10 predictions were considered. The performance of HDOCK became better when more predictions were considered. Combining the results of HDOCK and template-based modeling by ranking the template-based model first further improved the predictive power of the server. The HDOCK web server is available at http://hdock.phys.hust.edu.cn/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. Blurring out hydrogen: The dynamical structure of teflic acid

    NASA Astrophysics Data System (ADS)

    Herbers, S.; Obenchain, D. A.; Kraus, P.; Wachsmuth, D.; Grabow, J.-U.

    2018-05-01

    The microwave spectra of 10 teflic acid isotopologues were recorded in the frequency range of 3-25 GHz using supersonic jet-expansion Fourier transform microwave spectroscopy. Despite being asymmetric in its equilibrium structure, the delocalization of the hydrogen atom leads to a symmetric top vibrational ground state structure. In this work, we present the zero point structure obtained from the experimental rotational constants and an approach to determine the semi-experimental equilibrium structure aided by ab initio data. The Te-O bond length determined in the equilibrium structure is accurate to the picometer and can be used as a benchmark for computational methods treating relativistic effects.

  18. Optimization of the Manufacturing Process of Conical Shell Structures Using Prepreg Laminates

    NASA Astrophysics Data System (ADS)

    Khakimova, Regina; Zimmermann, Rolf; Burau, Florian; Siebert, Marc; Arbelo, Mariano; Castro, Saullo; Degenhardt, Richard

    2014-06-01

    The design and manufacture of an unstiffened composite conical structure, a scaled-down version of the Ariane 5 Midlife Evolution Equipment Bay Structure, is presented. For such benchmarking structures the fiber orientation error is critical, which makes the manufacturing process a major challenge. The paper is therefore focused on the implementation of a tailoring study and on the manufacturing process. The conical structure will be tested to validate a new design approach. This study contributes to the European Union (EU) project DESICOS, whose aim is to develop less conservative design guidelines for imperfection-sensitive thin-walled structures.

  19. Civil Courts.

    ERIC Educational Resources Information Center

    Eaneman, Paulette S.; And Others

    These materials are part of the Project Benchmark series designed to teach secondary students about our legal concepts and systems. This unit focuses on the structure and procedures of the civil court systems. The materials outline common law heritage, kinds of cases, jurisdiction, civil pretrial procedure, trial procedure, and a sample automobile…

  20. Benchmarking in emergency health systems.

    PubMed

    Kennedy, Marcus P; Allen, Jacqueline; Allen, Greg

    2002-12-01

    This paper discusses the role of benchmarking as a component of quality management. It describes the historical background of benchmarking, its competitive origin and the requirement in today's health environment for a more collaborative approach. The classical 'functional and generic' types of benchmarking are discussed with a suggestion to adopt a different terminology that describes the purpose and practicalities of benchmarking. Benchmarking is not without risks. The consequence of inappropriate focus and the need for a balanced overview of process is explored. The competition that is intrinsic to benchmarking is questioned and the negative impact it may have on improvement strategies in poorly performing organizations is recognized. The difficulty in achieving cross-organizational validity in benchmarking is emphasized, as is the need to scrutinize benchmarking measures. The cost effectiveness of benchmarking projects is questioned and the concept of 'best value, best practice' in an environment of fixed resources is examined.

  1. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
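
    As an example of such a 'pencil and paper' specification, the EP (embarrassingly parallel) kernel of the NAS suite can be stated in a few lines: generate Gaussian deviates by the Marsaglia polar method and tally them in square annuli. The sketch below follows that outline but substitutes a NumPy generator for the benchmark's prescribed linear congruential generator, so it would not pass the official verification sums.

    ```python
    import numpy as np

    def nas_ep_sketch(n=1_000_000, seed=271828183):
        """Sketch of the NAS 'EP' kernel: draw uniform pairs in (-1, 1),
        accept those inside the unit disc, transform them to Gaussian
        pairs by the polar (Marsaglia) method, and count pairs in annular
        bins indexed by floor(max(|X|, |Y|))."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-1.0, 1.0, n)
        y = rng.uniform(-1.0, 1.0, n)
        t = x * x + y * y
        keep = (t > 0.0) & (t <= 1.0)               # acceptance-rejection
        f = np.sqrt(-2.0 * np.log(t[keep]) / t[keep])
        gx, gy = x[keep] * f, y[keep] * f           # independent Gaussians
        bins = np.floor(np.maximum(np.abs(gx), np.abs(gy))).astype(int)
        return np.bincount(bins)                    # counts per annulus

    print(nas_ep_sketch())
    ```

    Because each pair is independent, the loop parallelizes trivially, which is exactly what makes EP a probe of raw floating-point and random-number throughput rather than communication.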

  2. A stable partitioned FSI algorithm for incompressible flow and deforming beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, L., E-mail: lil19@rpi.edu; Henshaw, W.D., E-mail: henshw@rpi.edu; Banks, J.W., E-mail: banksj3@rpi.edu

    2016-05-01

    An added-mass partitioned (AMP) algorithm is described for solving fluid–structure interaction (FSI) problems coupling incompressible flows with thin elastic structures undergoing finite deformations. The new AMP scheme is fully second-order accurate and stable, without sub-time-step iterations, even for very light structures when added-mass effects are strong. The fluid, governed by the incompressible Navier–Stokes equations, is solved in velocity-pressure form using a fractional-step method; large deformations are treated with a mixed Eulerian-Lagrangian approach on deforming composite grids. The motion of the thin structure is governed by a generalized Euler–Bernoulli beam model, and these equations are solved in a Lagrangian frame using two approaches, one based on finite differences and the other on finite elements. The key AMP interface condition is a generalized Robin (mixed) condition on the fluid pressure. This condition, which is derived at a continuous level, has no adjustable parameters and is applied at the discrete level to couple the partitioned domain solvers. Special treatment of the AMP condition is required to couple the finite-element beam solver with the finite-difference-based fluid solver, and two coupling approaches are described. A normal-mode stability analysis is performed for a linearized model problem involving a beam separating two fluid domains, and it is shown that the AMP scheme is stable independent of the ratio of the mass of the fluid to that of the structure. A traditional partitioned (TP) scheme using a Dirichlet–Neumann coupling for the same model problem is shown to be unconditionally unstable if the added mass of the fluid is too large. A series of benchmark problems of increasing complexity are considered to illustrate the behavior of the AMP algorithm, and to compare the behavior with that of the TP scheme. The results of all these benchmark problems verify the stability and accuracy of the AMP scheme. Results for one benchmark problem modeling blood flow in a deforming artery are also compared with corresponding results available in the literature.
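
    As context for the model problem mentioned above, the simplest linear form of the structural model is the classical Euler-Bernoulli beam driven by the fluid traction; this is a schematic reduction, not the paper's full generalized model:

    ```latex
    % Linear Euler-Bernoulli beam forced by fluid traction (schematic form):
    \bar{\rho}\, h \,\frac{\partial^{2} w}{\partial t^{2}}
      + E I \,\frac{\partial^{4} w}{\partial x^{4}}
      = f(x,t),
    ```

    where w(x,t) is the transverse displacement, \bar{\rho} h the mass per unit length, EI the bending stiffness, and f the net normal fluid traction across the beam. The added-mass difficulty arises because f itself depends on the beam's acceleration through the fluid pressure, which is what the generalized Robin condition on the pressure is designed to absorb.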

  3. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  4. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark-generation products so that they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We also describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such evaluation data sets.

  5. Summary of the Tandem Cylinder Solutions from the Benchmark Problems for Airframe Noise Computations-I Workshop

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2011-01-01

    Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark Problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured, and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and against experimental data from two facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details that would be necessary to compute the noise remains challenging. In particular, it was unclear how best to simulate the effects of the experimental transition strip and the associated high-Reynolds-number effects. Furthermore, capturing the spanwise variation proved difficult.

  6. Web-client based distributed generalization and geoprocessing

    USGS Publications Warehouse

    Wolf, E.B.; Howe, K.

    2009-01-01

    Generalization and geoprocessing operations on geospatial information were once the domain of complex software running on high-performance workstations. Currently, these computationally intensive processes are the domain of desktop applications. Recent efforts have been made to move geoprocessing operations server-side in a distributed, web accessible environment. This paper initiates research into portable client-side generalization and geoprocessing operations as part of a larger effort in user-centered design for the US Geological Survey's The National Map. An implementation of the Ramer-Douglas-Peucker (RDP) line simplification algorithm was created in the open source OpenLayers geoweb client. This algorithm implementation was benchmarked using differing data structures and browser platforms. The implementation and results of the benchmarks are discussed in the general context of client-side geoprocessing.
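
    For context on the algorithm being benchmarked, a minimal recursive RDP implementation is sketched below. It is given in Python for illustration rather than the JavaScript of the OpenLayers client, and the names are ours, not those of the OpenLayers code.

    ```python
    import math

    def _perp_dist(pt, start, end):
        """Perpendicular distance from pt to the line through start and end."""
        (x, y), (x1, y1), (x2, y2) = pt, start, end
        dx, dy = x2 - x1, y2 - y1
        if dx == 0 and dy == 0:                      # degenerate segment
            return math.hypot(x - x1, y - y1)
        return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

    def rdp(points, epsilon):
        """Keep the endpoints; recurse on the farthest interior vertex only
        if it deviates from the chord by more than epsilon."""
        if len(points) < 3:
            return list(points)
        dmax, index = 0.0, 0
        for i in range(1, len(points) - 1):
            d = _perp_dist(points[i], points[0], points[-1])
            if d > dmax:
                dmax, index = d, i
        if dmax > epsilon:                           # split and recurse
            left = rdp(points[:index + 1], epsilon)
            right = rdp(points[index:], epsilon)
            return left[:-1] + right                 # drop duplicated split point
        return [points[0], points[-1]]

    # A noisy polyline collapses to a few vertices at a coarse tolerance.
    print(rdp([(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)], 1.0))
    ```

    The benchmarking question in the paper then reduces to how the choice of data structure (e.g., arrays of coordinate pairs versus object-based geometries) and the browser's JavaScript engine affect the runtime of exactly this kind of recursion.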

  7. Using 171,173Yb(d,p) to benchmark a surrogate reaction for neutron capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatarik, R; Bernstein, L; Burke, J

    2008-08-08

    Neutron capture cross sections on unstable nuclei are important for many applications in nuclear structure and astrophysics. Measuring these cross sections directly is a major challenge and often impossible. An indirect approach for measuring these cross sections is the surrogate reaction method, which makes it possible to relate the desired cross section to a cross section of an alternate reaction that proceeds through the same compound nucleus. To benchmark the validity of using the (d,pγ) reaction as a surrogate for (n,γ), the 171,173Yb(d,pγ) reactions were measured with the goal to reproduce the known [1] neutron capture cross section ratios of these nuclei.

  8. LipidQC: Method Validation Tool for Visual Comparison to SRM 1950 Using NIST Interlaboratory Comparison Exercise Lipid Consensus Mean Estimate Values.

    PubMed

    Ulmer, Candice Z; Ragland, Jared M; Koelmel, Jeremy P; Heckert, Alan; Jones, Christina M; Garrett, Timothy J; Yost, Richard A; Bowden, John A

    2017-12-19

    As advances in analytical separation techniques, mass spectrometry instrumentation, and data processing platforms continue to spur growth in the lipidomics field, more structurally unique lipid species are detected and annotated. The lipidomics community is in need of benchmark reference values to assess the validity of various lipidomics workflows in providing accurate quantitative measurements across the diverse lipidome. LipidQC addresses the harmonization challenge in lipid quantitation by providing a semiautomated process, independent of analytical platform, for visual comparison of experimental results of National Institute of Standards and Technology Standard Reference Material (SRM) 1950, "Metabolites in Frozen Human Plasma", against benchmark consensus mean concentrations derived from the NIST Lipidomics Interlaboratory Comparison Exercise.
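
    The comparison LipidQC automates can be pictured with a toy sketch (this is not the tool's actual code or data format; lipid names, concentrations, and uncertainties below are invented for illustration):

    ```python
    # Toy version of a consensus-mean check in the spirit of LipidQC.
    # All numbers are hypothetical; units are arbitrary concentration units.
    consensus = {                  # lipid: (consensus mean, expanded uncertainty)
        "PC 16:0/18:1": (240.0, 30.0),
        "TG 52:2":      (110.0, 25.0),
    }
    measured = {"PC 16:0/18:1": 255.0, "TG 52:2": 160.0}

    for lipid, (mean, u) in consensus.items():
        x = measured[lipid]
        verdict = "within" if abs(x - mean) <= u else "OUTSIDE"
        print(f"{lipid}: measured {x:.1f} vs consensus {mean:.1f} +/- {u:.1f} -> {verdict}")
    ```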

  9. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  10. On the Helix Propensity in Generalized Born Solvent Descriptions of Modeling the Dark Proteome

    DTIC Science & Technology

    2017-01-10

    …benchmarks of conformational sampling methods and their all-atom force fields plus solvent descriptions to accurately model structural transitions on a… …atom simulations of proteins is the replacement of explicit water interactions with a continuum description of treating implicitly the bulk physical… …structure was reported by Amarasinghe and coworkers (Leung et al., 2015) of the Ebola nucleoprotein NP in complex with a 28-residue peptide extracted…

  11. Broad frequency band full field measurements for advanced applications: Point-wise comparisons between optical technologies

    NASA Astrophysics Data System (ADS)

    Zanarini, Alessandro

    2018-01-01

    The progress of optical systems nowadays makes complex dynamic measurements and modal tests available for lightweight structures, each technology with its own advantages, drawbacks and preferred usage domains. It is thus easier than before to obtain highly spatially defined vibration patterns for many applications in vibration engineering, testing and general product development. The potential of three completely different technologies is benchmarked here on a common test rig and on advanced applications. SLDV, dynamic ESPI and high-speed DIC are first deployed in a complex and unique test estimating FRFs with high spatial accuracy on a thin vibrating plate. The plate exhibits broad-band dynamics and high modal density in the common frequency range where the techniques find an operative intersection. A point-wise comparison, by means of discrete geometry transforms, puts all three technologies on trial at each physical point of the surface. Full-field measurement technologies do not only estimate displacement fields on a refined grid: they can exploit the spatial consistency of the results across neighbouring locations, applying numerical differentiation operators in the spatial domain to obtain rotational degrees of freedom and superficial dynamic strain distributions with enhanced quality compared to other technologies in the literature. Starting from superior-quality receptance maps from the three full-field systems, this work calculates and compares rotational and dynamic strain FRFs. Dynamic stress FRFs can then be modelled directly from the latter by means of a constitutive model, avoiding the costly and time-consuming steps of building and tuning a numerical dynamic model of a flexible component or structure in real-life conditions. Once dynamic stress FRFs are obtained, spectral fatigue approaches can attempt to predict the life of a component under many excitation conditions. Different spectral shapings of the excitation can easily be used to extend the comparison within any of the spectral approaches for fatigue-life calculation, highlighting the benefits and drawbacks of a direct experimental approach to failure and risk assessment in structural dynamics when dealing with complex patterns in real-life testing. Are optical measurements and spatially dense datasets really effective for advanced model updating of lightweight structures with complex structural dynamics? The noise in the raw signal of some experiments may make it difficult to exploit the added data profitably in a model updating procedure. Model updating results are compared between scanning and native full-field technologies, with comments and details on the test rig and on the advantages and drawbacks of each approach. The identification of EMA models highlights the increasing quality of shapes obtainable from native full-field high-resolution systems, against the (sometimes unexpectedly poor) quality of those from SLDV.

  12. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.

    1991-01-01

    A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification: all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  13. Linking user and staff perspectives in the evaluation of innovative transition projects for youth with disabilities.

    PubMed

    McAnaney, Donal F; Wynne, Richard F

    2016-06-01

    A key challenge in formative evaluation is to gather appropriate evidence to inform the continuous improvement of initiatives. In the absence of outcome data, the programme evaluator often must rely on the perceptions of beneficiaries and staff in generating insight into what is making a difference. The article describes the approach adopted in an evaluation of 15 innovative projects supporting school-leavers with disabilities in making the transition to education, work and life in community settings. Two complementary processes provided an insight into what project staff and leadership viewed as the key project activities and features that facilitated successful transition as well as the areas of quality of life (QOL) that participants perceived as having been impacted positively by the projects. A comparison was made between participants' perceptions of QOL impact with the views of participants in services normally offered by the wider system. This revealed that project participants were significantly more positive in their views than participants in traditional services. In addition, the processes and activities of the more highly rated projects were benchmarked against less highly rated projects and also with usually available services. Even in the context of a range of intervening variables such as level and complexity of participant needs and variations in the stage of development of individual projects, the benchmarking process indicated a number of project characteristics that were highly valued by participants. © The Author(s) 2016.

  14. Using Toyota's A3 Thinking for Analyzing MBA Business Cases

    ERIC Educational Resources Information Center

    Anderson, Joe S.; Morgan, James N.; Williams, Susan K.

    2011-01-01

    A3 Thinking is fundamental to Toyota's benchmark management philosophy and to their lean production system. It is used to solve problems, gain agreement, mentor team members, and lead organizational improvements. A structured problem-solving approach, A3 Thinking builds improvement opportunities through experience. We used "The Toyota…

  15. Object Recognition Memory and the Rodent Hippocampus

    ERIC Educational Resources Information Center

    Broadbent, Nicola J.; Gaskin, Stephane; Squire, Larry R.; Clark, Robert E.

    2010-01-01

    In rodents, the novel object recognition task (NOR) has become a benchmark task for assessing recognition memory. Yet, despite its widespread use, a consensus has not developed about which brain structures are important for task performance. We assessed both the anterograde and retrograde effects of hippocampal lesions on performance in the NOR…

  16. Engineering Education as a Complex System

    ERIC Educational Resources Information Center

    Gattie, David K.; Kellam, Nadia N.; Schramski, John R.; Walther, Joachim

    2011-01-01

    This paper presents a theoretical basis for cultivating engineering education as a complex system that will prepare students to think critically and make decisions with regard to poorly understood, ill-structured issues. Integral to this theoretical basis is a solution space construct developed and presented as a benchmark for evaluating…

  17. Selecting Peer Institutions with IPEDS and Other Nationally Available Data

    ERIC Educational Resources Information Center

    Carrigan, Sarah D.

    2012-01-01

    The process of identifying and selecting peers for a college or university is one of this volume's definitions for "benchmarking": "a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to identify opportunities for…

  18. Solving Boltzmann and Fokker-Planck Equations Using Sparse Representation

    DTIC Science & Technology

    2011-05-31

    material science. We have computed the electronic structure of a 2D quantum dot system, and compared the efficiency with the benchmark software OCTOPUS. For…one self-consistent iteration step with 512 electrons, OCTOPUS costs 1091 sec, and selected inversion costs 9.76 sec. The algorithm exhibits

  19. Cascades/Aleutian Play Fairway Analysis: Data and Map Files

    DOE Data Explorer

    Lisa Shevenell

    2015-11-15

    Contains Excel data files used to quantitatively rank the geothermal potential of each of the young volcanic centers of the Cascade and Aleutian Arcs, using volcanic centers with established power production worldwide as benchmarks. Also contains shapefiles used in the play fairway analysis with power plant, volcano, geochemistry, and structural data.

  20. 7 CFR 1717.1204 - Policies and conditions applicable to settlements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and action plans by the members to change their operations, management, and organizational structure... to meet its financial obligations will be based on analyses and documentation by RUS of the borrower... based on comparisons with benchmark electric utilities; and (H) The accuracy and completeness of the...

  1. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...

  2. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...

  3. Functional annotation by sequence-weighted structure alignments: statistical analysis and case studies from the Protein 3000 structural genomics project in Japan.

    PubMed

    Standley, Daron M; Toh, Hiroyuki; Nakamura, Haruki

    2008-09-01

    A method to functionally annotate structural genomics targets, based on a novel structural alignment scoring function, is proposed. In the proposed score, position-specific scoring matrices are used to weight structurally aligned residue pairs to highlight evolutionarily conserved motifs. The functional form of the score is first optimized for discriminating domains belonging to the same Pfam family from domains belonging to different families but the same CATH or SCOP superfamily. In the optimization stage, we consider four standard weighting functions as well as our own, the "maximum substitution probability," and combinations of these functions. The optimized score achieves an area of 0.87 under the receiver-operating characteristic curve with respect to identifying Pfam families within a sequence-unique benchmark set of domain pairs. Confidence measures are then derived from the benchmark distribution of true-positive scores. The alignment method is next applied to the task of functionally annotating 230 query proteins released to the public as part of the Protein 3000 structural genomics project in Japan. Of these queries, 78 were found to align to templates with the same Pfam family as the query or had sequence identities ≥ 30%. Another 49 queries were found to match more distantly related templates. Within this group, the template predicted by our method to be the closest functional relative was often not the most structurally similar. Several nontrivial cases are discussed in detail. Finally, 103 queries matched templates at the fold level, but not the family or superfamily level, and remain functionally uncharacterized. 2008 Wiley-Liss, Inc.

  4. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...

  5. Stochastic metallic-glass cellular structures exhibiting benchmark strength.

    PubMed

    Demetriou, Marios D; Veazey, Chris; Harmon, John S; Schramm, Joseph P; Johnson, William L

    2008-10-03

    By identifying the key characteristic "structural scales" that dictate the resistance of a porous metallic glass against buckling and fracture, stochastic highly porous metallic-glass structures are designed that yield plastically and inherit the high plastic yield strength of the amorphous metal. The strengths attained by the present foams appear to equal or exceed those of highly engineered metal foams, such as Ti-6Al-4V or ferrous-metal foams, at comparable levels of porosity, placing the present metallic-glass foams among the strongest foams known to date.

  6. A socio-technical approach for improving a Brazilian shoe manufacturing system.

    PubMed

    Renner, J S; de M Guimarães, L B; de Oliveira, P A B

    2012-01-01

    This article presents a macroergonomic intervention in a footwear company in Rio Grande do Sul, Brazil, to improve both the quality of life of the employees and productivity by optimizing the traditional Taylor/Ford work organization. Multi-functionality and team working were implemented as means of making tasks more flexible and richer, and the working hours were changed. The results showed a reduction in human and material resource costs and a consequent improvement in health and in workers' quality of life. Although middle managerial staff displayed strong resistance to the project and to breaking traditional production paradigms, the socio-technical system has been implemented throughout the plant and is expected to become the benchmark for other companies in the sector.

  7. A Review of Flood Loss Models as Basis for Harmonization and Benchmarking

    PubMed Central

    Kreibich, Heidi; Franco, Guillermo; Marechal, David

    2016-01-01

    Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models, as it affects prioritization and investment decisions in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Due to a lack of detailed and reliable flood loss data, first-order validations are difficult to accomplish, so that model comparisons in terms of benchmarking are essential. It is checked whether the models are informed by existing data and knowledge and whether the assumptions made in the models are aligned with that knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before such benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss (or flood vulnerability) relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date, containing nearly a thousand vulnerability functions. These functions are highly heterogeneous, and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal. This paper presents, by way of example, an approach for a quantitative comparison of disparate models via reduction to the joint input variables of all models. Harmonization of models for benchmarking and comparison requires profound insight into the model structures, mechanisms and underlying assumptions. Possibilities and challenges that exist in model harmonization and in applying the inventory in a benchmarking framework are discussed. PMID:27454604

  8. A Review of Flood Loss Models as Basis for Harmonization and Benchmarking.

    PubMed

    Gerl, Tina; Kreibich, Heidi; Franco, Guillermo; Marechal, David; Schröter, Kai

    2016-01-01

    Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models, as it affects prioritization and investment decisions in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Due to a lack of detailed and reliable flood loss data, first-order validations are difficult to accomplish, so that model comparisons in terms of benchmarking are essential. It is checked whether the models are informed by existing data and knowledge and whether the assumptions made in the models are aligned with that knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before such benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss (or flood vulnerability) relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date, containing nearly a thousand vulnerability functions. These functions are highly heterogeneous, and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal. This paper presents, by way of example, an approach for a quantitative comparison of disparate models via reduction to the joint input variables of all models. Harmonization of models for benchmarking and comparison requires profound insight into the model structures, mechanisms and underlying assumptions. Possibilities and challenges that exist in model harmonization and in applying the inventory in a benchmarking framework are discussed.

  9. Recalibrated Equations for Determining Effect of Oil Filtration on Rolling Bearing Life

    NASA Technical Reports Server (NTRS)

    Needelman, William M.; Zaretsky, Erwin V.

    2014-01-01

    In 1991, Needelman and Zaretsky presented a set of empirically derived equations for bearing fatigue life (adjustment) factors (LFs) as a function of oil filter ratings. These equations for life factors were incorporated into the reference book, "STLE Life Factors for Rolling Bearings." These equations were normalized (LF = 1) to a 10-micrometer filter rating at β_x = 200 (normal cleanliness) as it was then defined. Over the past 20 years, these life factors based on oil filtration have been used in conjunction with ANSI/ABMA standards and bearing computer codes to predict rolling bearing life. Also, additional experimental studies have been made by other investigators into the relationship between rolling bearing life and the size, number, and type of particle contamination. During this time period filter ratings have also been revised and improved, and they now use particle counting calibrated to a new National Institute of Standards and Technology (NIST) reference material, NIST SRM 2806, 1997. This paper reviews the relevant bearing life studies and describes the new filter ratings. New filter ratings, β_x(c) = 200 and β_x(c) = 1000, are benchmarked to old filter ratings, β_x = 200, and vice versa. Two separate sets of filter LF values were derived based on the new filter ratings for roller bearings and ball bearings, respectively. Filter LFs can be calculated for the new filter ratings.

  10. Estimation of vanadium water quality benchmarks for the protection of aquatic life with relevance to the Athabasca Oil Sands region using species sensitivity distributions.

    PubMed

    Schiffer, Stephanie; Liber, Karsten

    2017-11-01

    Elevated vanadium (V) concentrations in oil sands coke, which is produced and stored on site of some major Athabasca Oil Sands companies, could pose a risk to aquatic ecosystems in northern Alberta, Canada, depending on its future storage and utilization. In the present study, V toxicity was determined in reconstituted Athabasca River water to various freshwater organisms, including 2 midge species (Chironomus dilutus and Chironomus riparius; 4-d and 30-d to 40-d exposures) and 2 freshwater fish species (Oncorhynchus mykiss and Pimephales promelas; 4-d and 28-d exposures) to facilitate estimation of water quality benchmarks. The acute toxicity of V was 52.0 and 63.2 mg/L for C. dilutus and C. riparius, respectively, and 4.0 and 14.8 mg V/L for P. promelas and O. mykiss, respectively. Vanadium exposure significantly impaired adult emergence of C. dilutus and C. riparius at concentrations ≥16.7 (31.6% reduction) and 8.3 (18.0% reduction) mg/L, respectively. Chronic toxicity in fish presented as lethality, with chronic 28-d LC50s of 0.5 and 4.3 mg/L for P. promelas and O. mykiss, respectively. These data were combined with data from the peer-reviewed literature, and separate acute and chronic species sensitivity distributions (SSDs) were constructed. The acute and chronic hazardous concentrations endangering only 5% of species (HC5) were estimated as 0.64 and 0.05 mg V/L, respectively. These new data for V toxicity to aquatic organisms ensure that there are now adequate data available for regulatory agencies to develop appropriate water quality guidelines for use in the Athabasca Oil Sands region and elsewhere. Until then, the HC5 values presented in the present study could serve as interim benchmarks for the protection of aquatic life from exposure to hazardous levels of V in local aquatic environments. Environ Toxicol Chem 2017;36:3034-3044. © 2017 SETAC.
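
    To make the SSD/HC5 machinery concrete, the sketch below fits a log-normal SSD to the four acute endpoints quoted above and reads off the 5th percentile. This is only an illustration: the study's SSDs combine these data with many more literature values, so the number printed here is not the reported HC5.

    ```python
    import numpy as np
    from scipy import stats

    # Acute endpoints (mg V/L) quoted in the abstract; a defensible SSD
    # requires many more species than these four.
    acute = np.array([52.0, 63.2, 4.0, 14.8])

    logs = np.log10(acute)
    mu, sigma = logs.mean(), logs.std(ddof=1)    # log-normal SSD parameters

    # HC5 = concentration hazardous to 5% of species = 5th percentile of the SSD.
    hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
    print(f"Illustrative acute HC5: {hc5:.2f} mg V/L")
    ```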

  11. A New Perspective on Fatigue Performance of Advanced High- Strength Steels (AHSS) GMAW Joints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Zhili; Chiang, Dr. John; Kuo, Dr. Min

    2008-01-01

    Weld fatigue performance is a critical aspect for application of advanced high-strength steels (AHSS) in automotive body structures. A comparative study has been conducted to evaluate the fatigue life of AHSS welds. The materials studied included seven AHSS of various strength levels - DP 600, DP 780, DP 980, M130, M220, solution annealed boron and fully hardened boron steels. Two conventional steels, HSLA 590 and DR 210, were also included for baseline comparison. Lap fillet welds were made on 2-mm nominal thick sheets by the gas metal arc welding (GMAW) process. Fatigue tests were conducted at a number of stress levels to obtain the S/N curves of the weld joints. It was found that, unlike under static and impact loading conditions, the fatigue performance of AHSS is not influenced by HAZ softening. There are appreciable differences in the fatigue lives among different AHSS. Changes in weld parameters can influence the fatigue life of the weld joints, particularly those of the higher-strength AHSS. A model is developed to predict the fatigue performance of AHSS welds. The validity of the model is benchmarked against the experimental results. This model is capable of capturing the experimentally observed effects of weld geometry, weld microstructure, and strength on fatigue performance. The theoretical basis and application of the newly developed fatigue modeling methodology are discussed.

  12. Applying DEKOIS 2.0 in structure-based virtual screening to probe the impact of preparation procedures and score normalization.

    PubMed

    Ibrahim, Tamer M; Bauer, Matthias R; Boeckler, Frank M

    2015-01-01

    Structure-based virtual screening techniques can help to identify new lead structures and complement other screening approaches in drug discovery. Prior to docking, the data (protein crystal structures and ligands) should be prepared with great attention to molecular and chemical details. Using a subset of 18 diverse targets from the recently introduced DEKOIS 2.0 benchmark set library, we found differences in the virtual screening performance of two popular docking tools (GOLD and Glide) when employing two different commercial packages (e.g. MOE and Maestro) for preparing input data. We systematically investigated the factors that could be responsible for the observed differences in selected sets. For the Angiotensin-I-converting enzyme dataset, preparation of the bioactive molecules clearly exerted the highest influence on VS performance compared to preparation of the decoys or the target structure. The major contributing factors were different protonation states, molecular flexibility, and differences in the input conformation (particularly for cyclic moieties) of bioactives. In addition, score normalization strategies eliminated the biased docking scores shown by GOLD (ChemPLP) for the larger bioactives and produced a better performance. Generalizing these normalization strategies to the 18 DEKOIS 2.0 sets improved the performance for the majority of GOLD (ChemPLP) dockings, while it was detrimental for the majority of Glide (SP) dockings. In conclusion, we exemplify herein possible issues arising particularly during the preparation stage of molecular data and demonstrate to which extent these issues can perturb virtual screening performance. We provide insights into what problems can occur and should be avoided when generating benchmarks to characterize virtual screening performance. In particular, careful selection of an appropriate molecular preparation setup for the bioactive set and the use of score normalization for docking with GOLD (ChemPLP) appear to be of great importance for the screening performance. For virtual screening campaigns, we recommend investing time and effort into including alternative preparation workflows in the generation of the master library, even at the cost of including multiple representations of each molecule. Graphical Abstract: Using DEKOIS 2.0 benchmark sets in structure-based virtual screening to probe the impact of molecular preparation and score normalization.
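
    The abstract does not spell out the normalization formula, but one common strategy for removing the size bias of additive docking scores, sketched here as an assumption rather than the paper's exact scheme, is to divide the score by a power of the ligand's heavy-atom count:

    ```python
    # Illustrative size normalization of docking scores. Additive scores such
    # as GOLD's ChemPLP tend to grow with ligand size, biasing rankings toward
    # large bioactives; dividing by a power of the heavy-atom count (HAC)
    # damps that bias. The exponent is a modeling choice, not a fixed constant.
    def normalize_score(score, n_heavy, exponent=1.0 / 3.0):
        return score / (n_heavy ** exponent)

    # Hypothetical molecules: a large bioactive and a small decoy.
    for name, score, hac in [("large_bioactive", 92.0, 40), ("small_decoy", 60.0, 18)]:
        print(name, round(normalize_score(score, hac), 2))
    ```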

  13. Incremental cost effectiveness evaluation in clinical research.

    PubMed

    Krummenauer, Frank; Landwehr, I

    2005-01-28

    The health economic evaluation of therapeutic and diagnostic strategies is of increasing importance in clinical research, so clinical trialists must consider health economic aspects more frequently. However, whereas they are quite familiar with classical effect measures in clinical trials, the corresponding parameters in the health economic evaluation of therapeutic and diagnostic procedures are still less common. The concepts of incremental cost effectiveness ratios (ICERs) and incremental net health benefit (INHB) are illustrated and contrasted using the cost effectiveness evaluation of cataract surgery with monofocal and multifocal intraocular lenses. ICERs relate the costs of a treatment to its clinical benefit in terms of a ratio expression (indexed as Euro per clinical benefit unit). ICERs can therefore be directly compared to a pre-specified willingness-to-pay (WTP) benchmark, which represents the maximum cost health insurers would invest to achieve one clinical benefit unit. INHBs estimate a treatment's net clinical benefit after accounting for its cost increase versus an established therapeutic standard. Resource allocation rules can be formulated by means of both effect measures. Both the ICER and the INHB approach enable the definition of directional resource allocation rules. The allocation decisions arising from these rules are identical, as long as the willingness-to-pay benchmark is fixed in advance. Therefore both strategies crucially call for a priori determination of both the underlying clinical benefit endpoint (such as gain in vision lines after cataract surgery or gain in quality-adjusted life years) and the corresponding willingness-to-pay benchmark. The use of incremental cost effectiveness and net health benefit estimates provides a rationale for health economic allocation discussions and funding decisions. It implies the same requirements on trial protocols as already established for clinical trials, that is, the a priori definition of the primary hypothesis (formulated as an allocation rule involving a pre-specified willingness-to-pay benchmark) and the primary clinical benefit endpoint (as a rationale for effectiveness evaluation).
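
    The two effect measures and their equivalence are easy to state concretely. In the sketch below (all costs, effects, and the WTP benchmark are invented), a treatment is adopted when ICER < WTP, which is exactly the condition INHB > 0:

    ```python
    # Hypothetical worked example of ICER and INHB (all numbers invented).
    c0, e0 = 1200.0, 0.50   # cost (Euro) and effect (e.g., QALYs) of the standard
    c1, e1 = 2100.0, 0.65   # cost and effect of the new treatment
    wtp = 10000.0           # willingness-to-pay benchmark (Euro per effect unit)

    icer = (c1 - c0) / (e1 - e0)            # Euro per additional effect unit
    inhb = (e1 - e0) - (c1 - c0) / wtp      # net benefit, in effect units

    print(f"ICER = {icer:.0f} Euro/unit, INHB = {inhb:.3f} units")
    assert (icer < wtp) == (inhb > 0)       # both rules yield the same decision
    ```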

  14. Designing nanomaterials to maximize performance and minimize undesirable implications guided by the Principles of Green Chemistry.

    PubMed

    Gilbertson, Leanne M; Zimmerman, Julie B; Plata, Desiree L; Hutchison, James E; Anastas, Paul T

    2015-08-21

    The Twelve Principles of Green Chemistry were first published in 1998 and provide a framework that has been adopted not only by chemists, but also by design practitioners and decision-makers (e.g., materials scientists and regulators). The development of the Principles was initially motivated by the need to address decades of unintended environmental pollution and human health impacts from the production and use of hazardous chemicals. Yet, for over a decade now, the Principles have been applied to the synthesis and production of engineered nanomaterials (ENMs) and the products they enable. While the combined efforts of the global scientific community have led to promising advances in the field of nanotechnology, there remain significant research gaps and the opportunity to leverage the potential global economic, societal and environmental benefits of ENMs safely and sustainably. As such, this tutorial review benchmarks the successes to date and identifies critical research gaps to be considered as future opportunities for the community to address. A sustainable material design framework is proposed that emphasizes the importance of establishing structure-property-function (SPF) and structure-property-hazard (SPH) relationships to guide the rational design of ENMs. The goal is to achieve or exceed the functional performance of current materials and the technologies they enable, while minimizing inherent hazard to avoid risk to human health and the environment at all stages of the life cycle.

  15. Nutrient and pesticide contamination bias estimated from field blanks collected at surface-water sites in U.S. Geological Survey Water-Quality Networks, 2002–12

    USGS Publications Warehouse

    Medalie, Laura; Martin, Jeffrey D.

    2017-08-14

    Potential contamination bias was estimated for 8 nutrient analytes and 40 pesticides in stream water collected by the U.S. Geological Survey at 147 stream sites from across the United States, and representing a variety of hydrologic conditions and site types, for water years 2002–12. This study updates previous U.S. Geological Survey evaluations of potential contamination bias for nutrients and pesticides. Contamination is potentially introduced to water samples by exposure to airborne gases and particulates, from inadequate cleaning of sampling or analytic equipment, and from inadvertent sources during sample collection, field processing, shipment, and laboratory analysis. Potential contamination bias, based on frequency and magnitude of detections in field blanks, is used to determine whether or under what conditions environmental data might need to be qualified for the interpretation of results in the context of comparisons with background levels, drinking-water standards, aquatic-life criteria or benchmarks, or human-health benchmarks. Environmental samples for which contamination bias as determined in this report applies are those from historical U.S. Geological Survey water-quality networks or programs that were collected during the same time frame and according to the same protocols and that were analyzed in the same laboratory as field blanks described in this report.Results from field blanks for ammonia, nitrite, nitrite plus nitrate, orthophosphate, and total phosphorus were partitioned by analytical method; results from the most commonly used analytical method for total phosphorus were further partitioned by date. Depending on the analytical method, 3.8, 9.2, or 26.9 percent of environmental samples, the last of these percentages pertaining to all results from 2007 through 2012, were potentially affected by ammonia contamination. Nitrite contamination potentially affected up to 2.6 percent of environmental samples collected between 2002 and 2006 and affected about 3.3 percent of samples collected between 2007 and 2012. The percentages of environmental samples collected between 2002 and 2011 that were potentially affected by nitrite plus nitrate contamination were 7.3 for samples analyzed with the low-level method and 0.4 for samples analyzed with the standard-level method. These percentages increased to 14.8 and 2.2 for samples collected in 2012 and analyzed using replacement low- and standard-level methods, respectively. The maximum potentially affected concentrations for nitrite and for nitrite plus nitrate were much less than their respective maximum contamination levels for drinking-water standards. Although contamination from particulate nitrogen can potentially affect up to 21.2 percent and that from total Kjeldahl nitrogen can affect up to 16.5 percent of environmental samples, there are no critical or background levels for these substances.For total nitrogen, orthophosphate, and total phosphorus, contamination in a small percentage of environmental samples might be consequential for comparisons relative to impairment risks or background levels. 
At the low ends of the respective ranges of impairment risk for these nutrients, contamination in up to 5 percent of stream samples could account for at least 23 percent of measured concentrations of total nitrogen, for at least 40 or 90 percent of concentrations of orthophosphate, depending on the analytical method, and for 31 to 76 percent of concentrations of total phosphorus, depending on the time period.Twenty-six pesticides had no detections in field blanks. Atrazine with 12 and metolachlor with 11 had the highest number of detections, mostly occurring in spring or early summer. At a 99-percent level of confidence, contamination was estimated to be no greater than the detection limit in at least 98 percent of all samples for 38 of 40 pesticides. For metolachlor and atrazine, potential contamination was no greater than 0.0053 and 0.0093 micrograms per liter in 98 percent of samples. For 11 of 14 pesticides with at least one detection, the maximum potentially affected concentration of the environmental sample was less than their respective human-health or aquatic-life benchmarks. Small percentages of environmental samples had concentrations high enough that atrazine contamination potentially could account for the entire aquatic-life benchmark for acute effects on nonvascular plants, that dieldrin contamination could account for up to 100 percent of the cancer health-based screening level, or that chlorpyrifos contamination could account for 13 or 12 percent of the concentrations in the aquatic-life benchmarks for chronic effects on invertebrates or the criterion continuous concentration for chronic effects on aquatic life.

  16. Raising Quality and Achievement. A College Guide to Benchmarking.

    ERIC Educational Resources Information Center

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  17. Benchmarking in Education: Tech Prep, a Case in Point. IEE Brief Number 8.

    ERIC Educational Resources Information Center

    Inger, Morton

    Benchmarking is a process by which organizations compare their practices, processes, and outcomes to standards of excellence in a systematic way. The benchmarking process entails the following essential steps: determining what to benchmark and establishing internal baseline data; identifying the benchmark; determining how that standard has been…

  18. Benchmarks: The Development of a New Approach to Student Evaluation.

    ERIC Educational Resources Information Center

    Larter, Sylvia

    The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…

  19. Structural weights analysis of advanced aerospace vehicles using finite element analysis

    NASA Technical Reports Server (NTRS)

    Bush, Lance B.; Lentz, Christopher A.; Rehder, John J.; Naftel, J. Chris; Cerro, Jeffrey A.

    1989-01-01

    A conceptual/preliminary level structural design system has been developed for structural integrity analysis and weight estimation of advanced space transportation vehicles. The system includes a three-dimensional interactive geometry modeler, a finite element pre- and post-processor, a finite element analyzer, and a structural sizing program. Inputs to the system include the geometry, surface temperature, material constants, construction methods, and aerodynamic and inertial loads. The results are a sized vehicle structure capable of withstanding the static loads incurred during assembly, transportation, operations, and missions, and a corresponding structural weight. An analysis of the Space Shuttle external tank is included in this paper as a validation and benchmark case of the system.

  20. Observation time scale, free-energy landscapes, and molecular symmetry

    PubMed Central

    Wales, David J.; Salamon, Peter

    2014-01-01

    When structures that interconvert on a given time scale are lumped together, the corresponding free-energy surface becomes a function of the observation time. This view is equivalent to grouping structures that are connected by free-energy barriers below a certain threshold. We illustrate this time dependence for some benchmark systems, namely atomic clusters and alanine dipeptide, highlighting the connections to broken ergodicity, local equilibrium, and “feasible” symmetry operations of the molecular Hamiltonian. PMID:24374625
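
    The lumping operation described here has a standard free-energy expression, which we sketch in our own notation (consistent with, but not copied from, the paper):

    ```latex
    % Free energy of a lumped group G of minima with individual free energies F_i:
    % the Boltzmann-weighted sum over the group's members. The group itself is
    % defined by the observation time t_obs, via the barrier threshold below which
    % members interconvert on that time scale.
    F_G(T) = -k_B T \ln \sum_{i \in G} e^{-F_i / k_B T},
    \qquad
    G = \{\, i : \text{interconversion barriers} < \Delta F^{\ddagger}(t_{\mathrm{obs}}) \,\}
    ```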

  1. CHIMERA: Top-down model for hierarchical, overlapping and directed cluster structures in directed and weighted complex networks

    NASA Astrophysics Data System (ADS)

    Franke, R.

    2016-11-01

    In many networks discovered in biology, medicine, neuroscience and other disciplines, special properties such as a certain degree distribution and a hierarchical cluster structure (also called communities) can be observed as general organizing principles. Detecting the cluster structure of an unknown network promises to identify functional subdivisions, hierarchy and interactions on a mesoscale. Choosing an appropriate detection algorithm is not trivial because there are multiple network, cluster and algorithmic properties to be considered. Edges can be weighted and/or directed, and clusters can overlap or build a hierarchy in several ways. Algorithms differ not only in runtime and memory requirements but also in the network and cluster properties they allow, and each is based on its own specific definition of what a cluster is. On the one hand, a comprehensive network creation model is needed to build a large variety of benchmark networks with different reasonable structures in order to compare algorithms. On the other hand, if a cluster structure is already known, it is desirable to separate the effects of this structure from other network properties. This can be done with null model networks that mimic an observed cluster structure to improve statistics on other network features. A third important application is the general study of properties in networks with different cluster structures, possibly evolving over time. Good benchmark and creation models are currently available. What is still missing is a precise sandbox model for building hierarchical, overlapping and directed clusters in undirected or directed, binary or weighted complex random networks on the basis of a sophisticated blueprint. This gap is closed by the model CHIMERA (Cluster Hierarchy Interconnection Model for Evaluation, Research and Analysis), which is introduced and described here for the first time.

  2. Quantum molecular dynamics study on the proton exchange, ionic structures, and transport properties of warm dense hydrogen-deuterium mixtures

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Li, Zhi-Guo; Dai, Jia-Yu; Chen, Qi-Feng; Chen, Xiang-Rong

    2018-06-01

    Comprehensive knowledge of physical properties such as equation of state (EOS), proton exchange, dynamic structures, diffusion coefficients, and viscosities of hydrogen-deuterium mixtures with densities from 0.1 to 5 g/cm3 and temperatures from 1 to 50 kK has been obtained via quantum molecular dynamics (QMD) simulations. The existing multi-shock experimental EOS provides an important benchmark to evaluate exchange-correlation functionals. The comparison of simulations with experiments indicates that a nonlocal van der Waals density functional (vdW-DF1) produces excellent results. Fraction analysis of molecules using a weighted integral over pair distribution functions was performed. A dissociation diagram, together with a boundary where the proton exchange (H2 + D2 ⇌ 2HD) occurs, was generated; it shows evidence that HD molecules form when the H2 and D2 molecules are almost 50% dissociated. The mechanism of proton exchange can be interpreted as a process of dissociation followed by recombination. The ionic structures at extreme conditions were analyzed with the effective coordination number model. High-order cluster, circle, and chain structures can be found in the strongly coupled warm dense regime. The present QMD diffusion coefficients and viscosities can be used to benchmark two analytical one-component plasma (OCP) models: the Coulomb and Yukawa OCP models.

  3. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system ran Ubuntu 12.04 as its operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  4. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    PubMed

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
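
    PMLB ships with a small Python interface; assuming the `pmlb` package and its `fetch_data` helper (consult the project's documentation for the current API and dataset names), a minimal benchmarking loop over the suite looks like this:

    ```python
    # Minimal benchmarking loop over a few PMLB datasets.
    # Assumes `pip install pmlb scikit-learn`; dataset names are examples.
    from pmlb import fetch_data
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    for name in ["mushroom", "spambase"]:
        X, y = fetch_data(name, return_X_y=True)     # downloads and caches locally
        clf = LogisticRegression(max_iter=1000)
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: mean 5-fold CV accuracy = {scores.mean():.3f}")
    ```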

  5. The General Concept of Benchmarking and Its Application in Higher Education in Europe

    ERIC Educational Resources Information Center

    Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna

    2009-01-01

    The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…

  6. Temporal Variation of Chemical Persistence in a Swedish Lake Assessed by Benchmarking.

    PubMed

    Zou, Hongyan; Radke, Michael; Kierkegaard, Amelie; McLachlan, Michael S

    2015-08-18

    Chemical benchmarking was used to investigate the temporal variation of the persistence of chemical contaminants in a Swedish lake. The chemicals studied included 12 pharmaceuticals, an artificial sweetener, and an X-ray contrast agent. Measurements were conducted in late spring, late autumn, and winter. The transformation half-life in the lake could be quantified for 7 of the chemicals. It ranged from several days to hundreds of days. For 5 of the chemicals (bezafibrate, climbazole, diclofenac, furosemide, and hydrochlorothiazide), the measured persistence was lower in late spring than in late autumn. This may have been caused by lower temperatures and/or less irradiation during late autumn. The seasonality in chemical persistence contributed to changes in chemical concentrations in the lake during the year. The impact of seasonality of persistence was compared with the impact of other important variables determining concentrations in the lake: chemical inputs and water flow/dilution. The strongest seasonal variability in chemical concentration in lake water was observed for hydrochlorothiazide (over a factor of 10), and this was attributable to the seasonality in its persistence.
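
    The benchmarking idea itself can be reduced to a small calculation: under assumed first-order kinetics, the transformation rate follows from how much the test chemical is depleted relative to a persistent benchmark chemical between inflow and outflow. The sketch below uses invented numbers and a deliberately simplified mass balance; the study's actual treatment is more involved.

    ```python
    import math

    # Hypothetical benchmark-based half-life estimate (first-order kinetics).
    # ratio_* = concentration of the test chemical divided by that of a
    # persistent benchmark chemical, at the lake's inflow and outflow.
    ratio_in, ratio_out = 1.00, 0.62
    residence_time_d = 40.0                     # hydraulic residence time, days

    k = math.log(ratio_in / ratio_out) / residence_time_d   # loss rate, 1/day
    half_life_d = math.log(2) / k
    print(f"Estimated transformation half-life: {half_life_d:.0f} days")
    ```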

  7. Antibiotic reimbursement in a model delinked from sales: a benchmark-based worldwide approach.

    PubMed

    Rex, John H; Outterson, Kevin

    2016-04-01

    Despite the life-saving ability of antibiotics and their importance as a key enabler of all of modern health care, their effectiveness is now threatened by a rising tide of resistance. Unfortunately, the antibiotic pipeline does not match health needs because of challenges in discovery and development, as well as the poor economics of antibiotics. Discovery and development are being addressed by a range of public-private partnerships; however, correcting the poor economics of antibiotics will need an overhaul of the present business model on a worldwide scale. Discussions are now converging on delinking reward from antibiotic sales through prizes, milestone payments, or insurance-like models in which innovation is rewarded with a fixed series of payments of a predictable size. Rewarding all drugs with the same payments could create perverse incentives to produce drugs that provide the least possible innovation. Thus, we propose a payment model using a graded array of benchmarked rewards designed to encourage the development of antibiotics with the greatest societal value, together with appropriate worldwide access to antibiotics to maximise human health. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Potential of mean force for electrical conductivity of dense plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Starrett, C. E.

    The electrical conductivity in dense plasmas can be calculated with the relaxation-time approximation provided that the interaction potential between the scattering electron and the ion is known. To date there has been considerable uncertainty as to the best way to define this interaction potential so that it correctly includes the effects of ionic structure, screening by electrons and partial ionization. The current approximations lead to significantly different results with varying levels of agreement when compared to benchmark calculations and experiments. Here, we present a new way to define this potential, drawing on ideas from classical fluid theory to define a potential of mean force. This new potential results in significantly improved agreement with experiments and benchmark calculations, and includes all the aforementioned physics self-consistently.

  9. Conceptual Soundness, Metric Development, Benchmarking, and Targeting for PATH Subprogram Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosey, G.; Doris, E.; Coggeshall, C.

    The objective of this study is to evaluate the conceptual soundness of the U.S. Department of Housing and Urban Development (HUD) Partnership for Advancing Technology in Housing (PATH) program's revised goals and establish and apply a framework to identify and recommend metrics that are the most useful for measuring PATH's progress. This report provides an evaluative review of PATH's revised goals, outlines a structured method for identifying and selecting metrics, proposes metrics and benchmarks for a sampling of individual PATH programs, and discusses other metrics that potentially could be developed that may add value to the evaluation process. The framework and individual program metrics can be used for ongoing management improvement efforts and to inform broader program-level metrics for government reporting requirements.

  10. Potential of mean force for electrical conductivity of dense plasmas

    DOE PAGES

    Starrett, C. E.

    2017-09-28

    The electrical conductivity in dense plasmas can be calculated with the relaxation-time approximation provided that the interaction potential between the scattering electron and the ion is known. To date there has been considerable uncertainty as to the best way to define this interaction potential so that it correctly includes the effects of ionic structure, screening by electrons and partial ionization. The current approximations lead to significantly different results with varying levels of agreement when compared to benchmark calculations and experiments. Here, we present a new way to define this potential, drawing on ideas from classical fluid theory to define a potential of mean force. This new potential results in significantly improved agreement with experiments and benchmark calculations, and includes all the aforementioned physics self-consistently.

  11. Potential of mean force for electrical conductivity of dense plasmas

    NASA Astrophysics Data System (ADS)

    Starrett, C. E.

    2017-12-01

    The electrical conductivity in dense plasmas can be calculated with the relaxation-time approximation provided that the interaction potential between the scattering electron and the ion is known. To date there has been considerable uncertainty as to the best way to define this interaction potential so that it correctly includes the effects of ionic structure, screening by electrons and partial ionization. Current approximations lead to significantly different results with varying levels of agreement when compared to benchmark calculations and experiments. We present a new way to define this potential, drawing on ideas from classical fluid theory to define a potential of mean force. This new potential results in significantly improved agreement with experiments and benchmark calculations, and includes all the aforementioned physics self-consistently.

  12. Benchmarking the D-Wave Two

    NASA Astrophysics Data System (ADS)

    Job, Joshua; Wang, Zhihui; Rønnow, Troels; Troyer, Matthias; Lidar, Daniel

    2014-03-01

    We report on experimental work benchmarking the performance of the D-Wave Two programmable annealer on its native Ising problem, and a comparison to available classical algorithms. In this talk we will focus on the comparison with an algorithm originally proposed and implemented by Alex Selby. This algorithm uses dynamic programming to repeatedly optimize over randomly selected maximal induced trees of the problem graph starting from a random initial state. If one is looking for a quantum advantage over classical algorithms, one should compare to classical algorithms which are designed and optimized to maximally take advantage of the structure of the type of problem one is using for the comparison. In that light, this classical algorithm should serve as a good gauge for any potential quantum speedup for the D-Wave Two.
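
    For background, what makes Selby-style tree sweeps cheap is that an Ising ground state restricted to a tree can be found exactly in linear time by leaf-to-root dynamic programming. A minimal sketch of that tree step (not Selby's implementation):

      from collections import defaultdict

      def min_ising_energy_on_tree(edges, h):
          """Exactly minimize H(s) = sum_{(i,j)} Jij*s_i*s_j + sum_i h_i*s_i,
          s_i in {-1,+1}, when `edges` = [(i, j, Jij), ...] form a tree.
          Leaf-to-root dynamic programming; returns the ground-state energy.
          Illustrative sketch only."""
          adj = defaultdict(list)
          for i, j, Jij in edges:
              adj[i].append((j, Jij))
              adj[j].append((i, Jij))

          def solve(v, parent):
              # best[s]: minimal energy of the subtree rooted at v, spin s at v
              best = {s: h.get(v, 0.0) * s for s in (-1, 1)}
              for u, Jvu in adj[v]:
                  if u == parent:
                      continue
                  child = solve(u, v)
                  for s in (-1, 1):
                      best[s] += min(child[t] + Jvu * s * t for t in (-1, 1))
              return best

          return min(solve(edges[0][0], None).values())

      # Tiny antiferromagnetic chain 1-2-3 with J=+1 on both edges, no fields:
      # the ground state alternates spins, energy = -2.
      print(min_ising_energy_on_tree([(1, 2, 1.0), (2, 3, 1.0)], {}))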

  13. The gender gap in mortality: How much is explained by behavior?

    PubMed

    Schünemann, Johannes; Strulik, Holger; Trimborn, Timo

    2017-07-01

    In developed countries, women are expected to live about 4-5 years longer than men. In this paper, we develop a novel approach to gauge the extent to which gender differences in longevity can be attributed to gender-specific preferences and health behavior. We set up a physiologically founded model of health deficit accumulation and calibrate it using recent insights from gerontology. From fitting life cycle health expenditure and life expectancy, we obtain estimates of the gender-specific preference parameters. We then perform the counterfactual experiment of endowing women with the preferences of men. In our benchmark scenario, this reduces the gender gap in life expectancy from 4.6 to 1.4 years. When we add gender-specific preferences for unhealthy consumption, the model can account for up to 89 percent of the gender gap. Our theory also offers an economic explanation for why the gender gap declines with rising income. Copyright © 2017 Elsevier B.V. All rights reserved.
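
    For context, the gerontological regularity such calibrations typically build on is the roughly exponential accumulation of health deficits with age; a textbook form (an assumption here, not taken from the paper) is

      D(t) \approx E + B\, e^{\mu t},

    where D(t) is the health-deficit (frailty) index, E a baseline level, and the growth rate \mu amounts to a few percent per year, with measured values differing between men and women.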

  14. Data Intensive Systems (DIS) Benchmark Performance Summary

    DTIC Science & Technology

    2003-08-01

    models assumed by today's conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture radar (SAR) codes, large-scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high-speed distributed interactive and data-intensive simulations, and data-oriented problems characterized by pointer-based and other highly irregular data structures

  15. Benchmarking Alumni Relations in Community Colleges: Findings from a 2015 CASE Survey. CASE White Paper

    ERIC Educational Resources Information Center

    Paradise, Andrew

    2016-01-01

    Building on the inaugural survey conducted three years prior, the 2015 CASE Community College Alumni Relations survey collected additional insightful data on staffing, structure, communications, engagement, and fundraising. This white paper features key data on alumni relations programs at community colleges across the United States. The paper…

  16. Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions

    NASA Technical Reports Server (NTRS)

    Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong

    2016-01-01

    Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a large-sample approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.
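
    The paper's actual separation is information-theoretic; purely as an illustration of the benchmarking idea, a sketch that attributes error by differencing the mean squared errors of progressively constrained runs (all names and the decomposition itself are simplified assumptions, not the authors' method):

      import numpy as np

      def attribute_uncertainty(obs, sim_default, sim_best_params, bench_local):
          """Illustrative attribution in the spirit of model benchmarking:
          obs             -- observed series
          sim_default     -- model run: operational forcings + default parameters
          sim_best_params -- same forcings, site-calibrated parameters
          bench_local     -- data-driven benchmark trained on the same forcings
          """
          mse = lambda x: float(np.mean((np.asarray(x) - np.asarray(obs)) ** 2))
          return {
              "parameters": mse(sim_default) - mse(sim_best_params),   # gain from calibration
              "structure":  mse(sim_best_params) - mse(bench_local),   # gap to the benchmark
              "residual_benchmark_mse": mse(bench_local),
          }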

  17. Combining self- and cross-docking as benchmark tools: the performance of DockBench in the D3R Grand Challenge 2

    NASA Astrophysics Data System (ADS)

    Salmaso, Veronica; Sturlese, Mattia; Cuzzolin, Alberto; Moro, Stefano

    2018-01-01

    Molecular docking is a powerful tool in the field of computer-aided molecular design. In particular, it is the technique of choice for the prediction of a ligand pose within its target binding site. A multitude of docking methods is available nowadays, whose performance may vary depending on the data set. Therefore, some non-trivial choices should be made before starting a docking simulation. In the same framework, the selection of the target structure to use could be challenging, since the number of available experimental structures is increasing. Both issues have been explored within this work. The pose prediction of a pool of 36 compounds provided by D3R Grand Challenge 2 organizers was preceded by a pipeline to choose the best protein/docking-method couple for each blind ligand. An integrated benchmark approach including ligand shape comparison and cross-docking evaluations was implemented inside our DockBench software. The results are encouraging and show that careful attention to the choice of the fundamental components of a docking simulation improves binding-mode predictions.

  18. Bias-Free Chemically Diverse Test Sets from Machine Learning.

    PubMed

    Swann, Ellen T; Fernandez, Michael; Coote, Michelle L; Barnard, Amanda S

    2017-08-14

    Current benchmarking methods in quantum chemistry rely on databases that are built using a chemist's intuition. It is not fully understood how diverse or representative these databases truly are. Multivariate statistical techniques like archetypal analysis and K-means clustering have previously been used to summarize large sets of nanoparticles; however, molecules are more diverse and not as easily characterized by descriptors. In this work, we compare three sets of descriptors based on the one-, two-, and three-dimensional structure of a molecule. Using data from the NIST Computational Chemistry Comparison and Benchmark Database and machine learning techniques, we demonstrate the functional relationship between these structural descriptors and the electronic energy of molecules. Archetypes and prototypes found with topological or Coulomb matrix descriptors can be used to identify smaller, statistically significant test sets that better capture the diversity of chemical space. We apply this same method to find a diverse subset of organic molecules to demonstrate how the methods can easily be reapplied to individual research projects. Finally, we use our bias-free test sets to assess the performance of density functional theory and quantum Monte Carlo methods.
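
    As a sketch of how clustering can pick a diverse, representative test set from precomputed molecular descriptors (scikit-learn based; the descriptor choice and sizes are placeholders, not the paper's protocol):

      import numpy as np
      from sklearn.cluster import KMeans

      def representative_subset(descriptors, n_select, seed=0):
          """Cluster descriptor vectors (e.g., flattened Coulomb matrices)
          and return the index of the molecule closest to each cluster
          centre -- a simple diversity-driven test-set selection."""
          X = np.asarray(descriptors, dtype=float)
          km = KMeans(n_clusters=n_select, n_init=10, random_state=seed).fit(X)
          chosen = [int(np.argmin(np.linalg.norm(X - c, axis=1)))
                    for c in km.cluster_centers_]
          return sorted(set(chosen))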

  19. Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions

    PubMed Central

    Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong

    2018-01-01

    Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a “large-sample” approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances. PMID:29697706

  20. Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions.

    PubMed

    Nearing, Grey S; Mocko, David M; Peters-Lidard, Christa D; Kumar, Sujay V; Xia, Youlong

    2016-03-01

    Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a "large-sample" approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.

  1. Recovery of time evolution of Grad-Shafranov equilibria from single-spacecraft data: Benchmarking and application to a flux transfer event

    NASA Astrophysics Data System (ADS)

    Hasegawa, Hiroshi; Sonnerup, Bengt U. Ö.; Nakamura, Takuma K. M.

    2010-11-01

    First results are presented of a method, developed by Sonnerup and Hasegawa (2010), for analyzing time evolution of magnetohydrostatic Grad-Shafranov (GS) equilibria, using data recorded by an observing probe as it traverses a quasi-static, two-dimensional (2D), magnetic-field/plasma structure. The method recovers spatial initial values used in the classical GS reconstruction for an interval before and after the time of actual measurements, by advancing them backward and forward in time based on a set of equations for an incompressible plasma; the consequence is generation of multiple GS maps or a movie of the 2D field structure. The method is successfully benchmarked by use of a 2D magnetohydrodynamic simulation of time-dependent magnetic reconnection, and then is applied to a flux transfer event (FTE) seen by the Cluster spacecraft at the dayside high-latitude magnetopause. The application shows that the field lines constituting the FTE flux rope were contracting toward its center as a result of modest convective flow in the region around the core of the flux rope.
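
    For reference, classical GS reconstruction integrates the planar Grad-Shafranov equation for the magnetic vector potential A(x, y), with the transverse pressure treated as a function of A alone (standard form, written here as background; notation may differ from the paper):

      \frac{\partial^{2} A}{\partial x^{2}} + \frac{\partial^{2} A}{\partial y^{2}}
        = -\mu_{0}\, \frac{\mathrm{d} P_{t}}{\mathrm{d} A},
      \qquad P_{t}(A) = p + \frac{B_{z}^{2}}{2 \mu_{0}}.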

  2. Benchmarking reference services: an introduction.

    PubMed

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  3. A Study of Fixed-Order Mixed Norm Designs for a Benchmark Problem in Structural Control

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Calise, Anthony J.; Hsu, C. C.

    1998-01-01

    This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full-order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodelled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full-order compensators that are robust to both unmodelled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed-norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed-norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.
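
    For reference, a common statement of the fixed-order mixed-norm synthesis problem (a generic form assumed here; the authors' exact formulation may differ) is to minimize nominal H2 performance subject to a robust-stability constraint expressed through the structured singular value:

      \min_{K \in \mathcal{K}_{\mathrm{fixed\ order}}} \; \| T_{zw}(K) \|_{2}
      \quad \text{subject to} \quad
      \sup_{\omega}\, \mu_{\Delta}\!\left( T_{z'w'}(K)(j\omega) \right) \le 1.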

  4. Benchmarking an unstructured grid sediment model in an energetic estuary

    DOE PAGES

    Lopez, Jesse E.; Baptista, António M.

    2016-12-14

    A sediment model coupled to the hydrodynamic model SELFE is validated against a benchmark combining a set of idealized tests and an application to a field-data rich energetic estuary. After sensitivity studies, model results for the idealized tests largely agree with previously reported results from other models in addition to analytical, semi-analytical, or laboratory results. Results of suspended sediment in an open channel test with fixed bottom are sensitive to turbulence closure and treatment for hydrodynamic bottom boundary. Results for the migration of a trench are very sensitive to critical stress and erosion rate, but largely insensitive to turbulence closure. The model is able to qualitatively represent sediment dynamics associated with estuarine turbidity maxima in an idealized estuary. Applied to the Columbia River estuary, the model qualitatively captures sediment dynamics observed by fixed stations and shipborne profiles. Representation of the vertical structure of suspended sediment degrades when stratification is underpredicted. Across all tests, skill metrics of suspended sediments lag those of hydrodynamics even when qualitatively representing dynamics. The benchmark is fully documented in an openly available repository to encourage unambiguous comparisons against other models.

  5. Benchmarking Ligand-Based Virtual High-Throughput Screening with the PubChem Database

    PubMed Central

    Butkiewicz, Mariusz; Lowe, Edward W.; Mueller, Ralf; Mendenhall, Jeffrey L.; Teixeira, Pedro L.; Weaver, C. David; Meiler, Jens

    2013-01-01

    With the rapidly increasing availability of High-Throughput Screening (HTS) data in the public domain, such as the PubChem database, methods for ligand-based computer-aided drug discovery (LB-CADD) have the potential to accelerate and reduce the cost of probe development and drug discovery efforts in academia. We assemble nine data sets from realistic HTS campaigns representing major families of drug target proteins for benchmarking LB-CADD methods. Each data set is public domain through PubChem and carefully collated through confirmation screens validating active compounds. These data sets provide the foundation for benchmarking a new cheminformatics framework BCL::ChemInfo, which is freely available for non-commercial use. Quantitative structure activity relationship (QSAR) models are built using Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), Decision Trees (DTs), and Kohonen networks (KNs). Problem-specific descriptor optimization protocols are assessed including Sequential Feature Forward Selection (SFFS) and various information content measures. Measures of predictive power and confidence are evaluated through cross-validation, and a consensus prediction scheme is tested that combines orthogonal machine learning algorithms into a single predictor. Enrichments ranging from 15 to 101 for a TPR cutoff of 25% are observed. PMID:23299552

  6. Achieving excellence in veterans healthcare--a balanced scorecard approach.

    PubMed

    Biro, Lawrence A; Moreland, Michael E; Cowgill, David E

    2003-01-01

    This article provides healthcare administrators and managers with a framework and model for developing a balanced scorecard and demonstrates the remarkable success of this process, which brings focus to leadership decisions about the allocation of resources. This scorecard was developed as a top management tool designed to structure multiple priorities of a large, complex, integrated healthcare system and to establish benchmarks to measure success in achieving targets for performance in identified areas. Significant benefits and positive results were derived from the implementation of the balanced scorecard, based upon benchmarks considered to be critical success factors. The network's chief executive officer and top leadership team set and articulated the network's primary operating principles: quality and efficiency in the provision of comprehensive healthcare and support services. Under the weighted benchmarks of the balanced scorecard, the facilities in the network were mandated to adhere to one non-negotiable tenet: providing care that is second to none. The balanced scorecard approach to leadership continuously ensures that this is the primary goal and focal point for all activity within the network. To that end, systems are always in place to ensure that the network is fully successful on all performance measures relating to quality.

  7. Model benchmarking and reference signals for angled-beam shear wave ultrasonic nondestructive evaluation (NDE) inspections

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Hopkins, Deborah; Datuin, Marvin; Warchol, Mark; Warchol, Lyudmila; Forsyth, David S.; Buynak, Charlie; Lindgren, Eric A.

    2017-02-01

    For model benchmark studies, the accuracy of the model is typically evaluated based on the change in response relative to a selected reference signal. The use of a side drilled hole (SDH) in a plate was investigated as a reference signal for angled beam shear wave inspection for aircraft structure inspections of fastener sites. Systematic studies were performed with varying SDH depth and size, and varying the ultrasonic probe frequency, focal depth, and probe height. Increased error was observed with the simulation of angled shear wave beams in the near-field. Even more significant, asymmetry in real probes and the inherent sensitivity of signals in the near-field to subtle test conditions were found to provide a greater challenge with achieving model agreement. To achieve quality model benchmark results for this problem, it is critical to carefully align the probe with the part geometry, to verify symmetry in probe response, and ideally avoid using reference signals from the near-field response. Suggested reference signals for angled beam shear wave inspections include using the 'through hole' corner specular reflection signal and the 'full skip' signal off of the far wall from the side drilled hole.
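
    For background on the angle-beam setup, the refracted shear-wave angle in the part follows Snell's law; a small helper using textbook nominal velocities (the default values are assumptions for an acrylic wedge and shear waves in steel, not values from the study):

      import math

      def refracted_shear_angle(incident_deg, v_wedge=2730.0, v_shear=3240.0):
          """Snell's law for angle-beam UT: refracted shear-wave angle (deg)
          in the part for a given wedge incidence angle. Velocities in m/s."""
          s = math.sin(math.radians(incident_deg)) * v_shear / v_wedge
          if s >= 1.0:
              raise ValueError("Beyond critical angle: no refracted shear wave.")
          return math.degrees(math.asin(s))

      # e.g. ~37 deg incidence in the wedge gives a ~45.6 deg shear beam in steel
      print(round(refracted_shear_angle(37.0), 1))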

  8. Taking the Battle Upstream: Towards a Benchmarking Role for NATO

    DTIC Science & Technology

    2012-09-01

    Figure 8: World Bank Benchmarking Work on Quality of Governance … "In Search of a Benchmarking Theory for the Public Sector" … the Ministries of Defense in the countries in which it works … for comparison purposes, McKinsey categorized …

  9. Benchmarks--Standards Comparisons. Math Competencies: EFF Benchmarks Comparison [and] Reading Competencies: EFF Benchmarks Comparison [and] Writing Competencies: EFF Benchmarks Comparison.

    ERIC Educational Resources Information Center

    Kent State Univ., OH. Ohio Literacy Resource Center.

    This document is intended to show the relationship between Ohio's Standards and Competencies, Equipped for the Future's (EFF's) Standards and Components of Performance, and Ohio's Revised Benchmarks. The document is divided into three parts, with Part 1 covering mathematics instruction, Part 2 covering reading instruction, and Part 3 covering…

  10. How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction

    NASA Astrophysics Data System (ADS)

    Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.

    2015-03-01

    The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment, but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can best be used to evaluate their probabilistic forecasts. This study identifies that the calculated forecast skill can vary depending on the benchmark selected, and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy, the benchmark that has most utility for EFAS and avoids the most naïve skill across different hydrological situations is found to be meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long-term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system, and their use produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. Recommendations for EFAS are to move to routine use of meteorological persistency, an advanced meteorological benchmark and a simple meteorological benchmark in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used in the evaluation of skill in probabilistic hydrological forecasts and which benchmarks are most useful for skill discrimination and avoidance of naïve skill in a large-scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to evaluate which benchmarks to employ, so that forecasters can trust their skill evaluation and be confident that their forecasts are indeed better.
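
    For reference, forecast skill against a benchmark is commonly summarized with a CRPS-based skill score; a minimal ensemble implementation (the standard sample estimator, not EFAS code):

      import numpy as np

      def crps_ensemble(ens, obs):
          """Sample CRPS for one ensemble forecast `ens` and scalar `obs`:
          CRPS = E|X - y| - 0.5 * E|X - X'|."""
          ens = np.asarray(ens, dtype=float)
          term1 = np.mean(np.abs(ens - obs))
          term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
          return term1 - term2

      def skill_vs_benchmark(crps_forecast, crps_benchmark):
          """CRPS skill score: 1 = perfect, 0 = no better than the benchmark,
          negative = worse than the benchmark."""
          return 1.0 - crps_forecast / crps_benchmark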

  11. Structured Uncertainty Bound Determination From Data for Control and Performance Validation

    NASA Technical Reports Server (NTRS)

    Lim, Kyong B.

    2003-01-01

    This report attempts to document the broad scope of issues that must be satisfactorily resolved before one can expect to methodically obtain, with reasonable confidence, near-optimal robust closed loop performance in physical applications. These include elements of signal processing, noise identification, system identification, model validation, and uncertainty modeling. Based on a recently developed methodology involving a parameterization of all model validating uncertainty sets for a given linear fractional transformation (LFT) structure and noise allowance, a new software package, the Uncertainty Bound Identification (UBID) toolbox, which conveniently executes model validation tests and determines uncertainty bounds from data, has been designed and is currently available. This toolbox also serves to benchmark the current state-of-the-art in uncertainty bound determination and in turn facilitates benchmarking of robust control technology. To help clarify the methodology and use of the new software, two tutorial examples are provided. The first involves the uncertainty characterization of a flexible structure's dynamics, and the second example involves a closed loop performance validation of a ducted fan based on an uncertainty bound from data. These examples, along with other simulation and experimental results, also help describe the many factors and assumptions that determine the degree of success in applying robust control theory to practical problems.

  12. A comprehensive study of the delay vector variance method for quantification of nonlinearity in dynamical systems

    PubMed Central

    Mandic, D. P.; Ryan, K.; Basu, B.; Pakrashi, V.

    2016-01-01

    Although vibration monitoring is a popular method to monitor and assess dynamic structures, quantification of linearity or nonlinearity of the dynamic responses remains a challenging problem. We investigate the delay vector variance (DVV) method in this regard in a comprehensive manner to establish the degree to which a change in signal nonlinearity can be related to system nonlinearity and how a change in system parameters affects the nonlinearity in the dynamic response of the system. A wide range of theoretical situations are considered in this regard using a single degree of freedom (SDOF) system to obtain numerical benchmarks. A number of experiments are then carried out using a physical SDOF model in the laboratory. Finally, a composite wind turbine blade is tested for different excitations and the dynamic responses are measured at a number of points to extend the investigation to continuum structures. The dynamic responses were measured using accelerometers, strain gauges and a Laser Doppler vibrometer. This comprehensive study creates a numerical and experimental benchmark for structurally dynamical systems where output-only information is typically available, especially in the context of DVV. The study also allows for comparative analysis between different systems driven by the similar input. PMID:26909175
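
    As a compact illustration of the DVV idea, a minimal sketch of the standard algorithm (parameters and the simple neighbourhood scheme are illustrative; the paper's implementation details may differ):

      import numpy as np

      def dvv(x, m=3, n_scales=25, span=3.0, min_members=30):
          """Delay vector variance: normalized target variance versus
          neighbourhood size. Values near 1 at small scales indicate
          noise-like behaviour; low values indicate determinism."""
          x = np.asarray(x, dtype=float)
          X = np.array([x[k:k + m] for k in range(len(x) - m)])   # delay vectors
          y = x[m:]                                               # targets
          d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
          iu = np.triu_indices(len(X), k=1)
          mu, sd = d[iu].mean(), d[iu].std()
          scales = np.linspace(max(mu - span * sd, 0.0), mu + span * sd, n_scales)
          out = []
          for r in scales:
              var_k = [y[row <= r].var() for row in d
                       if (row <= r).sum() >= min_members]
              out.append(np.mean(var_k) / y.var() if var_k else np.nan)
          return scales, np.array(out)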

  13. Determinants of success in Shared Savings Programs: An analysis of ACO and market characteristics.

    PubMed

    Ouayogodé, Mariétou H; Colla, Carrie H; Lewis, Valerie A

    2017-03-01

    Medicare's Accountable Care Organization (ACO) programs introduced shared savings to traditional Medicare, which allow providers who reduce health care costs for their patients to retain a percentage of the savings they generate. To examine ACO and market factors associated with superior financial performance in Medicare ACO programs. We obtained financial performance data from the Centers for Medicare and Medicaid Services (CMS); we derived market-level characteristics from Medicare claims; and we collected ACO characteristics from the National Survey of ACOs for 215 ACOs. We examined the association between ACO financial performance and ACO provider composition, leadership structure, beneficiary characteristics, risk bearing experience, quality and process improvement capabilities, physician performance management, market competition, CMS-assigned financial benchmark, and ACO contract start date. We examined two outcomes from Medicare ACOs' first performance year: savings per Medicare beneficiary and earning shared savings payments (a dichotomous variable). When modeling the ACO ability to save and earn shared savings payments, we estimated positive regression coefficients for a greater proportion of primary care providers in the ACO, more practicing physicians on the governing board, physician leadership, active engagement in reducing hospital re-admissions, a greater proportion of disabled Medicare beneficiaries assigned to the ACO, financial incentives offered to physicians, a larger financial benchmark, and greater ACO market penetration. No characteristic of organizational structure was significantly associated with both outcomes of savings per beneficiary and likelihood of achieving shared savings. ACO prior experience with risk-bearing contracts was positively correlated with savings and significantly increased the likelihood of receiving shared savings payments. In the first year, performance is quite heterogeneous, yet organizational structure does not consistently predict performance. Organizations with large financial benchmarks at baseline have greater opportunities to achieve savings. Findings on prior risk bearing suggest that ACOs learn over time under risk-bearing contracts. Given the lack of predictive power for organizational characteristics, CMS should continue to encourage diversity in organizational structures for ACO participants, and provide alternative funding and risk bearing mechanisms to continue to allow a diverse group of organizations to participate. Level of evidence: III. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Determinants of Success in Shared Savings Programs: An Analysis of ACO and Market Characteristics

    PubMed Central

    Colla, Carrie H.; Lewis, Valerie A.

    2016-01-01

    Background Medicare’s Accountable Care Organization (ACO) programs introduced shared savings to traditional Medicare, which allow providers who reduce health care costs for their patients to retain a percentage of the savings they generate. Objective To examine ACO and market factors associated with superior financial performance in Medicare ACO programs. Methods We obtained financial performance data from the Centers for Medicare and Medicaid Services (CMS); we derived market-level characteristics from Medicare claims; and we collected ACO characteristics from the National Survey of ACOs for 215 ACOs. We examined the association between ACO financial performance and ACO provider composition, leadership structure, beneficiary characteristics, risk bearing experience, quality and process improvement capabilities, physician performance management, market competition, CMS-assigned financial benchmark, and ACO contract start date. We examined two outcomes from Medicare ACOs’ first performance year: savings per Medicare beneficiary and earning shared savings payments (a dichotomous variable). Results When modeling the ACO ability to save and earn shared savings payments, we estimated positive regression coefficients for a greater proportion of primary care providers in the ACO, more practicing physicians on the governing board, physician leadership, active engagement in reducing hospital re-admissions, a greater proportion of disabled Medicare beneficiaries assigned to the ACO, financial incentives offered to physicians, a larger financial benchmark, and greater ACO market penetration. No characteristic of organizational structure was significantly associated with both outcomes of savings per beneficiary and likelihood of achieving shared savings. ACO prior experience with risk-bearing contracts was positively correlated with savings and significantly increased the likelihood of receiving shared savings payments. Conclusions In the first year performance is quite heterogeneous, yet organizational structure does not consistently predict performance. Organizations with large financial benchmarks at baseline have greater opportunities to achieve savings. Findings on prior risk bearing suggest that ACOs learn over time under risk-bearing contracts. Implications Given the lack of predictive power for organizational characteristics, CMS should continue to encourage diversity in organizational structures for ACO participants, and provide alternative funding and risk bearing mechanisms to continue to allow a diverse group of organizations to participate. Level of evidence III PMID:27687917

  15. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    DOE PAGES

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  16. A suite of benchmark and challenge problems for enhanced geothermal systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark; Fu, Pengcheng; McClure, Mark

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research, stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners. We present the suite of benchmark and challenge problems developed for the GTO-CCS, providing problem descriptions and sample solutions.

  17. Can consistent benchmarking within a standardized pain management concept decrease postoperative pain after total hip arthroplasty? A prospective cohort study including 367 patients

    PubMed Central

    Benditz, Achim; Greimel, Felix; Auer, Patrick; Zeman, Florian; Göttermann, Antje; Grifka, Joachim; Meissner, Winfried; von Kunow, Frederik

    2016-01-01

    Background The number of total hip replacement surgeries has steadily increased over recent years. Reduction in postoperative pain increases patient satisfaction and enables better mobilization. Thus, pain management needs to be continuously improved. Problems are often caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. Methods All patients included in the study had undergone total hip arthroplasty (THA). Outcome parameters were analyzed 24 hours after surgery by means of the questionnaires from the German-wide project “Quality Improvement in Postoperative Pain Management” (QUIPS). A pain nurse interviewed patients and continuously assessed outcome quality parameters. A multidisciplinary team of anesthetists, orthopedic surgeons, and nurses implemented a regular procedure of data analysis and internal benchmarking. The health care team was informed of any results and suggested improvements. Every staff member involved in pain management participated in educational lessons, and a special pain nurse was trained in each ward. Results From 2014 to 2015, 367 patients were included. The mean maximal pain score 24 hours after surgery was 4.0 (±3.0) on an 11-point numeric rating scale, and patient satisfaction was 9.0 (±1.2). Over time, the maximum pain score decreased (mean 3.0, ±2.0), whereas patient satisfaction significantly increased (mean 9.8, ±0.4; p<0.05). Among 49 anonymized hospitals, our clinic ranked first in lowest maximum pain and highest patient satisfaction over the period. Conclusion Results were already acceptable when benchmarking of the standardized pain management concept began, but regular benchmarking, implementation of feedback mechanisms, and staff education made the concept even more successful. Multidisciplinary teamwork and flexibility in adapting processes seem to be highly important for successful pain management. PMID:28031727

  18. Can consistent benchmarking within a standardized pain management concept decrease postoperative pain after total hip arthroplasty? A prospective cohort study including 367 patients.

    PubMed

    Benditz, Achim; Greimel, Felix; Auer, Patrick; Zeman, Florian; Göttermann, Antje; Grifka, Joachim; Meissner, Winfried; von Kunow, Frederik

    2016-01-01

    The number of total hip replacement surgeries has steadily increased over recent years. Reduction in postoperative pain increases patient satisfaction and enables better mobilization. Thus, pain management needs to be continuously improved. Problems are often caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total hip arthroplasty (THA). Outcome parameters were analyzed 24 hours after surgery by means of the questionnaires from the German-wide project "Quality Improvement in Postoperative Pain Management" (QUIPS). A pain nurse interviewed patients and continuously assessed outcome quality parameters. A multidisciplinary team of anesthetists, orthopedic surgeons, and nurses implemented a regular procedure of data analysis and internal benchmarking. The health care team was informed of any results and suggested improvements. Every staff member involved in pain management participated in educational lessons, and a special pain nurse was trained in each ward. From 2014 to 2015, 367 patients were included. The mean maximal pain score 24 hours after surgery was 4.0 (±3.0) on an 11-point numeric rating scale, and patient satisfaction was 9.0 (±1.2). Over time, the maximum pain score decreased (mean 3.0, ±2.0), whereas patient satisfaction significantly increased (mean 9.8, ±0.4; p<0.05). Among 49 anonymized hospitals, our clinic ranked first in lowest maximum pain and highest patient satisfaction over the period. Results were already acceptable when benchmarking of the standardized pain management concept began, but regular benchmarking, implementation of feedback mechanisms, and staff education made the concept even more successful. Multidisciplinary teamwork and flexibility in adapting processes seem to be highly important for successful pain management.

  19. How to achieve and prove performance improvement - 15 years of experience in German wastewater benchmarking.

    PubMed

    Bertzbach, F; Franz, T; Möller, K

    2012-01-01

    This paper shows the results of performance improvement achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A large number of changes in operational practice, and the annual savings achieved, can be shown to have been induced in particular by benchmarking at process level. Investigation of this question produces some general findings for the inclusion of performance improvement in a benchmarking project and for the communication of its results. Thus, we elaborate on the concept of benchmarking at both utility and process level, which is still a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking, it should be made quite clear that this outcome depends, on one hand, on a well-conducted benchmarking programme and, on the other, on the individual situation within each participating utility.

  20. Highly Enriched Uranium Metal Cylinders Surrounded by Various Reflector Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernard Jones; J. Blair Briggs; Leland Monteirth

    A series of experiments was performed at Los Alamos Scientific Laboratory in 1958 to determine critical masses of cylinders of Oralloy (Oy) reflected by a number of materials. The experiments were all performed on the Comet Universal Critical Assembly Machine, and consisted of discs of highly enriched uranium (93.3 wt.% 235U) reflected by half-inch and one-inch-thick cylindrical shells of various reflector materials. The experiments were performed by members of Group N-2, particularly K. W. Gallup, G. E. Hansen, H. C. Paxton, and R. H. White. This experiment was intended to ascertain critical masses for criticality safety purposes, as well as to compare neutron transport cross sections to those obtained from danger coefficient measurements with the Topsy Oralloy-Tuballoy reflected and Godiva unreflected critical assemblies. The reflector materials examined in this series of experiments are as follows: magnesium, titanium, aluminum, graphite, mild steel, nickel, copper, cobalt, molybdenum, natural uranium, tungsten, beryllium, aluminum oxide, molybdenum carbide, and polythene (polyethylene). Also included are two special configurations of composite beryllium and iron reflectors. Analyses were performed in which uncertainty associated with six different parameters was evaluated; namely, extrapolation to the uranium critical mass, uranium density, 235U enrichment, reflector density, reflector thickness, and reflector impurities. In addition to the idealizations made by the experimenters (removal of the platen and diaphragm), two simplifications were also made to the benchmark models that resulted in a small bias and additional uncertainty. First, since impurities in core and reflector materials are only estimated, they are not included in the benchmark models. Second, the room, support structure, and other possible surrounding equipment were not included in the model. Bias values that result from these two simplifications were determined, and the associated uncertainty in the bias values was included in the overall uncertainty in benchmark keff values. Bias values were very small, ranging from 0.0004 Δk low to 0.0007 Δk low. Overall uncertainties range from ±0.0018 to ±0.0030. Major contributors to the overall uncertainty include uncertainty in the extrapolation to the uranium critical mass and the uranium density. Results are summarized in Figure 1 (Experimental, Benchmark-Model, and MCNP/KENO Calculated Results). The 32 configurations described and evaluated under ICSBEP Identifier HEU-MET-FAST-084 are judged to be acceptable for use as criticality safety benchmark experiments and should be valuable integral benchmarks for nuclear data testing of the various reflector materials. Details of the benchmark models, uncertainty analyses, and final results are given in this paper.
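
    As an illustration of how such component uncertainties combine, a root-sum-square over the six evaluated parameters (the parameter list is the evaluation's; all numeric values below are invented for the example):

      import math

      # Hypothetical component uncertainties in keff (delta-k) for one configuration:
      components = {
          "critical-mass extrapolation": 0.0015,
          "uranium density":             0.0012,
          "235U enrichment":             0.0004,
          "reflector density":           0.0008,
          "reflector thickness":         0.0005,
          "reflector impurities":        0.0006,
      }

      # Independent components combine in quadrature (root-sum-square),
      # as is conventional for benchmark-model uncertainty evaluations.
      total = math.sqrt(sum(u * u for u in components.values()))
      print(f"overall benchmark uncertainty: +/- {total:.4f} delta-k")  # ~0.0023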

  1. Yong-Ki Kim — His Life and Recent Work

    NASA Astrophysics Data System (ADS)

    Stone, Philip M.

    2007-08-01

    Dr. Kim made internationally recognized contributions in many areas of atomic physics research and applications, and was still very active when he was killed in an automobile accident. He joined NIST in 1983 after 17 years at the Argonne National Laboratory following his Ph.D. work at the University of Chicago. Much of his early work at Argonne and especially at NIST was the elucidation and detailed analysis of the structure of highly charged ions. He developed a sophisticated, fully relativistic atomic structure theory that accurately predicts atomic energy levels, transition wavelengths, lifetimes, and transition probabilities for a large number of ions. This information has been vital to model the properties of the hot interior of fusion research plasmas, where atomic ions must be described with relativistic atomic structure calculations. In recent years, Dr. Kim worked on the precise calculation of ionization and excitation cross sections of numerous atoms, ions, and molecules that are important in fusion research and in plasma processing for manufacturing semiconductor chips. Dr. Kim greatly advanced the state-of-the-art of calculations for these cross sections through development and implementation of highly innovative methods, including his Binary-Encounter-Bethe (BEB) theory and a scaled plane wave Born (scaled PWB) theory. His methods, using closed quantum mechanical formulas and no adjustable parameters, avoid tedious large-scale computations with main-frame computers. His calculations closely reproduce the results of benchmark experiments as well as large-scale calculations requiring hours of computer time. This recent work on BEB and scaled PWB is reviewed and examples of its capabilities are shown.

  2. Benchmarking clinical photography services in the NHS.

    PubMed

    Arbon, Giles

    2015-01-01

    Benchmarking is used across services in the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have such a program in place and have had to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. It highlighted valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  3. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone. Using a ROV to place and remove sensors on the benchmarks will significantly reduce the number of sensors required by the community to monitor offshore strain in subduction zones.

  4. A Visual Evaluation Study of Graph Sampling Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Fangyan; Zhang, Song; Wong, Pak C.

    2017-01-29

    We evaluate a dozen prevailing graph-sampling techniques with the ultimate goal of better visualizing and understanding big, complex graphs that exhibit different properties and structures. The evaluation uses eight benchmark datasets with four different graph types, collected from the Stanford Network Analysis Platform and NetworkX, to give a comprehensive comparison across various types of graphs. The study provides a practical guideline for visualizing big graphs of different sizes and structures. The paper discusses results and important observations from the study.
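
    As a minimal illustration of one technique in the evaluated family, uniform random node sampling with NetworkX (the generator, fraction, and sizes below are arbitrary choices for the example, not the study's configuration):

      import random
      import networkx as nx

      def random_node_sample(G, fraction=0.15, seed=0):
          """Uniform random node sampling: keep a fraction of the nodes and
          return the induced subgraph. Simplest member of the sampling
          families typically compared in such studies."""
          rng = random.Random(seed)
          k = max(1, int(fraction * G.number_of_nodes()))
          nodes = rng.sample(list(G.nodes()), k)
          return G.subgraph(nodes).copy()

      # Example on a synthetic scale-free graph:
      G = nx.barabasi_albert_graph(1000, 3, seed=42)
      S = random_node_sample(G, 0.15)
      print(S.number_of_nodes(), S.number_of_edges())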

  5. Linear Scaling Density Functional Calculations with Gaussian Orbitals

    NASA Technical Reports Server (NTRS)

    Scuseria, Gustavo E.

    1999-01-01

    Recent advances in linear scaling algorithms that circumvent the computational bottlenecks of large-scale electronic structure simulations make it possible to carry out density functional calculations with Gaussian orbitals on molecules containing more than 1000 atoms and 15000 basis functions using current workstations and personal computers. This paper discusses the recent theoretical developments that have led to these advances and demonstrates in a series of benchmark calculations the present capabilities of state-of-the-art computational quantum chemistry programs for the prediction of molecular structure and properties.

  6. Resource requirements of inclusive urban development in India: insights from ten cities

    NASA Astrophysics Data System (ADS)

    Singh Nagpure, Ajay; Reiner, Mark; Ramaswami, Anu

    2018-02-01

    This paper develops a methodology to assess the resource requirements of inclusive urban development in India and compares those requirements to current community-wide material and energy flows. Methods include: (a) identifying minimum service level benchmarks for the provision of infrastructure services including housing, electricity and clean cooking fuels; (b) assessing the percentage of homes that lack access to infrastructure or that consume infrastructure services below the identified benchmarks; (c) quantifying the material requirements to provide basic infrastructure services using India-specific design data; and (d) computing material and energy requirements for inclusive development and comparing them with current community-wide material and energy flows. Applying the method to ten Indian cities, we find that: 1%-6% of households do not have electricity; 14%-71% use electricity below the benchmark of 25 kWh per capita-month; 4%-16% lack structurally sound housing; 50%-75% live in less floor area than the benchmark of 8.75 m2 of floor area per capita; 10%-65% lack clean cooking fuel; and 6%-60% lack connection to a sewerage system. Across the ten cities examined, providing basic electricity (25 kWh per capita-month) to all would require an addition of only 1%-10% to current community-wide electricity use. Providing basic clean LPG fuel (1.2 kg per capita-month) to all would require an increase of 5%-40% in current community-wide LPG use. Providing permanent shelter (implemented over a ten-year period) to populations living in non-permanent housing in Delhi and Chandigarh would require a 6%-14% increase over current annual community-wide cement use. Conversely, providing permanent housing to all people living in structurally unsound housing and those living in overcrowded housing (less than 5 m2 per capita) would require 32%-115% of current community-wide cement flows. Except for the last scenario, these results suggest that social policies that seek to provide basic infrastructure for all residents would not dramatically increase current community-wide resource flows.
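
    As a back-of-envelope illustration of step (d), the added electricity demand implied by lifting all below-benchmark homes up to the benchmark can be computed directly (all numbers below are hypothetical, not the paper's city data):

      # Hypothetical city:
      households_below = 300_000          # homes consuming below the benchmark
      avg_use_below    = 12.0             # kWh per capita-month among those homes
      benchmark        = 25.0             # kWh per capita-month
      persons_per_home = 4.5
      citywide_use     = 350_000_000.0    # current kWh per month, community-wide

      shortfall = households_below * persons_per_home * (benchmark - avg_use_below)
      print(f"added demand: {shortfall:,.0f} kWh/month "
            f"({100 * shortfall / citywide_use:.1f}% of current use)")  # ~5.0%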

  7. The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool

    EPA Pesticide Factsheets

    The tool provides quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model from which benchmark dose (BMD) and benchmark dose lower bound (BMDL) estimates are derived.
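
    A minimal sketch of the Akaike-weighting idea behind model averaging follows. Note that MADr-BMD averages the fitted dose-response models themselves before deriving a BMD; for brevity, this sketch applies the same information-criterion weights directly to per-model BMD estimates. Model names, AIC values, and BMDs are invented.

    ```python
    import math

    # model name -> (AIC, BMD estimate in mg/kg-day); values invented
    fits = {
        "logistic":   (142.1, 3.8),
        "log-probit": (140.3, 3.2),
        "Weibull":    (141.0, 3.5),
    }

    # Akaike weights: w_i proportional to exp(-(AIC_i - AIC_min) / 2)
    min_aic = min(aic for aic, _ in fits.values())
    raw = {m: math.exp(-(aic - min_aic) / 2.0) for m, (aic, _) in fits.items()}
    total = sum(raw.values())
    weights = {m: r / total for m, r in raw.items()}

    bmd_avg = sum(weights[m] * bmd for m, (_, bmd) in fits.items())
    print({m: round(w, 3) for m, w in weights.items()}, round(bmd_avg, 2))
    ```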

  8. Benchmarking--Measuring and Comparing for Continuous Improvement.

    ERIC Educational Resources Information Center

    Henczel, Sue

    2002-01-01

    Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…

  9. Dietary Interventions to Extend Life Span and Health Span Based on Calorie Restriction

    PubMed Central

    Minor, Robin K.; Allard, Joanne S.; Younts, Caitlin M.; Ward, Theresa M.

    2010-01-01

    The societal impact of obesity, diabetes, and other metabolic disorders continues to rise despite increasing evidence of their negative long-term consequences on health span, longevity, and aging. Unfortunately, dietary management and exercise frequently fail as remedies, underscoring the need for the development of alternative interventions to successfully treat metabolic disorders and enhance life span and health span. Using calorie restriction (CR)—which is well known to improve both health and longevity in controlled studies—as their benchmark, gerontologists are coming closer to identifying dietary and pharmacological therapies that may be applicable to aging humans. This review covers some of the more promising interventions targeted to affect pathways implicated in the aging process as well as variations on classical CR that may be better suited to human adaptation. PMID:20371545

  10. Thick-film acoustic emission sensors for use in structurally integrated condition-monitoring applications.

    PubMed

    Pickwell, Andrew J; Dorey, Robert A; Mba, David

    2011-09-01

    Monitoring the condition of complex engineering structures is an important aspect of modern engineering, eliminating unnecessary work and enabling planned maintenance, preventing failure. Acoustic emissions (AE) testing is one method of implementing continuous nondestructive structural health monitoring. A novel thick-film (17.6 μm) AE sensor is presented. Lead zirconate titanate thick films were fabricated using a powder/sol composite ink deposition technique and mechanically patterned to form a discrete thick-film piezoelectric AE sensor. The thick-film sensor was benchmarked against a commercial AE device and was found to exhibit comparable responses to simulated acoustic emissions.

  11. Fuzzy Structures Analysis of Aircraft Panels in NASTRAN

    NASA Technical Reports Server (NTRS)

    Sparrow, Victor W.; Buehrle, Ralph D.

    2001-01-01

    This paper concerns an application of the fuzzy structures analysis (FSA) procedures of Soize to prototypical aerospace panels in MSC/NASTRAN, a large commercial finite element program. A brief introduction to the FSA procedures is first provided. The implementation of the FSA methods is then disclosed, and the method is validated by comparison to published results for the forced vibrations of a fuzzy beam. The results of the new implementation show excellent agreement to the benchmark results. The ongoing effort at NASA Langley and Penn State to apply these fuzzy structures analysis procedures to real aircraft panels is then described.

  12. Developing Benchmarks for Solar Radio Bursts

    NASA Astrophysics Data System (ADS)

    Biesecker, D. A.; White, S. M.; Gopalswamy, N.; Black, C.; Domm, P.; Love, J. J.; Pierson, J.

    2016-12-01

    Solar radio bursts can interfere with radar, communication, and tracking signals. In severe cases, radio bursts can inhibit the successful use of radio communications and disrupt a wide range of systems that rely on Position, Navigation, and Timing services on timescales ranging from minutes to hours across wide areas on the dayside of Earth. The White House's Space Weather Action Plan has asked for solar radio burst intensity benchmarks for an event occurrence frequency of 1 in 100 years, as well as a theoretical maximum intensity benchmark. The solar radio benchmark team was also asked to define the wavelength/frequency bands of interest. The benchmark team developed preliminary (phase 1) benchmarks for the VHF (30-300 MHz), UHF (300-3000 MHz), GPS (1176-1602 MHz), F10.7 (2800 MHz), and microwave (4000-20000 MHz) bands. The preliminary benchmarks were derived from previously published work; limitations in that work will be addressed in phase 2 of the benchmark process. In addition, deriving theoretical maxima requires further work to determine where doing so is even possible, in order to meet the Action Plan objectives. In this presentation, we will present the phase 1 benchmarks and the basis used to derive them, as well as the work that remains to complete the final (phase 2) benchmarks.

  13. Benchmarking in national health service procurement in Scotland.

    PubMed

    Walker, Scott; Masson, Ron; Telford, Ronnie; White, David

    2007-11-01

    The paper reports the results of a study on benchmarking activities undertaken by the procurement organization within the National Health Service (NHS) in Scotland, namely National Procurement (previously Scottish Healthcare Supplies Contracts Branch). NHS performance is of course politically important, and benchmarking is increasingly seen as a means to improve performance, so the study was carried out to determine if the current benchmarking approaches could be enhanced. A review of the benchmarking activities used by the private sector, local government and NHS organizations was carried out to establish a framework of the motivations, benefits, problems and costs associated with benchmarking. This framework was used to carry out the research through case studies and a questionnaire survey of NHS procurement organizations both in Scotland and other parts of the UK. Nine of the 16 Scottish Health Boards surveyed reported carrying out benchmarking during the last three years. The findings of the research were that there were similarities in approaches between local government and NHS Scotland Health, but differences between NHS Scotland and other UK NHS procurement organizations. Benefits were seen as significant and it was recommended that National Procurement should pursue the formation of a benchmarking group with members drawn from NHS Scotland and external benchmarking bodies to establish measures to be used in benchmarking across the whole of NHS Scotland.

  14. Targeting the affordability of cigarettes: a new benchmark for taxation policy in low-income and middle-income countries.

    PubMed

    Blecher, Evan

    2010-08-01

    To investigate the appropriateness of tax incidence benchmarking (tax incidence being the percentage of the retail price occupied by taxes) in low-income and middle-income countries (LMICs) with rapidly growing economies, and to explore the viability of an alternative tax policy rule based on the affordability of cigarettes. The paper outlines criticisms of tax incidence benchmarking, particularly in the context of LMICs. It then considers an affordability-based benchmark using relative income price (RIP) as a measure of affordability. The RIP measures the percentage of annual per capita GDP required to purchase 100 packs of cigarettes. Using South Africa as a case study of an LMIC, future consumption is simulated using both tax incidence benchmarks and affordability benchmarks. I show that a tax incidence benchmark is not an optimal policy tool in South Africa and that an affordability benchmark could be a more effective means of reducing tobacco consumption in the future. Although a tax incidence benchmark was successful in increasing prices and reducing tobacco consumption in South Africa in the past, this approach has drawbacks, particularly in the context of a rapidly growing LMIC economy. An affordability benchmark represents an appropriate alternative that would be more effective in reducing future cigarette consumption.
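
    The RIP measure defined above reduces to a one-line formula; the sketch below implements it with invented figures.

    ```python
    def relative_income_price(price_per_pack: float, gdp_per_capita: float) -> float:
        """RIP: cost of 100 packs as a percentage of annual per capita GDP."""
        return 100.0 * (100 * price_per_pack) / gdp_per_capita

    # Hypothetical figures: 30 currency units per pack, per capita GDP of 60,000.
    print(f"RIP = {relative_income_price(30.0, 60_000.0):.1f}%")  # 5.0%
    ```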

  15. Multidisciplinary breast centres in Germany: a review and update of quality assurance through benchmarking and certification.

    PubMed

    Wallwiener, Markus; Brucker, Sara Y; Wallwiener, Diethelm

    2012-06-01

    This review summarizes the rationale for the creation of breast centres and discusses the studies conducted in Germany to obtain proof of principle for a voluntary, external benchmarking programme and proof of concept for third-party dual certification of breast centres and their mandatory quality management systems to the German Cancer Society (DKG) and German Society of Senology (DGS) Requirements of Breast Centres and ISO 9001 or similar. In addition, we report the most recent data on benchmarking and certification of breast centres in Germany. Review and summary of pertinent publications. Literature searches to identify additional relevant studies. Updates from the DKG/DGS programmes. Improvements in surrogate parameters as represented by structural and process quality indicators suggest that outcome quality is improving. The voluntary benchmarking programme has gained wide acceptance among DKG/DGS-certified breast centres. This is evidenced by early results from one of the largest studies in multidisciplinary cancer services research, initiated by the DKG and DGS to implement certified breast centres. The goal of establishing a nationwide network of certified breast centres in Germany can be considered largely achieved. Nonetheless the network still needs to be improved, and there is potential for optimization along the chain of care from mammography screening, interventional diagnosis and treatment through to follow-up. Specialization, guideline-concordant procedures as well as certification and recertification of breast centres remain essential to achieve further improvements in quality of breast cancer care and to stabilize and enhance the nationwide provision of high-quality breast cancer care.

  16. OrderRex: clinical order decision support and outcome predictions by data-mining electronic medical records.

    PubMed

    Chen, Jonathan H; Podchiyska, Tanya; Altman, Russ B

    2016-03-01

    To answer a "grand challenge" in clinical decision support, the authors produced a recommender system that automatically data-mines inpatient decision support from electronic medical records (EMR), analogous to Netflix or Amazon.com's product recommenders. EMR data were extracted from 1 year of hospitalizations (>18K patients with >5.4M structured items, including clinical orders, lab results, and diagnosis codes). Association statistics were counted for the ∼1.5K most common items to drive an order recommender. The authors assessed the recommender's ability to predict hospital admission orders and outcomes based on initial encounter data from separate validation patients. Compared to a reference benchmark of using the overall most common orders, the recommender using temporal relationships improves precision at 10 recommendations from 33% to 38% (P < 10^-10) for hospital admission orders. Relative risk-based association methods improve inverse frequency weighted recall from 4% to 16% (P < 10^-16). The framework yields a prediction receiver operating characteristic area under the curve (c-statistic) of 0.84 for 30-day mortality, 0.84 for 1-week need for ICU life support, 0.80 for 1-week hospital discharge, and 0.68 for 30-day readmission. Recommender results quantitatively improve on reference benchmarks and qualitatively appear clinically reasonable. The method assumes that aggregate decision making converges appropriately, but ongoing evaluation is necessary to discern common behaviors from "correct" ones. Collaborative filtering recommender algorithms generate clinical decision support that is predictive of real practice patterns and clinical outcomes. Incorporating temporal relationships improves accuracy. Different evaluation metrics satisfy different goals (predicting likely events vs. "interesting" suggestions). Published by Oxford University Press on behalf of the American Medical Informatics Association 2015. This work is written by US Government employees and is in the public domain in the US.
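
    A hedged sketch of the precision-at-10 evaluation described above: compare the top 10 recommended orders against the set of orders actually placed. The item names and data structures are assumptions for illustration, not OrderRex internals.

    ```python
    def precision_at_k(recommended: list, actual: set, k: int = 10) -> float:
        """Fraction of the top-k recommendations that were actually ordered."""
        return sum(1 for item in recommended[:k] if item in actual) / k

    recommended = ["cbc", "bmp", "chest_xray", "ecg", "troponin",
                   "lactate", "blood_culture", "ua", "lipase", "tsh"]
    actual = {"cbc", "bmp", "ecg", "troponin"}
    print(precision_at_k(recommended, actual))  # 0.4
    ```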

  17. Watershed Regressions for Pesticides (WARP) models for predicting stream concentrations of multiple pesticides

    USGS Publications Warehouse

    Stone, Wesley W.; Crawford, Charles G.; Gilliom, Robert J.

    2013-01-01

    Watershed Regressions for Pesticides for multiple pesticides (WARP-MP) are statistical models developed to predict concentration statistics for a wide range of pesticides in unmonitored streams. The WARP-MP models use the national atrazine WARP models in conjunction with an adjustment factor for each additional pesticide. The WARP-MP models perform best for pesticides with application timing and methods similar to those used with atrazine. For other pesticides, WARP-MP models tend to overpredict concentration statistics for the model development sites. For WARP and WARP-MP, the less-than-ideal sampling frequency for the model development sites leads to underestimation of the shorter-duration concentration statistics; hence, the WARP models tend to underpredict 4- and 21-d maximum moving-average concentrations, with median errors ranging from 9 to 38%. As a result of this sampling bias, pesticides that performed well with the model development sites are expected to have predictions that are biased low for these shorter-duration concentration statistics. The overprediction by WARP-MP apparent for some of the pesticides is variably offset by underestimation of the model development concentration statistics. Of the 112 pesticides used in the WARP-MP application to stream segments nationwide, 25 were predicted to have concentration statistics with a 50% or greater probability of exceeding one or more aquatic life benchmarks in one or more stream segments. Geographically, many of the modeled streams in the Corn Belt Region were predicted to have one or more pesticides that exceeded an aquatic life benchmark during 2009, indicating the potential vulnerability of streams in this region.
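
    A minimal sketch of the 4- and 21-day maximum moving-average concentration statistics referred to above, computed from a daily concentration series; the series here is randomly generated for illustration.

    ```python
    import numpy as np

    def max_moving_average(daily_conc: np.ndarray, window: int) -> float:
        """Maximum of the `window`-day moving average of a daily series."""
        kernel = np.ones(window) / window
        return float(np.convolve(daily_conc, kernel, mode="valid").max())

    rng = np.random.default_rng(0)
    conc = rng.lognormal(mean=0.0, sigma=1.0, size=365)  # hypothetical daily values
    print(max_moving_average(conc, 4), max_moving_average(conc, 21))
    ```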

  18. Microgravity Vibration Control and Civil Applications

    NASA Technical Reports Server (NTRS)

    Whorton, Mark Stephen; Alhorn, Dean Carl

    1998-01-01

    Controlling vibration of structures is essential for both space structures as well as terrestrial structures. Due to the ambient acceleration levels anticipated for the International Space Station, active vibration isolation is required to provide a quiescent acceleration environment for many science experiments. An overview is given of systems developed and flight tested in orbit for microgravity vibration isolation. Technology developed for vibration control of flexible space structures may also be applied to control of terrestrial structures such as buildings and bridges subject to wind loading or earthquake excitation. Recent developments in modern robust control for flexible space structures are shown to provide good structural vibration control while maintaining robustness to model uncertainties. Results of a mixed H-2/H-infinity control design are provided for a benchmark problem in structural control for earthquake resistant buildings.

  19. Analogue experiments as benchmarks for models of lava flow emplacement

    NASA Astrophysics Data System (ADS)

    Garel, F.; Kaminski, E. C.; Tait, S.; Limare, A.

    2013-12-01

    During an effusive volcanic eruption, crisis management is mainly based on predicting the advance and velocity of the lava flow. The spreading of a lava flow, seen as a gravity current, depends on its "effective rheology" and on the effusion rate. Fast-computing models have arisen in the past decade to predict, in near real time, the lava flow path and rate of advance. This type of model, crucial for mitigating volcanic hazards and organizing potential evacuations, has mainly been compared a posteriori to real cases of emplaced lava flows. The input parameters of such simulations applied to natural eruptions, especially effusion rate and topography, are often not known precisely and are difficult to evaluate after the eruption. It is therefore not straightforward to identify the causes of discrepancies between model outputs and observed lava emplacement, whereas comparison of models with controlled laboratory experiments is easier. The challenge for numerical simulations of lava flow emplacement is to model the simultaneous advance and thermal structure of viscous lava flows. To provide original constraints later to be used in benchmark numerical simulations, we have performed lab-scale experiments investigating the cooling of isoviscous gravity currents. The simplest experimental set-up is as follows: silicone oil, whose viscosity (around 5 Pa.s) varies by less than a factor of 2 over the temperature range studied, is injected from a point source onto a horizontal plate and spreads axisymmetrically. The oil is injected hot and progressively cools to ambient temperature away from the source. Once the flow is developed, it presents a stationary radial thermal structure whose characteristics depend on the input flow rate. In addition to the experimental observations, we have developed a theoretical model (Garel et al., JGR, 2012) confirming the relationship between supply rate, flow advance, and stationary surface thermal structure. We also provide experimental observations of the effect of wind on the surface thermal structure of a viscous flow, which could be used to benchmark a thermal heat-loss model. We will also briefly present more complex analogue experiments using wax material. These experiments exhibit discontinuous advance behavior and a dual surface thermal structure, with low (solidified) versus high (hot liquid exposed at the surface) surface temperature regions. Emplacement models should aim to reproduce these two features, also observed on lava flows, to better predict the hazard of lava inundation.

  20. Selection of appropriate tumour data sets for Benchmark Dose Modelling (BMD) and derivation of a Margin of Exposure (MoE) for substances that are genotoxic and carcinogenic: considerations of biological relevance of tumour type, data quality and uncertainty assessment.

    PubMed

    Edler, Lutz; Hart, Andy; Greaves, Peter; Carthew, Philip; Coulet, Myriam; Boobis, Alan; Williams, Gary M; Smith, Benjamin

    2014-08-01

    This article addresses a number of concepts related to the selection and modelling of carcinogenicity data for the calculation of a Margin of Exposure. It follows up on the recommendations put forward by the International Life Sciences Institute - European branch in 2010 on the application of the Margin of Exposure (MoE) approach to substances in food that are genotoxic and carcinogenic. The aims are to provide practical guidance on the relevance of animal tumour data for human carcinogenic hazard assessment, appropriate selection of tumour data for Benchmark Dose Modelling, and approaches for dealing with the uncertainty associated with the selection of data for modelling and, consequently, the derived Point of Departure (PoD) used to calculate the MoE. Although the concepts outlined in this article are interrelated, the background expertise needed to address each topic varies. For instance, the expertise needed to make a judgement on biological relevance of a specific tumour type is clearly different to that needed to determine the statistical uncertainty around the data used for modelling a benchmark dose. As such, each topic is dealt with separately to allow those with specialised knowledge to target key areas of guidance and provide a more in-depth discussion on each subject for those new to the concept of the Margin of Exposure approach. Copyright © 2013 ILSI Europe. Published by Elsevier Ltd.. All rights reserved.

  1. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  2. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  3. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  4. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  5. The application of ab initio calculations to molecular spectroscopy

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.

    1989-01-01

    The state of the art in ab initio molecular structure calculations is reviewed with an emphasis on recent developments, such as full configuration-interaction benchmark calculations and atomic natural orbital basis sets. It is found that new developments in methodology, combined with improvements in computer hardware, are leading to unprecedented accuracy in solving problems in spectroscopy.

  6. The application of ab initio calculations to molecular spectroscopy

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.

    1989-01-01

    The state of the art in ab initio molecular structure calculations is reviewed, with an emphasis on recent developments such as full configuration-interaction benchmark calculations and atomic natural orbital basis sets. It is shown that new developments in methodology combined with improvements in computer hardware are leading to unprecedented accuracy in solving problems in spectroscopy.

  7. Benchmarking Campus Communications and Marketing Programs: A Look at Policies, Structures, Tools and Audiences. CASE White Paper

    ERIC Educational Resources Information Center

    Brounley, Lindy

    2010-01-01

    The University of Florida (UF) established a Strategic Communications Planning Committee in May 2009 to coordinate a campuswide effort to promote strategic communications planning, strengthen the university's brand, unify key themes and messages, maximize use of available research and resources, and identify and propagate best practices and…

  8. Using benchmarking techniques and the 2011 maternity practices infant nutrition and care (mPINC) survey to improve performance among peer groups across the United States.

    PubMed

    Edwards, Roger A; Dee, Deborah; Umer, Amna; Perrine, Cria G; Shealy, Katherine R; Grummer-Strawn, Laurence M

    2014-02-01

    A substantial proportion of US maternity care facilities engage in practices that are not evidence-based and that interfere with breastfeeding. The CDC Survey of Maternity Practices in Infant Nutrition and Care (mPINC) showed significant variation in maternity practices among US states. The purpose of this article is to use benchmarking techniques to identify states within relevant peer groups that were top performers on mPINC survey indicators related to breastfeeding support. We used 11 indicators of breastfeeding-related maternity care from the 2011 mPINC survey and benchmarking techniques to organize and compare hospital-based maternity practices across the 50 states and Washington, DC. We created peer categories for benchmarking first by region (grouping states by West, Midwest, South, and Northeast) and then by size (grouping states by the number of maternity facilities and dividing each region into approximately equal halves based on the number of facilities). Thirty-four states had scores high enough to serve as benchmarks, and 32 states had scores low enough to reflect the lowest score gap from the benchmark on at least 1 indicator. No state served as the benchmark on more than 5 indicators and no state was furthest from the benchmark on more than 7 indicators. The small peer group benchmarks in the South, West, and Midwest were better than the large peer group benchmarks on 91%, 82%, and 36% of the indicators, respectively. In the West large, the Midwest large, the Midwest small, and the South large peer groups, 4-6 benchmarks showed that less than 50% of hospitals have ideal practice in all states. The evaluation presents benchmarks for peer group state comparisons that provide potential and feasible targets for improvement.

  9. Hospital benchmarking: are U.S. eye hospitals ready?

    PubMed

    de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S

    2012-01-01

    Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added "public" value of benchmarking in health care is questionable.

  10. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    USGS Publications Warehouse

    Nowell, Lisa H.; Norman, Julia E.; Ingersoll, Christopher G.; Moran, Patrick W.

    2016-01-01

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics of sediment, and uncertainty in TEB values. Additional evaluations of benchmarks in relation to sediment chemistry and toxicity are ongoing.
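
    A minimal sketch of the mixture indicator described above: summing concentration/benchmark quotients over all pesticides detected in a sample. The concentrations and TEB values are invented for illustration.

    ```python
    def summed_benchmark_quotients(concentrations: dict, benchmarks: dict) -> float:
        """Sum of C_i / benchmark_i over pesticides detected in a sample."""
        return sum(c / benchmarks[p] for p, c in concentrations.items())

    sample = {"bifenthrin": 4.0, "chlorpyrifos": 2.5}  # measured values, invented
    teb = {"bifenthrin": 3.0, "chlorpyrifos": 5.0}     # hypothetical TEB values
    print(f"Summed TEB quotients: {summed_benchmark_quotients(sample, teb):.2f}")
    ```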

  11. 40 CFR 141.172 - Disinfection profiling and benchmarking.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... benchmarking. 141.172 Section 141.172 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Disinfection-Systems Serving 10,000 or More People § 141.172 Disinfection profiling and benchmarking. (a... sanitary surveys conducted by the State. (c) Disinfection benchmarking. (1) Any system required to develop...

  12. 42 CFR 440.390 - Assurance of transportation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...

  13. 42 CFR 440.390 - Assurance of transportation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...

  14. 42 CFR 440.390 - Assurance of transportation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...

  15. 42 CFR 440.390 - Assurance of transportation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...

  16. 42 CFR 440.390 - Assurance of transportation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...

  17. The Isprs Benchmark on Indoor Modelling

    NASA Astrophysics Data System (ADS)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  18. Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ye; Ma, Xiaosong; Liu, Qing Gary

    2015-01-01

    Parallel application benchmarks are indispensable for evaluating and optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, and reconfigure, and are often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks: they retain the original applications' performance characteristics, in particular the relative performance across platforms.

  19. Benchmarking in Academic Pharmacy Departments

    PubMed Central

    Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O.; Ross, Leigh Ann

    2010-01-01

    This paper discusses benchmarking in academic pharmacy and offers recommendations for its potential uses in academic pharmacy departments. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately to plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is also used internally to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather these data have had limited success. We believe this information is potentially important, urge that efforts to gather it be continued, and offer suggestions to achieve full participation. PMID:21179251

  20. Benchmarking in academic pharmacy departments.

    PubMed

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    This paper discusses benchmarking in academic pharmacy and offers recommendations for its potential uses in academic pharmacy departments. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately to plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is also used internally to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather these data have had limited success. We believe this information is potentially important, urge that efforts to gather it be continued, and offer suggestions to achieve full participation.

  1. Engine dynamic analysis with general nonlinear finite element codes. II - Bearing element implementation, overall numerical characteristics and benchmarking

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Adams, M.; Lam, P.; Fertis, D.; Zeid, I.

    1982-01-01

    Second-year efforts within a three-year study to develop and extend finite element (FE) methodology to efficiently handle the transient/steady-state response of the rotor-bearing-stator structure associated with gas turbine engines are outlined. The two main areas aim at (1) implementing the squeeze film damper element in a general-purpose FE code for testing and evaluation; and (2) determining the numerical characteristics of the FE-generated rotor-bearing-stator simulation scheme. The governing FE field equations are set out and the solution methodology is presented. The choice of ADINA as the general-purpose FE code is explained, and the numerical operational characteristics of the direct integration approach to FE-generated rotor-bearing-stator simulations are determined, including benchmarking, comparison of explicit vs. implicit direct integration methodologies, and demonstration problems.

  2. Levelized cost of energy for a Backward Bent Duct Buoy

    DOE PAGES

    Bull, Diana; Jenne, D. Scott; Smith, Christopher S.; ...

    2016-07-18

    The Reference Model Project, supported by the U.S. Department of Energy, was developed to provide publicly available technical and economic benchmarks for a variety of marine energy converters. The methodology to achieve these benchmarks is to develop public domain designs that incorporate power performance estimates, structural models, anchor and mooring designs, power conversion chain designs, and estimates of the operations and maintenance, installation, and environmental permitting required. The reference model designs are intended to be conservative, robust, and experimentally verified. The Backward Bent Duct Buoy (BBDB) presented in this paper is one of three wave energy conversion devices studied within the Reference Model Project. Furthermore, comprehensive modeling of the BBDB in a Northern California climate has enabled a full levelized cost of energy (LCOE) analysis to be completed on this device.

  3. Levelized cost of energy for a Backward Bent Duct Buoy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bull, Diana; Jenne, D. Scott; Smith, Christopher S.

    2016-12-01

    The Reference Model Project, supported by the U.S. Department of Energy, was developed to provide publicly available technical and economic benchmarks for a variety of marine energy converters. The methodology to achieve these benchmarks is to develop public domain designs that incorporate power performance estimates, structural models, anchor and mooring designs, power conversion chain designs, and estimates of the operations and maintenance, installation, and environmental permitting required. The reference model designs are intended to be conservative, robust, and experimentally verified. The Backward Bent Duct Buoy (BBDB) presented in this paper is one of three wave energy conversion devices studied within the Reference Model Project. Comprehensive modeling of the BBDB in a Northern California climate has enabled a full levelized cost of energy (LCOE) analysis to be completed on this device.

  4. RBscore&NBench: a high-level web server for nucleic acid binding residues prediction with a large-scale benchmarking database.

    PubMed

    Miao, Zhichao; Westhof, Eric

    2016-07-08

    RBscore&NBench combines a web server (RBscore) and a database (NBench). RBscore predicts RNA- and DNA-binding residues in proteins and visualizes the prediction scores and features on protein structures. The scoring scheme of RBscore directly links feature values to nucleic acid binding probabilities and illustrates the nucleic acid binding energy funnel on the protein surface. To avoid biases in dataset choice, binding site definition, and assessment metric, we compared RBscore with 18 web servers and 3 stand-alone programs on 41 datasets, which demonstrated the high and stable accuracy of RBscore. This comprehensive comparison led us to develop a benchmark database named NBench. The web server is available at: http://ahsoka.u-strasbg.fr/rbscorenbench/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
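
    As a hedged illustration of the AUC assessment used in the comparison, the sketch below scores per-residue predictions against binary binding labels; the labels and scores are invented, and RBscore's actual features and scoring scheme are not reproduced.

    ```python
    from sklearn.metrics import roc_auc_score

    labels = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]  # 1 = binding residue (invented)
    scores = [0.9, 0.2, 0.65, 0.7, 0.6, 0.1, 0.3, 0.5, 0.8, 0.2]
    print(f"AUC: {roc_auc_score(labels, scores):.2f}")  # 0.96 on this toy data
    ```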

  5. Evaluating the Effect of Labeled Benchmarks on Children’s Number Line Estimation Performance and Strategy Use

    PubMed Central

    Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen

    2017-01-01

    Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children’s strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line positively affects third and fifth graders’ NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values would have a positive effect on younger children’s NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders’ NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children’s benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children’s age and familiarity with the number range, these additional external benchmarks might need to be labeled. PMID:28713302

  6. Evaluating the Effect of Labeled Benchmarks on Children's Number Line Estimation Performance and Strategy Use.

    PubMed

    Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen

    2017-01-01

    Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children's strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line positively affects third and fifth graders' NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values would have a positive effect on younger children's NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders' NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children's benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children's age and familiarity with the number range, these additional external benchmarks might need to be labeled.
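
    Neither abstract defines its accuracy measure; as a hedged illustration, the sketch below computes percentage absolute error (PAE), a metric commonly used for number line estimation tasks, on invented trial data.

    ```python
    def percentage_absolute_error(estimate: float, target: float,
                                  scale: float = 1000.0) -> float:
        """PAE = |estimate - target| / scale * 100 on a 0-to-scale number line."""
        return abs(estimate - target) / scale * 100.0

    trials = [(250, 230), (500, 540), (750, 705)]  # (target, estimate), invented
    pae = [percentage_absolute_error(est, tgt) for tgt, est in trials]
    print(f"Mean PAE: {sum(pae) / len(pae):.1f}%")  # 3.5%
    ```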

  7. Tribological and Wear Performance of Nanocomposite PVD Hard Coatings Deposited on Aluminum Die Casting Tool

    PubMed Central

    Fox-Rabinovich, German; Locks Junior, Edinei; Stolf, Pietro; Matos Martins, Marcelo

    2018-01-01

    In the aluminum die casting process, erosion, corrosion, soldering, and die sticking have a significant influence on tool life and product quality. A number of coatings, such as TiN, CrN, and (Cr,Al)N, deposited by physical vapor deposition (PVD) have been employed as protective coatings due to their high hardness and chemical stability. In this study, the wear performance of two nanocomposite AlTiN and AlCrN coatings with different structures was evaluated. These coatings were deposited on aluminum die casting mold tool substrates (AISI H13 hot work steel) by PVD using pulsed cathodic arc evaporation, equipped with three lateral arc-rotating cathodes (LARC) and one central rotating cathode (CERC). The research was performed in two stages: in the first stage, the outlined coatings were characterized regarding their chemical composition, morphology, and structure using glow discharge optical emission spectroscopy (GDOES), scanning electron microscopy (SEM), and X-ray diffraction (XRD), respectively. Surface morphology and mechanical properties were evaluated by atomic force microscopy (AFM) and nanoindentation. The coating adhesion was studied using the Mercedes test and scratch testing. During the second stage, industrial tests were carried out on coated die casting molds. In parallel, tribological tests were performed in order to determine whether a correlation between laboratory and industrial tests could be drawn. All of the results were compared against a benchmark monolayer AlCrN coating. The data obtained show that the best performance was achieved for the AlCrN/Si3N4 nanocomposite coating, which displays an optimum combination of hardness, adhesion, soldering behavior, oxidation resistance, and stress state. These characteristics are essential for improving die mold service life. This coating therefore emerges as a novel option for protecting aluminum die casting molds. PMID:29495620

  8. Medical school benchmarking - from tools to programmes.

    PubMed

    Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  9. 42 CFR 457.430 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark-equivalent health benefits coverage. 457... STATES State Plan Requirements: Coverage and Benefits § 457.430 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has...

  10. 42 CFR 457.430 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark-equivalent health benefits coverage. 457... STATES State Plan Requirements: Coverage and Benefits § 457.430 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has...

  11. 42 CFR 457.430 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark-equivalent health benefits coverage. 457... STATES State Plan Requirements: Coverage and Benefits § 457.430 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has...

  12. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark-equivalent health benefits coverage. 440.335 Section 440.335 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a...

  13. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Benchmark-equivalent health benefits coverage. 440.335 Section 440.335 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a...

  14. Performance Evaluation and Improvement of Ferroelectric Field-Effect Transistor Memory

    NASA Astrophysics Data System (ADS)

    Yu, Hyung Suk

    Flash memory is rapidly reaching scaling limitations due to the reduction of charge in floating gates, charge leakage, and capacitive coupling between cells, which cause threshold voltage fluctuations, short retention times, and interference. Many new memory technologies are being considered as alternatives to flash memory in an effort to overcome these limitations. The Ferroelectric Field-Effect Transistor (FeFET) is one of the main emerging candidates because of its structural similarity to conventional FETs and its fast switching speed. Nevertheless, the performance of FeFETs has not been systematically compared and analyzed against other competing technologies. In this work, we first benchmark the intrinsic performance of FeFETs and other memories by simulation in order to identify the strengths and weaknesses of FeFETs. To simulate realistic memory applications, we compare the memories in an array structure. For these comparisons, we construct an accurate delay model and verify it by benchmarking against exact HSPICE simulations. Second, we propose an accurate model for the FeFET memory window, since the existing model assumes symmetric operation voltages and is not valid for the practical asymmetric operation voltages; our modeling considers practical operation voltages and device dimensions. We also investigate realistic changes of the memory window over time and the retention time of FeFETs. Last, to improve the memory window and subthreshold swing, we suggest nonplanar junctionless structures for FeFETs. Using the suggested structures, we study the dimensional dependences of crucial parameters such as memory window and subthreshold swing, and also analyze key interference mechanisms.

  15. Neutron spectra measurement and calculations using data libraries CIELO, JEFF-3.2 and ENDF/B-VII.1 in iron benchmark assemblies

    NASA Astrophysics Data System (ADS)

    Jansky, Bohumil; Rejchrt, Jiri; Novak, Evzen; Losa, Evzen; Blokhin, Anatoly I.; Mitenkova, Elena

    2017-09-01

    The leakage neutron spectra measurements have been performed on benchmark spherical assemblies: iron spheres with diameters of 20, 30, 50, and 100 cm. The Cf-252 neutron source was placed at the centre of each iron sphere. The proton recoil method was used for the neutron spectra measurement, using spherical hydrogen proportional counters with a diameter of 4 cm and pressures of 400 and 1000 kPa. The neutron energy range of the spectrometer is from 0.1 to 1.3 MeV. This energy interval represents about 85% of all leakage neutrons for the Fe sphere of diameter 50 cm and about 74% for the Fe sphere of diameter 100 cm. Corresponding MCNP neutron spectra calculations based on the data libraries CIELO, JEFF-3.2, and ENDF/B-VII.1 were performed. Two calculations were done with the CIELO library: the first used data for all Fe isotopes from CIELO, and the second (CIELO-56) used only Fe-56 data from CIELO, with data for the other Fe isotopes taken from ENDF/B-VII.1. The energy structures used for the calculations and measurements were 40 gpd (groups per decade) and 200 gpd. The 200 gpd structure corresponds to a lethargy step of about 1%. This relatively fine energy structure makes it possible to analyze the Fe resonance neutron energy structure. The evaluated cross section data for Fe were validated through comparisons between the calculated and experimental spectra.
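
    The "groups per decade" energy structure mentioned above amounts to logarithmically spaced group boundaries with a constant lethargy width of ln(10)/N; the sketch below generates such a structure for the 0.1-1.3 MeV range (the function name and units are illustrative assumptions).

    ```python
    import math

    def group_boundaries(e_min: float, e_max: float, per_decade: int) -> list:
        """Logarithmically spaced boundaries, `per_decade` groups per decade."""
        n = math.ceil(per_decade * math.log10(e_max / e_min))
        return [e_min * 10 ** (k / per_decade) for k in range(n + 1)]

    # 200 groups per decade -> constant lethargy width ln(10)/200 ~ 1.15%,
    # consistent with the "about 1%" step quoted in the abstract.
    bounds = group_boundaries(0.1e6, 1.3e6, 200)  # 0.1 to 1.3 MeV, in eV
    print(len(bounds) - 1, "groups; boundary ratio =", 10 ** (1 / 200))
    ```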

  16. Great interactions: How binding incorrect partners can teach us about protein recognition and function.

    PubMed

    Vamparys, Lydie; Laurent, Benoist; Carbone, Alessandra; Sacquin-Mora, Sophie

    2016-10-01

    Protein-protein interactions play a key part in most biological processes, and understanding their mechanisms is a fundamental problem with numerous practical applications. The prediction of protein binding sites in particular is of paramount importance, since proteins now represent a major class of therapeutic targets. Amongst other methods, docking simulations between two proteins known to interact can be a useful tool for predicting likely binding patches on a protein surface. From the analysis of the protein interfaces generated by a massive cross-docking experiment using the 168 proteins of the Docking Benchmark 2.0, where all possible protein pairs, and not only experimental ones, have been docked together, we show that it is also possible to predict a protein's binding residues without any prior knowledge of its potential interaction partners. Evaluating the performance of the cross-docking predictions using the area under the specificity-sensitivity ROC curve (AUC) leads to an AUC value of 0.77 for the complete benchmark (compared to the 0.5 AUC value obtained for random predictions). Furthermore, a new clustering analysis performed on the binding patches scattered over the protein surface shows that their distribution and growth depend on the protein's functional group. Finally, in several cases, the binding-site predictions resulting from the cross-docking simulations lead to the identification of an alternate interface, corresponding to interaction with a biomolecular partner that is not included in the original benchmark. Proteins 2016; 84:1408-1421. © 2016 The Authors Proteins: Structure, Function, and Bioinformatics Published by Wiley Periodicals, Inc.
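
    To make the evaluation metric concrete, the sketch below scores hypothetical per-residue interface predictions with ROC AUC, the same figure of merit reported above; the labels and propensity values are invented for illustration and are not data from the paper (assumes scikit-learn is installed):

        # Hypothetical illustration: scoring per-residue interface predictions with ROC AUC.
        from sklearn.metrics import roc_auc_score

        # y_true: 1 if a residue belongs to the experimental interface, else 0 (made-up data)
        y_true  = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
        # y_score: per-residue propensity, e.g. the fraction of docked decoys in which
        # the residue appears at an interface (made-up values)
        y_score = [0.91, 0.12, 0.35, 0.80, 0.66, 0.25, 0.40, 0.05, 0.73, 0.30]

        print(roc_auc_score(y_true, y_score))  # 1.0 = perfect, 0.5 = random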

  17. Valence and charge-transfer optical properties for some SinCm (m, n ≤ 12) clusters: Comparing TD-DFT, complete-basis-limit EOMCC, and benchmarks from spectroscopy

    NASA Astrophysics Data System (ADS)

    Lutz, Jesse J.; Duan, Xiaofeng F.; Ranasinghe, Duminda S.; Jin, Yifan; Margraf, Johannes T.; Perera, Ajith; Burggraf, Larry W.; Bartlett, Rodney J.

    2018-05-01

    Accurate optical characterization of the closo-Si12C12 molecule is important to guide experimental efforts toward the synthesis of nano-wires, cyclic nano-arrays, and related array structures, which are anticipated to be robust and efficient exciton materials for opto-electronic devices. Working toward calibrated methods for the description of closo-Si12C12 oligomers, various electronic structure approaches are evaluated for their ability to reproduce measured optical transitions of the SiC2, Si2Cn (n = 1-3), and Si3Cn (n = 1, 2) clusters reported earlier by Steglich and Maier [Astrophys. J. 801, 119 (2015)]. Complete-basis-limit equation-of-motion coupled-cluster (EOMCC) results are presented and a comparison is made between perturbative and renormalized non-iterative triples corrections. The effect of adding a renormalized correction for quadruples is also tested. Benchmark test sets derived from both measurement and high-level EOMCC calculations are then used to evaluate the performance of a variety of density functionals within the time-dependent density functional theory (TD-DFT) framework. The best-performing functionals are subsequently applied to predict valence TD-DFT excitation energies for the lowest-energy isomers of SinC and Sin-1C7-n (n = 4-6). TD-DFT approaches are then applied to the SinCn (n = 4-12) clusters and unique spectroscopic signatures of closo-Si12C12 are discussed. Finally, various long-range corrected density functionals, including those from the CAM-QTP family, are applied to a charge-transfer excitation in a cyclic (Si4C4)4 oligomer. Approaches for gauging the extent of charge-transfer character are also tested and EOMCC results are used to benchmark functionals and make recommendations.

  18. LIPS database with LIPService: a microscopic image database of intracellular structures in Arabidopsis guard cells.

    PubMed

    Higaki, Takumi; Kutsuna, Natsumaro; Hasezawa, Seiichiro

    2013-05-16

    Intracellular configuration is an important feature of cell status. Recent advances in microscopic imaging techniques allow us to easily obtain large numbers of microscopic images of intracellular structures, so automated microscopic image recognition techniques are of extreme importance to future phenomics/visible screening approaches. However, no benchmark microscopic image dataset existed for intracellular organelles in a specified plant cell type. We previously established the Live Images of Plant Stomata (LIPS) database, a publicly available collection of optical-section images of various intracellular structures of plant guard cells, as a model system of environmental signal perception and transduction. Here we report recent updates to the LIPS database and the establishment of a new database interface, LIPService. We updated the LIPS dataset and established LIPService to promote efficient inspection of intracellular structure configurations. Images of cell nuclei, microtubules, actin microfilaments, mitochondria, chloroplasts, endoplasmic reticulum, peroxisomes, endosomes, Golgi bodies, and vacuoles can be filtered by probe name or by morphometric parameters such as stomatal aperture. In addition to the serial optical-section images of the original LIPS database, new volume-rendering data have been released for easy web browsing of three-dimensional intracellular structures, allowing inspection of their configurations and their relationships with cell status/morphology. We also demonstrated the utility of the new LIPS image database for automated organelle recognition, applying image clustering analyses to images from another plant cell image database. The updated LIPS database provides a benchmark image dataset for representative intracellular structures in Arabidopsis guard cells, and the newly released LIPService allows users to inspect the relationships between organellar three-dimensional configurations and morphometric parameters.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandor, Debra; Chung, Donald; Keyser, David

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  20. Benchmarking for Higher Education.

    ERIC Educational Resources Information Center

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  1. Benchmark Study of Global Clean Energy Manufacturing

    Science.gov Websites

    Through a first-of-its-kind benchmark study ... 'Clean Energy Technology End Product.' The study examined four clean energy technologies: wind turbine components ...

  2. Cross-industry benchmarking: is it applicable to the operating room?

    PubMed

    Marco, A P; Hart, S

    2001-01-01

    The use of benchmarking has been growing in nonmedical industries. The concept is increasingly being applied to medicine as the industry strives to improve quality and financial performance. Benchmarks can be either internal (set by the institution) or external (using others' performance as a goal). In some industries, benchmarking has crossed industry lines to identify breakthroughs in thinking. In this article, we examine whether the airline industry can be used as a source of external process benchmarking for the operating room.

  3. Overview of TPC Benchmark E: The Next Generation of OLTP Benchmarks

    NASA Astrophysics Data System (ADS)

    Hogan, Trish

    Set to replace the aging TPC-C, the TPC Benchmark E is the next generation OLTP benchmark, which more accurately models client database usage. TPC-E addresses the shortcomings of TPC-C. It has a much more complex workload, requires the use of RAID-protected storage, generates much less I/O, and is much cheaper and easier to set up, run, and audit. After a period of overlap, it is expected that TPC-E will become the de facto OLTP benchmark.

  4. Unified constitutive models for high-temperature structural applications

    NASA Technical Reports Server (NTRS)

    Lindholm, U. S.; Chan, K. S.; Bodner, S. R.; Weber, R. M.; Walker, K. P.

    1988-01-01

    Unified constitutive models are characterized by the use of a single inelastic strain rate term for treating all aspects of inelastic deformation, including plasticity, creep, and stress relaxation under monotonic or cyclic loading. The structure of this class of constitutive theory pertinent for high temperature structural applications is first outlined and discussed. The effectiveness of the unified approach for representing high temperature deformation of Ni-base alloys is then evaluated by extensive comparison of experimental data and predictions of the Bodner-Partom and the Walker models. The use of the unified approach for hot section structural component analyses is demonstrated by applying the Walker model in finite element analyses of a benchmark notch problem and a turbine blade problem.
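
    For readers unfamiliar with the unified formulation, the defining feature described above can be written compactly: the total strain rate splits into an elastic part and a single inelastic part that covers plasticity, creep, and relaxation alike. One commonly cited form of the Bodner-Partom flow law is, roughly (consult the original references for the exact equations used in this work):

        \dot{\varepsilon}_{ij} = \dot{\varepsilon}^{e}_{ij} + \dot{\varepsilon}^{I}_{ij}, \qquad \dot{\varepsilon}^{I}_{ij} = D_0 \exp\!\left[-\frac{1}{2}\left(\frac{Z^{2}}{3 J_{2}}\right)^{n}\right] \frac{s_{ij}}{\sqrt{J_{2}}},

    where s_{ij} is the deviatoric stress, J_2 its second invariant, Z an internal hardening variable that evolves with inelastic work, and D_0, n are material constants. There is no separate yield surface, so creep and plasticity emerge from the same equation at different stress levels.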

  5. Improved Low Temperature Performance of Supercapacitors

    NASA Technical Reports Server (NTRS)

    Brandon, Erik J.; West, William C.; Smart, Marshall C.; Gnanaraj, Joe

    2013-01-01

    Low temperature double-layer capacitor operation is enabled by: a base acetonitrile/TEATFB salt formulation; and the addition of low melting point formates, esters and cyclic ethers. Key electrolyte design factors: volume of co-solvent; concentration of salt. Capacity is increased through higher capacity electrodes: zeolite-templated carbons; asymmetric cell designs. Continuing efforts: improving asymmetric cell performance at low temperature; cycle life testing. Presentation outline: motivation; benchmark performance of commercial cells; approaches for designing low temperature systems (symmetric cells with activated carbon electrodes, symmetric cells with zeolite-templated carbon electrodes, asymmetric cells with lithium titanate/activated carbon electrodes); experimental results; summary.

  6. Implementation and validation of a conceptual benchmarking framework for patient blood management.

    PubMed

    Kastner, Peter; Breznik, Nada; Gombotz, Hans; Hofmann, Axel; Schreier, Günter

    2015-01-01

    Public health authorities and healthcare professionals are obliged to ensure high quality health services. Because of the high variability in the utilisation of blood and blood components, benchmarking is indicated in transfusion medicine. We report the implementation and validation of a benchmarking framework for Patient Blood Management (PBM) based on the report from the second Austrian benchmark trial. Core modules for automatic report generation were implemented with KNIME (Konstanz Information Miner) and validated by comparing their output with the results of the second Austrian benchmark trial. Delta analysis shows a deviation of less than 0.1% for 95% of the results (maximum 1.4%). The framework provides a reliable tool for PBM benchmarking. The next step is technical integration with hospital information systems.

  7. NASTRAN DMAP Fuzzy Structures Analysis: Summary of Research

    NASA Technical Reports Server (NTRS)

    Sparrow, Victor W.

    2001-01-01

    The main proposed tasks of Cooperative Agreement NCC1-382 were: (1) developing MSC/NASTRAN DMAP language scripts to implement the Soize fuzzy structures approach for modeling the dynamics of complex structures; (2) benchmarking the results of the new code to those for a cantilevered beam in the literature; and (3) testing and validating the new code by comparing the fuzzy structures results to NASA Langley experimental and conventional finite element results for two model test structures representative of aircraft fuselage sidewall construction: (A) a small aluminum test panel (SLP, single longeron panel) with a single longitudinal stringer attached with bolts; and (B) a 47 by 72 inch flat aluminum fuselage panel (AFP, aluminum fuselage panel) including six longitudinal stringers and four frame stiffeners attached with rivets.

  8. Toxicological benchmarks for potential contaminants of concern for effects on soil and litter invertebrates and heterotrophic process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Will, M.E.; Suter, G.W. II

    1995-09-01

    An important step in ecological risk assessments is screening the chemicals occurring on a site for contaminants of potential concern. Screening may be accomplished by comparing reported ambient concentrations to a set of toxicological benchmarks. Multiple endpoints for assessing risks posed by soil-borne contaminants to organisms directly impacted by them have been established. This report presents benchmarks for soil invertebrates and microbial processes and addresses only chemicals found at United States Department of Energy (DOE) sites. No benchmarks for pesticides are presented. After discussing methods, this report presents the results of the literature review and benchmark derivation for toxicity to earthworms (Sect. 3), heterotrophic microbes and their processes (Sect. 4), and other invertebrates (Sect. 5). The final sections compare the benchmarks to other criteria and background and draw conclusions concerning the utility of the benchmarks.

  9. Benchmarks for target tracking

    NASA Astrophysics Data System (ADS)

    Dunham, Darin T.; West, Philip D.

    2011-09-01

    The term benchmark originates from the chiseled horizontal marks that surveyors made, into which an angle-iron could be placed to bracket ("bench") a leveling rod, thus ensuring that the leveling rod can be repositioned in exactly the same place in the future. A benchmark in computer terms is the result of running a computer program, or a set of programs, in order to assess the relative performance of an object by running a number of standard tests and trials against it. This paper will discuss the history of simulation benchmarks that are being used by multiple branches of the military and agencies of the US government. These benchmarks range from missile defense applications to chemical biological situations. Typically, a benchmark is used with Monte Carlo runs in order to tease out how algorithms deal with variability and the range of possible inputs. We will also describe problems that can be solved by a benchmark.

  10. Benchmarking Using Basic DBMS Operations

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors have used this benchmark to show the superiority and competitive edge of their products. Over time, however, TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. The model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and to compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
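
    To make the idea of a basic-operations micro-benchmark concrete, here is a minimal sketch in the same spirit; the schema, queries and row counts are invented for illustration and are not XMarq's actual workload:

        # Hypothetical micro-benchmark: time a scan, an aggregation, and a join
        # on a toy in-memory SQLite database (not the actual XMarq queries).
        import sqlite3, time

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE customers(id INTEGER PRIMARY KEY, region TEXT)")
        con.execute("CREATE TABLE orders(id INTEGER PRIMARY KEY, cust INTEGER, amount REAL)")
        con.executemany("INSERT INTO customers VALUES (?, ?)",
                        [(i, "R%d" % (i % 5)) for i in range(1000)])
        con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                        [(i, i % 1000, i * 0.5) for i in range(100000)])

        queries = {
            "scan":        "SELECT COUNT(*) FROM orders WHERE amount > 100",
            "aggregation": "SELECT cust, SUM(amount) FROM orders GROUP BY cust",
            "join":        "SELECT c.region, SUM(o.amount) FROM orders o "
                           "JOIN customers c ON o.cust = c.id GROUP BY c.region",
        }
        for name, sql in queries.items():
            t0 = time.perf_counter()
            con.execute(sql).fetchall()
            print(name, round(time.perf_counter() - t0, 4), "s")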

  11. 77 FR 70643 - Patient Protection and Affordable Care Act; Standards Related to Essential Health Benefits...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-26

    ... coverage in the individual and small group markets, Medicaid benchmark and benchmark-equivalent plans...) Act extends the coverage of the EHB package to issuers of non-grandfathered individual and small group... small group markets, and not to Medicaid benchmark or benchmark-equivalent plans. EHB applicability to...

  12. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    ERIC Educational Resources Information Center

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  13. The Craft of Benchmarking: Finding and Utilizing District-Level, Campus-Level, and Program-Level Standards.

    ERIC Educational Resources Information Center

    McGregor, Ellen N.; Attinasi, Louis C., Jr.

    This paper describes the processes involved in selecting peer institutions for appropriate benchmarking using national databases (NCES-IPEDS). Benchmarking involves the identification of peer institutions and/or best practices in specific operational areas for the purpose of developing standards. The benchmarking process was born in the early…

  14. Measuring How Benchmark Assessments Affect Student Achievement. Issues & Answers. REL 2007-No. 039

    ERIC Educational Resources Information Center

    Henderson, Susan; Petrosino, Anthony; Guckenburg, Sarah; Hamilton, Stephen

    2007-01-01

    This report examines a Massachusetts pilot program for quarterly benchmark exams in middle-school mathematics, finding that program schools do not show greater gains in student achievement after a year. But that finding might reflect limited data rather than ineffective benchmark assessments. Benchmark assessments are used in many districts…

  15. 24 CFR 990.185 - Utilities expense level: Incentives for energy conservation/rate reduction.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) Utility benchmarking. HUD will pursue benchmarking utility consumption at the project level as part of the... convene a meeting with representation of appropriate stakeholders to review utility benchmarking options so that HUD may determine whether or how to implement utility benchmarking to be effective in FY 2011...

  16. Quality in E-Learning--A Conceptual Framework Based on Experiences from Three International Benchmarking Projects

    ERIC Educational Resources Information Center

    Ossiannilsson, E.; Landgren, L.

    2012-01-01

    Between 2008 and 2010, Lund University took part in three international benchmarking projects, "E-xcellence+," the "eLearning Benchmarking Exercise 2009," and the "First Dual-Mode Distance Learning Benchmarking Club." A comparison of these models revealed a rather high level of correspondence. From this finding and…

  17. 24 CFR 990.185 - Utilities expense level: Incentives for energy conservation/rate reduction.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) Utility benchmarking. HUD will pursue benchmarking utility consumption at the project level as part of the... convene a meeting with representation of appropriate stakeholders to review utility benchmarking options so that HUD may determine whether or how to implement utility benchmarking to be effective in FY 2011...

  18. 40 CFR 141.543 - How is the disinfection benchmark calculated?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 24 2012-07-01 2012-07-01 false How is the disinfection benchmark... Disinfection-Systems Serving Fewer Than 10,000 People Disinfection Benchmark § 141.543 How is the disinfection benchmark calculated? If your system is making a significant change to its disinfection practice, it must...

  19. 40 CFR 141.543 - How is the disinfection benchmark calculated?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false How is the disinfection benchmark... Disinfection-Systems Serving Fewer Than 10,000 People Disinfection Benchmark § 141.543 How is the disinfection benchmark calculated? If your system is making a significant change to its disinfection practice, it must...

  20. 40 CFR 141.543 - How is the disinfection benchmark calculated?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 24 2013-07-01 2013-07-01 false How is the disinfection benchmark... Disinfection-Systems Serving Fewer Than 10,000 People Disinfection Benchmark § 141.543 How is the disinfection benchmark calculated? If your system is making a significant change to its disinfection practice, it must...

  1. 40 CFR 141.543 - How is the disinfection benchmark calculated?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 23 2011-07-01 2011-07-01 false How is the disinfection benchmark... Disinfection-Systems Serving Fewer Than 10,000 People Disinfection Benchmark § 141.543 How is the disinfection benchmark calculated? If your system is making a significant change to its disinfection practice, it must...

  2. Toxicological benchmarks for screening potential contaminants of concern for effects on terrestrial plants: 1994 revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Will, M.E.; Suter, G.W. II

    1994-09-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.
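
    The screening rule in the last sentence reduces to a simple two-sided comparison. A minimal sketch, with entirely made-up concentrations and benchmark values:

        # Hypothetical illustration of the screening rule described above: a chemical is a
        # contaminant of potential concern only if its measured soil concentration exceeds
        # BOTH the phytotoxicity benchmark AND the background level (values are invented).
        soils = {  # chemical: (measured mg/kg, benchmark mg/kg, background mg/kg)
            "zinc":    (310.0, 50.0, 60.0),
            "arsenic": (8.0,  10.0,  7.0),
        }
        for chem, (conc, benchmark, background) in soils.items():
            concern = conc > benchmark and conc > background
            print(chem, "-> contaminant of potential concern" if concern else "-> screened out")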

  3. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Terrestrial Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II

    1993-01-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.

  4. How to Advance TPC Benchmarks with Dependability Aspects

    NASA Astrophysics Data System (ADS)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although it is generally acknowledged that failures in these systems often have a significant impact on both the revenues and the reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require that the performance of recovery be measured. While TPC-E measures the recovery time after some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that systems should now be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses why and how this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  5. Point Cloud and Digital Surface Model Generation from High Resolution Multiple View Stereo Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Gong, K.; Fritsch, D.

    2018-05-01

    Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized, publicly available multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivated us to explore methods that can generate accurate digital surface models from a large number of high resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a preprocessing step, we filter all possible image pairs according to incidence angle and capture date. With the selected image pairs, the relative bias-compensated model is applied for relative orientation. After epipolar image pair generation, dense image matching and triangulation, the 3D point clouds and DSMs are acquired. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model, and we apply a median filter to generate the fused point cloud and DSM. Accuracy, completeness and robustness are evaluated by comparison with a reference LiDAR DSM. The results show that the point cloud reconstructs the surface including small structures, and that the fused DSM generated by our pipeline is accurate and robust.
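
    The fusion step at the end of the pipeline is essentially a per-pixel robust average. A minimal numpy sketch, under the assumption that the pairwise DSMs have already been resampled to a common grid (the arrays below are random stand-ins, not benchmark data):

        # Hypothetical sketch of per-pixel median fusion: given co-registered pairwise
        # DSMs on a common grid (NaN = no data), fuse them with a NaN-aware median.
        import numpy as np

        pairwise_dsms = [np.random.rand(100, 100) for _ in range(12)]  # stand-in rasters
        stack = np.stack(pairwise_dsms, axis=0)      # shape: (n_pairs, rows, cols)
        stack[stack < 0.05] = np.nan                 # pretend some cells have no data
        fused_dsm = np.nanmedian(stack, axis=0)      # robust to per-pair outliers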

  6. Feebates and Fuel Economy Standards: Impacts on Fuel Use in Light-Duty Vehicles and Greenhouse Gas Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, David L

    2011-01-01

    This study evaluates the potential impacts of a national feebate system, a market-based policy that consists of graduated fees on low-fuel-economy (or high-emitting) vehicles and rebates for high-fuel-economy (or low-emitting) vehicles. In their simplest form, feebate systems operate under three conditions: a benchmark divides all vehicles into two categories - those charged fees and those eligible for rebates; the sizes of the fees and rebates are a function of a vehicle's deviation from its benchmark; and placement of the benchmark ensures revenue neutrality or a desired level of subsidy or revenue. A model developed by the University of California for the California Air Resources Board was revised and used to estimate the effects of six feebate structures on fuel economy and sales of new light-duty vehicles, given existing and anticipated future fuel economy and emission standards. These estimates for new vehicles were then entered into a vehicle stock model that simulated the evolution of the entire vehicle stock. The results indicate that feebates could produce large, additional reductions in emissions and fuel consumption, in large part by encouraging market acceptance of technologies with advanced fuel economy, such as hybrid electric vehicles.
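
    The three conditions above pin down a simple linear feebate. The sketch below illustrates them with invented numbers (the rate, fleet and fuel-consumption figures are hypothetical, not the six structures studied in the report); placing the benchmark at the sales-weighted mean makes a linear schedule exactly revenue-neutral:

        # Hypothetical linear feebate illustrating the three conditions above: a benchmark
        # splits vehicles into fee/rebate sides, the amount scales with deviation from the
        # benchmark, and the benchmark placement sets net revenue (all numbers invented).
        RATE = 1000.0   # dollars per (gallon per 100 miles) of deviation

        def feebate(gal_per_100mi, benchmark_gal_per_100mi):
            """Positive = rebate (more efficient than benchmark), negative = fee."""
            return RATE * (benchmark_gal_per_100mi - gal_per_100mi)

        fleet = [2.4, 3.1, 3.8, 4.6, 5.2]        # gallons per 100 miles, one per sale
        benchmark = sum(fleet) / len(fleet)       # sales-weighted mean => revenue neutral
        net = sum(feebate(v, benchmark) for v in fleet)
        print([round(feebate(v, benchmark)) for v in fleet], "net:", round(net))  # net: 0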

  7. Leveraging long read sequencing from a single individual to provide a comprehensive resource for benchmarking variant calling methods

    PubMed Central

    Mu, John C.; Tootoonchi Afshar, Pegah; Mohiyuddin, Marghoob; Chen, Xi; Li, Jian; Bani Asadi, Narges; Gerstein, Mark B.; Wong, Wing H.; Lam, Hugo Y. K.

    2015-01-01

    A high-confidence, comprehensive human variant set is critical for assessing the accuracy of sequencing algorithms, which are crucial to precision medicine based on high-throughput sequencing. Although recent works have attempted to provide such a resource, they still do not encompass all major types of variants, including structural variants (SVs). We therefore leveraged the massive high-quality Sanger sequences from the HuRef genome to construct by far the most comprehensive gold set for a single individual, cross-validated with deep Illumina sequencing, population datasets, and well-established algorithms. Completely reanalyzing the HuRef genome was a necessary effort, as its previously published variants were mostly reported five years ago and suffered from compatibility, organization, and accuracy issues that prevented their direct use in benchmarking. Our extensive analysis and validation resulted in a gold set with high specificity and sensitivity. In contrast to the current gold sets of the NA12878 or HS1011 genomes, our gold set is the first that includes small variants, deletion SVs, and insertion SVs up to a hundred thousand base pairs. We demonstrate the utility of our HuRef gold set by benchmarking several published SV detection tools. PMID:26412485

  8. RNA-seq mixology: designing realistic control experiments to compare protocols and analysis methods

    PubMed Central

    Holik, Aliaksei Z.; Law, Charity W.; Liu, Ruijie; Wang, Zeya; Wang, Wenyi; Ahn, Jaeil; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.

    2017-01-01

    Carefully designed control experiments provide a gold standard for benchmarking different genomics research tools. A shortcoming of many gene expression control studies is that replication involves profiling the same reference RNA sample multiple times. This leads to low, pure technical noise that is atypical of regular studies. To achieve a more realistic noise structure, we generated an RNA-sequencing mixture experiment using two cell lines of the same cancer type. Variability was added by extracting RNA from independent cell cultures and degrading particular samples. The systematic gene expression changes induced by this design allowed benchmarking of different library preparation kits (standard poly-A versus total RNA with Ribo-Zero depletion) and analysis pipelines. Data generated using the total RNA kit had more signal for introns and various RNA classes (ncRNA, snRNA, snoRNA) and less variability after degradation. For differential expression analysis, voom with quality weights marginally outperformed other popular methods, while for differential splicing, DEXSeq was simultaneously the most sensitive and the most inconsistent method. For sample deconvolution analysis, DeMix outperformed IsoPure convincingly. Our RNA-sequencing data set provides a valuable resource for benchmarking different protocols and data pre-processing workflows. The extra noise mimics routine lab experiments more closely, ensuring that any conclusions are widely applicable. PMID:27899618

  9. Development and Validation of a High-Quality Composite Real-World Mortality Endpoint.

    PubMed

    Curtis, Melissa D; Griffith, Sandra D; Tucker, Melisa; Taylor, Michael D; Capra, William B; Carrigan, Gillis; Holzman, Ben; Torres, Aracelis Z; You, Paul; Arnieri, Brandon; Abernethy, Amy P

    2018-05-14

    Objective: to create a high-quality electronic health record (EHR)-derived mortality dataset for retrospective and prospective real-world evidence generation. Data sources: oncology EHR data, supplemented with external commercial and US Social Security Death Index data, benchmarked to the National Death Index (NDI). We developed a recent, linkable, high-quality mortality variable amalgamated from multiple data sources to supplement EHR data, benchmarked against the most complete source of US mortality data, the NDI. Data quality of version 2.0 of the mortality variable is reported here. For advanced non-small-cell lung cancer, the sensitivity of mortality information improved from 66 percent in EHR structured data to 91 percent in the composite dataset, with high date agreement compared to the NDI. For advanced melanoma, metastatic colorectal cancer, and metastatic breast cancer, the sensitivity of the final variable was 85 to 88 percent. Kaplan-Meier survival analyses showed that improving mortality data completeness minimized overestimation of survival relative to NDI-based estimates. For EHR-derived data to yield reliable real-world evidence, it needs to be of known and sufficiently high quality. Considering the impact of mortality data completeness on survival endpoints, we highlight the importance of data quality assessment and advocate benchmarking to the NDI. © 2018 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
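
    The sensitivity figures quoted above are simple proportions against the NDI gold standard. A minimal illustration with made-up counts:

        # Hypothetical illustration of the sensitivity figures quoted above: the share of
        # gold-standard (NDI) deaths that the EHR-derived composite variable captures.
        ndi_deaths      = 1000   # invented count of deaths confirmed in the NDI
        captured_deaths = 910    # invented count also present in the composite variable
        sensitivity = captured_deaths / ndi_deaths
        print(f"sensitivity = {sensitivity:.0%}")   # 91%, matching the advanced NSCLC figure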

  10. Space Weather Action Plan Solar Radio Burst Phase 1 Benchmarks and the Steps to Phase 2

    NASA Astrophysics Data System (ADS)

    Biesecker, D. A.; White, S. M.; Gopalswamy, N.; Black, C.; Love, J. J.; Pierson, J.

    2017-12-01

    Solar radio bursts, when at the right frequency and strong enough, can interfere with radar, communication, and tracking signals. In severe cases, radio bursts can inhibit the successful use of radio communications and disrupt a wide range of systems that rely on Position, Navigation, and Timing services, on timescales ranging from minutes to hours and across wide areas on the dayside of Earth. The White House's Space Weather Action Plan called for solar radio burst intensity benchmarks for an event occurrence frequency of 1 in 100 years, as well as a theoretical maximum intensity benchmark. The benchmark team has developed preliminary (phase 1) benchmarks for the VHF (30-300 MHz), UHF (300-3000 MHz), GPS (1176-1602 MHz), F10.7 (2800 MHz), and microwave (4000-20000 MHz) bands. The preliminary benchmarks were derived from previously published work. Limitations of that work will be addressed in phase 2 of the benchmark process. In addition, deriving theoretical maxima, where this is even possible, requires further work in order to meet the Action Plan objectives. In this presentation, we present the phase 1 benchmarks, the basis used to derive them, and the limitations of that work. We also discuss the work needed to complete the phase 2 benchmarks.
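
    For intuition only, a 1-in-100-year intensity benchmark is a return-level estimate: an intensity whose expected exceedance rate is 0.01 events per year. The sketch below assumes a power-law occurrence rate fitted to a hypothetical event catalog; it illustrates the concept and is not the benchmark team's actual derivation:

        # Hypothetical 1-in-100-year benchmark estimate (NOT the SWAP team's method):
        # assume burst peak fluxes follow a power-law occurrence rate
        # N(>S) = a * S**(-b) events/year, then solve N(>S100) = 1/100 for S100.
        a, b = 50.0, 1.3                  # invented fit parameters from an invented catalog
        S100 = (100.0 * a) ** (1.0 / b)   # from a * S**(-b) = 1/100
        print(f"estimated 1-in-100-year intensity: {S100:.0f} (arbitrary flux units)")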

  11. Comparison of Origin 2000 and Origin 3000 Using NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Turney, Raymond D.

    2001-01-01

    This report describes results of benchmark tests on the Origin 3000 system currently being installed at the NASA Ames National Advanced Supercomputing facility. This machine will ultimately contain 1024 R14K processors. The first part of the system, installed in November 2000 and named mendel, is an Origin 3000 with 128 R12K processors. For comparison purposes, the tests were also run on lomax, an Origin 2000 with R12K processors. The BT, LU, and SP application benchmarks in the NAS Parallel Benchmark Suite and the kernel benchmark FT were chosen to determine system performance and measure the impact of changes on the machine as it evolves. Having been written to measure performance on Computational Fluid Dynamics applications, these benchmarks are assumed appropriate to represent the NAS workload. Since the NAS runs both message-passing (MPI) codes and shared-memory, compiler-directive codes, both MPI and OpenMP versions of the benchmarks were used. The MPI versions used were the latest official release of the NAS Parallel Benchmarks, version 2.3. The OpenMP versions used were PBN3b2, a beta version in the process of being released. NPB 2.3 and PBN 3b2 are technically different benchmarks, and NPB results are not directly comparable to PBN results.

  12. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    PubMed

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.

  13. A benchmarking program to reduce red blood cell outdating: implementation, evaluation, and a conceptual framework.

    PubMed

    Barty, Rebecca L; Gagliardi, Kathleen; Owens, Wendy; Lauzon, Deborah; Scheuermann, Sheena; Liu, Yang; Wang, Grace; Pai, Menaka; Heddle, Nancy M

    2015-07-01

    Benchmarking is a quality improvement tool that compares an organization's performance to that of its peers for selected indicators, to improve practice. Processes to develop evidence-based benchmarks for red blood cell (RBC) outdating in Ontario hospitals, based on RBC hospital disposition data from Canadian Blood Services, have been previously reported. These benchmarks were implemented in 160 hospitals provincewide with a multifaceted approach, which included hospital education, inventory management tools and resources, summaries of best practice recommendations, recognition of high-performing sites, and audit tools on the Transfusion Ontario website (http://transfusionontario.org). In this study we describe the implementation process and the impact of the benchmarking program on RBC outdating. A conceptual framework for continuous quality improvement of a benchmarking program was also developed. The RBC outdating rate for all hospitals trended downward continuously from April 2006 to February 2012, irrespective of hospitals' transfusion rates or their distance from the blood supplier. The highest annual outdating rate was 2.82%, at the beginning of the observation period. Each year brought further reductions, with a nadir outdating rate of 1.02% achieved in 2011. The key elements of the successful benchmarking strategy included dynamic targets, a comprehensive and evidence-based implementation strategy, ongoing information sharing, and a robust data system to track information. The Ontario benchmarking program for RBC outdating resulted in continuous and sustained quality improvement. Our conceptual iterative framework for benchmarking provides a guide for institutions implementing a benchmarking program. © 2015 AABB.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munro, J.F.; Kristal, J.; Thompson, G.

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors in developing the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff -- the ones closest to the work -- must take ownership of the studies. This avoids the "check the box" mentality associated with some third party studies. This workshop will provide participants with a basic level of understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  15. Using Benchmarking Techniques and the 2011 Maternity Practices Infant Nutrition and Care (mPINC) Survey to Improve Performance among Peer Groups across the United States

    PubMed Central

    Edwards, Roger A.; Dee, Deborah; Umer, Amna; Perrine, Cria G.; Shealy, Katherine R.; Grummer-Strawn, Laurence M.

    2015-01-01

    Background: A substantial proportion of US maternity care facilities engage in practices that are not evidence-based and that interfere with breastfeeding. The CDC Survey of Maternity Practices in Infant Nutrition and Care (mPINC) showed significant variation in maternity practices among US states. Objective: The purpose of this article is to use benchmarking techniques to identify states within relevant peer groups that were top performers on mPINC survey indicators related to breastfeeding support. Methods: We used 11 indicators of breastfeeding-related maternity care from the 2011 mPINC survey and benchmarking techniques to organize and compare hospital-based maternity practices across the 50 states and Washington, DC. We created peer categories for benchmarking, first by region (grouping states as West, Midwest, South, and Northeast) and then by size (dividing each region into approximately equal halves based on the number of maternity facilities). Results: Thirty-four states had scores high enough to serve as benchmarks, and 32 states had scores low enough to reflect the largest gap from the benchmark on at least 1 indicator. No state served as the benchmark on more than 5 indicators, and no state was furthest from the benchmark on more than 7 indicators. The small peer group benchmarks in the South, West, and Midwest were better than the large peer group benchmarks on 91%, 82%, and 36% of the indicators, respectively. In the West large, Midwest large, Midwest small, and South large peer groups, 4 to 6 benchmarks showed that fewer than 50% of hospitals in all states follow ideal practice. Conclusion: The evaluation presents benchmarks for peer-group state comparisons that provide potential and feasible targets for improvement. PMID:24394963
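
    Mechanically, this style of peer-group benchmarking is a grouped maximum plus a gap computation. A minimal pandas sketch with made-up states, peer groups, and scores (not actual mPINC data):

        # Hypothetical sketch of the peer-group benchmarking described above: within each
        # region-by-size peer group, the best state score on an indicator serves as the
        # benchmark and every state's gap from it is computed (all values invented).
        import pandas as pd

        df = pd.DataFrame({
            "state":      ["CA", "WA", "TX", "GA", "OH", "MN"],
            "peer_group": ["West-large", "West-large", "South-large",
                           "South-large", "Midwest-large", "Midwest-large"],
            "indicator":  [78.0, 92.0, 61.0, 74.0, 88.0, 70.0],
        })
        df["benchmark"] = df.groupby("peer_group")["indicator"].transform("max")
        df["gap"] = df["benchmark"] - df["indicator"]
        print(df)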

  16. The 9th international symposium on the packaging and transportation of radioactive materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1989-06-01

    This three-volume document contains the papers and poster sessions presented at the symposium. Volume 3 contains 87 papers on topics such as structural codes and benchmarking, shipment of plutonium by air, spent fuel shipping, planning, package design and risk assessment, package testing, OCRWM operations experience, and regulations. Individual papers were processed separately for the data base. (TEM)

  17. Making the Good Even Better: Feedback from easyCBM Focus Groups, School Year 2009/2010. Technical Report # 1001

    ERIC Educational Resources Information Center

    Alonzo, Julie; Tindal, Gerald; Lai, Cheng-Fei

    2010-01-01

    This technical report provides a summary of feedback from teachers, administrators, and support personnel who used the easyCBM progress monitoring and benchmark assessment system during school year 2009/2010. Data were gathered from semi-structured focus groups conducted during the 2010 easyCBM August Institute at the University of Oregon. Results…

  18. Bioelectrochemical Systems Workshop: Standardized Analyses, Design Benchmarks, and Reporting

    DTIC Science & Technology

    2012-01-01

    ... related to the exoelectrogenic biofilm activity, and to investigate whether the community structure is a function of design and operational parameters ... where should biofilm samples be collected? The most prevalent methods of community characterization in BES studies have entailed phylogenetic ... of function associated with this genetic marker, and in methods that involve polymerase chain reaction (PCR) amplification the quantitative ...

  19. Developing a molecular dynamics force field for both folded and disordered protein states.

    PubMed

    Robustelli, Paul; Piana, Stefano; Shaw, David E

    2018-05-07

    Molecular dynamics (MD) simulation is a valuable tool for characterizing the structural dynamics of folded proteins and should be similarly applicable to disordered proteins and proteins with both folded and disordered regions. It has been unclear, however, whether any physical model (force field) used in MD simulations accurately describes both folded and disordered proteins. Here, we select a benchmark set of 21 systems, including folded and disordered proteins, simulate these systems with six state-of-the-art force fields, and compare the results to over 9,000 available experimental data points. We find that none of the tested force fields simultaneously provided accurate descriptions of folded proteins, of the dimensions of disordered proteins, and of the secondary structure propensities of disordered proteins. Guided by simulation results on a subset of our benchmark, however, we modified parameters of one force field, achieving excellent agreement with experiment for disordered proteins, while maintaining state-of-the-art accuracy for folded proteins. The resulting force field, a99SB-disp, should thus greatly expand the range of biological systems amenable to MD simulation. A similar approach could be taken to improve other force fields. Copyright © 2018 the Author(s). Published by PNAS.

  20. Healthcare quality measurement in orthopaedic surgery: current state of the art.

    PubMed

    Auerbach, Andrew

    2009-10-01

    Improving quality of care in arthroplasty is of increasing importance to payors, hospitals, surgeons, and patients. Efforts to compel improvement have traditionally focused on the measurement and reporting of data describing structural factors, care processes (or 'quality measures'), and clinical outcomes. Reporting structural measures (e.g., surgical case volume) has been used with varying degrees of success. Care process measures, exemplified by initiatives such as the Surgical Care Improvement Project measures, are chosen based on the strength of randomized trial evidence linking the process to improved outcomes. However, evidence linking improved performance on Surgical Care Improvement Project measures with improved outcomes is limited. Outcome measures in surgery are of increasing importance as an approach to compel care improvement, a prominent example being the National Surgical Quality Improvement Program. Although outcomes-focused approaches are often costly, when linked to active benchmarking and collaborative activities they may improve care broadly. Moreover, implementation of computerized data systems that collect information formerly recorded only on paper will facilitate benchmarking. In the end, care will only be improved if these data are used to define methods for innovating care systems that deliver better outcomes at lower or equivalent costs.
