Sample records for community level benchmarks

  1. Does Global Progress on Sanitation Really Lag behind Water? An Analysis of Global Progress on Community- and Household-Level Access to Safe Water and Sanitation

    PubMed Central

    Cumming, Oliver; Elliott, Mark; Overbo, Alycia; Bartram, Jamie

    2014-01-01

    Safe drinking water and sanitation are important determinants of human health and wellbeing and have recently been declared human rights by the international community. Increased access to both were included in the Millennium Development Goals under a single dedicated target for 2015. This target was reached in 2010 for water but sanitation will fall short; however, there is an important difference in the benchmarks used for assessing global access. For drinking water the benchmark is community-level access whilst for sanitation it is household-level access, so a pit latrine shared between households does not count toward the Millennium Development Goal (MDG) target. We estimated global progress for water and sanitation under two scenarios: with equivalent household- and community-level benchmarks. Our results demonstrate that the “sanitation deficit” is apparent only when household-level sanitation access is contrasted with community-level water access. When equivalent benchmarks are used for water and sanitation, the global deficit is as great for water as it is for sanitation, and sanitation progress in the MDG-period (1990–2015) outstrips that in water. As both drinking water and sanitation access yield greater benefits at the household-level than at the community-level, we conclude that any post–2015 goals should consider a household-level benchmark for both. PMID:25502659

  2. Does global progress on sanitation really lag behind water? An analysis of global progress on community- and household-level access to safe water and sanitation.

    PubMed

    Cumming, Oliver; Elliott, Mark; Overbo, Alycia; Bartram, Jamie

    2014-01-01

    Safe drinking water and sanitation are important determinants of human health and wellbeing and have recently been declared human rights by the international community. Increased access to both were included in the Millennium Development Goals under a single dedicated target for 2015. This target was reached in 2010 for water but sanitation will fall short; however, there is an important difference in the benchmarks used for assessing global access. For drinking water the benchmark is community-level access whilst for sanitation it is household-level access, so a pit latrine shared between households does not count toward the Millennium Development Goal (MDG) target. We estimated global progress for water and sanitation under two scenarios: with equivalent household- and community-level benchmarks. Our results demonstrate that the "sanitation deficit" is apparent only when household-level sanitation access is contrasted with community-level water access. When equivalent benchmarks are used for water and sanitation, the global deficit is as great for water as it is for sanitation, and sanitation progress in the MDG-period (1990-2015) outstrips that in water. As both drinking water and sanitation access yield greater benefits at the household-level than at the community-level, we conclude that any post-2015 goals should consider a household-level benchmark for both.

  3. Issues in Benchmarking and Assessing Institutional Engagement

    ERIC Educational Resources Information Center

    Furco, Andrew; Miller, William

    2009-01-01

    The process of assessing and benchmarking community engagement can take many forms. To date, more than two dozen assessment tools for measuring community engagement institutionalization have been published. These tools vary substantially in purpose, level of complexity, scope, process, structure, and focus. While some instruments are designed to…

  4. Identification of overlapping communities and their hierarchy by locally calculating community-changing resolution levels

    NASA Astrophysics Data System (ADS)

    Havemann, Frank; Heinz, Michael; Struck, Alexander; Gläser, Jochen

    2011-01-01

    We propose a new local, deterministic and parameter-free algorithm that detects fuzzy and crisp overlapping communities in a weighted network and simultaneously reveals their hierarchy. Using a local fitness function, the algorithm greedily expands natural communities of seeds until the whole graph is covered. The hierarchy of communities is obtained analytically by calculating resolution levels at which communities grow rather than numerically by testing different resolution levels. This analytic procedure is not only more exact than its numerical alternatives such as LFM and GCE but also much faster. Critical resolution levels can be identified by searching for intervals in which large changes of the resolution do not lead to growth of communities. We tested our algorithm on benchmark graphs and on a network of 492 papers in information science. Combined with a specific post-processing, the algorithm gives much more precise results on LFR benchmarks with high overlap compared to other algorithms and performs very similarly to GCE.
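
    A minimal sketch (not the authors' implementation) of the kind of local, greedy expansion the abstract describes: a seed is grown by repeatedly adding the neighbouring node that most improves an LFM-style fitness f(C) = k_in/(k_in + k_out)^alpha, where alpha plays the role of the resolution level. The function names, the fitness form, and the use of networkx are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of greedy local expansion under
# an LFM-style fitness f(C) = k_in / (k_in + k_out)**alpha, where alpha acts as the
# resolution level. Names and the fitness form are illustrative assumptions.
import networkx as nx

def fitness(G, community, alpha=1.0):
    k_in = 2 * G.subgraph(community).number_of_edges()   # total internal degree
    k_out = sum(1 for u in community for v in G[u] if v not in community)
    return k_in / (k_in + k_out) ** alpha if (k_in + k_out) > 0 else 0.0

def expand_community(G, seed, alpha=1.0):
    community = set(seed)
    while True:
        frontier = {v for u in community for v in G[u]} - community
        # pick the neighbouring node whose inclusion improves the fitness the most
        gains = {v: fitness(G, community | {v}, alpha) - fitness(G, community, alpha)
                 for v in frontier}
        best = max(gains, key=gains.get, default=None)
        if best is None or gains[best] <= 0:
            return community
        community.add(best)

G = nx.karate_club_graph()
print(sorted(expand_community(G, seed={0}, alpha=1.0)))
```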

  5. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  6. From Innovative Programs to Systemic Education Reform: Lesson from Five Communities. The Final Report of the Benchmark Communities Initiative.

    ERIC Educational Resources Information Center

    DeSalvatore, Larry; Goldberger, Susan; Steinberg, Adria

    This document presents the lessons of Jobs for the Future's Benchmark Communities Initiative (BCI), a 5-year systemic educational reform initiative launched in 1994 in five communities. Before joining the BCI, the five Benchmark communities had each begun a school-to-career effort. Five key findings from the BCI are outlined: (1) students engaged…

  7. Assessing rural small community water supply in Limpopo, South Africa: water service benchmarks and reliability.

    PubMed

    Majuru, Batsirai; Jagals, Paul; Hunter, Paul R

    2012-10-01

    Although a number of studies have reported on water supply improvements, few have simultaneously taken into account the reliability of the water services. The study aimed to assess whether upgrading water supply systems in small rural communities improved access, availability and potability of water by assessing the water services against selected benchmarks from the World Health Organisation and South African Department of Water Affairs, and to determine the impact of unreliability on the services. These benchmarks were applied in three rural communities in Limpopo, South Africa where rudimentary water supply services were being upgraded to basic services. Data were collected through structured interviews, observations and measurement, and multi-level linear regression models were used to assess the impact of water service upgrades on key outcome measures of distance to source, daily per capita water quantity and Escherichia coli count. When the basic system was operational, 72% of households met the minimum benchmarks for distance and water quantity, but only 8% met both enhanced benchmarks. During non-operational periods of the basic service, daily per capita water consumption decreased by 5.19l (p<0.001, 95% CI 4.06-6.31) and distances to water sources were 639 m further (p ≤ 0.001, 95% CI 560-718). Although both rudimentary and basic systems delivered water that met potability criteria at the sources, the quality of stored water sampled in the home was still unacceptable throughout the various service levels. These results show that basic water services can make substantial improvements to water access, availability, potability, but only if such services are reliable. Copyright © 2012 Elsevier B.V. All rights reserved.
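
    A toy sketch of the benchmark-classification step described above: each surveyed household is checked against distance and per-capita-quantity benchmarks. The threshold values below are illustrative assumptions, not the WHO/Department of Water Affairs benchmarks actually used in the study.

```python
# Illustrative sketch: classify surveyed households against water service benchmarks
# for distance to source and daily per-capita quantity. The thresholds below are
# assumptions for illustration, not the benchmarks used in the study.
BENCHMARKS = {
    "basic":    {"max_distance_m": 200, "min_litres_per_capita_day": 25},
    "enhanced": {"max_distance_m": 0,   "min_litres_per_capita_day": 50},  # on-plot supply
}

def meets_benchmark(distance_m, litres_per_capita_day, level="basic"):
    b = BENCHMARKS[level]
    return (distance_m <= b["max_distance_m"]
            and litres_per_capita_day >= b["min_litres_per_capita_day"])

households = [
    {"distance_m": 150, "litres_per_capita_day": 30},
    {"distance_m": 650, "litres_per_capita_day": 18},
]
share = sum(meets_benchmark(**h) for h in households) / len(households)
print(f"Share of households meeting the basic benchmark: {share:.0%}")
```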

  8. Adding Fault Tolerance to NPB Benchmarks Using ULFM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parchman, Zachary W; Vallee, Geoffroy R; Naughton III, Thomas J

    2016-01-01

    In the world of high-performance computing, fault tolerance and application resilience are becoming some of the primary concerns because of increasing hardware failures and memory corruptions. While the research community has been investigating various options, from system-level solutions to application-level solutions, standards such as the Message Passing Interface (MPI) are also starting to include such capabilities. The current proposal for MPI fault tolerance is centered around the User-Level Failure Mitigation (ULFM) concept, which provides means for fault detection and recovery of the MPI layer. This approach does not address application-level recovery, which is currently left to application developers. In this work, we present a modification of some of the benchmarks of the NAS Parallel Benchmark (NPB) suite to include support of the ULFM capabilities as well as application-level strategies and mechanisms for application-level failure recovery. As such, we present: (i) an application-level library to checkpoint and restore data, (ii) extensions of NPB benchmarks for fault tolerance based on different strategies, (iii) a fault injection tool, and (iv) some preliminary results that show the impact of such fault-tolerance strategies on the application execution.

  9. Comparing Community College Student and Faculty Perceptions of Student Engagement

    ERIC Educational Resources Information Center

    Senn-Carter, Darian

    2017-01-01

    The purpose of this quantitative study was to compare faculty and student perceptions of "student engagement" at a mid-Atlantic community college to determine the level of correlation between student experiences and faculty practices in five benchmark areas of student engagement: "academic challenge, student-faculty interaction,…

  10. Utilizing Benchmarking to Study the Effectiveness of Parent-Child Interaction Therapy Implemented in a Community Setting

    ERIC Educational Resources Information Center

    Self-Brown, Shannon; Valente, Jessica R.; Wild, Robert C.; Whitaker, Daniel J.; Galanter, Rachel; Dorsey, Shannon; Stanley, Jenelle

    2012-01-01

    Benchmarking is a program evaluation approach that can be used to study whether the outcomes of parents/children who participate in an evidence-based program in the community approximate the outcomes found in randomized trials. This paper presents a case illustration using benchmarking methodology to examine a community implementation of…

  11. Austin Community College Benchmarking Update.

    ERIC Educational Resources Information Center

    Austin Community Coll., TX. Office of Institutional Effectiveness.

    Austin Community College contracted with MGT of America, Inc. in spring 1999 to develop a peer and benchmark (best) practices analysis on key indicators. These indicators were updated in spring 2002 using data from eight Texas community colleges and four non-Texas institutions that represent large, comprehensive, urban community colleges, similar…

  12. Canada's Composite Learning Index: A Path Towards Learning Communities

    ERIC Educational Resources Information Center

    Cappon, Paul; Laughlin, Jarrett

    2013-01-01

    In the development of learning cities/communities, benchmarking progress is a key element. Not only does it permit cities/communities to assess their current strengths and weaknesses, it also engenders a dialogue within and between cities/communities on the means of enhancing learning conditions. Benchmarking thereby is a potentially motivational…

  13. Benchmarking and testing the "Sea Level Equation"

    NASA Astrophysics Data System (ADS)

    Spada, G.; Barletta, V. R.; Klemann, V.; van der Wal, W.; James, T. S.; Simon, K.; Riva, R. E. M.; Martinec, Z.; Gasperini, P.; Lund, B.; Wolf, D.; Vermeersen, L. L. A.; King, M. A.

    2012-04-01

    The study of the process of Glacial Isostatic Adjustment (GIA) and of the consequent sea level variations is gaining an increasingly important role within the geophysical community. Understanding the response of the Earth to the waxing and waning ice sheets is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements to the projections of future sea level trends in response to climate change. All the processes accompanying GIA can be described by solving the so-called Sea Level Equation (SLE), an integral equation that accounts for the interactions between the ice sheets, the solid Earth, and the oceans. Modern approaches to the SLE are based on various techniques that range from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, we do not have a suitably large set of agreed numerical results through which the methods may be validated. Following the example of the mantle convection community and our recent successful Benchmark for Post Glacial Rebound codes (Spada et al., 2011, doi: 10.1111/j.1365-246X.2011.04952.x), here we present the results of a benchmark study of independently developed codes designed to solve the SLE. This study has taken place within a collaboration facilitated through the European Cooperation in Science and Technology (COST) Action ES0701. The tests involve predictions of past and current sea level variations, and 3D deformations of the Earth surface. In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found, which can often be attributed to the different numerical algorithms employed within the community, help to constrain the intrinsic errors in model predictions. These are of fundamental importance for a correct interpretation of the geodetic variations observed today, and particularly for the evaluation of climate-driven sea level variations.
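
    For orientation, one common schematic way of writing the Sea Level Equation is shown below; the notation is ours, not necessarily that used by the benchmark participants. Relative sea level change is the geoid perturbation minus the vertical displacement of the solid surface, plus a spatially uniform term enforcing conservation of ocean mass; because the potential perturbation and the displacement themselves depend on the ocean load, and hence on S, the equation must be solved iteratively or by the analytical and numerical techniques compared in the benchmark.

```latex
% Schematic Sea Level Equation (notation illustrative): Phi is the perturbation of the
% gravitational potential, gamma the surface gravity, U the vertical displacement of
% the solid surface, m_i(t) the mass lost by the ice sheets, rho_w the density of
% ocean water, and A_o the ocean area.
S(\theta,\lambda,t) = \frac{\Phi(\theta,\lambda,t)}{\gamma} - U(\theta,\lambda,t) + c(t),
\qquad
c(t) = -\frac{m_i(t)}{\rho_w A_o}
       - \frac{1}{A_o}\int_{\mathrm{oceans}}\left(\frac{\Phi}{\gamma}-U\right)\mathrm{d}A .
```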

  14. Sustainability of the Communities That Care prevention system by coalitions participating in the Community Youth Development Study.

    PubMed

    Gloppen, Kari M; Arthur, Michael W; Hawkins, J David; Shapiro, Valerie B

    2012-09-01

    Community prevention coalitions are a common strategy to mobilize stakeholders to implement tested and effective prevention programs to promote adolescent health and well-being. This article examines the sustainability of Communities That Care (CTC) coalitions approximately 20 months after study support for the intervention ended. The Community Youth Development Study is a community-randomized trial of the CTC prevention system. Using data from 2007 and 2009 coalition leader interviews, this study reports changes in coalition activities from a period of study support for CTC (2007) to 20 months following the end of study support for CTC (2009), measured by the extent to which coalitions continued to meet specific benchmarks. Twenty months after study support for CTC implementation ended, 11 of 12 CTC coalitions in the Community Youth Development Study still existed. The 11 remaining coalitions continued to report significantly higher scores on the benchmarks of phases 2 through 5 of the CTC system than did prevention coalitions in the control communities. At the 20-month follow-up, two-thirds of the CTC coalitions reported having a paid staff person. This study found that the CTC coalitions maintained a relatively high level of implementation fidelity to the CTC system 20 months after the study support for the intervention ended. However, the downward trend in some of the measured benchmarks indicates that continued high-quality training and technical assistance may be important to ensure that CTC coalitions maintain a science-based approach to prevention, and continue to achieve public health impacts on adolescent health and behavior outcomes. Copyright © 2012 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  15. Community benefits: how do for-profit and nonprofit hospitals measure up?

    PubMed

    Nicholson, S; Pauly, M V

    The rise of the for-profit hospital industry has opened a debate about the level of community benefits provided by non-profit hospitals. Do nonprofits provide enough community benefits to justify the community's commitment of resources to them, and the tax-exempt status they receive? If nonprofit hospitals convert to for-profit entities, would community benefits be lost in the transaction? This debate has highlighted the need to define and measure community benefits more clearly. In this Issue Brief, the authors develop a new method of identifying activities that qualify as community benefits, and propose a benchmark for the amount of benefit a nonprofit hospital should provide.

  16. Interlaboratory Study Characterizing a Yeast Performance Standard for Benchmarking LC-MS Platform Performance*

    PubMed Central

    Paulovich, Amanda G.; Billheimer, Dean; Ham, Amy-Joan L.; Vega-Montoto, Lorenzo; Rudnick, Paul A.; Tabb, David L.; Wang, Pei; Blackman, Ronald K.; Bunk, David M.; Cardasis, Helene L.; Clauser, Karl R.; Kinsinger, Christopher R.; Schilling, Birgit; Tegeler, Tony J.; Variyath, Asokan Mulayath; Wang, Mu; Whiteaker, Jeffrey R.; Zimmerman, Lisa J.; Fenyo, David; Carr, Steven A.; Fisher, Susan J.; Gibson, Bradford W.; Mesri, Mehdi; Neubert, Thomas A.; Regnier, Fred E.; Rodriguez, Henry; Spiegelman, Cliff; Stein, Stephen E.; Tempst, Paul; Liebler, Daniel C.

    2010-01-01

    Optimal performance of LC-MS/MS platforms is critical to generating high quality proteomics data. Although individual laboratories have developed quality control samples, there is no widely available performance standard of biological complexity (and associated reference data sets) for benchmarking of platform performance for analysis of complex biological proteomes across different laboratories in the community. Individual preparations of the yeast Saccharomyces cerevisiae proteome have been used extensively by laboratories in the proteomics community to characterize LC-MS platform performance. The yeast proteome is uniquely attractive as a performance standard because it is the most extensively characterized complex biological proteome and the only one associated with several large scale studies estimating the abundance of all detectable proteins. In this study, we describe a standard operating protocol for large scale production of the yeast performance standard and offer aliquots to the community through the National Institute of Standards and Technology where the yeast proteome is under development as a certified reference material to meet the long term needs of the community. Using a series of metrics that characterize LC-MS performance, we provide a reference data set demonstrating typical performance of commonly used ion trap instrument platforms in expert laboratories; the results provide a basis for laboratories to benchmark their own performance, to improve upon current methods, and to evaluate new technologies. Additionally, we demonstrate how the yeast reference, spiked with human proteins, can be used to benchmark the power of proteomics platforms for detection of differentially expressed proteins at different levels of concentration in a complex matrix, thereby providing a metric to evaluate and minimize preanalytical and analytical variation in comparative proteomics experiments. PMID:19858499

  17. Benchmarking Alumni Relations in Community Colleges: Findings from a 2015 CASE Survey

    ERIC Educational Resources Information Center

    Paradise, Andrew

    2016-01-01

    The Benchmarking Alumni Relations in Community Colleges white paper features key data on alumni relations programs at community colleges across the United States. The paper compares results from 2015 and 2012 across such areas as the structure, operations and budget for alumni relations, alumni data collection and management, alumni communications…

  18. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.

  19. Resource requirements of inclusive urban development in India: insights from ten cities

    NASA Astrophysics Data System (ADS)

    Singh Nagpure, Ajay; Reiner, Mark; Ramaswami, Anu

    2018-02-01

    This paper develops a methodology to assess the resource requirements of inclusive urban development in India and compares those requirements to current community-wide material and energy flows. Methods include: (a) identifying minimum service level benchmarks for the provision of infrastructure services including housing, electricity and clean cooking fuels; (b) assessing the percentage of homes that lack access to infrastructure or that consume infrastructure services below the identified benchmarks; (c) quantifying the material requirements to provide basic infrastructure services using India-specific design data; and (d) computing material and energy requirements for inclusive development and comparing them with current community-wide material and energy flows. Applying the method to ten Indian cities, we find that: 1%-6% of households do not have electricity; 14%-71% use electricity below the benchmark of 25 kWh per capita per month; 4%-16% lack structurally sound housing; 50%-75% live in less floor area than the benchmark of 8.75 m2 of floor area per capita; 10%-65% lack clean cooking fuel; and 6%-60% lack connection to a sewerage system. Across the ten cities examined, to provide basic electricity (25 kWh per capita per month) to all will require an addition of only 1%-10% to current community-wide electricity use. To provide basic clean LPG fuel (1.2 kg per capita per month) to all requires an increase of 5%-40% in current community-wide LPG use. Providing permanent shelter (implemented over a ten-year period) to populations living in non-permanent housing in Delhi and Chandigarh would require a 6%-14% increase over current annual community-wide cement use. Conversely, to provide permanent housing to all people living in structurally unsound housing and those living in overcrowded housing (<5 m2 per capita) would require 32%-115% of current community-wide cement flows. Except for the last scenario, these results suggest that social policies that seek to provide basic infrastructure provisioning for all residents would not dramatically increase current community-wide resource flows.
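
    As a hedged illustration of the step-(d) arithmetic, the sketch below computes the additional electricity needed to bring every under-served household up to the 25 kWh per capita per month benchmark and expresses it as a share of current community-wide use. The household data and totals are invented; only the benchmark value comes from the abstract.

```python
# Illustrative step-(d) style calculation: the extra electricity needed to raise all
# under-benchmark households to 25 kWh per capita per month, expressed as a share of
# current community-wide use. All household data and the total are invented.
BENCHMARK_KWH_PER_CAPITA_MONTH = 25   # value taken from the abstract

def electricity_gap_share(households, community_wide_kwh_per_month):
    """households: iterable of (persons, kwh_per_capita_per_month) tuples."""
    gap = sum(persons * max(0, BENCHMARK_KWH_PER_CAPITA_MONTH - use)
              for persons, use in households)
    return gap / community_wide_kwh_per_month

example_households = [(4, 10), (5, 0), (3, 40)]   # two of three are below the benchmark
print(f"{electricity_gap_share(example_households, 10_000):.1%} of community-wide use")
```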

  20. Benchmark Simulation Model No 2: finalisation of plant layout and default control strategy.

    PubMed

    Nopens, I; Benedetti, L; Jeppsson, U; Pons, M-N; Alex, J; Copp, J B; Gernaey, K V; Rosen, C; Steyer, J-P; Vanrolleghem, P A

    2010-01-01

    The COST/IWA Benchmark Simulation Model No 1 (BSM1) has been available for almost a decade. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the research work related to the benchmark simulation models has resulted in more than 300 publications worldwide demonstrates the interest in and need of such tools within the research community. Recent efforts within the IWA Task Group on "Benchmarking of control strategies for WWTPs" have focused on an extension of the benchmark simulation model. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently, includes both pretreatment of wastewater as well as the processes describing sludge treatment. The motivation for the extension is the increasing interest and need to operate and control wastewater treatment systems not only at an individual process level but also on a plant-wide basis. To facilitate the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows for long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion in the one week BSM1 evaluation period. In this paper, the finalised plant layout is summarised and, as was done for BSM1, a default control strategy is proposed. A demonstration of how BSM2 can be used to evaluate control strategies is also given.

  1. Benchmarking the cost efficiency of community care in Australian child and adolescent mental health services: implications for future benchmarking.

    PubMed

    Furber, Gareth; Brann, Peter; Skene, Clive; Allison, Stephen

    2011-06-01

    The purpose of this study was to benchmark the cost efficiency of community care across six child and adolescent mental health services (CAMHS) drawn from different Australian states. Organizational, contact and outcome data from the National Mental Health Benchmarking Project (NMHBP) data-sets were used to calculate cost per "treatment hour" and cost per episode for the six participating organizations. We also explored the relationship between intake severity as measured by the Health of the Nations Outcome Scales for Children and Adolescents (HoNOSCA) and cost per episode. The average cost per treatment hour was $223, with cost differences across the six services ranging from a mean of $156 to $273 per treatment hour. The average cost per episode was $3349 (median $1577) and there were significant differences in the CAMHS organizational medians ranging from $388 to $7076 per episode. HoNOSCA scores explained at best 6% of the cost variance per episode. These large cost differences indicate that community CAMHS have the potential to make substantial gains in cost efficiency through collaborative benchmarking. Benchmarking forums need considerable financial and business expertise for detailed comparison of business models for service provision.
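
    The two cost-efficiency ratios being benchmarked are simple quotients; a minimal sketch is given below. The totals are invented and chosen only so that the outputs resemble the averages reported above; this is not the NMHBP calculation itself.

```python
# Sketch of the two cost-efficiency ratios compared across services. The totals are
# invented, chosen only so the outputs resemble the averages reported in the abstract;
# this is not the NMHBP calculation itself.
def cost_per_treatment_hour(total_community_care_cost, treatment_hours):
    return total_community_care_cost / treatment_hours

def cost_per_episode(total_community_care_cost, completed_episodes):
    return total_community_care_cost / completed_episodes

print(cost_per_treatment_hour(1_115_000, 5_000))   # -> 223.0 dollars per treatment hour
print(cost_per_episode(1_115_000, 333))            # -> ~3348.3 dollars per episode
```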

  2. Iowa's Community College Adult Literacy Annual Report. Program Year 2007, July 1, 2006-June 30, 2007

    ERIC Educational Resources Information Center

    Division of Community Colleges and Workforce Preparation, Iowa Department of Education, 2007

    2007-01-01

    This comprehensive document replaces the previously published Benchmark Report, Benchmark Report Executive Summary, Iowa's Community College Basic Literacy Skills Credential Report, Iowa GED Statistical Report, GED Annual Performance Report and Iowa's Adult Literacy Program National Reporting System Annual Performance Report (Graphic…

  3. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 11 2012-01-01 2012-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the most...

  4. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 11 2014-01-01 2014-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the most...

  5. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the most...

  6. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 11 2011-01-01 2011-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the most...

  7. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 11 2013-01-01 2013-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the most...

  8. Toward benchmarking in catalysis science: Best practices, challenges, and opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bligaard, Thomas; Bullock, R. Morris; Campbell, Charles T.

    Benchmarking is a community-based and (preferably) community-driven activity involving consensus-based decisions on how to make reproducible, fair, and relevant assessments. In catalysis science, important catalyst performance metrics include activity, selectivity, and the deactivation profile, which enable comparisons between new and standard catalysts. Benchmarking also requires careful documentation, archiving, and sharing of methods and measurements, to ensure that the full value of research data can be realized. Beyond these goals, benchmarking presents unique opportunities to advance and accelerate understanding of complex reaction systems by combining and comparing experimental information from multiple, in situ and operando techniques with theoretical insights derived from calculations characterizing model systems. This Perspective describes the origins and uses of benchmarking and its applications in computational catalysis, heterogeneous catalysis, molecular catalysis, and electrocatalysis. As a result, it also discusses opportunities and challenges for future developments in these fields.

  9. Toward benchmarking in catalysis science: Best practices, challenges, and opportunities

    DOE PAGES

    Bligaard, Thomas; Bullock, R. Morris; Campbell, Charles T.; ...

    2016-03-07

    Benchmarking is a community-based and (preferably) community-driven activity involving consensus-based decisions on how to make reproducible, fair, and relevant assessments. In catalysis science, important catalyst performance metrics include activity, selectivity, and the deactivation profile, which enable comparisons between new and standard catalysts. Benchmarking also requires careful documentation, archiving, and sharing of methods and measurements, to ensure that the full value of research data can be realized. Beyond these goals, benchmarking presents unique opportunities to advance and accelerate understanding of complex reaction systems by combining and comparing experimental information from multiple, in situ and operando techniques with theoretical insights derived from calculations characterizing model systems. This Perspective describes the origins and uses of benchmarking and its applications in computational catalysis, heterogeneous catalysis, molecular catalysis, and electrocatalysis. As a result, it also discusses opportunities and challenges for future developments in these fields.

  10. Revenues and Expenditures: Peer and Benchmark Comparisons--University of Hawai'i Community Colleges, Fiscal Year 1995-96.

    ERIC Educational Resources Information Center

    Hawaii Univ., Honolulu. Institutional Research Office.

    This report presents information comparing the University of Hawaii Community Colleges (UHCC) to benchmark and peer-group institutions on selected financial measures. The primary data sources for this report were the Integrated Postsecondary Education Data System (IPEDS) Finance Survey for the 1995-1996 fiscal year and the IPEDS Fall Enrollment…

  11. A Year of Progress in School-to-Career System Building. The Benchmark Communities Initiative.

    ERIC Educational Resources Information Center

    Martinez, Martha I.; And Others

    This document examines the first year of Jobs for the Future's Benchmark Communities Initiative (BCI), a 5-year effort to achieve the following: large-scale systemic restructuring of K-16 educational systems; involvement of significant numbers of employers in work and learning partnerships; and development of the infrastructure necessary to…

  12. Practical Considerations when Using Benchmarking for Accountability in Higher Education

    ERIC Educational Resources Information Center

    Achtemeier, Sue D.; Simpson, Ronald D.

    2005-01-01

    The qualitative study on which this article is based examined key individuals' perceptions, both within a research university community and beyond in its external governing board, of how to improve benchmarking as an accountability method in higher education. Differing understanding of benchmarking revealed practical implications for using it as…

  13. Benchmarking in the Two-Year Public Postsecondary Sector: A Learning Process

    ERIC Educational Resources Information Center

    Mitchell, Jennevieve

    2015-01-01

    The recession prompted reflection on how resource allocation decisions contribute to the performance of community colleges in the United States. Private benchmarking initiatives, most notably those established by the National Higher Education Benchmarking Institute, can only partially begin to address this question. Empirical and financial…

  14. Critical Assessment of Metagenome Interpretation – a benchmark of computational metagenomics software

    PubMed Central

    Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D.; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z.; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J.; Chia, Burton K. H.; Denis, Bertrand; Froula, Jeff L.; Wang, Zhong; Egan, Robert; Kang, Dongwan Don; Cook, Jeffrey J.; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W.; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z.; Cuevas, Daniel A.; Edwards, Robert A.; Saha, Surya; Piro, Vitor C.; Renard, Bernhard Y.; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C.; Woyke, Tanja; Vorholt, Julia A.; Schulze-Lefert, Paul; Rubin, Edward M.; Darling, Aaron E.; Rattei, Thomas; McHardy, Alice C.

    2018-01-01

    In metagenome analysis, computational methods for assembly, taxonomic profiling and binning are key components facilitating downstream biological data interpretation. However, a lack of consensus about benchmarking datasets and evaluation metrics complicates proper performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on datasets of unprecedented complexity and realism. Benchmark metagenomes were generated from ~700 newly sequenced microorganisms and ~600 novel viruses and plasmids, including genomes with varying degrees of relatedness to each other and to publicly available ones and representing common experimental setups. Across all datasets, assembly and genome binning programs performed well for species represented by individual genomes, while performance was substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below the family level. Parameter settings substantially impacted performances, underscoring the importance of program reproducibility. While highlighting current challenges in computational metagenomics, the CAMI results provide a roadmap for software selection to answer specific research questions. PMID:28967888

  15. Length of stay benchmarks for inpatient rehabilitation after stroke.

    PubMed

    Meyer, Matthew; Britt, Eileen; McHale, Heather A; Teasell, Robert

    2012-01-01

    In Canada, no standardized benchmarks for length of stay (LOS) have been established for post-stroke inpatient rehabilitation. This paper describes the development of a severity specific median length of stay benchmarking strategy, assessment of its impact after one year of implementation in a Canadian rehabilitation hospital, and establishment of updated benchmarks that may be useful for comparison with other facilities across Canada. Patient data were retrospectively assessed for all patients admitted to a single post-acute stroke rehabilitation unit in Ontario, Canada between April 2005 and March 2008. Rehabilitation Patient Groups (RPGs) were used to establish stratified median length of stay benchmarks for each group that were incorporated into team rounds beginning in October 2009. Benchmark impact was assessed using mean LOS, FIM(®) gain, and discharge destination for each RPG group, collected prospectively for one year, compared against similar information from the previous calendar year. Benchmarks were then adjusted accordingly for future use. Between October 2009 and September 2010, a significant reduction in average LOS was noted compared to the previous year (35.3 vs. 41.2 days; p < 0.05). Reductions in LOS were noted in each RPG group including statistically significant reductions in 4 of the 7 groups. As intended, reductions in LOS were achieved with no significant reduction in mean FIM(®) gain or proportion of patients discharged home compared to the previous year. Adjusted benchmarks for LOS ranged from 13 to 48 days depending on the RPG group. After a single year of implementation, severity specific benchmarks helped the rehabilitation team reduce LOS while maintaining the same levels of functional gain and achieving the same rate of discharge to the community. © 2012 Informa UK, Ltd.
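
    A minimal sketch of how severity-specific median LOS benchmarks per Rehabilitation Patient Group (RPG) could be derived from historical admissions is shown below. The column names and values are assumptions for illustration, not the study's data dictionary.

```python
# Minimal sketch: derive severity-specific median LOS benchmarks per Rehabilitation
# Patient Group (RPG) from historical admissions. Column names and values are
# assumptions for illustration, not the study's data dictionary.
import pandas as pd

admissions = pd.DataFrame({
    "rpg":      ["1110", "1110", "1120", "1120", "1130", "1130"],
    "los_days": [14, 22, 35, 41, 48, 55],
})

median_los_benchmarks = admissions.groupby("rpg")["los_days"].median()
print(median_los_benchmarks)
```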

  16. Developing a Benchmarking Process in Perfusion: A Report of the Perfusion Downunder Collaboration

    PubMed Central

    Baker, Robert A.; Newland, Richard F.; Fenton, Carmel; McDonald, Michael; Willcox, Timothy W.; Merry, Alan F.

    2012-01-01

    Improving and understanding clinical practice is an appropriate goal for the perfusion community. The Perfusion Downunder Collaboration has established a multi-center, perfusion-focused database aimed at achieving these goals through the development of quantitative quality indicators for clinical improvement through benchmarking. Data were collected using the Perfusion Downunder Collaboration database from procedures performed in eight Australian and New Zealand cardiac centers between March 2007 and February 2011. At the Perfusion Downunder Meeting in 2010, it was agreed by consensus to report quality indicators (QI) for glucose level, arterial outlet temperature, and pCO2 management during cardiopulmonary bypass. The values chosen for each QI were: blood glucose ≥4 mmol/L and ≤10 mmol/L; arterial outlet temperature ≤37°C; and arterial blood gas pCO2 ≥ 35 and ≤45 mmHg. The QI data were used to derive benchmarks using the Achievable Benchmark of Care (ABC™) methodology to identify the incidence of QIs at the best performing centers. Five thousand four hundred and sixty-five procedures were evaluated to derive QI and benchmark data. The incidence of the blood glucose QI ranged from 37–96% of procedures, with a benchmark value of 90%. The arterial outlet temperature QI occurred in 16–98% of procedures with the benchmark of 94%; while the arterial pCO2 QI occurred in 21–91%, with the benchmark value of 80%. We have derived QIs and benchmark calculations for the management of several key aspects of cardiopulmonary bypass to provide a platform for improving the quality of perfusion practice. PMID:22730861
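
    For context, the sketch below follows the published Achievable Benchmark of Care (ABC) method in general terms: centres are ranked on an adjusted performance fraction, the best-performing centres are selected until they cover at least 10% of all cases, and the benchmark is their pooled proportion. This is an outside reading of the ABC methodology, not code from the Perfusion Downunder Collaboration, and the example counts are invented.

```python
# Hedged sketch of an ABC-style benchmark: rank centres by the adjusted performance
# fraction (x + 1) / (n + 2), take the top centres until they cover at least 10% of
# all cases, and pool their results. This follows the published ABC method in general
# terms, not the collaboration's own code; the example counts are invented.
def achievable_benchmark(centres, coverage=0.10):
    """centres: list of (numerator, denominator) pairs, one per centre."""
    total_cases = sum(n for _, n in centres)
    ranked = sorted(centres, key=lambda c: (c[0] + 1) / (c[1] + 2), reverse=True)
    x_sum = n_sum = 0
    for x, n in ranked:
        x_sum, n_sum = x_sum + x, n_sum + n
        if n_sum >= coverage * total_cases:
            break
    return x_sum / n_sum

centres = [(90, 100), (75, 100), (880, 1000), (40, 100), (300, 500)]
print(f"ABC benchmark: {achievable_benchmark(centres):.0%}")
```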

  17. Data Don't Drive: Building a Practitioner-Driven Culture of Inquiry to Assess Community College Performance. Lumina Foundation for Education Research Report

    ERIC Educational Resources Information Center

    Dowd, Alicia C.

    2005-01-01

    This report reviews the benchmarking practices that are presently being used at community colleges. It introduces the concept of a "culture of inquiry" as a means for judging their potential value. It classifies benchmarking efforts among three types--performance, diagnostic, and process--and characterizes each by its typical use. The…

  18. Benchmarking Big Data Systems and the BigData Top100 List.

    PubMed

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TCP), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  19. Benchmark problems for numerical implementations of phase field models

    DOE PAGES

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...

    2016-10-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
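
    The spinodal decomposition benchmark referred to above is typically posed as a Cahn-Hilliard problem; the standard form of that equation is reproduced below in our own notation (see the CHiMaD/NIST problem specifications for the exact free energy and parameter values).

```latex
% Standard Cahn-Hilliard form commonly used for spinodal decomposition benchmarks
% (notation ours): c is the composition field, M the mobility, kappa the
% gradient-energy coefficient, and f_chem a double-well chemical free energy.
\frac{\partial c}{\partial t}
  = \nabla \cdot \left[ M \, \nabla\!\left( \frac{\partial f_{\mathrm{chem}}}{\partial c}
    - \kappa \nabla^{2} c \right) \right],
\qquad
f_{\mathrm{chem}}(c) = \rho \,(c - c_{\alpha})^{2}\,(c_{\beta} - c)^{2} .
```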

  20. Developing Quality Indicators for Family Support Services in Community Team-Based Mental Health Care

    PubMed Central

    Olin, S. Serene; Kutash, Krista; Pollock, Michele; Burns, Barbara J.; Kuppinger, Anne; Craig, Nancy; Purdy, Frances; Armusewicz, Kelsey; Wisdom, Jennifer; Hoagwood, Kimberly E.

    2013-01-01

    Quality indicators for programs integrating parent-delivered family support services for children’s mental health have not been systematically developed. Increasing emphasis on accountability under the Affordable Care Act highlights the importance of quality-benchmarking efforts. Using a modified Delphi approach, quality indicators were developed for both program level and family support specialist level practices. These indicators were pilot tested with 21 community-based mental health programs. Psychometric properties of these indicators are reported; variations in program and family support specialist performance suggest the utility of these indicators as tools to guide policies and practices in organizations that integrate parent-delivered family support service components. PMID:23709287

  1. Seeding for pervasively overlapping communities

    NASA Astrophysics Data System (ADS)

    Lee, Conrad; Reid, Fergal; McDaid, Aaron; Hurley, Neil

    2011-06-01

    In some social and biological networks, the majority of nodes belong to multiple communities. It has recently been shown that a number of the algorithms specifically designed to detect overlapping communities do not perform well in such highly overlapping settings. Here, we consider one class of these algorithms, those which optimize a local fitness measure, typically by using a greedy heuristic to expand a seed into a community. We perform synthetic benchmarks which indicate that an appropriate seeding strategy becomes more important as the extent of community overlap increases. We find that distinct cliques provide the best seeds. We find further support for this seeding strategy with benchmarks on a Facebook network and the yeast interactome.
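
    A brief sketch of the seeding step the abstract favours, extracting maximal cliques as seeds before greedy expansion, is given below; networkx is used as a generic stand-in and the size threshold is an assumption, not the authors' implementation.

```python
# Sketch of the seeding step favoured by the abstract: take maximal cliques (largest
# first, skipping fully covered ones) as seeds for greedy expansion. networkx is a
# generic stand-in, and the size threshold is an assumption, not the authors' code.
import networkx as nx

def clique_seeds(G, min_size=4):
    seeds, covered = [], set()
    for clique in sorted(nx.find_cliques(G), key=len, reverse=True):
        if len(clique) >= min_size and not covered.issuperset(clique):
            seeds.append(set(clique))
            covered.update(clique)
    return seeds

G = nx.karate_club_graph()
for seed in clique_seeds(G):
    print(sorted(seed))
```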

  2. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background: Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results: We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion: We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  3. Benchmarking infrastructure for mutation text mining.

    PubMed

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
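
    To illustrate the "metrics as SPARQL queries" idea, the sketch below computes precision and recall over RDF annotation graphs with rdflib. The namespace and predicate names (ex:document, ex:normalizedMutation) are hypothetical placeholders, not the ontology defined by this infrastructure.

```python
# Illustration of the "metrics as SPARQL queries" idea with rdflib. The namespace and
# predicate names (ex:document, ex:normalizedMutation) are hypothetical placeholders,
# not the ontology defined by the infrastructure described above.
from rdflib import BNode, Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/mutation#")

def annotation_graph(pairs):
    g = Graph()
    for doc, mutation in pairs:
        ann = BNode()
        g.add((ann, EX.document, URIRef(doc)))
        g.add((ann, EX.normalizedMutation, Literal(mutation)))
    return g

gold = annotation_graph([("doc:1", "p.V600E"), ("doc:2", "c.35G>A")])    # curated corpus
system = annotation_graph([("doc:1", "p.V600E"), ("doc:2", "c.35G>T")])  # system output

QUERY = """
PREFIX ex: <http://example.org/mutation#>
SELECT ?doc ?mut WHERE { ?ann ex:document ?doc ; ex:normalizedMutation ?mut . }
"""
gold_pairs = {(str(d), str(m)) for d, m in gold.query(QUERY)}
sys_pairs = {(str(d), str(m)) for d, m in system.query(QUERY)}

true_positives = len(gold_pairs & sys_pairs)
print(f"precision={true_positives / len(sys_pairs):.2f} "
      f"recall={true_positives / len(gold_pairs):.2f}")
```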

  4. Investing in innovation: trade-offs in the costs and cost-efficiency of school feeding using community-based kitchens in Bangladesh.

    PubMed

    Gelli, Aulo; Suwa, Yuko

    2014-09-01

    School feeding programs have been a key response to the recent food and economic crises and function to some degree in nearly every country in the world. However, school feeding programs are complex and exhibit different, context-specific models or configurations. The objective was to examine the trade-offs, including the costs and cost-efficiency, of an innovative cluster kitchen implementation model in Bangladesh using a standardized framework. A supply chain framework based on international standards was used to provide benchmarks for meaningful comparisons across models. Implementation processes specific to the program in Bangladesh were mapped against this reference to provide a basis for standardized performance measures. Qualitative and quantitative data on key metrics were collected retrospectively using semistructured questionnaires following an ingredients approach, including both financial and economic costs. Costs were standardized to a 200-feeding-day year and 700 kcal daily. The cluster kitchen model had similarities with the semidecentralized model and outsourced models in the literature, the main differences involving implementation scale, scale of purchasing volumes, and frequency of purchasing. Two important features stand out in terms of implementation: the nutritional quality of meals and the level of community involvement. The standardized full cost per child per year was US$110. Despite the nutritious content of the meals, the overall cost-efficiency in cost per nutrient output was lower than the benchmark for centralized programs, due mainly to support and start-up costs. Cluster kitchens provide an example of an innovative implementation model, combining an emphasis on quality meal delivery with strong community engagement. However, the standardized costs per child were above the average benchmarks for both low- and middle-income countries. In contrast to the existing benchmark data from mature, centralized models, the main cost drivers of the program were associated with support and start-up activities. Further research is required to better understand changes in cost drivers as programs mature.

  5. Design and Application of a Community Land Benchmarking System for Earth System Models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Koven, C. D.; Kluzek, E. B.; Mao, J.; Randerson, J. T.

    2015-12-01

    Benchmarking has been widely used to assess the ability of climate models to capture the spatial and temporal variability of observations during the historical era. For the carbon cycle and terrestrial ecosystems, the design and development of an open-source community platform has been an important goal as part of the International Land Model Benchmarking (ILAMB) project. Here we developed a new benchmarking software system that enables the user to specify the models, benchmarks, and scoring metrics, so that results can be tailored to specific model intercomparison projects. Evaluation data sets included soil and aboveground carbon stocks, fluxes of energy, carbon and water, burned area, leaf area, and climate forcing and response variables. We used this system to evaluate simulations from the 5th Phase of the Coupled Model Intercomparison Project (CMIP5) with prognostic atmospheric carbon dioxide levels over the period from 1850 to 2005 (i.e., esmHistorical simulations archived on the Earth System Grid Federation). We found that the multi-model ensemble had a high bias in incoming solar radiation across Asia, likely as a consequence of incomplete representation of aerosol effects in this region, and in South America, primarily as a consequence of a low bias in mean annual precipitation. The reduced precipitation in South America had a larger influence on gross primary production than the high bias in incoming light, and as a consequence gross primary production had a low bias relative to the observations. Although model to model variations were large, the multi-model mean had a positive bias in atmospheric carbon dioxide that has been attributed in past work to weak ocean uptake of fossil emissions. In mid latitudes of the northern hemisphere, most models overestimate latent heat fluxes in the early part of the growing season, and underestimate these fluxes in mid-summer and early fall, whereas sensible heat fluxes show the opposite trend.
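
    As a toy illustration of the kind of scoring such a benchmarking system applies, the sketch below turns a relative model-observation error into a score between 0 and 1. The actual ILAMB metric definitions are more elaborate and are specified in the ILAMB documentation; the numbers here are invented.

```python
# Toy relative-error score in the spirit of ILAMB-style benchmarking
# (score = exp(-|relative error|), 1 = perfect). The real ILAMB metrics are more
# elaborate and are defined in the ILAMB documentation; the numbers are invented.
import numpy as np

def bias_score(model, obs):
    relative_error = np.abs(model - obs) / np.abs(obs)
    return float(np.mean(np.exp(-relative_error)))

obs = np.array([2.1, 3.4, 1.8])      # e.g. observed annual-mean GPP at three sites
model = np.array([1.7, 3.9, 2.0])    # corresponding model values
print(f"score = {bias_score(model, obs):.2f}")
```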

  6. Benthic invertebrates of benchmark streams in agricultural areas of eastern Wisconsin, Western Lake Michigan Drainages

    USGS Publications Warehouse

    Rheaume, S.J.; Lenz, B.N.; Scudder, B.C.

    1996-01-01

    Information gathered from these benchmark streams can be used as a regional reference for comparison with other streams in agricultural areas, based on communities of aquatic biota, habitat, and water quality.

  7. BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...

    EPA Pesticide Factsheets

    The U.S. EPA conducts risk assessments for an array of health effects that may result from exposure to environmental agents, and that require an analysis of the relationship between exposure and health-related outcomes. The dose-response assessment is essentially a two-step process, the first being the definition of a point of departure (POD), and the second extrapolation from the POD to low environmentally-relevant exposure levels. The benchmark dose (BMD) approach provides a more quantitative alternative to the first step in the dose-response assessment than the current NOAEL/LOAEL process for noncancer health effects, and is similar to that for determining the POD proposed for cancer endpoints. As the Agency moves toward harmonization of approaches for human health risk assessment, the dichotomy between cancer and noncancer health effects is being replaced by consideration of mode of action and whether the effects of concern are likely to be linear or nonlinear at low doses. Thus, the purpose of this project is to provide guidance for the Agency and the outside community on the application of the BMD approach in determining the POD for all types of health effects data, whether a linear or nonlinear low dose extrapolation is used. A guidance document is being developed under the auspices of EPA's Risk Assessment Forum.
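
    For reference, one common formalization of the BMD for dichotomous data uses extra risk, as sketched below; the benchmark response (BMR, often 0.10) is a modelling convention and the notation is ours rather than quoted from the guidance document.

```latex
% Extra-risk formalization of the benchmark dose (notation illustrative).
% P(d) is the fitted dose-response model; the BMR (e.g. 0.10) is a convention.
\mathrm{ExtraRisk}(d) = \frac{P(d) - P(0)}{1 - P(0)}, \qquad
\mathrm{BMD} \ \text{solves} \ \mathrm{ExtraRisk}(\mathrm{BMD}) = \mathrm{BMR}, \qquad
\mathrm{BMDL} = \text{lower confidence limit on the BMD}.
```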

  8. The Craft of Benchmarking: Finding and Utilizing District-Level, Campus-Level, and Program-Level Standards.

    ERIC Educational Resources Information Center

    McGregor, Ellen N.; Attinasi, Louis C., Jr.

    This paper describes the processes involved in selecting peer institutions for appropriate benchmarking using national databases (NCES-IPEDS). Benchmarking involves the identification of peer institutions and/or best practices in specific operational areas for the purpose of developing standards. The benchmarking process was born in the early…

  9. A New Global Vertical Land Movement Data Set from the TIGA Combined Solution

    NASA Astrophysics Data System (ADS)

    Hunegnaw, Addisu; Teferle, Felix Norman; Ebuy Abraha, Kibrom; Santamaría-Gómez, Alvaro; Gravelle, Médéric; Wöppelman, Guy; Schöne, Tilo; Deng, Zhiguo; Bingley, Richard; Hansen, Dionne Nicole; Sanchez, Laura; Moore, Michael; Jia, Minghai

    2017-04-01

    Globally averaged sea level has been estimated from the network of tide gauges installed around the world since the 19th century. These mean sea level (MSL) records provide sea level relative to a nearby tide gauge benchmark (TGBM), which allows for the continuation of the instrumental record in time. Any changes in the benchmark levels, induced by vertical land movements (VLM) affect the MSL records and hence sea level estimates. Over the last two decades sea level has also been observed using satellite altimeters. While the satellite observations are globally more homogeneous providing a picture of sea level not confined to coastlines, they require the VLM-corrected MSL records for the bias calibration of instrumental drifts. Without this calibration altimeter instruments from different missions cannot be combined. GPS has made it possible to obtain highly accurate estimates of VLM in a geocentric reference frame for stations at or close to tide gauges. Under the umbrella of the International GNSS Service (IGS), the Tide Gauge Benchmark Monitoring (TIGA) Working Group (WG) has been established to apply the expertise of the GNSS community to solving issues related to the accuracy and reliability of the vertical component to provide estimates of VLM in a well-defined global reference frame. To achieve this objective, five TIGA Analysis Centers (TACs) contributed re-processed global GPS network solutions to TIGA, employing the latest bias models and processing strategies in accordance with the second re-processing campaign (repro2) of the IGS. These solutions include those of the British Isles continuous GNSS Facility - University of Luxembourg consortium (BLT), the German Research Centre for Geosciences (GFZ) Potsdam, the German Geodetic Research Institute (DGF) at the Technical University of Munich, Geoscience Australia (AUT) and the University of La Rochelle (ULR). In this study we present to the sea level community an evaluation of the VLM estimates from the first combined solution from the IGS TIGA WG. The TAC solutions include more than 700 stations and span the common period 1995-2014. The combined solution was computed by the TIGA Combination Centre (TCC) at the University of Luxembourg, which used the Combination and Analysis of Terrestrial Reference Frame (CATREF) software package for this purpose. This first solution forms Release 1.0 and further releases will be made available after further reprocessing campaigns. We evaluate the combined solution internally using the TAC solutions and externally using solutions from the IGS and the ITRF2008. The derived VLM estimates have undergone an initial evaluation and should be considered as the primary TIGA product for the sea level community to correct MSL records for land level changes.
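
    As a minimal sketch of how a GPS-derived VLM estimate is used to correct a tide-gauge record, the example below fits a linear trend to a synthetic monthly MSL series and adds the land-uplift rate back to obtain a geocentric rate; the data, the sign convention (uplift positive), and the simple least-squares trend are assumptions, not the TIGA combination procedure.

```python
import numpy as np

# Hypothetical monthly MSL record (metres) and a GPS-derived VLM rate at the benchmark.
years = np.arange(1995, 2015, 1 / 12.0)
rng = np.random.default_rng(0)
msl = 0.0012 * (years - years[0]) + 0.01 * rng.standard_normal(years.size)  # ~1.2 mm/yr + noise
vlm_rate_m_yr = 0.0008   # +0.8 mm/yr uplift from the GPS solution (assumed sign: up is positive)

rel_rate = np.polyfit(years, msl, 1)[0]          # relative sea-level trend (m/yr)
geocentric_rate = rel_rate + vlm_rate_m_yr       # add land uplift back to get the geocentric rate
print(f"relative: {rel_rate * 1000:.2f} mm/yr, geocentric: {geocentric_rate * 1000:.2f} mm/yr")
```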

  10. Increased incidence and altered risk demographics of childhood lead poisoning: predicting the impacts of the CDC’s 5 µg/dL reference value in Massachusetts (USA).

    PubMed

    Handler, Phoebe; Brabander, Daniel

    2012-10-30

    In May 2012, the CDC adopted a new sliding scale reference value for childhood lead poisoning, reducing the former 10 μg/dL benchmark by half. Using Massachusetts (MA) as a model state, we estimated the change in the population of 9-47 month-olds at risk for lead poisoning. We then examined the impact of the 5 μg/dL reference value on the demographic characteristics of lead risk in MA communities. We find that the new CDC benchmark will lead to a 1470% increase in childhood lead poisoning cases among 9-47 month-olds in MA, with nearly 50% of the examined communities experiencing an increased prevalence of lead poisoning. Further, the top 10 MA communities with blood lead levels (BLLs) ≥5 μg/dL have significantly fewer foreign-born residents and significantly larger white populations than the highest risk communities formerly identified by the MA Childhood Lead Poisoning Prevention Program. The CDC's new 5 μg/dL lead poisoning benchmark will drastically increase the number of children with elevated BLLs and alter the distribution and demographics of high-risk communities in MA.

  11. A community detection algorithm using network topologies and rule-based hierarchical arc-merging strategies

    PubMed Central

    2017-01-01

    The authors use four criteria to examine a novel community detection algorithm: (a) effectiveness in terms of producing high values of normalized mutual information (NMI) and modularity, using well-known social networks for testing; (b) the ability to mitigate resolution limit problems, examined using NMI values and synthetic networks; (c) correctness, meaning the ability to identify useful community structure results in terms of NMI values and Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks; and (d) scalability, or the ability to produce comparable modularity values with fast execution times when working with large-scale real-world networks. In addition to describing a simple hierarchical arc-merging (HAM) algorithm that uses network topology information, we introduce rule-based arc-merging strategies for identifying community structures. Five well-studied social network datasets and eight sets of LFR benchmark networks were employed to validate correctness against ground-truth communities, eight large-scale real-world complex networks were used to measure its efficiency, and two synthetic networks were used to determine its susceptibility to two resolution limit problems. Our experimental results indicate that the proposed HAM algorithm exhibited satisfactory performance efficiency, and that HAM-identified and ground-truth communities were comparable across the social and LFR benchmark networks, while mitigating resolution limit problems. PMID:29121100
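
    A minimal sketch of the kind of evaluation described above (NMI against ground truth plus modularity) is shown below; since the HAM algorithm itself is not reproduced here, networkx's greedy modularity method stands in as the detector, and the karate-club ground truth is purely illustrative.

```python
import networkx as nx
from sklearn.metrics import normalized_mutual_info_score

# Hypothetical evaluation of a detected partition against ground truth on a small graph;
# greedy modularity stands in for the HAM algorithm described above.
G = nx.karate_club_graph()
truth = [0 if G.nodes[v]["club"] == "Mr. Hi" else 1 for v in G.nodes]

detected = nx.algorithms.community.greedy_modularity_communities(G)
labels = [None] * G.number_of_nodes()
for cid, com in enumerate(detected):
    for v in com:
        labels[v] = cid   # assign each node the id of its detected community

print("NMI vs ground truth:", normalized_mutual_info_score(truth, labels))
print("modularity:", nx.algorithms.community.modularity(G, detected))
```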

  12. Using a two-phase evolutionary framework to select multiple network spreaders based on community structure

    NASA Astrophysics Data System (ADS)

    Fu, Yu-Hsiang; Huang, Chung-Yuan; Sun, Chuen-Tsai

    2016-11-01

    Using network community structures to identify multiple influential spreaders is an appropriate method for analyzing the dissemination of information, ideas and infectious diseases. For example, data on spreaders selected from groups of customers who make similar purchases may be used to advertise products and to optimize limited resource allocation. Other examples include community detection approaches aimed at identifying structures and groups in social or complex networks. However, determining the number of communities in a network remains a challenge. In this paper we describe our proposal for a two-phase evolutionary framework (TPEF) for determining community numbers and maximizing community modularity. Lancichinetti-Fortunato-Radicchi benchmark networks were used to test our proposed method and to analyze execution time, community structure quality, convergence, and the network spreading effect. Results indicate that our proposed TPEF generates satisfactory levels of community quality and convergence. They also suggest a need for an index, mechanism or sampling technique to determine whether a community detection approach should be used for selecting multiple network spreaders.

  13. 7 CFR 25.404 - Validation of designation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... maintain a process for ensuring ongoing broad-based participation by community residents consistent with the approved application and planning process outlined in the strategic plan. (1) Continuous... benchmarks, the process it will use for reviewing goals and benchmarks and revising its strategic plan. (2...

  14. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    USGS Publications Warehouse

    Clark, Deborah A.; Asao, Shinichi; Fisher, Rosie A.; Reed, Sasha C.; Reich, Peter B.; Ryan, Michael G.; Wood, Tana E.; Yang, Xiaojuan

    2017-01-01

    For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  15. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Deborah A.; Asao, Shinichi; Fisher, Rosie

    For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  16. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    NASA Astrophysics Data System (ADS)

    Clark, Deborah A.; Asao, Shinichi; Fisher, Rosie; Reed, Sasha; Reich, Peter B.; Ryan, Michael G.; Wood, Tana E.; Yang, Xiaojuan

    2017-10-01

    For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  17. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    DOE PAGES

    Clark, Deborah A.; Asao, Shinichi; Fisher, Rosie; ...

    2017-10-23

    For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  18. Anthropogenic organic compounds in source water of selected community water systems that use groundwater, 2002-05

    USGS Publications Warehouse

    Hopple, Jessica A.; Delzer, Gregory C.; Kingsbury, James A.

    2009-01-01

    Source water, defined as groundwater collected from a community water system well prior to water treatment, was sampled from 221 wells during October 2002 to July 2005 and analyzed for 258 anthropogenic organic compounds. Most of these compounds are unregulated in drinking water and include pesticides and pesticide degradates, gasoline hydrocarbons, personal-care and domestic-use products, and solvents. The laboratory analytical methods used in the study have detection levels that commonly are 100 to 1,000 times lower than State and Federal standards and guidelines for protecting water quality. Detections of anthropogenic organic compounds do not necessarily indicate a concern to human health but rather help to identify emerging issues and track changes in occurrence and concentrations over time. Less than one-half (120) of the 258 compounds were detected in at least one source-water sample. Chloroform, in 36 percent of samples, was the most commonly detected of the 12 compounds that were in about 10 percent or more of source-water samples. The herbicides atrazine, metolachlor, prometon, and simazine also were among the commonly detected compounds. The commonly detected degradates of atrazine - deethylatrazine and deisopropylatrazine - as well as degradates of acetochlor and alachlor, generally were detected at concentrations similar to or greater than concentrations of the parent herbicide. The compounds perchloroethene, trichloroethene, 1,1,1-trichloroethane, methyl tert-butyl ether, and cis-1,2-dichloroethene also were detected commonly. The most commonly detected compounds in source-water samples generally were among those detected commonly across the country and reported in previous studies by the U.S. Geological Survey's National Water-Quality Assessment Program. Relatively few compounds were detected at concentrations greater than human-health benchmarks, and 84 percent of the concentrations were two or more orders of magnitude less than benchmarks. Five compounds (perchloroethene, trichloroethene, 1,2-dibromoethane, acrylonitrile, and dieldrin) were detected at concentrations greater than their human-health benchmark. The human-health benchmarks used for comparison were U.S. Environmental Protection Agency Maximum Contaminant Levels (MCLs) for regulated compounds and Health-Based Screening Levels developed by the U.S. Geological Survey in collaboration with the U.S. Environmental Protection Agency and other agencies for unregulated compounds. About one-half of all detected compounds do not have human-health benchmarks or adequate toxicity information to evaluate results in a human-health context. Ninety-four source-water and finished-water (water that has passed through all the treatment processes but prior to distribution) sites were sampled at selected community water systems during June 2004 to September 2005. Most of the samples were analyzed for compounds that were detected commonly or at relatively high concentrations during the initial source-water sampling. The majority of the finished-water samples represented water blended with water from one or more other wells. Thirty-four samples were from water systems that did not blend water from sampled wells with water from other wells prior to distribution. The comparison of source- and finished-water samples represents an initial assessment of whether compounds present in source water also are present in finished water and is not intended as an evaluation of water-treatment efficacy. 
The treatment used at the majority of the community water systems sampled is disinfection, which, in general, is not designed to remove the compounds monitored in this study. Concentrations of all compounds detected in finished water were less than their human-health benchmarks. Two detections of perchloroethene and one detection of trichloroethene in finished water had concentrations within an order of magnitude of the MCL. Concentrations of disinfection by-products were
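
    A small sketch of the screening logic used above, comparing detected concentrations with their human-health benchmarks and flagging how far below the benchmark they fall, is given below; the concentrations and benchmark values are hypothetical and are not taken from the study.

```python
import numpy as np

def benchmark_quotients(conc_ug_L, benchmark_ug_L):
    """Ratio of detected concentration to its human-health benchmark (MCL or HBSL)."""
    return np.asarray(conc_ug_L, dtype=float) / np.asarray(benchmark_ug_L, dtype=float)

# Hypothetical detections paired with benchmarks (µg/L); values are illustrative only.
conc = [0.02, 0.5, 6.0, 0.003]
bench = [5.0, 5.0, 5.0, 70.0]
q = benchmark_quotients(conc, bench)
print(q > 1.0)     # exceeds its benchmark?
print(q <= 0.01)   # two or more orders of magnitude below the benchmark?
```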

  19. There is no one-size-fits-all product for InSAR; on the inclusion of contextual information for geodetically-proof InSAR data products

    NASA Astrophysics Data System (ADS)

    Hanssen, R. F.

    2017-12-01

    In traditional geodesy, one is interested in determining the coordinates, or the change in coordinates, of predefined benchmarks. These benchmarks are clearly identifiable and are especially established to be representative of the signal of interest. This holds, e.g., for leveling benchmarks, for triangulation/trilateration benchmarks, and for GNSS benchmarks. The desired coordinates are not identical to the basic measurements, and need to be estimated using robust estimation procedures, where the stochastic nature of the measurements is taken into account. For InSAR, however, the 'benchmarks' are not predefined. In fact, usually we do not know where an effective benchmark is located, even though we can determine its dynamic behavior quite well. This poses several significant problems. First, we cannot describe the quality of the measurements, unless we already know the dynamic behavior of the benchmark. Second, if we do not know the quality of the measurements, we cannot compute the quality of the estimated parameters. Third, rather harsh assumptions need to be made to produce a result. These (usually implicit) assumptions differ between processing operators and the software used, and are severely affected by the amount of available data. Fourth, the 'relative' nature of the final estimates is usually not explicitly stated, which is particularly problematic for non-expert users. Finally, whereas conventional geodesy applies rigorous testing to check for measurement or model errors, this is hardly ever done in InSAR geodesy. These problems make it all but impossible to provide a precise, reliable, repeatable, and 'universal' InSAR product or service. Here we evaluate the requirements and challenges involved in moving towards InSAR as a geodetically-proof product. In particular this involves the explicit inclusion of contextual information, as well as InSAR procedures, standards and a technical protocol, supported by the International Association of Geodesy and the international scientific community.

  20. A wind energy benchmark for ABL modelling of a diurnal cycle with a nocturnal low-level jet: GABLS3 revisited

    DOE PAGES

    Rodrigo, J. Sanz; Churchfield, M.; Kosović, B.

    2016-10-03

    The third GEWEX Atmospheric Boundary Layer Studies (GABLS3) model intercomparison study, around the Cabauw met tower in the Netherlands, is revisited as a benchmark for wind energy atmospheric boundary layer (ABL) models. The case was originally developed by the boundary layer meteorology community, interested in analysing the performance of single-column and large-eddy simulation atmospheric models dealing with a diurnal cycle leading to the development of a nocturnal low-level jet. The case addresses fundamental questions related to the definition of the large-scale forcing, the interaction of the ABL with the surface and the evaluation of model results with observations. The characterization of mesoscale forcing for asynchronous microscale modelling of the ABL is discussed based on momentum budget analysis of WRF simulations. Then a single-column model is used to demonstrate the added value of incorporating different forcing mechanisms in microscale models. The simulations are evaluated in terms of wind energy quantities of interest.

  1. Benchmarking and audit of breast units improves quality of care

    PubMed Central

    van Dam, P.A.; Verkinderen, L.; Hauspy, J.; Vermeulen, P.; Dirix, L.; Huizing, M.; Altintas, S.; Papadimitriou, K.; Peeters, M.; Tjalma, W.

    2013-01-01

    Quality Indicators (QIs) are measures of health care quality that make use of readily available hospital inpatient administrative data. Assessment of quality of care can be performed at different levels: national, regional, on a hospital basis or on an individual basis. It can be a mandatory or voluntary system. In all cases, development of an adequate database for data extraction and feedback of the findings is of paramount importance. In the present paper we performed a Medline search on "QIs and breast cancer" and "benchmarking and breast cancer care", and we have added some data from personal experience. The current data clearly show that the use of QIs for breast cancer care, regular internal and external audit of performance of breast units, and benchmarking are effective in improving quality of care. Adherence to guidelines improves markedly (particularly regarding adjuvant treatment) and there are emerging data showing that this results in a better outcome. As quality assurance benefits patients, it will be a challenge for the medical and hospital community to develop affordable quality control systems that do not lead to excessive workload. PMID:24753926

  2. 24 CFR 990.185 - Utilities expense level: Incentives for energy conservation/rate reduction.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) Utility benchmarking. HUD will pursue benchmarking utility consumption at the project level as part of the... convene a meeting with representation of appropriate stakeholders to review utility benchmarking options so that HUD may determine whether or how to implement utility benchmarking to be effective in FY 2011...

  3. 24 CFR 990.185 - Utilities expense level: Incentives for energy conservation/rate reduction.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) Utility benchmarking. HUD will pursue benchmarking utility consumption at the project level as part of the... convene a meeting with representation of appropriate stakeholders to review utility benchmarking options so that HUD may determine whether or how to implement utility benchmarking to be effective in FY 2011...

  4. Markov Dynamics as a Zooming Lens for Multiscale Community Detection: Non Clique-Like Communities and the Field-of-View Limit

    PubMed Central

    Schaub, Michael T.; Delvenne, Jean-Charles; Yaliraki, Sophia N.; Barahona, Mauricio

    2012-01-01

    In recent years, there has been a surge of interest in community detection algorithms for complex networks. A variety of computational heuristics, some with a long history, have been proposed for the identification of communities or, alternatively, of good graph partitions. In most cases, the algorithms maximize a particular objective function, thereby finding the ‘right’ split into communities. Although a thorough comparison of algorithms is still lacking, there has been an effort to design benchmarks, i.e., random graph models with known community structure against which algorithms can be evaluated. However, popular community detection methods and benchmarks normally assume an implicit notion of community based on clique-like subgraphs, a form of community structure that is not always characteristic of real networks. Specifically, networks that emerge from geometric constraints can have natural non clique-like substructures with large effective diameters, which can be interpreted as long-range communities. In this work, we show that long-range communities escape detection by popular methods, which are blinded by a restricted ‘field-of-view’ limit, an intrinsic upper scale on the communities they can detect. The field-of-view limit means that long-range communities tend to be overpartitioned. We show how by adopting a dynamical perspective towards community detection [1], [2], in which the evolution of a Markov process on the graph is used as a zooming lens over the structure of the network at all scales, one can detect both clique- and non clique-like communities without imposing an upper scale to the detection. Consequently, the performance of algorithms on inherently low-diameter, clique-like benchmarks may not always be indicative of equally good results in real networks with local, sparser connectivity. We illustrate our ideas with constructive examples and through the analysis of real-world networks from imaging, protein structures and the power grid, where a multiscale structure of non clique-like communities is revealed. PMID:22384178
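
    The dynamical quantity behind this zooming-lens view is often written as the Markov stability of a partition, r(t) = trace[H^T(Π P(t) − ππ^T)H]; the sketch below evaluates it for a toy two-clique graph under the continuous-time random-walk convention. The toy network and the function are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.linalg import expm

def markov_stability(A, membership, t):
    """Markov stability of a hard partition at Markov time t (continuous-time random walk).

    A: symmetric adjacency matrix; membership: community label per node.
    Larger t acts as a zooming lens toward coarser partitions.
    """
    d = A.sum(axis=1)
    pi = d / d.sum()                               # stationary distribution of the walk
    L_rw = np.eye(len(d)) - A / d[:, None]         # random-walk Laplacian I - D^{-1}A
    P_t = expm(-t * L_rw)                          # transition kernel at Markov time t
    labels = np.unique(membership)
    H = (np.asarray(membership)[:, None] == labels[None, :]).astype(float)  # indicator matrix
    M = np.diag(pi) @ P_t - np.outer(pi, pi)       # clustered autocovariance
    return float(np.trace(H.T @ M @ H))

# Hypothetical two-clique toy network evaluated at two Markov times.
A = np.zeros((6, 6))
A[:3, :3] = 1
A[3:, 3:] = 1
np.fill_diagonal(A, 0)
A[2, 3] = A[3, 2] = 1                              # single bridge between the cliques
part = [0, 0, 0, 1, 1, 1]
print(markov_stability(A, part, t=0.5), markov_stability(A, part, t=5.0))
```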

  5. Surveys and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  6. Benchmarks: WICHE Region 2012

    ERIC Educational Resources Information Center

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  7. Best practices in Web-based courses: generational differences across undergraduate and graduate nursing students.

    PubMed

    Billings, Diane M; Skiba, Diane J; Connors, Helen R

    2005-01-01

    The demand for online courses is greatly increasing across all levels of the curriculum in higher education. With this change in teaching and learning strategies comes the need for quality control to determine best practices in online learning communities. This study examines the differences in student perceptions of the use of technology, educational practices, and outcomes between undergraduate and graduate students enrolled in Web-based courses. The multisite study uses the benchmarking process and the Flashlight Program Evaluating Educational Uses of the Web in Nursing survey instrument to study best practices and examine generational differences between the two groups of students. The outcomes of the study establish benchmarks for quality improvement in online learning. The results support the educational model for online learning and postulates about generational differences for future study.

  8. A comprehensive benchmarking study of protocols and sequencing platforms for 16S rRNA community profiling

    DOE PAGES

    Podar, Mircea; Shakya, Migun; D'Amore, Rosalinda; ...

    2016-01-14

    In the last 5 years, the rapid pace of innovations and improvements in sequencing technologies has completely changed the landscape of metagenomic and metagenetic experiments. Therefore, it is critical to benchmark the various methodologies for interrogating the composition of microbial communities, so that we can assess their strengths and limitations. Here, the most common phylogenetic marker for microbial community diversity studies is the 16S ribosomal RNA gene and in the last 10 years the field has moved from sequencing a small number of amplicons and samples to more complex studies where thousands of samples and multiple different gene regions are interrogated.

  9. 7 CFR 1709.107 - Eligible communities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... each community in the grant's proposed target area exceeds one or more of the RUS high energy cost community benchmarks to be eligible for assistance under this program. The smallest area that may be designated as a target area is a 2000 Census block. (c) The target community may include an extremely high...

  10. Emergency Management Benchmarking Study: Lessons for Increasing Supply Chain Resilience

    DTIC Science & Technology

    2010-03-01

    studied if public-private partnerships could improve community resilience. In essence they concluded that in order to achieve community resilience, public...improve community resilience in times of disaster. International Journal of Physical Distribution & Logistics Management, Vol. 39, No. 5, pp. 343

  11. A phylogenetic transform enhances analysis of compositional microbiota data.

    PubMed

    Silverman, Justin D; Washburne, Alex D; Mukherjee, Sayan; David, Lawrence A

    2017-02-15

    Surveys of microbial communities (microbiota), typically measured as relative abundance of species, have illustrated the importance of these communities in human health and disease. Yet, statistical artifacts commonly plague the analysis of relative abundance data. Here, we introduce the PhILR transform, which incorporates microbial evolutionary models with the isometric log-ratio transform to allow off-the-shelf statistical tools to be safely applied to microbiota surveys. We demonstrate that analyses of community-level structure can be applied to PhILR transformed data with performance on benchmarks rivaling or surpassing standard tools. Additionally, by decomposing distance in the PhILR transformed space, we identified neighboring clades that may have adapted to distinct human body sites. Decomposing variance revealed that covariation of bacterial clades within human body sites increases with phylogenetic relatedness. Together, these findings illustrate how the PhILR transform combines statistical and phylogenetic models to overcome compositional data challenges and enable evolutionary insights relevant to microbial communities.
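
    As a minimal sketch of the compositional machinery the PhILR transform builds on, the example below applies a generic isometric log-ratio transform with a standard (Helmert-type) basis to a four-part composition; PhILR itself derives its basis from a phylogenetic tree, so this only illustrates the ILR step, with hypothetical abundances.

```python
import numpy as np

def ilr(x):
    """Isometric log-ratio transform with a standard (Helmert-type) orthonormal basis.

    PhILR instead builds the basis from a phylogenetic tree ("balances"); this generic
    basis only illustrates the compositional-to-Euclidean mapping the paper relies on.
    """
    x = np.asarray(x, dtype=float)
    x = x / x.sum()                             # close the composition
    D = x.size
    z = np.empty(D - 1)
    for i in range(1, D):
        gm = np.exp(np.mean(np.log(x[:i])))     # geometric mean of the first i parts
        z[i - 1] = np.sqrt(i / (i + 1)) * np.log(gm / x[i])
    return z

# Hypothetical relative abundances of four taxa; standard tools (PCA, regression, etc.)
# can then be applied to the transformed coordinates.
print(ilr([0.50, 0.25, 0.15, 0.10]))
```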

  12. Benchmarking Alumni Relations in Community Colleges: Findings from a 2012 CASE Survey. CASE White Paper

    ERIC Educational Resources Information Center

    Paradise, Andrew; Heaton, Paul

    2013-01-01

    In 2011, CASE founded the Center for Community College Advancement to provide training and resources to help community colleges build and sustain effective fundraising, alumni relations and communications and marketing programs. This white paper summarizes the results of a groundbreaking survey on alumni relations programs at community colleges…

  13. Board oversight of community benefit: an ethical imperative.

    PubMed

    Magill, Gerard; Prybil, Lawrence D

    2011-03-01

    Board oversight of community benefit responsibility in tax-exempt organizations in the nonprofit health care sector is attracting considerable attention. Scrutiny by the IRS and other official bodies has led to stricter measures of compliance with the community benefit standard. But stricter compliance does not sufficiently engage the underlying ethical imperative for boards to provide effective oversight--an imperative that recent research suggests has not been sufficiently honored. This analysis considers why there is a distinctively ethical imperative for board oversight, the organizational nature of the imperative involved, and practical ways to fulfill its obligations. We adopt an organizational ethics paradigm to illuminate the constituent components of the ethical imperative and to clarify emerging benchmarks as flexible guidelines. As these emerging benchmarks enhance board oversight of community benefit they also can shed light on what it means to be a virtuous organization.

  14. Optimized selection of benchmark test parameters for image watermark algorithms based on Taguchi methods and corresponding influence on design decisions for real-world applications

    NASA Astrophysics Data System (ADS)

    Rodriguez, Tony F.; Cushman, David A.

    2003-06-01

    With the growing commercialization of watermarking techniques in various application scenarios it has become increasingly important to quantify the performance of watermarking products. Quantifying the relative merits of various products is not only essential in enabling further adoption of the technology by society as a whole, but will also drive the industry to develop testing plans/methodologies to ensure quality and minimize cost (to both vendors and customers). While the research community understands the theoretical need for a publicly available benchmarking system to quantify performance, there has been less discussion on the practical application of these systems. By providing a standard set of acceptance criteria, benchmarking systems can dramatically increase the quality of a particular watermarking solution, validating product performance when they are used efficiently and frequently during the design process. In this paper we describe how to leverage specific design of experiments techniques to increase the quality of a watermarking scheme, to be used with the benchmark tools being developed by the Ad-Hoc Watermark Verification Group. A Taguchi loss function is proposed for an application, and orthogonal arrays are used to isolate optimal levels for a multi-factor experimental situation. Finally, the results are generalized to a population of cover works and validated through an exhaustive test.
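
    A worked illustration of the nominal-is-best Taguchi loss mentioned above, L(y) = k(y − m)^2, is sketched below; the target score, tolerance limit, and choice of k are hypothetical and only show how the loss penalizes deviation from a target quality level.

```python
def taguchi_loss(y, target, k):
    """Nominal-is-best Taguchi loss: cost grows quadratically with deviation from target."""
    return k * (y - target) ** 2

# Hypothetical example: target detection-robustness score of 0.95; k chosen so that
# falling to 0.80 (the assumed acceptance limit) costs 1.0 unit: k = 1.0 / (0.95 - 0.80)**2.
k = 1.0 / (0.95 - 0.80) ** 2
for y in (0.95, 0.90, 0.80, 0.70):
    print(y, round(taguchi_loss(y, 0.95, k), 3))
```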

  15. Quality assurance, benchmarking, assessment and mutual international recognition of qualifications.

    PubMed

    Hobson, R; Rolland, S; Rotgans, J; Schoonheim-Klein, M; Best, H; Chomyszyn-Gajewska, M; Dymock, D; Essop, R; Hupp, J; Kundzina, R; Love, R; Memon, R A; Moola, M; Neumann, L; Ozden, N; Roth, K; Samwel, P; Villavicencio, J; Wright, P; Harzer, W

    2008-02-01

    The aim of this report is to provide guidance to assist in the international convergence of quality assurance, benchmarking and assessment systems to improve dental education. Proposals are developed for mutual recognition of qualifications, to aid international movement and exchange of staff and students including and supporting developing countries. Quality assurance is the responsibility of all staff involved in dental education and involves three levels: internal, institutional and external. Benchmarking information provides a subject framework. Benchmarks are useful for a variety of purposes including design and validation of programmes, examination and review; they can also strengthen the accreditation process undertaken by professional and statutory bodies. Benchmark information can be used by institutions as part of their programme approval process, to set degree standards. The standards should be developed by the dental academic community through formal groups of experts. Assessment outcomes of student learning are a measure of the quality of the learning programme. The goal of an effective assessment strategy should be that it provides the starting point for students to adopt a positive approach to effective and competent practice, reflective and lifelong learning. All assessment methods should be evidence based or based upon research. Mutual recognition of professional qualifications means that qualifications gained in one country (the home country) are recognized in another country (the host country). It empowers movement of skilled workers, which can help resolve skills shortages within participating countries. These proposals are not intended to be either exhaustive or prescriptive; they are purely for guidance and derived from the identification of what is perceived to be 'best practice'.

  16. 75 FR 28643 - Pine Island, Matlacha Pass, Island Bay, and Caloosahatchee National Wildlife Refuges, Lee and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-21

    ... would work with the partners to establish benchmarks to record sea level rise and beach profiles and... partners to establish benchmarks to record sea level rise and beach profiles and shoreline changes, which... establish benchmarks to record sea level rise and beach profiles and shoreline changes, which could...

  17. Revenues and Expenditures: Peer and Benchmark Comparisons, University of Hawai'i, Fiscal Year 1994-95.

    ERIC Educational Resources Information Center

    Hawaii Univ., Honolulu.

    The University of Hawaii's (UH) three university and seven community college campuses are compared with benchmark and peer group institutions with regard to selected financial measures. The primary data sources for this report were the Integrated Postsecondary Education Data System (IPEDS) Finance Survey, Fiscal Year 1994-95. Tables show data on…

  18. Searching for Elements of Evidence-based Practices in Children’s Usual Care and Examining their Impact

    PubMed Central

    Garland, Ann F.; Accurso, Erin C.; Haine-Schlagel, Rachel; Brookman-Frazee, Lauren; Roesch, Scott; Zhang, Jin Jin

    2014-01-01

    Objective Most of the knowledge generated to bridge the research - practice gap has been derived from experimental studies implementing specific treatment models. Alternatively, this study uses observational methods to generate knowledge about community-based treatment processes and outcomes. Aims are to (1) describe outcome trajectories for children with disruptive behavior problems (DBPs), and (2) test how observed delivery of a benchmark set of practice elements common in evidence-based (EB) treatments may be associated with outcome change, while accounting for potential confounding variables. Method Participants included 190 children ages 4–13 with DBPs and their caregivers, plus 85 psychotherapists, recruited from six clinics. All treatment sessions were video-taped and a random sample of four sessions in the first four months of treatment was reliably coded for intensity on 27 practice elements (benchmark set and others). Three outcomes (child symptom severity, parent discipline, and family functioning) were assessed by parent report at intake, four, and eight months. Data were collected on several potential covariates including child, parent, therapist, and service use characteristics. Multi-level modeling was used to assess relationships between observed practice and outcome slopes, while accounting for covariates. Results Children and families demonstrated improvements in all three outcomes, but few significant associations between treatment processes and outcome change were identified. Families receiving greater intensity on the benchmark practice elements did demonstrate greater improvement in the parental discipline outcome. Conclusion Observed changes in outcomes for families in community care were generally not strongly associated with the type or amount of treatment received. PMID:24555882
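
    As a hedged sketch of the growth-model logic described above (does the intensity of benchmark practice elements predict the slope of outcome change?), the example below fits a random-intercept mixed model to synthetic data with statsmodels; the variable names, data, and effect sizes are fabricated for illustration and do not reflect the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustration: does the intensity of benchmark practice elements
# predict the slope of symptom change over time?
rng = np.random.default_rng(1)
rows = []
for child in range(60):
    intensity = rng.uniform(0, 1)          # observed benchmark-practice intensity (made up)
    intercept = rng.normal(50, 5)          # child-specific starting severity
    for wave in range(3):                  # intake, 4 months, 8 months
        severity = intercept - (2 + 4 * intensity) * wave + rng.normal(0, 3)
        rows.append({"child": child, "wave": wave,
                     "intensity": intensity, "severity": severity})
data = pd.DataFrame(rows)

# Random intercept per child; the wave-by-intensity term tests whether higher
# intensity is associated with a steeper improvement slope.
model = smf.mixedlm("severity ~ wave * intensity", data, groups=data["child"])
print(model.fit().summary())
```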

  19. Clinical Trial Assessment of Infrastructure Matrix Tool to Improve the Quality of Research Conduct in the Community.

    PubMed

    Dimond, Eileen P; Zon, Robin T; Weiner, Bryan J; St Germain, Diane; Denicoff, Andrea M; Dempsey, Kandie; Carrigan, Angela C; Teal, Randall W; Good, Marjorie J; McCaskill-Stevens, Worta; Grubbs, Stephen S; Dimond, Eileen P; Zon, Robin T; Weiner, Bryan J; St Germain, Diane; Denicoff, Andrea M; Dempsey, Kandie; Carrigan, Angela C; Teal, Randall W; Good, Marjorie J; McCaskill-Stevens, Worta; Grubbs, Stephen S

    2016-01-01

    Several publications have described minimum standards and exemplary attributes for clinical trial sites to improve research quality. The National Cancer Institute (NCI) Community Cancer Centers Program (NCCCP) developed the clinical trial Best Practice Matrix tool to facilitate research program improvements through annual self-assessments and benchmarking. The tool identified nine attributes, each with three progressive levels, to score clinical trial infrastructural elements from less to more exemplary. The NCCCP sites correlated tool use with research program improvements, and the NCI pursued a formative evaluation to refine the interpretability and measurability of the tool. From 2011 to 2013, 21 NCCCP sites self-assessed their programs with the tool annually. During 2013 to 2014, NCI collaborators conducted a five-step formative evaluation of the matrix tool. Sites reported significant increases in level-three scores across the original nine attributes combined (P<.001). Two specific attributes exhibited significant change: clinical trial portfolio diversity and management (P=.0228) and clinical trial communication (P=.0281). The formative evaluation led to revisions, including renaming the Best Practice Matrix as the Clinical Trial Assessment of Infrastructure Matrix (CT AIM), expanding infrastructural attributes from nine to 11, clarifying metrics, and developing a new scoring tool. Broad community input, cognitive interviews, and pilot testing improved the usability and functionality of the tool. Research programs are encouraged to use the CT AIM to assess and improve site infrastructure. Experience within the NCCCP suggests that the CT AIM is useful for improving quality, benchmarking research performance, reporting progress, and communicating program needs with institutional leaders. The tool model may also be useful in disciplines beyond oncology.

  20. How to achieve and prove performance improvement - 15 years of experience in German wastewater benchmarking.

    PubMed

    Bertzbach, F; Franz, T; Möller, K

    2012-01-01

    This paper shows the results of performance improvement, which have been achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A huge number of changes in operational practice and also in achieved annual savings can be shown, induced in particular by benchmarking at process level. Investigation of this question produces some general findings for the inclusion of performance improvement in a benchmarking project and for the communication of its results. Thus, we elaborate on the concept of benchmarking at both utility and process level, which is still a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking it should be made quite clear that this outcome depends, on one hand, on a well conducted benchmarking programme and, on the other, on the individual situation within each participating utility.

  1. Formalization of the classification pattern: survey of classification modeling in information systems engineering.

    PubMed

    Partridge, Chris; de Cesare, Sergio; Mitchell, Andrew; Odell, James

    2018-01-01

    Formalization is becoming more common in all stages of the development of information systems, as a better understanding of its benefits emerges. Classification systems are ubiquitous, no more so than in domain modeling. The classification pattern that underlies these systems provides a good case study of the move toward formalization in part because it illustrates some of the barriers to formalization, including the formal complexity of the pattern and the ontological issues surrounding the "one and the many." Powersets are a way of characterizing the (complex) formal structure of the classification pattern, and their formalization has been extensively studied in mathematics since Cantor's work in the late nineteenth century. One can use this formalization to develop a useful benchmark. There are various communities within information systems engineering (ISE) that are gradually working toward a formalization of the classification pattern. However, for most of these communities, this work is incomplete, in that they have not yet arrived at a solution with the expressiveness of the powerset benchmark. This contrasts with the early smooth adoption of powerset by other information systems communities to, for example, formalize relations. One way of understanding the varying rates of adoption is recognizing that the different communities have different historical baggage. Many conceptual modeling communities emerged from work done on database design, and this creates hurdles to the adoption of the high level of expressiveness of powersets. Another relevant factor is that these communities also often feel, particularly in the case of domain modeling, a responsibility to explain the semantics of whatever formal structures they adopt. This paper aims to make sense of the formalization of the classification pattern in ISE and surveys its history through the literature, starting from the relevant theoretical works of the mathematical literature and gradually shifting focus to the ISE literature. The literature survey follows the evolution of ISE's understanding of how to formalize the classification pattern. The various proposals are assessed using the classical example of classification; the Linnaean taxonomy formalized using powersets as a benchmark for formal expressiveness. The broad conclusion of the survey is that (1) the ISE community is currently in the early stages of the process of understanding how to formalize the classification pattern, particularly in the requirements for expressiveness exemplified by powersets, and (2) that there is an opportunity to intervene and speed up the process of adoption by clarifying this expressiveness. Given the central place that the classification pattern has in domain modeling, this intervention has the potential to lead to significant improvements.
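
    As a small illustration of the powerset benchmark for expressiveness discussed above, the sketch below treats first-order classes as subsets of individuals and a higher-order class as a set of classes; the Linnaean-style names are hypothetical and the encoding is only one of several possible formalizations.

```python
from itertools import chain, combinations

def powerset(xs):
    """All subsets of xs: the formal space in which classes (as extensions) live."""
    xs = list(xs)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

# Hypothetical Linnaean-style fragment: classes as subsets of individuals,
# and a higher-order class (a rank) as a set of classes, i.e. an element of P(P(individuals)).
individuals = {"rex", "fido", "tom"}
dog = frozenset({"rex", "fido"})        # a first-order class: subset of individuals
cat = frozenset({"tom"})
species = frozenset({dog, cat})         # a second-order class: set of classes
print(dog in powerset(individuals))     # True: dog's extension is an element of P(individuals)
print("rex" in dog, dog in species)     # classification as membership at two levels
```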

  2. Results of the 2012 CASE Compensation Survey: Community College Respondents

    ERIC Educational Resources Information Center

    Paradise, Andrew

    2012-01-01

    The Council for Advancement and Support of Education (CASE) has conducted compensations surveys to track trends in the profession and to help members benchmark salaries since 1982. The 2012 Community College Compensation Report summarizes the results of CASE's most recent compensation survey just for community college respondents. This report…

  3. Reflections on "Real-World" Community Psychology

    ERIC Educational Resources Information Center

    Wolff, Tom; Swift, Carolyn

    2008-01-01

    Reflections on the history of real-world (applied) community psychologists trace their participation in the field's official guild, the Society for Community Research and Action (SCRA), beginning with the Swampscott Conference in 1965 through the current date. Four benchmarks are examined. The issues these real-world psychologists bring to the…

  4. Benchmarking on Tsunami Currents with ComMIT

    NASA Astrophysics Data System (ADS)

    Sharghi vand, N.; Kanoglu, U.

    2015-12-01

    There were no standards for the validation and verification of tsunami numerical models before the 2004 Indian Ocean tsunami. Even so, a number of numerical models had been used for inundation mapping efforts, evaluation of critical structures, etc., without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which will be used in the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental and field benchmark problems aimed at estimating maximum runup that are accepted widely by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents on February 9-10, 2015 at Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems concentrate on the validation and verification of tsunami numerical models for tsunami currents. Three of the benchmark problems were: current measurements of the 2011 Japan tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), which is a user-friendly interface to the validated and verified Method of Splitting Tsunami (MOST) (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316) model and is developed by NCTR. The modeling results are compared with the required benchmark data, providing good agreement, and the results are discussed. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe)

  5. High-resolution phylogenetic microbial community profiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singer, Esther; Bushnell, Brian; Coleman-Derr, Devin

    Over the past decade, high-throughput short-read 16S rRNA gene amplicon sequencing has eclipsed clone-dependent long-read Sanger sequencing for microbial community profiling. The transition to new technologies has provided more quantitative information at the expense of taxonomic resolution with implications for inferring metabolic traits in various ecosystems. We applied single-molecule real-time sequencing for microbial community profiling, generating full-length 16S rRNA gene sequences at high throughput, which we propose to name PhyloTags. We benchmarked and validated this approach using a defined microbial community. When further applied to samples from the water column of meromictic Sakinaw Lake, we show that while community structures at the phylum level are comparable between PhyloTags and Illumina V4 16S rRNA gene sequences (iTags), variance increases with community complexity at greater water depths. PhyloTags moreover allowed less ambiguous classification. Last, a platform-independent comparison of PhyloTags and in silico generated partial 16S rRNA gene sequences demonstrated significant differences in community structure and phylogenetic resolution across multiple taxonomic levels, including a severe underestimation in the abundance of specific microbial genera involved in nitrogen and methane cycling across the Lake's water column. Thus, PhyloTags provide a reliable adjunct or alternative to cost-effective iTags, enabling more accurate phylogenetic resolution of microbial communities and predictions on their metabolic potential.

  6. High-resolution phylogenetic microbial community profiling

    DOE PAGES

    Singer, Esther; Bushnell, Brian; Coleman-Derr, Devin; ...

    2016-02-09

    Over the past decade, high-throughput short-read 16S rRNA gene amplicon sequencing has eclipsed clone-dependent long-read Sanger sequencing for microbial community profiling. The transition to new technologies has provided more quantitative information at the expense of taxonomic resolution with implications for inferring metabolic traits in various ecosystems. We applied single-molecule real-time sequencing for microbial community profiling, generating full-length 16S rRNA gene sequences at high throughput, which we propose to name PhyloTags. We benchmarked and validated this approach using a defined microbial community. When further applied to samples from the water column of meromictic Sakinaw Lake, we show that while community structures at the phylum level are comparable between PhyloTags and Illumina V4 16S rRNA gene sequences (iTags), variance increases with community complexity at greater water depths. PhyloTags moreover allowed less ambiguous classification. Last, a platform-independent comparison of PhyloTags and in silico generated partial 16S rRNA gene sequences demonstrated significant differences in community structure and phylogenetic resolution across multiple taxonomic levels, including a severe underestimation in the abundance of specific microbial genera involved in nitrogen and methane cycling across the Lake's water column. Thus, PhyloTags provide a reliable adjunct or alternative to cost-effective iTags, enabling more accurate phylogenetic resolution of microbial communities and predictions on their metabolic potential.

  7. Benchmarks: The Development of a New Approach to Student Evaluation.

    ERIC Educational Resources Information Center

    Larter, Sylvia

    The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…

  8. PDS: A Performance Database Server

    DOE PAGES

    Berry, Michael W.; Dongarra, Jack J.; Larose, Brian H.; ...

    1994-01-01

    The process of gathering, archiving, and distributing computer benchmark data is a cumbersome task usually performed by computer users and vendors with little coordination. Most important, there is no publicly available central depository of performance data for all ranges of machines from personal computers to supercomputers. We present an Internet-accessible performance database server (PDS) that can be used to extract current benchmark data and literature. As an extension to the X-Windows-based user interface (Xnetlib) to the Netlib archival system, PDS provides an on-line catalog of public domain computer benchmarks such as the LINPACK benchmark, Perfect benchmarks, and the NAS parallel benchmarks. PDS does not reformat or present the benchmark data in any way that conflicts with the original methodology of any particular benchmark; it is thereby devoid of any subjective interpretations of machine performance. We believe that all branches (research laboratories, academia, and industry) of the general computing community can use this facility to archive performance metrics and make them readily available to the public. PDS can provide a more manageable approach to the development and support of a large dynamic database of published performance metrics.

  9. Design and development of a community carbon cycle benchmarking system for CMIP5 models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Randerson, J. T.

    2013-12-01

    Benchmarking has been widely used to assess the ability of atmosphere, ocean, sea ice, and land surface models to capture the spatial and temporal variability of observations during the historical period. For the carbon cycle and terrestrial ecosystems, the design and development of an open-source community platform has been an important goal as part of the International Land Model Benchmarking (ILAMB) project. Here we designed and developed a software system that enables the user to specify the models, benchmarks, and scoring systems so that results can be tailored to specific model intercomparison projects. We used this system to evaluate the performance of CMIP5 Earth system models (ESMs). Our scoring system used information from four different aspects of climate, including the climatological mean spatial pattern of gridded surface variables, seasonal cycle dynamics, the amplitude of interannual variability, and long-term decadal trends. We used this system to evaluate burned area, global biomass stocks, net ecosystem exchange, gross primary production, and ecosystem respiration from CMIP5 historical simulations. Initial results indicated that the multi-model mean often performed better than many of the individual models for most of the observational constraints.
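
    The scoring approach described above (combining skill for the climatological mean spatial pattern, seasonal cycle, interannual variability, and long-term trend) can be sketched as a weighted aggregation of per-aspect scores. The skill formula, the equal weights, and the numbers below are illustrative assumptions and are not the ILAMB implementation.

```python
import numpy as np

def relative_error_score(model, obs):
    """Map a normalized RMSE into a (0, 1] skill score; one simple choice,
    not necessarily the formula used by the ILAMB software."""
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    norm = np.std(obs) if np.std(obs) > 0 else 1.0
    return float(np.exp(-rmse / norm))

def overall_score(aspect_scores, weights=None):
    """Weighted mean over aspect scores (mean state, seasonal cycle,
    interannual variability, long-term trend)."""
    w = weights or {k: 1.0 for k in aspect_scores}
    total = sum(w[k] for k in aspect_scores)
    return sum(w[k] * s for k, s in aspect_scores.items()) / total

# Hypothetical monthly anomalies for one variable, one model, one benchmark dataset.
rng = np.random.default_rng(0)
obs = rng.normal(size=120)
model = obs + rng.normal(scale=0.5, size=120)

scores = {"mean_state": relative_error_score(model, obs),
          "seasonal_cycle": 0.64, "interannual_variability": 0.55, "trend": 0.60}
print(f"overall benchmark score: {overall_score(scores):.2f}")
```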

  10. Benchmark levels for the consumptive water footprint of crop production for different environmental conditions: a case study for winter wheat in China

    NASA Astrophysics Data System (ADS)

    Zhuo, La; Mekonnen, Mesfin M.; Hoekstra, Arjen Y.

    2016-11-01

    Meeting growing food demands while simultaneously shrinking the water footprint (WF) of agricultural production is one of the greatest societal challenges. Benchmarks for the WF of crop production can serve as a reference and be helpful in setting WF reduction targets. The consumptive WF of crops, the consumption of rainwater stored in the soil (green WF), and the consumption of irrigation water (blue WF) over the crop growing period varies spatially and temporally depending on environmental factors like climate and soil. The study explores which environmental factors should be distinguished when determining benchmark levels for the consumptive WF of crops. Hereto we determine benchmark levels for the consumptive WF of winter wheat production in China for all separate years in the period 1961-2008, for rain-fed vs. irrigated croplands, for wet vs. dry years, for warm vs. cold years, for four different soil classes, and for two different climate zones. We simulate consumptive WFs of winter wheat production with the crop water productivity model AquaCrop at a 5 by 5 arcmin resolution, accounting for water stress only. The results show that (i) benchmark levels determined for individual years for the country as a whole remain within a range of ±20 % around long-term mean levels over 1961-2008, (ii) the WF benchmarks for irrigated winter wheat are 8-10 % larger than those for rain-fed winter wheat, (iii) WF benchmarks for wet years are 1-3 % smaller than for dry years, (iv) WF benchmarks for warm years are 7-8 % smaller than for cold years, (v) WF benchmarks differ by about 10-12 % across different soil texture classes, and (vi) WF benchmarks for the humid zone are 26-31 % smaller than for the arid zone, which has relatively higher reference evapotranspiration in general and lower yields in rain-fed fields. We conclude that when determining benchmark levels for the consumptive WF of a crop, it is useful to primarily distinguish between different climate zones. If actual consumptive WFs of winter wheat throughout China were reduced to the benchmark levels set by the best 25 % of Chinese winter wheat production (1224 m3 t-1 for arid areas and 841 m3 t-1 for humid areas), the water saving in an average year would be 53 % of the current water consumption at winter wheat fields in China. The majority of the yield increase and associated improvement in water productivity can be achieved in southern China.
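
    A hedged sketch of the benchmark construction mentioned above: the benchmark is taken as the consumptive WF level not exceeded by the best 25% of production, and the potential saving is what would be gained if all production were brought to that level. The grid-cell values below are hypothetical, and reading "best 25% of production" as a production-weighted quantile is our assumption, not code from the study.

```python
import numpy as np

def wf_benchmark(wf_per_tonne, production_tonnes, best_fraction=0.25):
    """WF level (m3/t) not exceeded by the best `best_fraction` of production,
    i.e. a production-weighted quantile of the consumptive WF."""
    order = np.argsort(wf_per_tonne)
    wf_sorted = np.asarray(wf_per_tonne, dtype=float)[order]
    prod_sorted = np.asarray(production_tonnes, dtype=float)[order]
    cum = np.cumsum(prod_sorted) / prod_sorted.sum()
    return float(wf_sorted[np.searchsorted(cum, best_fraction)])

# Hypothetical grid-cell values (m3 per tonne, and tonnes produced).
wf = np.array([700, 850, 900, 1100, 1400, 1800], dtype=float)
prod = np.array([50, 80, 60, 40, 30, 20], dtype=float)

bench = wf_benchmark(wf, prod)
current_use = float(np.sum(wf * prod))
capped_use = float(np.sum(np.minimum(wf, bench) * prod))
print(f"benchmark = {bench:.0f} m3/t, potential saving = {1 - capped_use / current_use:.0%}")
```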

  11. Thermo-hydro-mechanical-chemical processes in fractured-porous media: Benchmarks and examples

    NASA Astrophysics Data System (ADS)

    Kolditz, O.; Shao, H.; Görke, U.; Kalbacher, T.; Bauer, S.; McDermott, C. I.; Wang, W.

    2012-12-01

    The book comprises an assembly of benchmarks and examples for porous media mechanics collected over the last twenty years. Analysis of thermo-hydro-mechanical-chemical (THMC) processes is essential to many applications in environmental engineering, such as geological waste deposition, geothermal energy utilisation, carbon capture and storage, water resources management, hydrology, and even climate change. In order to assess the feasibility as well as the safety of geotechnical applications, process-based modelling is the only tool able to put numbers on, i.e. to quantify, future scenarios. This places a heavy responsibility on the reliability of computational tools. Benchmarking is an appropriate methodology to verify the quality of modelling tools based on best practices. Moreover, benchmarking and code comparison foster community efforts. The benchmark book is part of the OpenGeoSys initiative - an open source project to share knowledge and experience in environmental analysis and scientific computation.

  12. A benchmark study of the sea-level equation in GIA modelling

    NASA Astrophysics Data System (ADS)

    Martinec, Zdenek; Klemann, Volker; van der Wal, Wouter; Riva, Riccardo; Spada, Giorgio; Simon, Karen; Blank, Bas; Sun, Yu; Melini, Daniele; James, Tom; Bradley, Sarah

    2017-04-01

    The sea-level load in glacial isostatic adjustment (GIA) is described by the so-called sea-level equation (SLE), which represents the mass redistribution between ice sheets and oceans on a deforming earth. Various levels of complexity of SLE have been proposed in the past, ranging from a simple mean global sea level (the so-called eustatic sea level) to the load with a deforming ocean bottom, migrating coastlines and a changing shape of the geoid. Several approaches to solve the SLE have been derived, from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, there has been no systematic intercomparison amongst the solvers through which the methods may be validated. The goal of this paper is to present a series of benchmark experiments designed for testing and comparing numerical implementations of the SLE. Our approach starts with simple load cases even though the benchmark will not result in GIA predictions for a realistic loading scenario. In the longer term we aim for a benchmark with a realistic loading scenario, and also for benchmark solutions with rotational feedback. The current benchmark uses an earth model for which Love numbers have been computed and benchmarked in Spada et al. (2011). In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found can often be attributed to the different approximations inherent to the various algorithms. Literature: G. Spada, V. R. Barletta, V. Klemann, R. E. M. Riva, Z. Martinec, P. Gasperini, B. Lund, D. Wolf, L. L. A. Vermeersen, and M. A. King, 2011. A benchmark study for glacial isostatic adjustment codes. Geophys. J. Int. 185: 106-132, doi:10.1111/j.1365-
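
    For reference, the simplest level of complexity mentioned above (the eustatic approximation) spreads the meltwater mass uniformly over the ocean; the full SLE replaces this with a gravitationally self-consistent, spatially varying field on a deforming Earth with migrating coastlines. The block below gives the standard textbook form of the uniform term; the notation is ours, not the paper's.

```latex
% Spatially uniform ("eustatic") sea-level change from a change in grounded
% ice mass \Delta M_{\mathrm{ice}}, with \rho_w the ocean-water density and
% A_{\mathrm{ocean}} the ocean surface area:
\[
  \Delta S_{\mathrm{eus}}(t) \;=\; -\,\frac{\Delta M_{\mathrm{ice}}(t)}{\rho_{w}\, A_{\mathrm{ocean}}}.
\]
```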

  13. Benchmarking Alumni Relations in Community Colleges: Findings from a 2015 CASE Survey. CASE White Paper

    ERIC Educational Resources Information Center

    Paradise, Andrew

    2016-01-01

    Building on the inaugural survey conducted three years prior, the 2015 CASE Community College Alumni Relations survey collected additional insightful data on staffing, structure, communications, engagement, and fundraising. This white paper features key data on alumni relations programs at community colleges across the United States. The paper…

  14. Benchmarks for target tracking

    NASA Astrophysics Data System (ADS)

    Dunham, Darin T.; West, Philip D.

    2011-09-01

    The term benchmark originates from the chiseled horizontal marks that surveyors made, into which an angle-iron could be placed to bracket ("bench") a leveling rod, thus ensuring that the leveling rod can be repositioned in exactly the same place in the future. A benchmark in computer terms is the result of running a computer program, or a set of programs, in order to assess the relative performance of an object by running a number of standard tests and trials against it. This paper will discuss the history of simulation benchmarks that are being used by multiple branches of the military and agencies of the US government. These benchmarks range from missile defense applications to chemical biological situations. Typically, a benchmark is used with Monte Carlo runs in order to tease out how algorithms deal with variability and the range of possible inputs. We will also describe problems that can be solved by a benchmark.

  15. A phylogenetic transform enhances analysis of compositional microbiota data

    PubMed Central

    Silverman, Justin D; Washburne, Alex D; Mukherjee, Sayan; David, Lawrence A

    2017-01-01

    Surveys of microbial communities (microbiota), typically measured as relative abundance of species, have illustrated the importance of these communities in human health and disease. Yet, statistical artifacts commonly plague the analysis of relative abundance data. Here, we introduce the PhILR transform, which incorporates microbial evolutionary models with the isometric log-ratio transform to allow off-the-shelf statistical tools to be safely applied to microbiota surveys. We demonstrate that analyses of community-level structure can be applied to PhILR transformed data with performance on benchmarks rivaling or surpassing standard tools. Additionally, by decomposing distance in the PhILR transformed space, we identified neighboring clades that may have adapted to distinct human body sites. Decomposing variance revealed that covariation of bacterial clades within human body sites increases with phylogenetic relatedness. Together, these findings illustrate how the PhILR transform combines statistical and phylogenetic models to overcome compositional data challenges and enable evolutionary insights relevant to microbial communities. DOI: http://dx.doi.org/10.7554/eLife.21887.001 PMID:28198697
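
    PhILR builds on the isometric log-ratio (ILR) transform, in which each coordinate is a "balance" contrasting the geometric means of two groups of parts of a composition. The sketch below shows a plain, unweighted ILR balance with a hand-written binary partition; the phylogeny-derived partition and taxon weighting of PhILR are not reproduced, and the composition is hypothetical.

```python
import numpy as np

def ilr_balance(x, numerator_idx, denominator_idx):
    """One ILR coordinate ("balance") contrasting two groups of parts of a
    composition x (relative abundances summing to 1)."""
    x = np.asarray(x, dtype=float)
    r, s = len(numerator_idx), len(denominator_idx)
    g_num = np.exp(np.mean(np.log(x[numerator_idx])))
    g_den = np.exp(np.mean(np.log(x[denominator_idx])))
    return np.sqrt(r * s / (r + s)) * np.log(g_num / g_den)

# Hypothetical 4-taxon composition; balances follow a simple binary partition
# ({0,1} vs {2,3}, then {0} vs {1}, then {2} vs {3}), standing in for the
# phylogeny-derived partition used by PhILR.
x = np.array([0.4, 0.3, 0.2, 0.1])
b1 = ilr_balance(x, [0, 1], [2, 3])
b2 = ilr_balance(x, [0], [1])
b3 = ilr_balance(x, [2], [3])
print(b1, b2, b3)
```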

  16. The Standardized Faculty Schedule: A New Methodology for Interinstitutional Comparison of Faculty Salaries.

    ERIC Educational Resources Information Center

    Cooper, Ernest C.

    For a number of years, the California community colleges have used data from annual statewide surveys conducted by the Kern Community College District and the California Community College Trustees (CCCT) for comparative faculty salary information. Both the Kern and CCCT studies rely upon the device of selecting benchmark points (such as the…

  17. Influence of sediment chemistry and sediment toxicity on macroinvertebrate communities across 99 wadable streams of the Midwestern USA

    USGS Publications Warehouse

    Moran, Patrick W.; Nowell, Lisa H.; Kemble, Nile E.; Mahler, Barbara J.; Waite, Ian R.; Van Metre, Peter C.

    2017-01-01

    Simultaneous assessment of sediment chemistry, sediment toxicity, and macroinvertebrate communities can provide multiple lines of evidence when investigating relations between sediment contaminants and ecological degradation. These three measures were evaluated at 99 wadable stream sites across 11 states in the Midwestern United States during the summer of 2013 to assess sediment pollution across a large agricultural landscape. This evaluation considers an extensive suite of sediment chemistry totaling 274 analytes (polycyclic aromatic hydrocarbons, organochlorine compounds, polychlorinated biphenyls, polybrominated diphenyl ethers, trace elements, and current-use pesticides) and a mixture assessment based on the ratios of detected compounds to available effects-based benchmarks. The sediments were tested for toxicity with the amphipod Hyalella azteca (28-d exposure), the midge Chironomus dilutus (10-d), and, at a few sites, with the freshwater mussel Lampsilis siliquoidea (28-d). Sediment concentrations, normalized to organic carbon content, infrequently exceeded benchmarks for aquatic health, which was generally consistent with low rates of observed toxicity. However, the benchmark-based mixture score and the pyrethroid insecticide bifenthrin were significantly related to observed sediment toxicity. The sediment mixture score and bifenthrin were also significant predictors of the upper limits of several univariate measures of the macroinvertebrate community (EPT percent, MMI (Macroinvertebrate Multimetric Index) Score, Ephemeroptera and Trichoptera richness) using quantile regression. Multivariate pattern matching (Mantel-like tests) of macroinvertebrate species per site to identified contaminant metrics and sediment toxicity also indicates that the sediment mixture score and bifenthrin have weak, albeit significant, influence on the observed invertebrate community composition. Together, these three lines of evidence (toxicity tests, univariate metrics, and multivariate community analysis) suggest that elevated contaminant concentrations in sediments, in particular bifenthrin, are limiting macroinvertebrate communities in several of these Midwest streams.
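
    The mixture assessment described above is based on ratios of detected compounds to effects-based benchmarks. A minimal sketch of such a toxic-unit-style index is given below; whether the study sums, averages, or otherwise weights the ratios is not stated here, so the aggregation, analyte names, and concentrations are illustrative assumptions.

```python
def mixture_score(concentrations, benchmarks):
    """Sum of concentration/benchmark ratios over detected analytes that have
    an effects-based benchmark (a toxic-unit-style index; the exact
    aggregation used in the study may differ)."""
    score = 0.0
    for analyte, conc in concentrations.items():
        bench = benchmarks.get(analyte)
        if bench and conc is not None:
            score += conc / bench
    return score

# Hypothetical organic-carbon-normalized sediment concentrations (ug/g OC)
# and benchmarks in the same units.
conc = {"bifenthrin": 0.8, "total_PAH": 120.0, "chlordane": 0.2}
bench = {"bifenthrin": 0.5, "total_PAH": 340.0, "chlordane": 3.2}
print(f"mixture score = {mixture_score(conc, bench):.2f}")
```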

  18. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    PubMed

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  19. Quality in E-Learning--A Conceptual Framework Based on Experiences from Three International Benchmarking Projects

    ERIC Educational Resources Information Center

    Ossiannilsson, E.; Landgren, L.

    2012-01-01

    Between 2008 and 2010, Lund University took part in three international benchmarking projects, "E-xcellence+," the "eLearning Benchmarking Exercise 2009," and the "First Dual-Mode Distance Learning Benchmarking Club." A comparison of these models revealed a rather high level of correspondence. From this finding and…

  20. Standardised Benchmarking in the Quest for Orthologs

    PubMed Central

    Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe

    2016-01-01

    The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  1. Learning Communities: An Untapped Sustainable Competitive Advantage for Higher Education

    ERIC Educational Resources Information Center

    Dawson, Shane; Burnett, Bruce; O' Donohue, Mark

    2006-01-01

    Purpose: This paper demonstrates the need for the higher education sector to develop and implement scaleable, quantitative measures that evaluate community and establish organisational benchmarks in order to guide the development of future practices designed to enhance the student learning experience. Design/methodology/approach: Literature…

  2. Clinical audit of leg ulceration prevalence in a community area: a case study of good practice.

    PubMed

    Hindley, Jenny

    2014-09-01

    This article presents the findings of an audit on venous leg ulceration prevalence in a community area as a framework for discussing the concept and importance of audit as a tool to inform practice and as a means to benchmark care against national or international standards. It is hoped that the discussed audit will practically demonstrate how such procedures can be implemented in practice for those who have not yet undertaken it, as well as highlighting the unexpected extra benefits of this type of qualitative data collection that can often unexpectedly inform practice and influence change. Audit can be used to measure, monitor and disseminate evidence-based practice across community localities, facilitating the identification of learning needs and the instigation of clinical change, thereby prioritising patient needs by ensuring safety through the benchmarking of clinical practice.

  3. National Performance Benchmarks for Modern Screening Digital Mammography: Update from the Breast Cancer Surveillance Consortium.

    PubMed

    Lehman, Constance D; Arao, Robert F; Sprague, Brian L; Lee, Janie M; Buist, Diana S M; Kerlikowske, Karla; Henderson, Louise M; Onega, Tracy; Tosteson, Anna N A; Rauscher, Garth H; Miglioretti, Diana L

    2017-04-01

    Purpose To establish performance benchmarks for modern screening digital mammography and assess performance trends over time in U.S. community practice. Materials and Methods This HIPAA-compliant, institutional review board-approved study measured the performance of digital screening mammography interpreted by 359 radiologists across 95 facilities in six Breast Cancer Surveillance Consortium (BCSC) registries. The study included 1 682 504 digital screening mammograms performed between 2007 and 2013 in 792 808 women. Performance measures were calculated according to the American College of Radiology Breast Imaging Reporting and Data System, 5th edition, and were compared with published benchmarks by the BCSC, the National Mammography Database, and performance recommendations by expert opinion. Benchmarks were derived from the distribution of performance metrics across radiologists and were presented as 50th (median), 10th, 25th, 75th, and 90th percentiles, with graphic presentations using smoothed curves. Results Mean screening performance measures were as follows: abnormal interpretation rate (AIR), 11.6 (95% confidence interval [CI]: 11.5, 11.6); cancers detected per 1000 screens, or cancer detection rate (CDR), 5.1 (95% CI: 5.0, 5.2); sensitivity, 86.9% (95% CI: 86.3%, 87.6%); specificity, 88.9% (95% CI: 88.8%, 88.9%); false-negative rate per 1000 screens, 0.8 (95% CI: 0.7, 0.8); positive predictive value (PPV) 1, 4.4% (95% CI: 4.3%, 4.5%); PPV2, 25.6% (95% CI: 25.1%, 26.1%); PPV3, 28.6% (95% CI: 28.0%, 29.3%); cancers stage 0 or 1, 76.9%; minimal cancers, 57.7%; and node-negative invasive cancers, 79.4%. Recommended CDRs were achieved by 92.1% of radiologists in community practice, and 97.1% achieved recommended ranges for sensitivity. Only 59.0% of radiologists achieved recommended AIRs, and only 63.0% achieved recommended levels of specificity. Conclusion The majority of radiologists in the BCSC surpass cancer detection recommendations for screening mammography; however, AIRs continue to be higher than the recommended rate for almost half of radiologists interpreting screening mammograms. © RSNA, 2016 Online supplemental material is available for this article.
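
    For readers unfamiliar with the audit measures quoted above, the sketch below computes AIR, cancer detection rate, sensitivity, specificity, and PPV1 from basic counts, following the usual BI-RADS-style audit definitions in outline only. The counts are hypothetical and merely chosen to fall near the reported benchmark ranges.

```python
def screening_metrics(n_screens, n_positive_interps, tp, fn, fp, tn):
    """Common screening-mammography audit measures from basic counts.
    tp/fn/fp/tn are with respect to cancer within the follow-up period;
    definitions follow the usual BI-RADS audit conventions in outline only."""
    return {
        "AIR (%)": 100.0 * n_positive_interps / n_screens,
        "CDR per 1000": 1000.0 * tp / n_screens,
        "sensitivity (%)": 100.0 * tp / (tp + fn),
        "specificity (%)": 100.0 * tn / (tn + fp),
        "PPV1 (%)": 100.0 * tp / n_positive_interps,
    }

# Hypothetical counts for 10,000 screens.
print(screening_metrics(n_screens=10_000, n_positive_interps=1_150,
                        tp=50, fn=8, fp=1_100, tn=8_842))
```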

  4. Operationalizing the Rubric: The Effect of Benchmark Selection on the Assessed Quality of Writing.

    ERIC Educational Resources Information Center

    Popp, Sharon E. Osborn; Ryan, Joseph M.; Thompson, Marilyn S.; Behrens, John T.

    The purposes of this study were to investigate the role of benchmark writing samples in direct assessment of writing and to examine the consequences of differential benchmark selection with a common writing rubric. The influences of discourse and grade level were also examined within the context of differential benchmark selection. Raters scored…

  5. Benchmarks for effective primary care-based nursing services for adults with depression: a Delphi study.

    PubMed

    McIlrath, Carole; Keeney, Sinead; McKenna, Hugh; McLaughlin, Derek

    2010-02-01

    This paper is a report of a study conducted to identify and gain consensus on appropriate benchmarks for effective primary care-based nursing services for adults with depression. Worldwide evidence suggests that between 5% and 16% of the population have a diagnosis of depression. Most of their care and treatment takes place in primary care. In recent years, primary care nurses, including community mental health nurses, have become more involved in the identification and management of patients with depression; however, there are no appropriate benchmarks to guide, develop and support their practice. In 2006, a three-round electronic Delphi survey was completed by a United Kingdom multi-professional expert panel (n = 67). Round 1 generated 1216 statements relating to structures (such as training and protocols), processes (such as access and screening) and outcomes (such as patient satisfaction and treatments). Content analysis was used to collapse statements into 140 benchmarks. Seventy-three benchmarks achieved consensus during subsequent rounds. Of these, 45 (61%) were related to structures, 18 (25%) to processes and 10 (14%) to outcomes. Multi-professional primary care staff have similar views about the appropriate benchmarks for care of adults with depression. These benchmarks could serve as a foundation for depression improvement initiatives in primary care and ongoing research into depression management by nurses.

  6. Educating Next Generation Nuclear Criticality Safety Engineers at the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. D. Bess; J. B. Briggs; A. S. Garcia

    2011-09-01

    One of the challenges in educating our next generation of nuclear safety engineers is the limitation of opportunities to receive significant experience or hands-on training prior to graduation. Such training is generally restricted to on-the-job-training before this new engineering workforce can adequately provide assessment of nuclear systems and establish safety guidelines. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) can provide students and young professionals the opportunity to gain experience and enhance critical engineering skills. The ICSBEP and IRPhEP publish annual handbooks that contain evaluations of experiments along with summarized experimental data and peer-reviewed benchmark specifications to support the validation of neutronics codes, nuclear cross-section data, and the validation of reactor designs. Participation in the benchmark process not only benefits those who use these Handbooks within the international community, but provides the individual with opportunities for professional development, networking with an international community of experts, and valuable experience to be used in future employment. Traditionally students have participated in benchmarking activities via internships at national laboratories, universities, or companies involved with the ICSBEP and IRPhEP programs. Additional programs have been developed to facilitate the nuclear education of students while participating in the benchmark projects. These programs include coordination with the Center for Space Nuclear Research (CSNR) Next Degree Program, the Collaboration with the Department of Energy Idaho Operations Office to train nuclear and criticality safety engineers, and student evaluations as the basis for their Master's thesis in nuclear engineering.

  7. The 5 Essentials of Organizational Excellence: Maximizing Schoolwide Student Achievement and Performance.

    ERIC Educational Resources Information Center

    Marazza, Lawrence L.

    This book explores the necessity for building strong relationships among administrators, teachers, parents, and the community by applying what the book calls the five essentials of organizational excellence. The five essentials are planning strategically; benchmarking for excellence; leading collaboratively; engaging the community; and governing…

  8. The Journey toward NADE Accreditation: Investments Reap Benefits

    ERIC Educational Resources Information Center

    Kratz, Stephanie

    2018-01-01

    The author examines the process for applying for National Association for Development Education (NADE) accreditation. The multi-year process began when the English faculty of the community college she works at reviewed data from the National Community College Benchmark Project. The data showed low success rates and poor persistence from…

  9. Visual Arts Performance Standards at Grades 4, 8 and 12 for North Dakota Visual Art Standards and Benchmarks.

    ERIC Educational Resources Information Center

    Shaw-Elgin, Linda; Jackson, Jane; Kurkowski, Bob; Riehl, Lori; Syvertson, Karen; Whitney, Linda

    This document outlines the performance standards for visual arts in North Dakota public schools, grades K-12. Four levels of performance are provided for each benchmark by North Dakota educators for K-4, 5-8, and 9-12 grade levels. Level 4 describes advanced proficiency; Level 3, proficiency; Level 2, partial proficiency; and Level 1, novice. Each…

  10. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone. Using a ROV to place and remove sensors on the benchmarks will significantly reduce the number of sensors required by the community to monitor offshore strain in subduction zones.

  11. A call for benchmarking transposable element annotation methods.

    PubMed

    Hoen, Douglas R; Hickey, Glenn; Bourque, Guillaume; Casacuberta, Josep; Cordaux, Richard; Feschotte, Cédric; Fiston-Lavier, Anna-Sophie; Hua-Van, Aurélie; Hubley, Robert; Kapusta, Aurélie; Lerat, Emmanuelle; Maumus, Florian; Pollock, David D; Quesneville, Hadi; Smit, Arian; Wheeler, Travis J; Bureau, Thomas E; Blanchette, Mathieu

    2015-01-01

    DNA derived from transposable elements (TEs) constitutes large parts of the genomes of complex eukaryotes, with major impacts not only on genomic research but also on how organisms evolve and function. Although a variety of methods and tools have been developed to detect and annotate TEs, there are as yet no standard benchmarks-that is, no standard way to measure or compare their accuracy. This lack of accuracy assessment calls into question conclusions from a wide range of research that depends explicitly or implicitly on TE annotation. In the absence of standard benchmarks, toolmakers are impeded in improving their tools, annotators cannot properly assess which tools might best suit their needs, and downstream researchers cannot judge how accuracy limitations might impact their studies. We therefore propose that the TE research community create and adopt standard TE annotation benchmarks, and we call for other researchers to join the authors in making this long-overdue effort a success.

  12. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  13. Space Weather Action Plan Ionizing Radiation Benchmarks: Phase 1 update and plans for Phase 2

    NASA Astrophysics Data System (ADS)

    Talaat, E. R.; Kozyra, J.; Onsager, T. G.; Posner, A.; Allen, J. E., Jr.; Black, C.; Christian, E. R.; Copeland, K.; Fry, D. J.; Johnston, W. R.; Kanekal, S. G.; Mertens, C. J.; Minow, J. I.; Pierson, J.; Rutledge, R.; Semones, E.; Sibeck, D. G.; St Cyr, O. C.; Xapsos, M.

    2017-12-01

    Changes in the near-Earth radiation environment can affect satellite operations, astronauts in space, commercial space activities, and the radiation environment on aircraft at relevant latitudes or altitudes. Understanding the diverse effects of increased radiation is challenging, but producing ionizing radiation benchmarks will help address these effects. The following areas have been considered in addressing the near-Earth radiation environment: the Earth's trapped radiation belts, the galactic cosmic ray background, and solar energetic-particle events. The radiation benchmarks attempt to account for any change in the near-Earth radiation environment, which, under extreme cases, could present a significant risk to critical infrastructure operations or human health. The goal of these ionizing radiation benchmarks and associated confidence levels will define at least the radiation intensity as a function of time, particle type, and energy for an occurrence frequency of 1 in 100 years and an intensity level at the theoretical maximum for the event. In this paper, we present the benchmarks that address radiation levels at all applicable altitudes and latitudes in the near-Earth environment, the assumptions made and the associated uncertainties, and the next steps planned for updating the benchmarks.

  14. The Health Impact Assessment (HIA) Resource and Tool ...

    EPA Pesticide Factsheets

    Health Impact Assessment (HIA) is a relatively new and rapidly emerging field in the U.S. An inventory of available HIA resources and tools was conducted, with a primary focus on resources developed in the U.S. The resources and tools available to HIA practitioners in the conduct of their work were identified through multiple methods and compiled into a comprehensive list. The compilation includes tools and resources related to the HIA process itself and those that can be used to collect and analyze data, establish a baseline profile, assess potential health impacts, and establish benchmarks and indicators for monitoring and evaluation. These resources include literature and evidence bases, data and statistics, guidelines, benchmarks, decision and economic analysis tools, scientific models, methods, frameworks, indices, mapping, and various data collection tools. Understanding the data, tools, models, methods, and other resources available to perform HIAs will help to advance the HIA community of practice in the U.S., improve the quality and rigor of assessments upon which stakeholder and policy decisions are based, and potentially improve the overall effectiveness of HIA to promote healthy and sustainable communities. The Health Impact Assessment (HIA) Resource and Tool Compilation is a comprehensive list of resources and tools that can be utilized by HIA practitioners with all levels of HIA experience to guide them throughout the HIA process. The HIA Resource

  15. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; VanderWijngaart, Rob F.

    2003-01-01

    We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.
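
    The coarse-grain pattern described above (zones advanced independently within a time step, with boundary values exchanged between steps) can be sketched as follows. Python threads stand in for the MPI/OpenMP reference implementations, and the one-dimensional "zones" and solver stub are purely illustrative, not the NPB Multi-Zone code.

```python
from concurrent.futures import ThreadPoolExecutor

def advance_zone(zone):
    """Advance one zone's solution by a single time step (a stand-in for the
    LU/BT/SP solver applied to that zone)."""
    zone["u"] = [v + 0.1 for v in zone["u"]]
    return zone

def exchange_boundaries(zones):
    """Copy each zone's edge value to its neighbour's halo (1-D toy coupling)."""
    for i, z in enumerate(zones):
        right = zones[(i + 1) % len(zones)]
        z["halo_right"] = right["u"][0]
        right["halo_left"] = z["u"][-1]

# Hypothetical set of four loosely coupled zones.
zones = [{"u": [float(i)] * 8, "halo_left": 0.0, "halo_right": 0.0} for i in range(4)]

with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(10):
        # Coarse-grain parallelism: zones are independent within the step...
        zones = list(pool.map(advance_zone, zones))
        # ...and exchange boundary values only between steps.
        exchange_boundaries(zones)
```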

  16. Vulnerability and Gambling Addiction: Psychosocial Benchmarks and Avenues for Intervention

    ERIC Educational Resources Information Center

    Suissa, Amnon Jacob

    2011-01-01

    Defined by researchers as "a silent epidemic" the gambling phenomenon is a social problem that has a negative impact on individuals, families and communities. Among these effects, there is exasperating evidence of comprised community networks, a deterioration of family and social ties, psychiatric co-morbidity, suicides and more recently,…

  17. Consortial Collaboration and the Creation of an Assessment Instrument for Community-Based Learning

    ERIC Educational Resources Information Center

    Murphy, Margueritte S.; Flowers, Kathleen S.

    2017-01-01

    This article describes the development of the Community-Based Learning (CBL) Scorecard by a grant-funded consortium of liberal arts institutions. The aim of the scorecard was to promote assessment that improves student learning with an instrument that employs a quantitative scale, allowing for benchmarking across institutions. Extensive interviews…

  18. A Discussion on Community Colleges and Global Counterparts Completion Policies

    ERIC Educational Resources Information Center

    Raby, Rosalind Latiner; Friedel, Janice Nahra; Valeau, Edward J.

    2016-01-01

    This article is a comparative study of community colleges and global counterparts at 41 institutions in 25 countries. Policies from each country link completion of a college program to career entry and to advancement opportunities. National and institutional policies are being defined, benchmark data is being collected on goals in the process, and…

  19. Examples of coupled human and environmental systems from the extractive industry and hydropower sector interfaces.

    PubMed

    Castro, Marcia C; Krieger, Gary R; Balge, Marci Z; Tanner, Marcel; Utzinger, Jürg; Whittaker, Maxine; Singer, Burton H

    2016-12-20

    Large-scale corporate projects, particularly those in extractive industries or hydropower development, have a history from early in the twentieth century of creating negative environmental, social, and health impacts on communities proximal to their operations. In many instances, especially for hydropower projects, the forced resettlement of entire communities was a feature in which local cultures and core human rights were severely impacted. These projects triggered an activist opposition that progressively expanded and became influential at both the host community level and with multilateral financial institutions. In parallel to, and spurred by, this activism, a shift occurred in 1969 with the passage of the National Environmental Policy Act in the United States, which required Environmental Impact Assessment (EIA) for certain types of industrial and infrastructure projects. Over the last four decades, there has been a global movement to develop a formal legal/regulatory EIA process for large industrial and infrastructure projects. In addition, social, health, and human rights impact assessments, with associated mitigation plans, were sequentially initiated and have increasingly influenced project design and relations among companies, host governments, and locally impacted communities. Often, beneficial community-level social, economic, and health programs have voluntarily been put in place by companies. These flagship programs can serve as benchmarks for community-corporate-government partnerships in the future. Here, we present examples of such positive phenomena and also focus attention on a myriad of challenges that still lie ahead.

  20. Determination of bench-mark elevations at Bethel Island and vicinity, Contra Costa and San Joaquin counties, California, 1987

    USGS Publications Warehouse

    Blodgett, J.C.; Ikehara, M.E.; McCaffrey, William F.

    1988-01-01

    Elevations of 49 bench marks in the southwestern part of the Sacramento-San Joaquin River Delta were determined during October and November 1987. A total of 58 miles of level lines were run in the vicinity of Bethel Island and the community of Discovery Bay. The datum of these surveys is based on a National Geodetic Survey bench mark T934 situated on bedrock 10.5 mi east of Mount Diablo and near Marsh Creek Reservoir. The accuracy of these levels, based on National Geodetic Survey standards, was of first, second, and third order, depending on the various segments surveyed. Several bench marks were noted as possibly being stable, but most show evidence of instability. (USGS)
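
    Order of accuracy in levelling is conventionally judged by whether a loop or line misclosure stays within a tolerance of the form C times the square root of K (in millimetres, with K the levelled distance in kilometres). The sketch below uses the commonly cited FGCS constants as an assumption; it does not reproduce the standards actually applied in the 1987 survey.

```python
import math

# Commonly cited allowable misclosures (mm per sqrt(km)); treat these constants
# as illustrative rather than as the standards applied in the 1987 survey.
TOLERANCE_MM_PER_SQRT_KM = {
    "first order, class I": 4.0,
    "first order, class II": 5.0,
    "second order, class I": 6.0,
    "second order, class II": 8.0,
    "third order": 12.0,
}

def classify_misclosure(misclosure_mm, line_length_km):
    """Return the orders whose C*sqrt(K) tolerance the observed misclosure meets."""
    limit = math.sqrt(line_length_km)
    return [order for order, c in TOLERANCE_MM_PER_SQRT_KM.items()
            if abs(misclosure_mm) <= c * limit]

# Hypothetical 12 km loop closing with a 9 mm misclosure.
print(classify_misclosure(9.0, 12.0))
```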

  1. Benchmarking forensic mental health organizations.

    PubMed

    Coombs, Tim; Taylor, Monica; Pirkis, Jane

    2011-04-01

    This paper describes the forensic mental health forums that were conducted as part of the National Mental Health Benchmarking Project (NMHBP). These forums encouraged participating organizations to compare their performance on a range of key performance indicators (KPIs) with that of their peers. Four forensic mental health organizations took part in the NMHBP. Representatives from these organizations attended eight benchmarking forums at which they documented their performance against previously agreed KPIs. They also undertook three special projects which explored some of the factors that might explain inter-organizational variation in performance. The inter-organizational range for many of the indicators was substantial. Observing this led participants to conduct the special projects to explore three factors which might help explain the variability - seclusion practices, delivery of community mental health services, and provision of court liaison services. The process of conducting the special projects gave participants insights into the practices and structures employed by their counterparts, and provided them with some important lessons for quality improvement. The forensic mental health benchmarking forums have demonstrated that benchmarking is feasible and likely to be useful in improving service performance and quality.

  2. A Comparative Case Study Analysis of Administrators Perceptions on the Adaptation of Quality and Continuous Improvement Tools to Community Colleges in the State of Michigan

    ERIC Educational Resources Information Center

    Mattis, Ted B.

    2011-01-01

    The purpose of this study was to determine whether community college administrators in the state of Michigan believe that commonly known quality and continuous improvement tools, prevalent in a manufacturing environment, can be adapted to a community college model. The tools, specifically Six Sigma, benchmarking and process mapping have played a…

  3. Canada's Composite Learning Index: A path towards learning communities

    NASA Astrophysics Data System (ADS)

    Cappon, Paul; Laughlin, Jarrett

    2013-09-01

    In the development of learning cities/communities, benchmarking progress is a key element. Not only does it permit cities/communities to assess their current strengths and weaknesses, it also engenders a dialogue within and between cities/communities on the means of enhancing learning conditions. Benchmarking thereby is a potentially motivational tool, energising further progress. In Canada, the Canadian Council on Learning created the world's first Composite Learning Index (CLI), the purpose of which is to measure the conditions of learning nationally, regionally and locally. Cities/communities in Canada have utilised the CLI Simulator, an online tool provided by the Canadian Council on Learning, to gauge the change in overall learning conditions which may be expected depending on which particular indicator is emphasised. In this way, the CLI has proved to be both a dynamic and a locally relevant tool for improvement, moreover a strong motivational factor in the development of learning cities/communities. After presenting the main features of the CLI, the authors of this paper sum up the lessons learned during its first 5 years (2006-2010) of existence, also with a view to its transferability to other regions. Indeed, the CLI model was already adopted in Europe by the German Bertelsmann foundation in 2010 and has the potential to be useful in many other countries as well.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, red-tailed hawk, and osprey) (scientific names for both the mammalian and avian species are presented in Appendix B). [In this document, NOAEL refers to both dose (mg contaminant per kg animal body weight per day) and concentration (mg contaminant per kg of food or L of drinking water)]. The 20 wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at U.S. Department of Energy (DOE) waste sites. The NOAEL-based benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species; LOAEL-based benchmarks represent threshold levels at which adverse effects are likely to become evident. These benchmarks consider contaminant exposure through oral ingestion of contaminated media only. Exposure through inhalation and/or direct dermal exposure is not considered in this report.
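
    The report's benchmarks translate a NOAEL (or LOAEL) dose into a concentration in food or water at which a receptor's daily intake equals that dose. A minimal sketch of that conversion is shown below; the body weight, intake rate, and NOAEL are hypothetical, and the test-species-to-wildlife extrapolation steps used in the report (allometric scaling, uncertainty factors) are omitted.

```python
def benchmark_concentration(noael_mg_per_kg_day, body_weight_kg, intake_rate_per_day):
    """Medium concentration (mg per kg food or per L water) at which daily
    intake equals the NOAEL dose: C = NOAEL * BW / IR. Test-species-to-wildlife
    extrapolation (allometric scaling, uncertainty factors) is omitted here."""
    return noael_mg_per_kg_day * body_weight_kg / intake_rate_per_day

# Hypothetical example: a 0.35 kg receptor eating 0.06 kg food/day,
# NOAEL of 1.0 mg/kg-bw/day for some contaminant.
print(f"food benchmark = {benchmark_concentration(1.0, 0.35, 0.06):.1f} mg/kg")
```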

  5. Assessing equity in the geographical distribution of community pharmacies in South Africa in preparation for a national health insurance scheme.

    PubMed

    Ward, Kim; Sanders, David; Leng, Henry; Pollock, Allyson M

    2014-07-01

    To investigate equity in the geographical distribution of community pharmacies in South Africa and assess whether regulatory reforms have furthered such equity. Data on community pharmacies from the national department of health and the South African pharmacy council were used to analyse the change in community pharmacy ownership and density (number per 10,000 residents) between 1994 and 2012 in all nine provinces and 15 selected districts. In addition, the density of public clinics, alone and with community pharmacies, was calculated and compared with a national benchmark of one clinic per 10,000 residents. Interviews were conducted with nine national experts from the pharmacy sector. Community pharmacies increased in number by 13% between 1994 and 2012--less than the 25% population growth. In 2012, community pharmacy density was higher in urban provinces and was eight times higher in the least deprived districts than in the most deprived ones. Maldistribution persisted despite the growth of corporate community pharmacies. In 2012, only two provinces met the 1 per 10,000 benchmark, although all provinces achieved it when community pharmacies and clinics were combined. Experts expressed concerns that a lack of rural incentives, inappropriate licensing criteria and a shortage of pharmacy workers could undermine access to pharmaceutical services, especially in rural areas. To reduce inequity in the distribution of pharmaceutical services, new policies and legislation are needed to increase the staffing and presence of pharmacies.

  6. IMAGESEER - IMAGEs for Education and Research

    NASA Technical Reports Server (NTRS)

    Le Moigne, Jacqueline; Grubb, Thomas; Milner, Barbara

    2012-01-01

    IMAGESEER is a new Web portal that brings easy access to NASA image data for non-NASA researchers, educators, and students. The IMAGESEER Web site and database are specifically designed to be utilized by the university community, to enable teaching image processing (IP) techniques on NASA data, as well as to provide reference benchmark data to validate new IP algorithms. Along with the data and a Web user interface front-end, basic knowledge of the application domains, benchmark information, and specific NASA IP challenges (or case studies) are provided.

  7. The relationships among work stress, strain and self-reported errors in UK community pharmacy.

    PubMed

    Johnson, S J; O'Connor, E M; Jacobs, S; Hassell, K; Ashcroft, D M

    2014-01-01

    Changes in the UK community pharmacy profession including new contractual frameworks, expansion of services, and increasing levels of workload have prompted concerns about rising levels of workplace stress and overload. This has implications for pharmacist health and well-being and the occurrence of errors that pose a risk to patient safety. Despite these concerns being voiced in the profession, few studies have explored work stress in the community pharmacy context. To investigate work-related stress among UK community pharmacists and to explore its relationships with pharmacists' psychological and physical well-being, and the occurrence of self-reported dispensing errors and detection of prescribing errors. A cross-sectional postal survey of a random sample of practicing community pharmacists (n = 903) used ASSET (A Shortened Stress Evaluation Tool) and questions relating to self-reported involvement in errors. Stress data were compared to general working population norms, and regressed on well-being and self-reported errors. Analysis of the data revealed that pharmacists reported significantly higher levels of workplace stressors than the general working population, with concerns about work-life balance, the nature of the job, and work relationships being the most influential on health and well-being. Despite this, pharmacists were not found to report worse health than the general working population. Self-reported error involvement was linked to both high dispensing volume and being troubled by perceived overload (dispensing errors), and resources and communication (detection of prescribing errors). This study contributes to the literature by benchmarking community pharmacists' health and well-being, and investigating sources of stress using a quantitative approach. A further important contribution to the literature is the identification of a quantitative link between high workload and self-reported dispensing errors. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Technical Report: Installed Cost Benchmarks and Deployment Barriers for

    Science.gov Websites

    Researchers from NREL published a report, "Installed Cost Benchmarks and Deployment Barriers for Residential Solar Photovoltaics with Energy Storage: Q1 2016," that provides detailed component and system-level cost breakdowns for

  9. BENCHMARK DOSES FOR CHEMICAL MIXTURES: EVALUATION OF A MIXTURE OF 18 PHAHS.

    EPA Science Inventory

    Benchmark doses (BMDs), defined as doses of a substance that are expected to result in a pre-specified level of "benchmark" response (BMR), have been used for quantifying the risk associated with exposure to environmental hazards. The lower confidence limit of the BMD is used as...
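
    As a worked illustration of the BMD idea, the sketch below solves for the dose at which a simple exponential dose-response model departs from background by a specified relative benchmark response (here 10%). The model choice and slope are assumptions, and the lower confidence limit (BMDL) would normally come from profile likelihood or bootstrap, which is not shown.

```python
import math

def bmd_exponential(slope, bmr_fraction):
    """Dose at which the mean response under f(d) = a * exp(slope * d)
    changes by a relative fraction `bmr_fraction` of background (the
    background level a cancels for a relative BMR):
    a * exp(slope * BMD) = a * (1 + bmr_fraction)."""
    return math.log(1.0 + bmr_fraction) / slope

# Hypothetical fitted slope and a 10% benchmark response.
print(f"BMD(10%) = {bmd_exponential(slope=0.02, bmr_fraction=0.10):.1f} dose units")
```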

  10. COPRED: prediction of fold, GO molecular function and functional residues at the domain level.

    PubMed

    López, Daniel; Pazos, Florencio

    2013-07-15

    Only recently the first resources devoted to the functional annotation of proteins at the domain level started to appear. The next step is to develop specific methodologies for predicting function at the domain level based on these resources, and to implement them in web servers to be used by the community. In this work, we present COPRED, a web server for the concomitant prediction of fold, molecular function and functional sites at the domain level, based on a methodology for domain molecular function prediction and a resource of domain functional annotations previously developed and benchmarked. COPRED can be freely accessed at http://csbg.cnb.csic.es/copred. The interface works in all standard web browsers. WebGL (natively supported by most browsers) is required for the in-line preview and manipulation of protein 3D structures. The website includes a detailed help section and usage examples. pazos@cnb.csic.es.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munro, J.F.; Kristal, J.; Thompson, G.

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors in developing the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff, the ones closest to the work, must take ownership of the studies. This avoids the "check the box" mentality associated with some third-party studies. This workshop will provide participants with a basic understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  12. More allopurinol is needed to get gout patients < 0.36 mmol/l: a gout audit in the form of a before-after trial.

    PubMed

    Arroll, Bruce; Bennett, Merran; Dalbeth, Nicola; Hettiarachchi, Dilanka; Ben, Cribben; Shelling, Ginnie

    2009-12-01

    To establish a benchmark for gout control using the proportion of patients with serum uric acid (SUA) < 0.36 mmol/L, assess patients' understanding of their preventive medication, and trial a mail and phone intervention to improve gout control. Patients clinically diagnosed with gout and their baseline SUAs were identified in two South Auckland practices. A mail and phone intervention aimed at improving the control of gout was introduced. Intervention #1 took place in one practice over three months. Intervention #2 occurred in the other practice four to 16 months following baseline. There was no significant change in SUA from intervention #1 after three months. The second intervention by mail and phone resulted in improvement in SUA levels, with a greater proportion of patients with SUA < 0.36 mmol/L, and the difference in means was statistically significant (p = 0.039, two-tailed paired t-test). Benchmarking for usual care was established at 38-43% of patients with SUA < 0.36 mmol/L. It was possible to increase this from 38% to 50%. Issues relating to gout identified included lack of understanding of the need for long-term allopurinol, and diagnosis and management for patients for whom English is not their first language. 1. Community workers who speak Pacific languages may assist GPs in communicating with non-English-speaking patients. 2. Alternative diagnoses should be considered in symptomatic patients with prolonged normouricaemia. 3. GPs should gradually introduce allopurinol after acute gout attacks, emphasising the importance of prophylaxis. 4. A campaign to inform patients about the benefits of allopurinol should be considered. 5. A simple one-keystroke audit is needed for gout audit and benchmarking. 6. GP guidelines for gout diagnosis and management should be available.
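
    For readers unfamiliar with the analysis quoted above, the following is a minimal sketch of a two-tailed paired t-test on serum uric acid together with the proportion of patients below the 0.36 mmol/L benchmark. The SUA values are invented for illustration and are not the audit's data.

    # Hedged sketch of a before-after comparison of the kind described above.
    import numpy as np
    from scipy import stats

    sua_before = np.array([0.45, 0.41, 0.52, 0.38, 0.47, 0.36, 0.50, 0.43])  # mmol/L (hypothetical)
    sua_after  = np.array([0.39, 0.35, 0.47, 0.33, 0.40, 0.34, 0.44, 0.37])  # mmol/L (hypothetical)

    t_stat, p_value = stats.ttest_rel(sua_before, sua_after)  # two-tailed by default
    prop_before = np.mean(sua_before < 0.36)
    prop_after  = np.mean(sua_after  < 0.36)

    print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
    print(f"proportion < 0.36 mmol/L: {prop_before:.0%} before vs {prop_after:.0%} after")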

  13. Benchmark matrix and guide: Part II.

    PubMed

    1991-01-01

    In the last issue of the Journal of Quality Assurance (September/October 1991, Volume 13, Number 5, pp. 14-19), the benchmark matrix developed by Headquarters Air Force Logistics Command was published. Five horizontal levels on the matrix delineate progress in TQM: business as usual, initiation, implementation, expansion, and integration. The six vertical categories that are critical to the success of TQM are leadership, structure, training, recognition, process improvement, and customer focus. In this issue, "Benchmark Matrix and Guide: Part II" will show specifically how to apply the categories of leadership, structure, and training to the benchmark matrix progress levels. At the intersection of each category and level, specific behavior objectives are listed with supporting behaviors and guidelines. Some categories will have objectives that are relatively easy to accomplish, allowing quick progress from one level to the next. Other categories will take considerable time and effort to complete. In the next issue, Part III of this series will focus on recognition, process improvement, and customer focus.

  14. Benchmarking in pathology: development of an activity-based costing model.

    PubMed

    Burnett, Leslie; Wilson, Roger; Pfeffer, Sally; Lowry, John

    2012-12-01

    Benchmarking in Pathology (BiP) allows pathology laboratories to determine the unit cost of all laboratory tests and procedures, and also provides organisational productivity indices allowing comparisons of performance with other BiP participants. We describe 14 years of progressive enhancement to a BiP program, including the implementation of 'avoidable costs' as the accounting basis for allocation of costs rather than previous approaches using 'total costs'. A hierarchical tree-structured activity-based costing model distributes 'avoidable costs' attributable to the pathology activities component of a pathology laboratory operation. The hierarchical tree model permits costs to be allocated across multiple laboratory sites and organisational structures. This has enabled benchmarking on a number of levels, including test profiles and non-testing related workload activities. The development of methods for dealing with variable cost inputs, allocation of indirect costs using imputation techniques, panels of tests, and blood-bank record keeping, have been successfully integrated into the costing model. A variety of laboratory management reports are produced, including the 'cost per test' of each pathology 'test' output. Benchmarking comparisons may be undertaken at any and all of the 'cost per test' and 'cost per Benchmarking Complexity Unit' level, 'discipline/department' (sub-specialty) level, or overall laboratory/site and organisational levels. We have completed development of a national BiP program. An activity-based costing methodology based on avoidable costs overcomes many problems of previous benchmarking studies based on total costs. The use of benchmarking complexity adjustment permits correction for varying test-mix and diagnostic complexity between laboratories. Use of iterative communication strategies with program participants can overcome many obstacles and lead to innovations.
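
    The following is a minimal sketch of the general idea behind a hierarchical, tree-structured activity-based costing allocation: avoidable costs at higher organisational levels are pushed down the tree in proportion to allocation weights and, at the leaf activities, divided by test volumes to give a cost per test. The tree structure, weights, and dollar figures are hypothetical; this is not the BiP model itself.

    # Hedged sketch of a tree-structured activity-based costing allocation.
    cost_tree = {
        "name": "laboratory", "cost": 120000.0, "weight": 1.0,
        "children": [
            {"name": "chemistry", "cost": 80000.0, "weight": 0.7,
             "children": [
                 {"name": "glucose", "cost": 15000.0, "weight": 0.6, "tests": 40000},
                 {"name": "lipids",  "cost": 25000.0, "weight": 0.4, "tests": 15000},
             ]},
            {"name": "haematology", "cost": 50000.0, "weight": 0.3,
             "children": [
                 {"name": "full_blood_count", "cost": 30000.0, "weight": 1.0, "tests": 30000},
             ]},
        ],
    }

    def cost_per_test(node, inherited=0.0, out=None):
        """Push avoidable costs down the tree; at leaf activities, divide by test volume."""
        if out is None:
            out = {}
        total = node["cost"] + inherited
        children = node.get("children")
        if children is None:                      # leaf activity
            out[node["name"]] = total / node["tests"]
        else:
            for child in children:
                cost_per_test(child, total * child["weight"], out)
        return out

    print(cost_per_test(cost_tree))   # hypothetical cost per test for each leaf activity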

  15. MGmapper: Reference based mapping and taxonomy annotation of metagenomics sequence reads

    PubMed Central

    Lukjancenko, Oksana; Thomsen, Martin Christen Frølund; Maddalena Sperotto, Maria; Lund, Ole; Møller Aarestrup, Frank; Sicheritz-Pontén, Thomas

    2017-01-01

    An increasing number of species and gene identification studies rely on the use of next generation sequence analysis of either single isolate or metagenomics samples. Several methods are available to perform taxonomic annotations and a previous metagenomics benchmark study has shown that a vast number of false positive species annotations are a problem unless thresholds or post-processing are applied to differentiate between correct and false annotations. MGmapper is a package to process raw next generation sequence data and perform reference based sequence assignment, followed by a post-processing analysis to produce reliable taxonomy annotation at species and strain level resolution. An in-vitro bacterial mock community sample comprising 8 genera, 11 species and 12 strains was previously used to benchmark metagenomics classification methods. After applying a post-processing filter, we obtained 100% correct taxonomy assignments at species and genus level. A sensitivity and precision of 75% were obtained for strain-level annotations. A comparison between MGmapper and Kraken at species level shows that MGmapper assigns taxonomy using 84.8% of the sequence reads, compared to 70.5% for Kraken; both methods identified all species with no false positives. Extensive read count statistics are provided in plain text and Excel sheets for both rejected and accepted taxonomy annotations. The use of custom databases is possible for the command-line version of MGmapper, and the complete pipeline is freely available as a Bitbucket package (https://bitbucket.org/genomicepidemiology/mgmapper). A web version (https://cge.cbs.dtu.dk/services/MGmapper) provides the basic functionality for analysis of small fastq datasets. PMID:28467460

  16. MGmapper: Reference based mapping and taxonomy annotation of metagenomics sequence reads.

    PubMed

    Petersen, Thomas Nordahl; Lukjancenko, Oksana; Thomsen, Martin Christen Frølund; Maddalena Sperotto, Maria; Lund, Ole; Møller Aarestrup, Frank; Sicheritz-Pontén, Thomas

    2017-01-01

    An increasing number of species and gene identification studies rely on the use of next generation sequence analysis of either single isolate or metagenomics samples. Several methods are available to perform taxonomic annotations and a previous metagenomics benchmark study has shown that a vast number of false positive species annotations are a problem unless thresholds or post-processing are applied to differentiate between correct and false annotations. MGmapper is a package to process raw next generation sequence data and perform reference based sequence assignment, followed by a post-processing analysis to produce reliable taxonomy annotation at species and strain level resolution. An in-vitro bacterial mock community sample comprising 8 genera, 11 species and 12 strains was previously used to benchmark metagenomics classification methods. After applying a post-processing filter, we obtained 100% correct taxonomy assignments at species and genus level. A sensitivity and precision of 75% were obtained for strain-level annotations. A comparison between MGmapper and Kraken at species level shows that MGmapper assigns taxonomy using 84.8% of the sequence reads, compared to 70.5% for Kraken; both methods identified all species with no false positives. Extensive read count statistics are provided in plain text and Excel sheets for both rejected and accepted taxonomy annotations. The use of custom databases is possible for the command-line version of MGmapper, and the complete pipeline is freely available as a Bitbucket package (https://bitbucket.org/genomicepidemiology/mgmapper). A web version (https://cge.cbs.dtu.dk/services/MGmapper) provides the basic functionality for analysis of small fastq datasets.
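
    As a rough illustration of how such mock-community benchmarks are scored, the sketch below filters species calls by a minimum read count and computes sensitivity and precision against the known composition. The species names, read counts, and threshold are hypothetical; this is not MGmapper's actual post-processing filter.

    # Hedged sketch: scoring species-level taxonomy calls against a known mock community.
    truth = {"Escherichia coli", "Staphylococcus aureus", "Bacillus subtilis"}

    predicted_counts = {                 # predicted species -> assigned read count (hypothetical)
        "Escherichia coli": 52000,
        "Staphylococcus aureus": 31000,
        "Bacillus subtilis": 9000,
        "Shigella flexneri": 40,         # likely a spurious, low-abundance call
    }

    min_reads = 100                      # simple illustrative post-processing threshold
    predicted = {sp for sp, n in predicted_counts.items() if n >= min_reads}

    true_pos  = len(predicted & truth)
    false_pos = len(predicted - truth)
    false_neg = len(truth - predicted)

    sensitivity = true_pos / (true_pos + false_neg)
    precision   = true_pos / (true_pos + false_pos)
    print(f"sensitivity = {sensitivity:.2f}, precision = {precision:.2f}")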

  17. Calibrating coseismic coastal land-level changes during the 2014 Iquique (Mw=8.2) earthquake (northern Chile) with leveling, GPS and intertidal biota.

    PubMed

    Jaramillo, Eduardo; Melnick, Daniel; Baez, Juan Carlos; Montecino, Henry; Lagos, Nelson A; Acuña, Emilio; Manzano, Mario; Camus, Patricio A

    2017-01-01

    The April 1st 2014 Iquique earthquake (MW 8.1) occurred along the northern Chile margin where the Nazca plate is subducted below the South American continent. The last great megathrust earthquake here, in 1877 (Mw ~8.8), opened a seismic gap, which was only partly closed by the 2014 earthquake. Prior to the earthquake, in 2013, and shortly after it, we compared data from leveled benchmarks, deployed campaign GPS instruments, continuous GPS stations and estimated sea levels using the upper vertical level of rocky shore benthic organisms including algae, barnacles, and mussels. Land-level changes estimated from mean elevations of benchmarks indicate subsidence along a ~100-km stretch of coast, ranging from 3 to 9 cm at Corazones (18°30'S) to between 30 and 50 cm at Pisagua (19°30'S). About 15 cm of uplift was measured along the southern part of the rupture at Chanavaya (20°50'S). Land-level changes obtained from benchmarks and campaign GPS were similar at most sites (mean difference 3.7±3.2 cm). Higher differences, however, were found between benchmarks and continuous GPS (mean difference 8.5±3.6 cm), possibly because sites were not collocated and were separated by several kilometers. Subsidence estimated from the upper limits of intertidal fauna at Pisagua ranged between 40 and 60 cm, in general agreement with benchmarks and GPS. At Chanavaya, the magnitude and sense of displacement of the upper marine limit was variable across species, possibly due to species-dependent differences in ecology. Among the studied species, measurements on lithothamnioid calcareous algae most closely matched those made with benchmarks and GPS. When properly calibrated, rocky shore benthic species may be used to accurately measure land-level changes along coasts affected by subduction earthquakes. Our calibration of those methods will improve their accuracy when applied to coasts lacking pre-earthquake data and in estimating deformation during pre-instrumental earthquakes.

  18. Planktonic food web structure at a coastal time-series site: I. Partitioning of microbial abundances and carbon biomass

    NASA Astrophysics Data System (ADS)

    Caron, David A.; Connell, Paige E.; Schaffner, Rebecca A.; Schnetzer, Astrid; Fuhrman, Jed A.; Countway, Peter D.; Kim, Diane Y.

    2017-03-01

    Biogeochemistry in marine plankton communities is strongly influenced by the activities of microbial species. Understanding the composition and dynamics of these assemblages is essential for modeling emergent community-level processes, yet few studies have examined all of the biological assemblages present in the plankton, and benchmark data of this sort from time-series studies are rare. Abundance and biomass of the entire microbial assemblage and mesozooplankton (>200 μm) were determined vertically, monthly and seasonally over a 3-year period at a coastal time-series station in the San Pedro Basin off the southwestern coast of the USA. All compartments of the planktonic community were enumerated (viruses in the femtoplankton size range [0.02-0.2 μm], bacteria + archaea and cyanobacteria in the picoplankton size range [0.2-2.0 μm], phototrophic and heterotrophic protists in the nanoplanktonic [2-20 μm] and microplanktonic [20-200 μm] size ranges, and mesozooplankton [>200 μm]). Carbon biomass of each category was estimated using standard conversion factors. Plankton abundances varied over seven orders of magnitude across all categories, and total carbon biomass averaged approximately 60 μg C l-1 in surface waters of the 890 m water column over the study period. Bacteria + archaea comprised the single largest component of biomass (>1/3 of the total), with the sum of phototrophic protistan biomass making up a similar proportion. Temporal variability at this subtropical station was not dramatic. Monthly depth-specific and depth-integrated biomass varied 2-fold at the station, while seasonal variances were generally <50%. This study provides benchmark information for investigating long-term environmental forcing on the composition and dynamics of the microbes that dominate food web structure and function at this coastal observatory.

  19. Benchmark Problems for Spacecraft Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Burns, Richard D.; Folta, David C.

    2003-01-01

    To provide high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions.

  20. Planning estimates for the provision of core mental health services in Queensland 2007 to 2017.

    PubMed

    Harris, Meredith G; Buckingham, William J; Pirkis, Jane; Groves, Aaron; Whiteford, Harvey

    2012-10-01

    To derive planning estimates for the provision of public mental health services in Queensland 2007-2017. We used a five-step approach that involved: (i) estimating the prevalence and severity of mental disorders in Queensland, and the number of people at each level of severity treated by health services; (ii) benchmarking the level and mix of specialised mental health services in Queensland against national data; (iii) examining 5-year trends in Queensland public sector mental health service utilisation; (iv) reviewing Australian and international planning benchmarks; and (v) setting resource targets based on the results of the preceding four steps. Best available evidence was used where possible, supplemented by value judgements as required. Recommended resource targets for inpatient service were: 20 acute beds per 100,000 population, consistent with national average service provision but 13% above Queensland provision in 2005; and 10 non-acute beds per 100,000, 65% below Queensland levels in 2005. Growth in service provision was recommended for all other components. Adult residential rehabilitation service targets were 10 clinical 24-hour staffed beds per 100,000, and 18 non-clinical beds per 100,000. Supported accommodation targets were 35 beds per 100,000 in supervised hostels and 35 places per 100,000 in supported public housing. A direct care clinical workforce of 70 FTE per 100,000 for ambulatory care services was recommended. Fifteen per cent of total mental health funding was recommended for community support services provided by non-government organisations. The recommended targets pointed to specific areas for priority in Queensland, notably the need for additional acute inpatient services for older persons and expansion of clinical ambulatory care, residential rehabilitation and supported accommodation services. The development of nationally agreed planning targets for public mental health services and the mental health community support sector were identified as priorities.
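
    To make the per-100,000 benchmarks above concrete, the sketch below converts them into absolute service requirements for a projected population. The population figure is a placeholder, not an official Queensland projection.

    # Hedged sketch: turning per-100,000 planning benchmarks into absolute requirements.
    projected_population = 4_800_000   # hypothetical planning population

    targets_per_100k = {
        "acute inpatient beds": 20,
        "non-acute inpatient beds": 10,
        "clinical 24-hour staffed residential beds": 10,
        "non-clinical residential beds": 18,
        "supervised hostel beds": 35,
        "supported public housing places": 35,
        "ambulatory care clinical FTE": 70,
    }

    for service, rate in targets_per_100k.items():
        required = rate * projected_population / 100_000
        print(f"{service}: {required:,.0f}")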

  1. Benchmarking and beyond. Information trends in home care.

    PubMed

    Twiss, Amanda; Rooney, Heather; Lang, Christine

    2002-11-01

    With today's benchmarking concepts and tools, agencies have the unprecedented opportunity to use information as a strategic advantage. Because agencies are demanding more and better information, benchmark functionality has grown increasingly sophisticated. Agencies now require a new type of analysis, focused on high-level executive summaries while reducing the current "data overload."

  2. Issues in Institutional Benchmarking of Student Learning Outcomes Using Case Examples

    ERIC Educational Resources Information Center

    Judd, Thomas P.; Pondish, Christopher; Secolsky, Charles

    2013-01-01

    Benchmarking is a process that can take place at both the inter-institutional and intra-institutional level. This paper focuses on benchmarking intra-institutional student learning outcomes using case examples. The findings of the study illustrate the point that when the outcomes statements associated with the mission of the institution are…

  3. A human health assessment of hazardous air pollutants in Portland, OR.

    PubMed

    Tam, B N; Neumann, C M

    2004-11-01

    Ambient air samples collected from five monitoring sites in Portland, OR from July 1999 to August 2000 were analyzed for 43 hazardous air pollutants (HAP). HAP concentrations were compared to carcinogenic and non-carcinogenic benchmark levels. Carcinogenic benchmark concentrations were set at a risk level of one-in-one-million (1 × 10⁻⁶). Hazard ratios of 1.0 were used when comparing HAP concentrations to non-carcinogenic benchmarks. Emission sources (point, area, and mobile) were identified and a cumulative cancer risk and total hazard index were calculated for HAPs exceeding these health benchmark levels. Seventeen HAPs exceeded a cancer risk level of 1 × 10⁻⁶ at all five monitoring sites. Nineteen HAPs exceeded this level at one or more sites. Carbon tetrachloride, 1,3-butadiene, formaldehyde, and 1,1,2,2-tetrachloroethane contributed more than 50% to the upper-bound lifetime cumulative cancer risk of 2.47 × 10⁻⁴. Acrolein was the only non-carcinogenic HAP with hazard ratios that exceeded 1.0 at all five sites. Mobile sources contributed the greatest percentage (68%) of HAP emissions. Additional monitoring and health assessments for HAPs in Portland, OR are warranted, including addressing issues that may have overestimated or underestimated risks in this study. Abatement strategies for HAPs that exceeded health benchmarks should be implemented to reduce potential adverse health risks.
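
    The screening calculations described above follow a standard pattern: per-pollutant cancer risk is the annual-average concentration multiplied by an inhalation unit risk, the cumulative risk is their sum, and the hazard index is the sum of concentration-to-reference-concentration ratios. The sketch below illustrates that arithmetic with placeholder concentrations and toxicity values rather than the study's data.

    # Hedged sketch of screening-level cancer risk and hazard index arithmetic.
    carcinogens = {                 # HAP: (annual mean conc ug/m3, unit risk per ug/m3), placeholders
        "carbon tetrachloride": (0.9, 6.0e-6),
        "1,3-butadiene":        (0.3, 3.0e-5),
        "formaldehyde":         (2.5, 1.3e-5),
    }
    noncarcinogens = {              # HAP: (annual mean conc ug/m3, reference conc ug/m3), placeholders
        "acrolein": (0.5, 0.02),
    }

    risks = {hap: conc * ur for hap, (conc, ur) in carcinogens.items()}
    cumulative_risk = sum(risks.values())
    exceed_benchmark = [hap for hap, r in risks.items() if r > 1e-6]

    hazard_index = sum(conc / rfc for conc, rfc in noncarcinogens.values())

    print(f"cumulative cancer risk: {cumulative_risk:.2e}")
    print(f"HAPs above the one-in-one-million benchmark: {exceed_benchmark}")
    print(f"hazard index: {hazard_index:.1f}")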

  4. EBT Fidelity Trajectories Across Training Cohorts Using the Interagency Collaborative Team Strategy

    PubMed Central

    Hecht, Debra; Aarons, Greg; Fettes, Danielle; Hurlburt, Michael; Ledesma, Karla

    2015-01-01

    The Interagency Collaborative Team (ICT) strategy uses front-line providers as adaptation, training and quality control agents for multi-agency EBT implementation. This study tests whether an ICT transmits fidelity to subsequent provider cohorts. SafeCare was implemented by home visitors from multiple community-based agencies contracting with child welfare. Client-reported fidelity trajectories for 5,769 visits, 957 clients and 45 providers were compared using three-level growth models. Provider cohorts trained and live-coached by the ICT attained benchmark fidelity after 12 weeks, and this was sustained. Hispanic clients reported high cultural competency, supporting a cultural adaptation crafted by the ICT. PMID:25586878

  5. EBT Fidelity Trajectories Across Training Cohorts Using the Interagency Collaborative Team Strategy.

    PubMed

    Chaffin, Mark; Hecht, Debra; Aarons, Greg; Fettes, Danielle; Hurlburt, Michael; Ledesma, Karla

    2016-03-01

    The Interagency Collaborative Team (ICT) strategy uses front-line providers as adaptation, training and quality control agents for multi-agency EBT implementation. This study tests whether an ICT transmits fidelity to subsequent provider cohorts. SafeCare was implemented by home visitors from multiple community-based agencies contracting with child welfare. Client-reported fidelity trajectories for 5,769 visits, 957 clients and 45 providers were compared using three-level growth models. Provider cohorts trained and live-coached by the ICT attained benchmark fidelity after 12 weeks, and this was sustained. Hispanic clients reported high cultural competency, supporting a cultural adaptation crafted by the ICT.

  6. AmeriFlux US-ADR Amargosa Desert Research Site (ADRS)

    DOE Data Explorer

    Moreo, Michael [U.S. Geological Survey

    2018-01-01

    This is the AmeriFlux version of the carbon flux data for the site US-ADR Amargosa Desert Research Site (ADRS). Site Description - This tower is located at the Amargosa Desert Research Site (ADRS). The U.S. Geological Survey (USGS) began studies of unsaturated zone hydrology at ADRS in 1976. Over the years, USGS investigations at ADRS have provided long-term "benchmark" information about the hydraulic characteristics and soil-water movement for both natural-site conditions and simulated waste-site conditions in an arid environment. The ADRS is located in a creosote-bush community adjacent to disposal trenches for low-level radioactive waste.

  7. Examples of coupled human and environmental systems from the extractive industry and hydropower sector interfaces

    PubMed Central

    Castro, Marcia C.; Krieger, Gary R.; Balge, Marci Z.; Tanner, Marcel; Utzinger, Jürg; Whittaker, Maxine; Singer, Burton H.

    2016-01-01

    Large-scale corporate projects, particularly those in extractive industries or hydropower development, have a history from early in the twentieth century of creating negative environmental, social, and health impacts on communities proximal to their operations. In many instances, especially for hydropower projects, the forced resettlement of entire communities was a feature in which local cultures and core human rights were severely impacted. These projects triggered an activist opposition that progressively expanded and became influential at both the host community level and with multilateral financial institutions. In parallel to, and spurred by, this activism, a shift occurred in 1969 with the passage of the National Environmental Policy Act in the United States, which required Environmental Impact Assessment (EIA) for certain types of industrial and infrastructure projects. Over the last four decades, there has been a global movement to develop a formal legal/regulatory EIA process for large industrial and infrastructure projects. In addition, social, health, and human rights impact assessments, with associated mitigation plans, were sequentially initiated and have increasingly influenced project design and relations among companies, host governments, and locally impacted communities. Often, beneficial community-level social, economic, and health programs have voluntarily been put in place by companies. These flagship programs can serve as benchmarks for community–corporate–government partnerships in the future. Here, we present examples of such positive phenomena and also focus attention on a myriad of challenges that still lie ahead. PMID:27791077

  8. Indicators of AEI applied to the Delaware Estuary.

    PubMed

    Barnthouse, Lawrence W; Heimbuch, Douglas G; Anthony, Vaughn C; Hilborn, Ray W; Myers, Ransom A

    2002-05-18

    We evaluated the impacts of entrainment and impingement at the Salem Generating Station on fish populations and communities in the Delaware Estuary. In the absence of an agreed-upon regulatory definition of "adverse environmental impact" (AEI), we developed three independent benchmarks of AEI based on observed or predicted changes that could threaten the sustainability of a population or the integrity of a community. Our benchmarks of AEI included: (1) disruption of the balanced indigenous community of fish in the vicinity of Salem (the "BIC" analysis); (2) a continued downward trend in the abundance of one or more susceptible fish species (the "Trends" analysis); and (3) occurrence of entrainment/impingement mortality sufficient, in combination with fishing mortality, to jeopardize the future sustainability of one or more populations (the "Stock Jeopardy" analysis). The BIC analysis utilized nearly 30 years of species presence/absence data collected in the immediate vicinity of Salem. The Trends analysis examined three independent data sets that document trends in the abundance of juvenile fish throughout the estuary over the past 20 years. The Stock Jeopardy analysis used two different assessment models to quantify potential long-term impacts of entrainment and impingement on susceptible fish populations. For one of these models, the compensatory capacities of the modeled species were quantified through meta-analysis of spawner-recruit data available for several hundred fish stocks. All three analyses indicated that the fish populations and communities of the Delaware Estuary are healthy and show no evidence of an adverse impact due to Salem. Although the specific models and analyses used at Salem are not applicable to every facility, we believe that a weight of evidence approach that evaluates multiple benchmarks of AEI using both retrospective and predictive methods is the best approach for assessing entrainment and impingement impacts at existing facilities.

  9. A benchmark for subduction zone modeling

    NASA Astrophysics Data System (ADS)

    van Keken, P.; King, S.; Peacock, S.

    2003-04-01

    Our understanding of subduction zones hinges critically on the ability to discern their thermal structure and dynamics. Computational modeling has become an essential complementary approach to observational and experimental studies. The accurate modeling of subduction zones is challenging due to the unique geometry, complicated rheological description and influence of fluid and melt formation. The complicated physics causes problems for the accurate numerical solution of the governing equations. As a consequence it is essential for the subduction zone community to be able to evaluate the ability and limitations of various modeling approaches. The participants of a workshop on the modeling of subduction zones, held at the University of Michigan at Ann Arbor, MI, USA in 2002, formulated a number of case studies to be developed into a benchmark similar to previous mantle convection benchmarks (Blankenbach et al., 1989; Busse et al., 1991; Van Keken et al., 1997). Our initial benchmark focuses on the dynamics of the mantle wedge and investigates three different rheologies: constant viscosity, diffusion creep, and dislocation creep. In addition we investigate the ability of codes to accurately model dynamic pressure and advection-dominated flows. Proceedings of the workshop and the formulation of the benchmark are available at www.geo.lsa.umich.edu/~keken/subduction02.html. We strongly encourage interested research groups to participate in this benchmark. At Nice 2003 we will provide an update and first set of benchmark results. Interested researchers are encouraged to contact one of the authors for further details.

  10. Benchmarking for Excellence and the Nursing Process

    NASA Technical Reports Server (NTRS)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  11. A Question of Accountability: Looking beyond Federal Mandates for Metrics That Accurately Benchmark Community College Success

    ERIC Educational Resources Information Center

    Joch, Alan

    2014-01-01

    The need for increased accountability in higher education and, specifically, the nation's community colleges-is something most educators can agree on. The challenge has, and continues to be, finding a system of metrics that meets the unique needs of two-year institutions versus their four-year-counterparts. Last summer, President Obama unveiled…

  12. Bioelectrochemical Systems Workshop:Standardized Analyses, Design Benchmarks, and Reporting

    DTIC Science & Technology

    2012-01-01

    related to the exoelectrogenic biofilm activity, and to investigate whether the community structure is a function of design and operational parameters...where should biofilm samples be collected? The most prevalent methods of community characterization in BES studies have entailed phylogenetic ...of function associated with this genetic marker, and in methods that involve polymerase chain reaction (PCR) amplification the quantitative

  13. Assessing equity in the geographical distribution of community pharmacies in South Africa in preparation for a national health insurance scheme

    PubMed Central

    Sanders, David; Leng, Henry; Pollock, Allyson M

    2014-01-01

    Objective To investigate equity in the geographical distribution of community pharmacies in South Africa and assess whether regulatory reforms have furthered such equity. Methods Data on community pharmacies from the national department of health and the South African pharmacy council were used to analyse the change in community pharmacy ownership and density (number per 10 000 residents) between 1994 and 2012 in all nine provinces and 15 selected districts. In addition, the density of public clinics, alone and with community pharmacies, was calculated and compared with a national benchmark of one clinic per 10 000 residents. Interviews were conducted with nine national experts from the pharmacy sector. Findings Community pharmacies increased in number by 13% between 1994 and 2012 – less than the 25% population growth. In 2012, community pharmacy density was higher in urban provinces and was eight times higher in the least deprived districts than in the most deprived ones. Maldistribution persisted despite the growth of corporate community pharmacies. In 2012, only two provinces met the 1 per 10 000 benchmark, although all provinces achieved it when community pharmacies and clinics were combined. Experts expressed concerns that a lack of rural incentives, inappropriate licensing criteria and a shortage of pharmacy workers could undermine access to pharmaceutical services, especially in rural areas. Conclusion To reduce inequity in the distribution of pharmaceutical services, new policies and legislation are needed to increase the staffing and presence of pharmacies. PMID:25110373

  14. An Alignment of the Canadian Language Benchmarks to the BC ESL Articulation Levels. Final Report - January 2007

    ERIC Educational Resources Information Center

    Barbour, Ross; Ostler, Catherine; Templeman, Elizabeth; West, Elizabeth

    2007-01-01

    The British Columbia (BC) English as a Second Language (ESL) Articulation Committee's Canadian Language Benchmarks project was precipitated by ESL instructors' desire to address transfer difficulties of ESL students within the BC transfer system and to respond to the recognition that the Canadian Language Benchmarks, a descriptive scale of ESL…

  15. High-energy neutron depth-dose distribution experiment.

    PubMed

    Ferenci, M S; Hertel, N E

    2003-01-01

    A unique set of high-energy neutron depth-dose benchmark experiments was performed at the Los Alamos Neutron Science Center/Weapons Neutron Research (LANSCE/WNR) complex. The experiments consisted of filtered neutron beams with energies up to 800 MeV impinging on a 30 × 30 × 30 cm³ liquid, tissue-equivalent phantom. The absorbed dose was measured in the phantom at various depths with tissue-equivalent ion chambers. This experiment is intended to serve as a benchmark experiment for the testing of high-energy radiation transport codes for the international radiation protection community.

  16. Tobacco companies’ efforts to undermine ingredient disclosure: the Massachusetts benchmark study

    PubMed Central

    Velicer, Clayton; Aguinaga-Bialous, Stella; Glantz, Stanton

    2015-01-01

    Objectives To assess the Massachusetts Benchmark ‘Study’ (MBS) that the tobacco companies presented to the Massachusetts Department of Public Health (MDPH) in 1999 in response to ingredient disclosure regulations in the state. This case study can inform future ingredient disclosure regulations, including implementation of Articles 9 and 10 of the WHO Framework Convention on Tobacco Control (FCTC). Methods We analysed documents available at http://legacy.library.ucsf.edu to identify internal communications regarding the design and execution of the MBS and internal studies on the relationship between tar, nicotine and carbon monoxide and smoke constituents and reviewed publications that further evaluated data published as part of the MBS. Results The companies conducted extensive studies of cigarette design factors and ingredients that significantly impacted the levels of constituents. While this study asserted that by-brand emissions could be estimated reliably from published tar, nicotine, and carbon monoxide levels, the tobacco companies were well aware that factors beyond tar, nicotine and carbon monoxide influenced levels of constituents included in the study. This severely limited the potential usefulness of the MBS predictor equations. Conclusions Despite promises to provide data that would allow regulators to predict constituent data for all brands on the market, the final MBS results offered no useful predictive information to inform regulators, the scientific community or consumers. When implementing FCTC Articles 9 and 10, regulatory agencies should demand detailed by-brand information on tobacco product constituents and toxin deliveries to users. PMID:26292701

  17. Mathematics Content Standards Benchmarks and Performance Standards

    ERIC Educational Resources Information Center

    New Mexico Public Education Department, 2008

    2008-01-01

    New Mexico Mathematics Content Standards, Benchmarks, and Performance Standards identify what students should know and be able to do across all grade levels, forming a spiraling framework in the sense that many skills, once introduced, develop over time. While the Performance Standards are set forth at grade-specific levels, they do not exist as…

  18. BioPreDyn-bench: a suite of benchmark problems for dynamic modelling in systems biology.

    PubMed

    Villaverde, Alejandro F; Henriques, David; Smallbone, Kieran; Bongard, Sophia; Schmid, Joachim; Cicin-Sain, Damjan; Crombach, Anton; Saez-Rodriguez, Julio; Mauch, Klaus; Balsa-Canto, Eva; Mendes, Pedro; Jaeger, Johannes; Banga, Julio R

    2015-02-20

    Dynamic modelling is one of the cornerstones of systems biology. Many research efforts are currently being invested in the development and exploitation of large-scale kinetic models. The associated problems of parameter estimation (model calibration) and optimal experimental design are particularly challenging. The community has already developed many methods and software packages which aim to facilitate these tasks. However, there is a lack of suitable benchmark problems which allow a fair and systematic evaluation and comparison of these contributions. Here we present BioPreDyn-bench, a set of challenging parameter estimation problems which aspire to serve as reference test cases in this area. This set comprises six problems including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The level of description includes metabolism, transcription, signal transduction, and development. For each problem we provide (i) a basic description and formulation, (ii) implementations ready-to-run in several formats, (iii) computational results obtained with specific solvers, (iv) a basic analysis and interpretation. This suite of benchmark problems can be readily used to evaluate and compare parameter estimation methods. Further, it can also be used to build test problems for sensitivity and identifiability analysis, model reduction and optimal experimental design methods. The suite, including codes and documentation, can be freely downloaded from the BioPreDyn-bench website, https://sites.google.com/site/biopredynbenchmarks/ .
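
    As a minimal illustration of the parameter-estimation (model calibration) task that the suite targets, the sketch below fits two rate constants of a toy kinetic model A -> B -> C to noisy synthetic data by least squares. It is deliberately tiny and is not one of the BioPreDyn-bench problems.

    # Hedged sketch: calibrating a small kinetic model against synthetic data.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def kinetics(t, y, k1, k2):
        a, b, c = y
        return [-k1 * a, k1 * a - k2 * b, k2 * b]

    t_obs = np.linspace(0.0, 10.0, 25)
    true_k = (0.8, 0.3)
    y0 = [1.0, 0.0, 0.0]

    sol = solve_ivp(kinetics, (0, 10), y0, t_eval=t_obs, args=true_k)
    rng = np.random.default_rng(0)
    b_obs = sol.y[1] + rng.normal(0.0, 0.02, size=t_obs.size)   # synthetic noisy observations of B

    def residuals(params):
        k1, k2 = params
        fit = solve_ivp(kinetics, (0, 10), y0, t_eval=t_obs, args=(k1, k2))
        return fit.y[1] - b_obs

    result = least_squares(residuals, x0=[0.5, 0.5], bounds=(1e-6, 10.0))
    print("estimated k1, k2:", result.x)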

  19. Capacity factor analysis for evaluating water and sanitation infrastructure choices for developing communities.

    PubMed

    Bouabid, Ali; Louis, Garrick E

    2015-09-15

    40% of the world's population lacks access to adequate supplies of water and sanitation services to sustain human health. In fact, more than 780 million people lack access to safe water supplies and about 2.5 billion people lack access to basic sanitation. Appropriate technology for water supply and sanitation (Watsan) systems is critical for sustained access to these services. Current approaches for the selection of Watsan technologies in developing communities have a high failure rate. It is estimated that 30%-60% of Watsan installed infrastructures in developing countries are not operating. Inappropriate technology is a common explanation for the high rate of failure of Watsan infrastructure, particularly in lower-income communities (Palaniappan et al., 2008). This paper presents the capacity factor analysis (CFA) model for the assessment of a community's capacity to manage and sustain access to water supply and sanitation services. The CFA model is used for the assessment of a community's capacity to operate and maintain a municipal sanitation service (MSS), such as drinking water supply, wastewater and sewage treatment, and management of solid waste. The assessment of the community's capacity is based on seven capacity factors that have been identified as playing a key role in the sustainability of municipal sanitation services in developing communities (Louis, 2002). These capacity factors and their constituents are defined for each municipal sanitation service. Benchmarks and international standards for the constituents of the CFs are used to assess the capacity factors. The assessment of the community's capacity factors leads to a determination of the overall community capacity level (CCL) for managing an MSS. The CCL can then be used to assist the community in the selection of appropriate Watsan technologies for their MSS needs. The selection is made from Watsan technologies that require a capacity level to operate them that matches the assessed CCL of the community.
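
    A minimal sketch of the matching logic described above: score the community on a set of capacity factors, derive an overall community capacity level (CCL), and shortlist technologies whose required capacity level does not exceed it. The factor names, the min-rule aggregation, and the technology table are assumptions for illustration, not the published CFA model.

    # Hedged sketch of capacity-factor scoring and technology matching.
    capacity_scores = {          # 1 (lowest) .. 5 (highest), from benchmark indicators (hypothetical)
        "institutional": 3,
        "human resources": 2,
        "technical": 3,
        "financial": 2,
        "energy": 4,
        "environmental": 3,
        "social/cultural": 4,
    }

    # Simplest aggregation: let the weakest factor set the overall capacity level.
    ccl = min(capacity_scores.values())

    technologies = {             # technology -> capacity level required to operate it (hypothetical)
        "protected dug well with handpump": 1,
        "borehole with motorized pump": 2,
        "piped supply with treatment plant": 4,
    }

    feasible = [t for t, req in technologies.items() if req <= ccl]
    print(f"community capacity level: {ccl}")
    print("candidate technologies:", feasible)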

  20. Detecting network communities beyond assortativity-related attributes

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Murata, Tsuyoshi; Wakita, Ken

    2014-07-01

    In network science, assortativity refers to the tendency of links to exist between nodes with similar attributes. In social networks, for example, links tend to exist between individuals of similar age, nationality, location, race, income, educational level, religious belief, and language. Thus, various attributes jointly affect the network topology. An interesting problem is to detect community structure beyond some specific assortativity-related attributes ρ, i.e., to take out the effect of ρ on network topology and reveal the hidden community structures which are due to other attributes. An approach to this problem is to redefine the null model of the modularity measure, so as to simulate the effect of ρ on network topology. However, a challenge is that we do not know to what extent the network topology is affected by ρ and by other attributes. In this paper, we propose a distance modularity, which allows us to freely choose any suitable function to simulate the effect of ρ. Such freedom can help us probe the effect of ρ and detect the hidden communities which are due to other attributes. We test the effectiveness of distance modularity on synthetic benchmarks and two real-world networks.
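
    The core idea is that modularity compares the observed adjacency to a null model; replacing the configuration-model term k_i k_j / (2m) with a function that also depends on an assortativity-related attribute ρ factors that attribute's effect out of the detected communities. The sketch below evaluates a generalized modularity with a pluggable null term; the Gaussian decay with attribute distance is an illustrative choice, not the paper's exact definition of distance modularity.

    # Hedged sketch: modularity with a pluggable null model term.
    import itertools
    import numpy as np
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def generalized_modularity(G, communities, null_term):
        """Q = (1/2m) * sum over node pairs of [A_ij - null_term(i, j)] * delta(c_i, c_j)."""
        m = G.number_of_edges()
        node_to_comm = {n: c for c, nodes in enumerate(communities) for n in nodes}
        q = 0.0
        for i, j in itertools.product(G.nodes, G.nodes):
            if node_to_comm[i] != node_to_comm[j]:
                continue
            a_ij = 1.0 if G.has_edge(i, j) else 0.0
            q += a_ij - null_term(i, j)
        return q / (2.0 * m)

    G = nx.karate_club_graph()
    two_m = 2.0 * G.number_of_edges()
    rng = np.random.default_rng(1)
    age = {n: int(rng.integers(20, 60)) for n in G}          # hypothetical node attribute rho

    def configuration_null(i, j):                            # standard Newman-Girvan null term
        return G.degree(i) * G.degree(j) / two_m

    def attribute_null(i, j, sigma=10.0):                    # illustrative: decays with attribute distance
        decay = np.exp(-((age[i] - age[j]) ** 2) / (2.0 * sigma ** 2))
        return configuration_null(i, j) * decay

    partition = [set(c) for c in greedy_modularity_communities(G)]
    print("standard Q :", round(generalized_modularity(G, partition, configuration_null), 3))
    print("attribute Q:", round(generalized_modularity(G, partition, attribute_null), 3))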

  1. Calibrating coseismic coastal land-level changes during the 2014 Iquique (Mw=8.2) earthquake (northern Chile) with leveling, GPS and intertidal biota

    PubMed Central

    Melnick, Daniel; Baez, Juan Carlos; Montecino, Henry; Lagos, Nelson A.; Acuña, Emilio; Manzano, Mario; Camus, Patricio A.

    2017-01-01

    The April 1st 2014 Iquique earthquake (MW 8.1) occurred along the northern Chile margin where the Nazca plate is subducted below the South American continent. The last great megathrust earthquake here, in 1877 (Mw ~8.8), opened a seismic gap, which was only partly closed by the 2014 earthquake. Prior to the earthquake, in 2013, and shortly after it, we compared data from leveled benchmarks, deployed campaign GPS instruments, continuous GPS stations and estimated sea levels using the upper vertical level of rocky shore benthic organisms including algae, barnacles, and mussels. Land-level changes estimated from mean elevations of benchmarks indicate subsidence along a ~100-km stretch of coast, ranging from 3 to 9 cm at Corazones (18°30’S) to between 30 and 50 cm at Pisagua (19°30’S). About 15 cm of uplift was measured along the southern part of the rupture at Chanavaya (20°50’S). Land-level changes obtained from benchmarks and campaign GPS were similar at most sites (mean difference 3.7±3.2 cm). Higher differences, however, were found between benchmarks and continuous GPS (mean difference 8.5±3.6 cm), possibly because sites were not collocated and were separated by several kilometers. Subsidence estimated from the upper limits of intertidal fauna at Pisagua ranged between 40 and 60 cm, in general agreement with benchmarks and GPS. At Chanavaya, the magnitude and sense of displacement of the upper marine limit was variable across species, possibly due to species-dependent differences in ecology. Among the studied species, measurements on lithothamnioid calcareous algae most closely matched those made with benchmarks and GPS. When properly calibrated, rocky shore benthic species may be used to accurately measure land-level changes along coasts affected by subduction earthquakes. Our calibration of those methods will improve their accuracy when applied to coasts lacking pre-earthquake data and in estimating deformation during pre-instrumental earthquakes. PMID:28333998

  2. A comprehensive analysis of sodium levels in the Canadian packaged food supply

    PubMed Central

    Arcand, JoAnne; Au, Jennifer T.C.; Schermel, Alyssa; L’Abbe, Mary R.

    2016-01-01

    Background Population-wide sodium reduction strategies aim to reduce the cardiovascular burden of excess dietary sodium. Lowering sodium in packaged foods, which contribute the most sodium to the diet, is an important intervention to lower population intakes. Purpose To determine sodium levels in Canadian packaged foods and evaluate the proportion of foods meeting sodium benchmark targets set by Health Canada. Methods A cross-sectional analysis of 7234 packaged foods available in Canada in 2010–11. Sodium values were obtained from the Nutrition Facts table. Results Overall, 51.4% of foods met one of the sodium benchmark levels: 11.5% met Phase 1, 11.1% met Phase 2, and 28.7% met 2016 goal (Phase 3) benchmarks. Food groups with the greatest proportion meeting goal benchmarks were dairy (52.0%) and breakfast cereals (42.2%). Overall 48.6% of foods did not meet any benchmark level and 25% of all products exceeded maximum levels. Meats (61.2%) and canned vegetables/legumes and legumes (29.6%) had the most products exceeding maximum levels. There was large variability in the range of sodium within and between food categories. Food categories highest in sodium (mg/serving) were dry, condensed and ready-to-serve soups (834 ± 256, 754 ± 163, and 636 ± 173, respectively), oriental noodles (783 ± 433), broth (642 ± 239), and frozen appetizers/sides (642 ± 292). Conclusion These data provide a critical baseline assessment for monitoring sodium levels in Canadian foods. While some segments of the market are making progress towards sodium reduction, all sectors need encouragement to continue to reduce the amount of sodium added during food processing. PMID:24842740

  3. Lessons Learned over Four Benchmark Exercises from the Community Structure-Activity Resource

    PubMed Central

    Carlson, Heather A.

    2016-01-01

    Preparing datasets and analyzing the results is difficult and time-consuming, and I hope the points raised here will help other scientists avoid some of the thorny issues we wrestled with. PMID:27345761

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horowitz, Kelsey A; Ding, Fei; Mather, Barry A

    This presentation was given at the 2017 NREL Workshop 'Benchmarking Distribution Grid Integration Costs Under High Distributed PV Penetrations.' It provides a brief overview of recent and ongoing NREL work on distribution system grid integration costs, as well as challenges and needs from the community.

  5. Board oversight of patient care quality in community health systems.

    PubMed

    Prybil, Lawrence D; Peterson, Richard; Brezinski, Paul; Zamba, Gideon; Roach, William; Fillmore, Ammon

    2010-01-01

    In hospitals and health systems, ensuring that standards for the quality of patient care are established and continuous improvement processes are in place are among the board's most fundamental responsibilities. A recent survey has examined governance oversight of patient care quality at 123 nonprofit community health systems and compared their practices with current benchmarks of good governance. The findings show that 88% of the boards have established standing committees on patient quality and safety, nearly all chief executive officers' performance expectations now include targets related to patient quality and safety, and 96% of the boards regularly receive formal written reports regarding their organizations' performance in relation to quality measures and standards. However, there continue to be gaps between present reality and current benchmarks of good governance in several areas. These gaps are somewhat greater for independent systems than for those affiliated with a larger parent organization.

  6. An evaluation of the accuracy and speed of metagenome analysis tools

    PubMed Central

    Lindgreen, Stinus; Adair, Karen L.; Gardner, Paul P.

    2016-01-01

    Metagenome studies are becoming increasingly widespread, yielding important insights into microbial communities covering diverse environments from terrestrial and aquatic ecosystems to human skin and gut. With the advent of high-throughput sequencing platforms, the use of large scale shotgun sequencing approaches is now commonplace. However, a thorough independent benchmark comparing state-of-the-art metagenome analysis tools is lacking. Here, we present a benchmark where the most widely used tools are tested on complex, realistic data sets. Our results clearly show that the most widely used tools are not necessarily the most accurate, that the most accurate tool is not necessarily the most time consuming, and that there is a high degree of variability between available tools. These findings are important as the conclusions of any metagenomics study are affected by errors in the predicted community composition and functional capacity. Data sets and results are freely available from http://www.ucbioinformatics.org/metabenchmark.html PMID:26778510

  7. A BENCHMARKING ANALYSIS FOR FIVE RADIONUCLIDE VADOSE ZONE MODELS (CHAIN, MULTIMED_DP, FECTUZ, HYDRUS, AND CHAIN 2D) IN SOIL SCREENING LEVEL CALCULATIONS

    EPA Science Inventory

    Five radionuclide vadose zone models with different degrees of complexity (CHAIN, MULTIMED_DP, FECTUZ, HYDRUS, and CHAIN 2D) were selected for use in soil screening level (SSL) calculations. A benchmarking analysis between the models was conducted for a radionuclide (99Tc) rele...

  8. Utilising Benchmarking to Inform Decision-Making at the Institutional Level: A Research-Informed Process

    ERIC Educational Resources Information Center

    Booth, Sara

    2013-01-01

    Benchmarking has traditionally been viewed as a way to compare data only; however, its utilisation as a more investigative, research-informed process to add rigor to decision-making processes at the institutional level is gaining momentum in the higher education sector. Indeed, with recent changes in the Australian quality environment from the…

  9. Contributions to Integral Nuclear Data in ICSBEP and IRPhEP since ND 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Briggs, J. Blair; Gulliford, Jim

    2016-09-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the international nuclear data community at ND2013. Since ND2013, integral benchmark data that are available for nuclear data testing have continued to increase. The status of the international benchmark efforts and the latest contributions to integral nuclear data for testing is discussed. Select benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2013 are highlighted. The 2015 edition of the ICSBEP Handbook now contains 567 evaluations with benchmark specifications for 4,874 critical, near-critical, or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points apiece, and 207 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. The 2015 edition of the IRPhEP Handbook contains data from 143 different experimental series that were performed at 50 different nuclear facilities. Currently 139 of the 143 evaluations are published as approved benchmarks with the remaining four evaluations published in draft format only. Measurements found in the IRPhEP Handbook include criticality, buckling and extrapolation length, spectral characteristics, reactivity effects, reactivity coefficients, kinetics, reaction-rate distributions, power distributions, isotopic compositions, and/or other miscellaneous types of measurements for various types of reactor systems. Annual technical review meetings for both projects were held in April 2016; additional approved benchmark evaluations will be included in the 2016 editions of these handbooks.

  10. A suite of exercises for verifying dynamic earthquake rupture codes

    USGS Publications Warehouse

    Harris, Ruth A.; Barall, Michael; Aagaard, Brad T.; Ma, Shuo; Roten, Daniel; Olsen, Kim B.; Duan, Benchun; Liu, Dunyu; Luo, Bin; Bai, Kangchen; Ampuero, Jean-Paul; Kaneko, Yoshihiro; Gabriel, Alice-Agnes; Duru, Kenneth; Ulrich, Thomas; Wollherr, Stephanie; Shi, Zheqiang; Dunham, Eric; Bydlon, Sam; Zhang, Zhenguo; Chen, Xiaofei; Somala, Surendra N.; Pelties, Christian; Tago, Josue; Cruz-Atienza, Victor Manuel; Kozdon, Jeremy; Daub, Eric; Aslam, Khurram; Kase, Yuko; Withers, Kyle; Dalguer, Luis

    2018-01-01

    We describe a set of benchmark exercises that are designed to test if computer codes that simulate dynamic earthquake rupture are working as intended. These types of computer codes are often used to understand how earthquakes operate, and they produce simulation results that include earthquake size, amounts of fault slip, and the patterns of ground shaking and crustal deformation. The benchmark exercises examine a range of features that scientists incorporate in their dynamic earthquake rupture simulations. These include implementations of simple or complex fault geometry, off‐fault rock response to an earthquake, stress conditions, and a variety of formulations for fault friction. Many of the benchmarks were designed to investigate scientific problems at the forefronts of earthquake physics and strong ground motions research. The exercises are freely available on our website for use by the scientific community.

  11. 75 FR 6368 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-09

    ... benchmarking U.S. performance in mathematics and science at the fourth- and eighth-grade levels against other... assessment data for internationally benchmarking U.S. performance in fourth-grade reading. NCES has received...

  12. Evaluation of CHO Benchmarks on the Arria 10 FPGA using Intel FPGA SDK for OpenCL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal

    The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable codes on different platforms such as CPUs, graphics processing units (GPUs), digital signal processors (DSPs) and field programmable gate arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow for a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes the FPGA-based development more accessible to software users as the needs for hybrid computing using CPUs and FPGAs are increasing. It can also significantly reduce the hardware development time as users can evaluate different ideas with a high-level language without deep FPGA domain knowledge. Benchmarking of an OpenCL-based framework is an effective way of analyzing the performance of a system by studying the execution of the benchmark applications. CHO is a suite of benchmark applications that provides support for OpenCL [1]. The authors presented CHO as an OpenCL port of the CHStone benchmark. Using the Altera OpenCL (AOCL) compiler to synthesize the benchmark applications, they listed the resource usage and performance of each kernel that can be successfully synthesized by the compiler. In this report, we evaluate the resource usage and performance of the CHO benchmark applications using the Intel FPGA SDK for OpenCL and a Nallatech 385A FPGA board that features an Arria 10 FPGA device. The focus of the report is to have a better understanding of the resource usage and performance of the kernel implementations using Arria-10 FPGA devices compared to Stratix-5 FPGA devices. In addition, we also gain knowledge about the limitations of the current compiler when it fails to synthesize a benchmark application.
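
    As a minimal illustration of the kernel-centric programming model described above, the sketch below defines a vector-addition kernel in OpenCL C and launches it from Python with pyopencl. It targets whatever generic OpenCL device is available rather than the Intel FPGA SDK's offline-compilation flow, so it shows the programming model only, not the FPGA toolchain.

    # Hedged sketch: a minimal OpenCL kernel launched via pyopencl (generic device, not the FPGA flow).
    import numpy as np
    import pyopencl as cl

    kernel_src = """
    __kernel void vadd(__global const float *a,
                       __global const float *b,
                       __global float *c)
    {
        int gid = get_global_id(0);
        c[gid] = a[gid] + b[gid];
    }
    """

    a = np.random.rand(1024).astype(np.float32)
    b = np.random.rand(1024).astype(np.float32)
    c = np.empty_like(a)

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, c.nbytes)

    prg = cl.Program(ctx, kernel_src).build()
    prg.vadd(queue, a.shape, None, a_buf, b_buf, c_buf)   # one work-item per element
    cl.enqueue_copy(queue, c, c_buf)
    assert np.allclose(c, a + b)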

  13. Derivation of Draft Ecological Soil Screening Levels for TNT and RDX Utilizing Terrestrial Plant and Soil Invertebrate Toxicity Benchmarks

    DTIC Science & Technology

    2012-11-01

    TSL Soils Utilizing Growth Benchmarks for Alfalfa, Barnyard Grass, and Perennial Ryegrass ... Derivation of Terrestrial Plant-Based Draft Eco-SSL Value for RDX Weathered-and-Aged in SSL or TSL Soils Utilizing Growth Benchmarks for Alfalfa ... studies were conducted using the following plant species: dicotyledonous symbiotic species alfalfa (Medicago sativa L.); monocotyledonous ...

  14. SA-SOM algorithm for detecting communities in complex networks

    NASA Astrophysics Data System (ADS)

    Chen, Luogeng; Wang, Yanran; Huang, Xiaoming; Hu, Mengyu; Hu, Fang

    2017-10-01

    Community detection is currently a hot topic. Based on the self-organizing map (SOM) algorithm, this paper introduces the idea of self-adaptation (SA), by which the number of communities can be identified automatically, and proposes SA-SOM, a novel algorithm for detecting communities in complex networks. Several representative real-world networks and a set of computer-generated networks produced by the LFR benchmark are utilized to verify the accuracy and the efficiency of this algorithm. The experimental findings demonstrate that this algorithm can identify the communities automatically, accurately and efficiently. Furthermore, this algorithm can also acquire higher values of modularity, NMI and density than the SOM algorithm does.
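
    Of the evaluation measures mentioned above, normalized mutual information (NMI) compares a detected partition against a known ground truth. The sketch below computes it with scikit-learn for made-up labels; on LFR-benchmark graphs the planted partition would play the role of the ground truth.

    # Hedged sketch: NMI between a detected partition and a planted (ground-truth) partition.
    from sklearn.metrics import normalized_mutual_info_score

    truth    = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]   # planted community of each node (hypothetical)
    detected = [0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 0]   # partition found by an algorithm (hypothetical)

    nmi = normalized_mutual_info_score(truth, detected)
    print(f"NMI = {nmi:.3f}")   # 1.0 would mean the planted partition was recovered exactly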

  15. The Mobile story: data-driven community efforts to raise graduation rates.

    PubMed

    Newell, Jeremiah; Akers, Carolyn

    2010-01-01

    Through sustained community organizing and strategic partnerships, the Mobile (Alabama) County Public School System is improving achievement and creating beat-the-odds schools that set and achieve high academic expectations despite the challenges of poverty and racial disparity. The authors chart how Mobile's Research Alliance for Multiple Pathways, funded through the U.S. Department of Labor's Multiple Pathways Blueprint Initiative, is identifying gaps in services throughout the community, analyzing the data about dropouts, benchmarking other communities, studying best practices, and mobilizing the community to expect and demand higher graduation rates. These activities are resulting in early identification of off-track students and coordination of school- and community-based reforms.

  16. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE PAGES

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...

    2018-03-26

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  17. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  18. A benchmark for vehicle detection on wide area motion imagery

    NASA Astrophysics Data System (ADS)

    Catrambone, Joseph; Amzovski, Ismail; Liang, Pengpeng; Blasch, Erik; Sheaff, Carolyn; Wang, Zhonghai; Chen, Genshe; Ling, Haibin

    2015-05-01

    Wide area motion imagery (WAMI) has been attracting an increased amount of research attention due to its large spatial and temporal coverage. An important application includes moving target analysis, where vehicle detection is often one of the first steps before advanced activity analysis. While there exist many vehicle detection algorithms, a thorough evaluation of them on WAMI data still remains a challenge, mainly due to the lack of an appropriate benchmark data set. In this paper, we address this need by presenting a new benchmark data set for vehicle detection in wide area motion imagery. The WAMI benchmark is based on the recently available Wright-Patterson Air Force Base (WPAFB09) dataset and the Temple Resolved Uncertainty Target History (TRUTH) associated target annotation. Trajectory annotations were provided in the original release of the WPAFB09 dataset, but detailed vehicle annotations were not available with the dataset. In addition, annotations of static vehicles, e.g., in parking lots, are also not identified in the original release. Addressing these issues, we re-annotated the whole dataset with detailed information for each vehicle, including not only a target's location, but also its pose and size. The annotated WAMI data set should be useful to the community as a common benchmark for comparing WAMI detection, tracking, and identification methods.

  19. Struever Bros. Eccles & Rouse: Mixed, Humid Climate Region 40+% Energy Savings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2009-08-14

    This case study describes the Overlook at Clipper Hill community homes in Baltimore, Maryland, which have been designed to perform 40+% better than the Building America benchmark house by combining aesthetics, energy performance, and sustainability.

  20. CALiPER Report 20.3: Robustness of LED PAR38 Lamps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poplawski, Michael E.; Royer, Michael P.; Brown, Charles C.

    2014-12-01

    Three samples of 40 of the Series 20 PAR38 lamps underwent multi-stress testing, whereby samples were subjected to increasing levels of simultaneous thermal, humidity, electrical, and vibrational stress. The results do not explicitly predict expected lifetime or reliability, but they can be compared with one another, as well as with benchmark conventional products, to assess the relative robustness of the product designs. On average, the 32 LED lamp models tested were substantially more robust than the conventional benchmark lamps. As with other performance attributes, however, there was great variability in the robustness and design maturity of the LED lamps. Several LED lamp samples failed within the first one or two levels of the ten-level stress plan, while all three samples of some lamp models completed all ten levels. One potential area of improvement is design maturity, given that more than 25% of the lamp models demonstrated a difference in failure level for the three samples that was greater than or equal to the maximum for the benchmarks. At the same time, the fact that nearly 75% of the lamp models exhibited better design maturity than the benchmarks is noteworthy, given the relative stage of development for the technology.

  1. A Standard-Setting Study to Establish College Success Criteria to Inform the SAT® College and Career Readiness Benchmark. Research Report 2012-3

    ERIC Educational Resources Information Center

    Kobrin, Jennifer L.; Patterson, Brian F.; Wiley, Andrew; Mattern, Krista D.

    2012-01-01

    In 2011, the College Board released its SAT college and career readiness benchmark, which represents the level of academic preparedness associated with a high likelihood of college success and completion. The goal of this study, which was conducted in 2008, was to establish college success criteria to inform the development of the benchmark. The…

  2. Human Health Benchmarks for Pesticides

    EPA Pesticide Factsheets

    Advanced testing methods now allow pesticides to be detected in water at very low levels. These small amounts of pesticides detected in drinking water or source water for drinking water do not necessarily indicate a health risk. The EPA has developed human health benchmarks for 363 pesticides to enable our partners to better determine whether the detection of a pesticide in drinking water or source waters for drinking water may indicate a potential health risk and to help them prioritize monitoring efforts. The table below includes benchmarks for acute (one-day) and chronic (lifetime) exposures for the most sensitive populations from exposure to pesticides that may be found in surface or ground water sources of drinking water. The table also includes benchmarks for 40 pesticides in drinking water that have the potential for cancer risk. The HHBP table includes pesticide active ingredients for which Health Advisories or enforceable National Primary Drinking Water Regulations (e.g., maximum contaminant levels) have not been developed.

  3. Interactive hazards education program for youth in a low SES community: a quasi-experimental pilot study.

    PubMed

    Webb, Michelle; Ronan, Kevin R

    2014-10-01

    A pilot study of an interactive hazards education program was carried out in Canberra (Australia), with direct input from youth participants. Effects were evaluated in relation to youths' interest in disasters, motivation to prepare, risk awareness, knowledge indicators, perceived preparedness levels, planning and practice for emergencies, and fear and anxiety indicators. Parents also provided ratings, including of actual home-based preparedness activities. Using a single group pretest-posttest with benchmarking design, a sample of 20 youths and their parents from a low SES community participated. Findings indicated beneficial changes on a number of indicators. Preparedness indicators increased significantly from pre- to posttest on both youth (p < 0.01) and parent ratings (p < 0.01). Parent ratings reflected an increase of just under six home-based preparedness activities. Youth knowledge about disaster mitigation also was seen to increase significantly (p < 0.001), increasing 39% from pretest levels. While personalized risk perceptions significantly increased (p < 0.01), anxiety and worry levels were seen either not to change (generalized anxiety, p > 0.05) or to reduce between pre- and posttest (hazards-specific fears, worry, and distress, ps ranged from p < 0.05 to < 0.001). In terms of predictors of preparedness, a number of variables were found to predict posttest preparedness levels, including information searching done by participants between education sessions. These pilot findings are the first to reflect quasi-experimental outcomes for a youth hazards education program carried out in a setting other than a school that focused on a sample of youth from a low SES community. © 2014 Society for Risk Analysis.

  4. Ecological Consistency of SSU rRNA-Based Operational Taxonomic Units at a Global Scale

    PubMed Central

    Schmidt, Thomas S. B.; Matias Rodrigues, João F.; von Mering, Christian

    2014-01-01

    Operational Taxonomic Units (OTUs), usually defined as clusters of similar 16S/18S rRNA sequences, are the most widely used basic diversity units in large-scale characterizations of microbial communities. However, it remains unclear how well the various proposed OTU clustering algorithms approximate ‘true’ microbial taxa. Here, we explore the ecological consistency of OTUs – based on the assumption that, like true microbial taxa, they should show measurable habitat preferences (niche conservatism). In a global and comprehensive survey of available microbial sequence data, we systematically parse sequence annotations to obtain broad ecological descriptions of sampling sites. Based on these, we observe that sequence-based microbial OTUs generally show high levels of ecological consistency. However, different OTU clustering methods result in marked differences in the strength of this signal. Assuming that ecological consistency can serve as an objective external benchmark for cluster quality, we conclude that hierarchical complete linkage clustering, which provided the most ecologically consistent partitions, should be the default choice for OTU clustering. To our knowledge, this is the first approach to assess cluster quality using an external, biologically meaningful parameter as a benchmark, on a global scale. PMID:24763141
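
    The preferred clustering step can be illustrated with a short sketch: hierarchical complete-linkage clustering of pairwise sequence dissimilarities, cut at a fixed threshold. The distance matrix below is a random placeholder and the 3% cutoff (roughly 97% identity) is a conventional assumption; in practice the distances would come from aligned 16S/18S rRNA sequences.

        import numpy as np
        from scipy.spatial.distance import squareform
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(0)
        n_seqs = 30
        d = rng.uniform(0.0, 0.2, size=(n_seqs, n_seqs))
        d = (d + d.T) / 2.0            # symmetrize the placeholder distances
        np.fill_diagonal(d, 0.0)

        condensed = squareform(d, checks=False)    # condensed distance vector
        Z = linkage(condensed, method="complete")  # complete-linkage dendrogram
        otu_ids = fcluster(Z, t=0.03, criterion="distance")
        print(f"{otu_ids.max()} OTUs at a 3% dissimilarity cutoff")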

  5. Fingerprinting sea-level variations in response to continental ice loss: a benchmark exercise

    NASA Astrophysics Data System (ADS)

    Barletta, Valentina R.; Spada, Giorgio; Riva, Riccardo E. M.; James, Thomas S.; Simon, Karen M.; van der Wal, Wouter; Martinec, Zdenek; Klemann, Volker; Olsson, Per-Anders; Hagedoorn, Jan; Stocchi, Paolo; Vermeersen, Bert

    2013-04-01

    Understanding the response of the Earth to the waxing and waning of ice sheets is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements to projections of future sea level trends in response to climate change. All the processes accompanying Glacial Isostatic Adjustment (GIA) can be described by solving the so-called Sea Level Equation (SLE), an integral equation that accounts for the interactions between the ice sheets, the solid Earth, and the oceans. Modern approaches to the SLE are based on various techniques that range from purely analytical formulations to fully numerical methods. Here we present the results of a benchmark exercise of independently developed codes designed to solve the SLE. The study involves predictions of current sea level changes due to present-day ice mass loss. In spite of the differences in the methods employed, the comparison shows that a significant number of GIA modellers can reproduce their sea-level computations to within 2% for well-defined, large-scale present-day ice mass changes. Smaller and more detailed loads need further, dedicated benchmarking and high-resolution computation. This study shows how the details of the implementation and the input specifications are an important, and often underappreciated, aspect. Hence this represents a step toward assessing the reliability of sea level projections obtained with benchmarked SLE codes.
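
    For readers unfamiliar with the SLE, one common schematic form is sketched below; notation differs between the benchmarked codes, and this summary is illustrative rather than the exact formulation used by any single code.

        \[
          \Delta S(\theta,\lambda,t) = \Delta N(\theta,\lambda,t) - \Delta U(\theta,\lambda,t) + c(t),
        \qquad
          c(t) = -\frac{\Delta m_{\mathrm{ice}}(t)}{\rho_w A_o}
                 - \frac{1}{A_o}\int_{\mathrm{oceans}} \bigl[\Delta N - \Delta U\bigr]\,\mathrm{d}A,
        \]
        where $\Delta S$ is the relative sea-level change, $\Delta N$ the geoid (sea-surface)
        perturbation, $\Delta U$ the radial displacement of the solid Earth,
        $\Delta m_{\mathrm{ice}}$ the change in grounded ice mass (negative for melting),
        $\rho_w$ the density of sea water, and $A_o$ the ocean area; the spatially uniform
        term $c(t)$ enforces conservation of water mass.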

  6. Correlation of Noncancer Benchmark Doses in Short- and Long-Term Rodent Bioassays.

    PubMed

    Kratchman, Jessica; Wang, Bing; Fox, John; Gray, George

    2018-05-01

    This study investigated whether, in the absence of chronic noncancer toxicity data, short-term noncancer toxicity data can be used to predict chronic toxicity effect levels by focusing on the dose-response relationship instead of a critical effect. Data from National Toxicology Program (NTP) technical reports have been extracted and modeled using the Environmental Protection Agency's Benchmark Dose Software. Best-fit, minimum benchmark dose (BMD), and benchmark dose lower limits (BMDLs) have been modeled for all NTP pathologist-identified significant nonneoplastic lesions, final mean body weight, and mean organ weight of 41 chemicals tested by NTP between 2000 and 2012. Models were then developed at the chemical level using orthogonal regression techniques to predict chronic (two-year) noncancer health effect levels from the results of the short-term (three-month) toxicity data. The findings indicate that short-term animal studies may reasonably provide a quantitative estimate of a chronic BMD or BMDL. This can allow for faster development of human health toxicity values for risk assessment for chemicals that lack chronic toxicity data. © 2017 Society for Risk Analysis.
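
    The chemical-level regression step can be sketched as an orthogonal (total least squares) fit on log10-transformed benchmark doses. The values below are randomly generated placeholders rather than NTP results, and the fitted relationship is purely illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        subchronic_bmd = 10 ** rng.uniform(-1, 3, size=41)               # mg/kg-day
        chronic_bmd = subchronic_bmd / 3 * 10 ** rng.normal(0, 0.3, 41)  # placeholder

        x = np.log10(subchronic_bmd)
        y = np.log10(chronic_bmd)

        # Total least squares: the first principal axis of the centered (x, y)
        # cloud minimizes orthogonal, not vertical, residuals.
        X = np.column_stack([x - x.mean(), y - y.mean()])
        _, _, vt = np.linalg.svd(X, full_matrices=False)
        slope = vt[0, 1] / vt[0, 0]
        intercept = y.mean() - slope * x.mean()
        print(f"log10(chronic BMD) ~ {slope:.2f} * log10(subchronic BMD) + {intercept:.2f}")

        # Predict a chronic BMD from a hypothetical 90-day BMD of 50 mg/kg-day.
        print(10 ** (slope * np.log10(50.0) + intercept))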

  7. Groundwater-quality data in the North San Francisco Bay Shallow Aquifer study unit, 2012: results from the California GAMA Program

    USGS Publications Warehouse

    Bennett, George L.; Fram, Miranda S.

    2014-01-01

    Results for constituents with non-regulatory benchmarks set for aesthetic concerns from the grid wells showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in 13 grid wells. Chloride was detected at a concentration greater than the SMCL-CA recommended benchmark of 250 mg/L in two grid wells. Sulfate concentrations greater than the SMCL-CA recommended benchmark of 250 mg/L were measured in two grid wells, and the concentration in one of these wells was also greater than the SMCL-CA upper benchmark of 500 mg/L. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 15 grid wells, and concentrations in 4 of these wells were also greater than the SMCL-CA upper benchmark of 1,000 mg/L.

  8. Adverse Outcome Pathway Network Analyses: Techniques and benchmarking the AOPwiki

    EPA Science Inventory

    Abstract: As the community of toxicological researchers, risk assessors, and risk managers adopt the adverse outcome pathway (AOP) paradigm for organizing toxicological knowledge, the number and diversity of adverse outcome pathways and AOP networks are continuing to grow. This ...

  9. A note on bound constraints handling for the IEEE CEC'05 benchmark function suite.

    PubMed

    Liao, Tianjun; Molina, Daniel; de Oca, Marco A Montes; Stützle, Thomas

    2014-01-01

    The benchmark functions and some of the algorithms proposed for the special session on real parameter optimization of the 2005 IEEE Congress on Evolutionary Computation (CEC'05) have played and still play an important role in the assessment of the state of the art in continuous optimization. In this article, we show that if bound constraints are not enforced for the final reported solutions, state-of-the-art algorithms produce infeasible best candidate solutions for the majority of functions of the IEEE CEC'05 benchmark function suite. This occurs even though the optima of the CEC'05 functions are within the specified bounds. This phenomenon has important implications on algorithm comparisons, and therefore on algorithm designs. This article's goal is to draw the attention of the community to the fact that some authors might have drawn wrong conclusions from experiments using the CEC'05 problems.
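
    The feasibility check at issue is simple to state in code. The sketch below tests whether a reported final solution lies inside the benchmark's box constraints and, as one possible remedy, clamps it back onto the feasible region; the bounds and the candidate vector are assumptions for illustration, since each CEC'05 function defines its own domain.

        import numpy as np

        LOWER, UPPER = -100.0, 100.0      # assumed search-domain bounds

        def is_feasible(x: np.ndarray) -> bool:
            """True if every coordinate lies inside the box constraints."""
            return bool(np.all((x >= LOWER) & (x <= UPPER)))

        def enforce_bounds(x: np.ndarray) -> np.ndarray:
            """Project an out-of-bounds candidate back onto the box."""
            return np.clip(x, LOWER, UPPER)

        x_best = np.array([99.7, 102.5, -100.4, 3.0])   # hypothetical reported optimum
        print(is_feasible(x_best))        # False: two coordinates escape the box
        print(enforce_bounds(x_best))     # clamped candidate, feasible by construction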

  10. Final Report of the NASA Office of Safety and Mission Assurance Agile Benchmarking Team

    NASA Technical Reports Server (NTRS)

    Wetherholt, Martha

    2016-01-01

    To ensure that the NASA Safety and Mission Assurance (SMA) community remains in a position to perform reliable Software Assurance (SA) on NASA's critical software (SW) systems as the software industry rapidly transitions from waterfall to Agile processes, Terry Wilcutt, Chief, Safety and Mission Assurance, Office of Safety and Mission Assurance (OSMA), established the Agile Benchmarking Team (ABT). The Team's tasks were to: 1. Research background literature on current Agile processes, 2. Perform benchmark activities with other organizations that are involved in software Agile processes to determine best practices, 3. Collect information on Agile-developed systems to enable improvements to the current NASA standards and processes to enhance their ability to perform reliable software assurance on NASA Agile-developed systems, 4. Suggest additional guidance and recommendations for updates to those standards and processes, as needed. The ABT's findings and recommendations for software management, engineering, and software assurance are addressed herein.

  11. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.D.; Kornreich, D.E.

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation as modified in the second year renewal application includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) single medium searchlight problem (SLP) and (b) two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) point source in arbitrary geometry, (b) single medium SLP, and (c) two-adjacent half-space SLP. Task 3 on code verification includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.

  12. Assessment of the monitoring and evaluation system for integrated community case management (ICCM) in Ethiopia: a comparison against global benchmark indicators.

    PubMed

    Mamo, Dereje; Hazel, Elizabeth; Lemma, Israel; Guenther, Tanya; Bekele, Abeba; Demeke, Berhanu

    2014-10-01

    Program managers require feasible, timely, reliable, and valid measures of iCCM implementation to identify problems and assess progress. The global iCCM Task Force developed benchmark indicators to guide implementers to develop or improve monitoring and evaluation (M&E) systems. To assess Ethiopia's iCCM M&E system by determining the availability and feasibility of the iCCM benchmark indicators. We conducted a desk review of iCCM policy documents, monitoring tools, survey reports, and other relevant documents; and key informant interviews with government and implementing partners involved in iCCM scale-up and M&E. Currently, Ethiopia collects data to inform most (70% [33/47]) iCCM benchmark indicators, and modest extra effort could boost this to 83% (39/47). Eight (17%) are not available given the current system. Most benchmark indicators that track coordination and policy, human resources, service delivery and referral, supervision, and quality assurance are available through the routine monitoring systems or periodic surveys. Indicators for supply chain management are less available due to limited consumption data and a weak link with treatment data. Little information is available on iCCM costs. Benchmark indicators can detail the status of iCCM implementation; however, some indicators may not fit country priorities, and others may be difficult to collect. The government of Ethiopia and partners should review and prioritize the benchmark indicators to determine which should be included in the routine M&E system, especially since iCCM data are being reviewed for addition to the HMIS. Moreover, the Health Extension Worker's reporting burden can be minimized by an integrated reporting approach.

  13. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 36 safety and 18 health compliance officers. After opportunity for public...

  14. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 23 safety and 14 health compliance officers. After opportunity for public...

  15. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 47 safety and 23 health compliance officers. After opportunity for public...

  16. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION..., in conjunction with OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 17 safety and 12 health compliance officers. After...

  17. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 22 safety and 14 health compliance officers. After opportunity for public...

  18. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 22 safety and 14 health compliance officers. After opportunity for public...

  19. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 6 safety and 2 health compliance officers. After opportunity for public...

  20. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 9 safety and 6 health compliance officers. After opportunity for public...

  1. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 38 safety and 21 health compliance officers. After opportunity for public...

  2. 29 CFR 1952.203 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 31 safety and 12 health compliance officers. After opportunity for public...

  3. 29 CFR 1952.203 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 31 safety and 12 health compliance officers. After opportunity for public...

  4. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 6 safety and 2 health compliance officers. After opportunity for public...

  5. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 38 safety and 21 health compliance officers. After opportunity for public...

  6. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION..., in conjunction with OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 17 safety and 12 health compliance officers. After...

  7. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 23 safety and 14 health compliance officers. After opportunity for public...

  8. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 47 safety and 23 health compliance officers. After opportunity for public...

  9. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 9 safety and 6 health compliance officers. After opportunity for public...

  10. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 36 safety and 18 health compliance officers. After opportunity for public...

  11. The Digital Library for Earth System Education: A Progress Report from the DLESE Program Center

    NASA Astrophysics Data System (ADS)

    Marlino, M. R.; Sumner, T. R.; Kelly, K. K.; Wright, M.

    2002-12-01

    DLESE is a community-owned and governed digital library offering easy access to high quality electronic resources about the Earth system at all educational levels. Currently in its third year of development and operation, DLESE resources are designed to support systemic educational reform, and include web-based teaching resources, tools, and services for the inclusion of data in classroom activities, as well as a "virtual community center" that supports community goals and growth. "Community-owned" and "community-governed" embody the singularity of DLESE through its unique participatory approach to both library building and governance. DLESE is guided by policy development vested in the DLESE Steering Committee, and informed by Standing Committees centered on Collections, Services, Technology, and Users, and community working groups covering a wide variety of interest areas. This presentation highlights both current and projected status of the library and opportunities for community engagement. It is specifically structured to engage community members in the design of the next version of the library release. The current Version 1.0 of the library consists of a web-accessible graphical user interface connected to a database of catalogued educational resources (approximately 3000); a metadata framework enabling resource characterization; a cataloging tool allowing community cataloging and indexing of materials; a search and discovery system allowing browsing based on topic, grade level, and resource type, and permitting keyword and controlled vocabulary-based searches; and a portal website supporting library use, community action, and DLESE partnerships. Future stages of library development will focus on enhanced community collaborative support; development of controlled vocabularies; collections building and community review systems; resource discovery integrating the National Science Education Standards and geography standards; Earth system science vocabulary; georeferenced discovery; and ultimately, AAAS Benchmarks. DLESE is being designed from the outset to support resource discovery across a diverse, federated network of holdings and collections, including the Alexandria Digital Library Earth Prototype (ADL/ADEPT), NASA education collections, the DLESE reviewed collection, and other community-held resources that have been cataloged and indexed as part of the overall DLESE collections.

  12. Tobacco companies' efforts to undermine ingredient disclosure: the Massachusetts benchmark study.

    PubMed

    Velicer, Clayton; Aguinaga-Bialous, Stella; Glantz, Stanton

    2016-09-01

    To assess the 'Massachusetts Benchmark Study' (MBS) that the tobacco companies presented to the Massachusetts Department of Public Health (MDPH) in 1999 in response to ingredient disclosure regulations in the state. This case study can inform future ingredient disclosure regulations, including implementation of Articles 9 and 10 of the WHO Framework Convention on Tobacco Control (FCTC). We analysed documents available at http://legacy.library.ucsf.edu to identify internal communications regarding the design and execution of the MBS and internal studies on the relationship between tar, nicotine and carbon monoxide and smoke constituents and reviewed publications that further evaluated data published as part of the MBS. The companies conducted extensive studies of cigarette design factors and ingredients that significantly impacted the levels of constituents. While this study asserted that by-brand emissions could be estimated reliably from published tar, nicotine, and carbon monoxide levels, the tobacco companies were well aware that factors beyond tar, nicotine and carbon monoxide influenced levels of constituents included in the study. This severely limited the potential usefulness of the MBS predictor equations. Despite promises to provide data that would allow regulators to predict constituent data for all brands on the market, the final MBS results offered no useful predictive information to inform regulators, the scientific community or consumers. When implementing FCTC Articles 9 and 10, regulatory agencies should demand detailed by-brand information on tobacco product constituents and toxin deliveries to users. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  13. Benchmarking NWP Kernels on Multi- and Many-core Processors

    NASA Astrophysics Data System (ADS)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc. (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
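
    The first goal, characterizing kernels by computational intensity and bandwidth pressure, can be illustrated with a small roofline-style estimate. The FLOP and byte counts and the hardware numbers below are assumptions for illustration, not measurements of WRF kernels or of any particular processor.

        def arithmetic_intensity(flops: float, bytes_moved: float) -> float:
            """FLOPs performed per byte of memory traffic."""
            return flops / bytes_moved

        def roofline_bound(ai: float, peak_gflops: float, bandwidth_gbs: float) -> float:
            """Attainable GFLOP/s: the lesser of the compute and bandwidth ceilings."""
            return min(peak_gflops, ai * bandwidth_gbs)

        # Hypothetical microphysics-like kernel: 180 FLOPs and 96 bytes per grid cell.
        ai = arithmetic_intensity(flops=180.0, bytes_moved=96.0)
        for name, peak, bw in [("multi-core CPU", 200.0, 25.0),
                               ("many-core accelerator", 1000.0, 150.0)]:
            bound = roofline_bound(ai, peak, bw)
            print(f"{name}: AI = {ai:.2f} FLOP/B, attainable ~ {bound:.0f} GFLOP/s")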

  14. A Statewide Collaboration: Ohio Level III Trauma Centers' Approach to the Development of a Benchmarking System.

    PubMed

    Lang, Carrie L; Simon, Diane; Kilgore, Jane

    The American College of Surgeons Committee on Trauma revised the Resources for Optimal Care of the Injured Patient to include the criteria for trauma centers to participate in a risk-adjusted benchmarking system. The Trauma Quality Improvement Program is currently the risk-adjusted benchmarking program sponsored by the American College of Surgeons, in which all trauma centers will be required to participate in early 2017. Prior to this, there were no risk-adjusted programs for Level III verified trauma centers. The Ohio Society of Trauma Nurse Leaders is a collaborative group made up of trauma program managers, coordinators, and other trauma leaders who meet 6 times a year. Within this group, a Level III Subcommittee was formed, initially to provide a place for the Level III centers to discuss issues specific to them. When the new requirement regarding risk adjustment became official, the subcommittee agreed to begin reporting simple data points, with the idea of moving to risk-adjusted reporting in the future.

  15. Vegetation composition and structure of southern coastal plain pine forests: An ecological comparison

    USGS Publications Warehouse

    Hedman, C.W.; Grace, S.L.; King, S.E.

    2000-01-01

    Longleaf pine (Pinus palustris) ecosystems are characterized by a diverse community of native groundcover species. Critics of plantation forestry claim that loblolly (Pinus taeda) and slash pine (Pinus elliottii) forests are devoid of native groundcover due to associated management practices. As a result of these practices, some believe that ecosystem functions characteristic of longleaf pine are lost under loblolly and slash pine plantation management. Our objective was to quantify and compare vegetation composition and structure of longleaf, loblolly, and slash pine forests of differing ages, management strategies, and land-use histories. Information from this study will further our understanding and lead to inferences about functional differences among pine cover types. Vegetation and environmental data were collected in 49 overstory plots across Southlands Experiment Forest in Bainbridge, GA. Nested plots, i.e. midstory, understory, and herbaceous, were replicated four times within each overstory plot. Over 400 species were identified. Herbaceous species richness was variable for all three pine cover types. Herbaceous richness for longleaf, slash, and loblolly pine averaged 15, 13, and 12 species per m2, respectively. Longleaf pine plots had significantly more (p < 0.029) herbaceous species and greater herbaceous cover (p < 0.001) than loblolly or slash pine plots. Longleaf and slash pine plots were otherwise similar in species richness and stand structure, both having lower overstory density, midstory density, and midstory cover than loblolly pine plots. Multivariate analyses provided additional perspectives on vegetation patterns. Ordination and classification procedures consistently placed herbaceous plots into two groups which we refer to as longleaf pine benchmark (34 plots) and non-benchmark (15 plots). Benchmark plots typically contained numerous herbaceous species characteristic of relic longleaf pine/wiregrass communities found in the area. Conversely, non-benchmark plots contained fewer species characteristic of relic longleaf pine/wiregrass communities and more ruderal species common to highly disturbed sites. The benchmark group included 12 naturally regenerated longleaf plots and 22 loblolly, slash, and longleaf pine plantation plots encompassing a broad range of silvicultural disturbances. Non-benchmark plots included eight afforested old-field plantation plots and seven cutover plantation plots. Regardless of overstory species, all afforested old fields were low either in native species richness or in abundance. Varying degrees of this groundcover condition were also found in some cutover plantation plots that were classified as non-benchmark. Environmental variables strongly influencing vegetation patterns included agricultural history and fire frequency. Results suggest that land-use history, particularly related to agriculture, has a greater influence on groundcover composition and structure in southern pine forests than more recent forest management activities or pine cover type. Additional research is needed to identify the potential for afforested old fields to recover native herbaceous species. In the interim, high-yield plantation management should initially target old-field sites which already support reduced numbers of groundcover species. Sites which have not been farmed in the past 50-60 years should be considered for longleaf pine restoration and multiple-use objectives, since they have the greatest potential for supporting diverse native vegetation. 

  16. Communication Strategy of Transboundary Air Pollution Findings in a US-Mexico Border XXI Program Project

    NASA Astrophysics Data System (ADS)

    Mukerjee, Shaibal

    2002-01-01

    From 1996 to 1997, the US Environmental Protection Agency (EPA) and the Texas Natural Resource Conservation Commission (TNRCC) conducted an air quality study known as the Lower Rio Grande Valley Transboundary Air Pollution Project (TAPP). The study was a US-Mexico Border XXI program project and was developed in response to local community requests on a need for more air quality measurements and concerns about the health impact of local air pollutants; this included concerns about emissions from border-dependent industries in Mexico, known as maquiladoras. The TAPP was a follow-up study to environmental monitoring done by EPA in this area in 1993 and incorporated scientific and community participation in development, review of results, and public presentation of findings. In spite of this, critical remarks were leveled by community activists against the study's preliminary "good news" findings regarding local air quality and the influence of transboundary air pollution. To resolve these criticisms and to refine the findings to address these concerns, analyses included comparisons of daily and near real-time measurements to TNRCC effects screening levels and data from other studies along with wind sector analyses. Reassessment of the data suggested that although regional source emissions occurred and outliers of elevated pollutant levels were found, movement of air pollution across the border did not appear to cause noticeable deterioration of air quality. In spite of limitations stated to the community, the TAPP was presented as establishing a benchmark to assess current and future transboundary air quality in the Valley. The study has application in Border XXI Program or other air quality studies where transboundary transport is a concern since it involved interagency coordination, public involvement, and communication of scientifically sound results for local environmental protection efforts.

  17. Communication strategy of transboundary air pollution findings in a US-Mexico Border XXI program project.

    PubMed

    Mukerjee, Shaibal

    2002-01-01

    From 1996 to 1997, the US Environmental Protection Agency (EPA) and the Texas Natural Resource Conservation Commission (TNRCC) conducted an air quality study known as the Lower Rio Grande Valley Transboundary Air Pollution Project (TAPP). The study was a US-Mexico Border XXI program project and was developed in response to local community requests on a need for more air quality measurements and concerns about the health impact of local air pollutants; this included concerns about emissions from border-dependent industries in Mexico, known as maquiladoras. The TAPP was a follow-up study to environmental monitoring done by EPA in this area in 1993 and incorporated scientific and community participation in development, review of results, and public presentation of findings. In spite of this, critical remarks were leveled by community activists against the study's preliminary "good news" findings regarding local air quality and the influence of transboundary air pollution. To resolve these criticisms and to refine the findings to address these concerns, analyses included comparisons of daily and near real-time measurements to TNRCC effects screening levels and data from other studies along with wind sector analyses. Reassessment of the data suggested that although regional source emissions occurred and outliers of elevated pollutant levels were found, movement of air pollution across the border did not appear to cause noticeable deterioration of air quality. In spite of limitations stated to the community, the TAPP was presented as establishing a benchmark to assess current and future transboundary air quality in the Valley. The study has application in Border XXI Program or other air quality studies where transboundary transport is a concern since it involved interagency coordination, public involvement, and communication of scientifically sound results for local environmental protection efforts.

  18. Promoting Child Safety, Permanence, and Well-Being through Safe and Strong Families, Supportive Communities, and Effective Systems. Policy Matters: Setting and Measuring Benchmarks for State Policies. A Discussion Paper for the "Policy Matters" Project

    ERIC Educational Resources Information Center

    Center for the Study of Social Policy, 2009

    2009-01-01

    The "Policy Matters" project provides coherent, comprehensive information regarding the strength and adequacy of state policies affecting children, families, and communities. The project seeks to establish consensus among policy experts and state leaders regarding the mix of policies believed to offer the best opportunity for improving…

  19. 29 CFR 1952.103 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... of 28 health compliance officers. Oregon elected to retain the safety benchmark level established in... State operating an approved State plan. In October 1992, Oregon completed, in conjunction with OSHA, a...

  20. 29 CFR 1952.103 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... of 28 health compliance officers. Oregon elected to retain the safety benchmark level established in... State operating an approved State plan. In October 1992, Oregon completed, in conjunction with OSHA, a...

  1. Brand Management in US Business Schools: Can Yale Learn from Harvard?

    ERIC Educational Resources Information Center

    Heyes, Anthony G.; Liston-Heyes, Catherine

    2004-01-01

    Data Envelopment Analysis (DEA) is used to evaluate the performance of top US business schools in maintaining reputation among members of the academic and business communities. The authors generate efficiency measures and identify peers against which underperforming schools should benchmark.

  2. State-and-transition models for heterogeneous landscapes: A strategy for development and application

    USDA-ARS?s Scientific Manuscript database

    Interpretation of assessment and monitoring data requires information about reference conditions and ecological resilience. Reference conditions used as benchmarks can be specified via potential-based land classifications (e.g., ecological sites) that describe the plant communities potentially obser...

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackillop, William J., E-mail: william.mackillop@krcc.on.ca; Department of Public Health Sciences, Queen's University, Kingston, Ontario; Department of Oncology, Queen's University, Kingston, Ontario

    Purpose: Palliative radiation therapy (PRT) benefits many patients with incurable cancer, but the overall need for PRT is unknown. Our primary objective was to estimate the appropriate rate of use of PRT in Ontario. Methods and Materials: The Ontario Cancer Registry identified patients who died of cancer in Ontario between 2006 and 2010. Comprehensive RT records were linked to the registry. Multivariate analysis identified social and health system-related factors affecting the use of PRT, enabling us to define a benchmark population of patients with unimpeded access to PRT. The proportion of cases treated at any time (PRT_lifetime), the proportion of cases treated in the last 2 years of life (PRT_2y), and the number of courses of PRT per thousand cancer deaths were measured in the benchmark population. These benchmarks were standardized to the characteristics of the overall population, and province-wide PRT rates were then compared to benchmarks. Results: Cases diagnosed at hospitals with no RT on-site, residents of poorer communities, and those who lived farther from an RT center were significantly less likely than others to receive PRT. However, availability of RT at the diagnosing hospital was the dominant factor. Neither socioeconomic status nor distance from home to nearest RT center had a significant effect on the use of PRT in patients diagnosed at a hospital with RT facilities. The benchmark population therefore consisted of patients diagnosed at a hospital with RT facilities. The standardized benchmark for PRT_lifetime was 33.9%, and the corresponding province-wide rate was 28.5%. The standardized benchmark for PRT_2y was 32.4%, and the corresponding province-wide rate was 27.0%. The standardized benchmark for the number of courses of PRT per thousand cancer deaths was 652, and the corresponding province-wide rate was 542. Conclusions: Approximately one-third of patients who die of cancer in Ontario need PRT, but many of them are never treated.
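
    The standardization described in the Methods can be sketched as a direct, case-mix-weighted average: the benchmark rate is computed within strata and re-weighted by the overall population's composition before comparison with the province-wide rate. All numbers below are invented placeholders, not Ontario Cancer Registry values.

        # stratum: (benchmark PRT rate, share of all cancer deaths province-wide)
        strata = {
            "lung":     (0.45, 0.30),
            "breast":   (0.35, 0.15),
            "prostate": (0.30, 0.15),
            "other":    (0.25, 0.40),
        }

        standardized_benchmark = sum(rate * weight for rate, weight in strata.values())
        observed_rate = 0.285             # illustrative province-wide rate

        print(f"standardized benchmark: {standardized_benchmark:.1%}")
        print(f"observed rate:          {observed_rate:.1%}")
        print(f"shortfall:              {standardized_benchmark - observed_rate:.1%}")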

  4. Issues to consider in the derivation of water quality benchmarks for the protection of aquatic life.

    PubMed

    Schneider, Uwe

    2014-01-01

    While water quality benchmarks for the protection of aquatic life have been in use in some jurisdictions for several decades (USA, Canada, several European countries), more and more countries are now setting up their own national water quality benchmark development programs. In doing so, they either adopt an existing method from another jurisdiction, update on an existing approach, or develop their own new derivation method. Each approach has its own advantages and disadvantages, and many issues have to be addressed when setting up a water quality benchmark development program or when deriving a water quality benchmark. Each of these tasks requires a special expertise. They may seem simple, but are complex in their details. The intention of this paper was to provide some guidance for this process of water quality benchmark development on the program level, for the derivation methodology development, and in the actual benchmark derivation step, as well as to point out some issues (notably the inclusion of adapted populations and cryptic species and points to consider in the use of the species sensitivity distribution approach) and future opportunities (an international data repository and international collaboration in water quality benchmark development).
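
    One of the issues flagged, use of the species sensitivity distribution (SSD), can be sketched in a few lines: fit a log-normal distribution to species-level toxicity values and read off the 5th percentile (HC5) as a candidate benchmark. The toxicity values below are invented placeholders; real derivations add assessment factors, goodness-of-fit checks, and the treatment of adapted populations and cryptic species discussed above.

        import numpy as np
        from scipy import stats

        # Hypothetical chronic no-effect concentrations for eight species (ug/L).
        noec = np.array([12.0, 35.0, 8.5, 150.0, 60.0, 22.0, 95.0, 40.0])

        log_noec = np.log10(noec)
        mu, sigma = log_noec.mean(), log_noec.std(ddof=1)

        # HC5: the concentration expected to leave 95% of species unaffected.
        hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
        print(f"HC5 = {hc5:.1f} ug/L")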

  5. Parallel processes: using motivational interviewing as an implementation coaching strategy.

    PubMed

    Hettema, Jennifer E; Ernst, Denise; Williams, Jessica Roberts; Miller, Kristin J

    2014-07-01

    In addition to its clinical efficacy as a communication style for strengthening motivation and commitment to change, motivational interviewing (MI) has been hypothesized to be a potential tool for facilitating evidence-based practice adoption decisions. This paper reports on the rationale and content of MI-based implementation coaching Webinars that, as part of a larger active dissemination strategy, were found to be more effective than passive dissemination strategies at promoting adoption decisions among behavioral health and health providers and administrators. The Motivational Interviewing Treatment Integrity scale (MITI 3.1.1) was used to rate coaching Webinars from 17 community behavioral health organizations and 17 community health centers. The MITI coding system was found to be applicable to the coaching Webinars, and raters achieved high levels of agreement on global and behavior count measurements of fidelity to MI. Results revealed that implementation coaches maintained fidelity to the MI model, exceeding competency benchmarks for almost all measures. Findings suggest that it is feasible to implement MI as a coaching tool.

  6. New Activities of the U.S. National Tsunami Hazard Mitigation Program, Mapping and Modeling Subcommittee

    NASA Astrophysics Data System (ADS)

    Wilson, R. I.; Eble, M. C.

    2013-12-01

    The U.S. National Tsunami Hazard Mitigation Program (NTHMP) comprises representatives from coastal states and federal agencies who, under the guidance of NOAA, work together to develop protocols and products to help communities prepare for and mitigate tsunami hazards. Within the NTHMP are several subcommittees responsible for complementary aspects of tsunami assessment, mitigation, education, warning, and response. The Mapping and Modeling Subcommittee (MMS) comprises state and federal scientists who specialize in tsunami source characterization, numerical tsunami modeling, inundation map production, and warning forecasting. Until September 2012, much of the work of the MMS was authorized through the Tsunami Warning and Education Act, an Act that has since expired but the spirit of which is being adhered to in parallel with reauthorization efforts. Over the past several years, the MMS has developed guidance and best practices for states and territories to produce accurate and consistent tsunami inundation maps for community-level evacuation planning, and has conducted benchmarking of numerical inundation models. Recent tsunami events have highlighted the need for other types of tsunami hazard analyses and products for improving evacuation planning, vertical evacuation, maritime planning, land-use planning, building construction, and warning forecasts. As the program responsible for producing accurate and consistent tsunami products nationally, the NTHMP-MMS is initiating a multi-year plan to accomplish the following: 1) Create and build on existing demonstration projects that explore new tsunami hazard analysis techniques and products, such as maps identifying areas of strong currents and potential damage within harbors as well as probabilistic tsunami hazard analysis for land-use planning. 2) Develop benchmarks for validating new numerical modeling techniques related to current velocities and landslide sources. 3) Generate guidance and protocols for the production and use of new tsunami hazard analysis products. 4) Identify multistate collaborations and funding partners interested in these new products. Application of these new products will improve the overall safety and resilience of coastal communities exposed to tsunami hazards.

  7. A Simplified Approach for the Rapid Generation of Transient Heat-Shield Environments

    NASA Technical Reports Server (NTRS)

    Wurster, Kathryn E.; Zoby, E. Vincent; Mills, Janelle C.; Kamhawi, Hilmi

    2007-01-01

    A simplified approach has been developed whereby transient entry heating environments are reliably predicted based upon a limited set of benchmark radiative and convective solutions. Heating, pressure, and shear-stress levels, non-dimensionalized by an appropriate parameter at each benchmark condition, are applied throughout the entry profile. This approach was shown to be valid based on the observation that the fully catalytic, laminar distributions examined were relatively insensitive to altitude as well as velocity throughout the regime of significant heating. In order to establish a best prediction by which to judge the results that can be obtained using a very limited benchmark set, predictions based on a series of benchmark cases along a trajectory are used. Solutions which rely only on the limited benchmark set, ideally in the neighborhood of peak heating, are compared against the resultant transient heating rates and total heat loads from the best prediction. Predictions based on two or fewer benchmark cases at or near the trajectory peak heating condition yielded results within 5-10 percent of the best predictions. Thus, the method provides transient heating environments over the heat-shield face with sufficient resolution and accuracy for thermal protection system design and also offers a significant capability to perform rapid trade studies such as the effect of different trajectories, atmospheres, or trim angle of attack, on convective and radiative heating rates and loads, pressure, and shear-stress levels.
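
    The sketch below illustrates the scaling idea, assuming a Sutton-Graves-type stagnation-point correlation as the normalizing parameter; the constant, nose radius, benchmark distribution, and trajectory points are all illustrative values, not those of the report.

```python
# Sketch: scale a non-dimensionalized benchmark heating distribution by a
# stagnation-point correlation evaluated along the trajectory. The constant is
# approximate; nose radius, distribution, and trajectory points are invented.
import numpy as np

K_SG = 1.7415e-4   # approximate Earth-entry constant; q in W/m^2 with SI inputs
NOSE_RADIUS = 1.0  # m (assumed)

def stagnation_heating(rho, velocity):
    """Convective stagnation-point heating, Sutton-Graves-type correlation (W/m^2)."""
    return K_SG * np.sqrt(rho / NOSE_RADIUS) * velocity ** 3

# Benchmark solution: heating over the heat-shield face, normalized by its own
# stagnation value (placeholder shape).
benchmark_distribution = np.array([1.00, 0.92, 0.78, 0.55, 0.30])

# Trajectory points (time s, density kg/m^3, velocity m/s), illustrative only.
trajectory = [(0.0, 1e-5, 7500.0), (30.0, 3e-4, 7200.0), (60.0, 2e-3, 6000.0)]

for t, rho, vel in trajectory:
    q_stag = stagnation_heating(rho, vel)
    q_face = q_stag * benchmark_distribution   # transient distribution at time t
    print(f"t={t:5.1f} s  q_stag={q_stag:10.0f} W/m^2  "
          f"face max/min: {q_face.max():.0f}/{q_face.min():.0f} W/m^2")
```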

  8. 29 CFR 1952.263 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... State operating an approved State plan. In 1992, Michigan completed, in conjunction with OSHA, a reassessment of the levels initially established in 1980 and proposed revised benchmarks of 56 safety and 45...

  9. 29 CFR 1952.263 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... State operating an approved State plan. In 1992, Michigan completed, in conjunction with OSHA, a reassessment of the levels initially established in 1980 and proposed revised benchmarks of 56 safety and 45...

  10. 29 CFR 1952.363 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... State operating an approved State plan. In May 1992, New Mexico completed, in conjunction with OSHA, a reassessment of the staffing levels initially established in 1980 and proposed revised benchmarks of 7 safety...

  11. 29 CFR 1952.363 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... State operating an approved State plan. In May 1992, New Mexico completed, in conjunction with OSHA, a reassessment of the staffing levels initially established in 1980 and proposed revised benchmarks of 7 safety...

  12. GeneNetWeaver: in silico benchmark generation and performance profiling of network inference methods.

    PubMed

    Schaffter, Thomas; Marbach, Daniel; Floreano, Dario

    2011-08-15

    Over the last decade, numerous methods have been developed for inference of regulatory networks from gene expression data. However, accurate and systematic evaluation of these methods is hampered by the difficulty of constructing adequate benchmarks and the lack of tools for a differentiated analysis of network predictions on such benchmarks. Here, we describe a novel and comprehensive method for in silico benchmark generation and performance profiling of network inference methods available to the community as open-source software called GeneNetWeaver (GNW). In addition to the generation of detailed dynamical models of gene regulatory networks to be used as benchmarks, GNW provides a network motif analysis that reveals systematic prediction errors, thereby indicating potential ways of improving inference methods. The accuracy of network inference methods is evaluated using standard metrics such as precision-recall and receiver operating characteristic curves. We show how GNW can be used to assess the performance and identify the strengths and weaknesses of six inference methods. Furthermore, we used GNW to provide the international Dialogue for Reverse Engineering Assessments and Methods (DREAM) competition with three network inference challenges (DREAM3, DREAM4 and DREAM5). GNW is available at http://gnw.sourceforge.net along with its Java source code, user manual and supporting data. Supplementary data are available at Bioinformatics online.
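
    As a minimal illustration of the standard metrics mentioned above, the sketch below scores a ranked list of predicted regulatory edges against a hypothetical gold-standard network with scikit-learn; the edges and confidence scores are invented.

```python
# Minimal sketch: precision-recall and ROC metrics for ranked edge predictions
# against a gold-standard network. Edge lists and scores are placeholders.
from sklearn.metrics import average_precision_score, roc_auc_score

gold_edges = {("G1", "G2"), ("G2", "G3"), ("G1", "G4")}
candidate_edges = [("G1", "G2"), ("G2", "G3"), ("G3", "G4"), ("G1", "G4"), ("G2", "G4")]
scores = [0.95, 0.80, 0.70, 0.40, 0.10]   # inference confidence per candidate edge

labels = [1 if e in gold_edges else 0 for e in candidate_edges]
print("AUPR :", average_precision_score(labels, scores))
print("AUROC:", roc_auc_score(labels, scores))
```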

  13. Cloud-Based Evaluation of Anatomical Structure Segmentation and Landmark Detection Algorithms: VISCERAL Anatomy Benchmarks.

    PubMed

    Jimenez-Del-Toro, Oscar; Muller, Henning; Krenn, Markus; Gruenberg, Katharina; Taha, Abdel Aziz; Winterstein, Marianne; Eggel, Ivan; Foncubierta-Rodriguez, Antonio; Goksel, Orcun; Jakab, Andras; Kontokotsios, Georgios; Langs, Georg; Menze, Bjoern H; Salas Fernandez, Tomas; Schaer, Roger; Walleyo, Anna; Weber, Marc-Andre; Dicente Cid, Yashin; Gass, Tobias; Heinrich, Mattias; Jia, Fucang; Kahl, Fredrik; Kechichian, Razmig; Mai, Dominic; Spanier, Assaf B; Vincent, Graham; Wang, Chunliang; Wyeth, Daniel; Hanbury, Allan

    2016-11-01

    Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease. Automatic tools can help with parts of this otherwise manual assessment. A cloud-based evaluation framework is presented in this paper including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the cloud, where participants have access only to the training data; the benchmark administrators then run the algorithms privately on an unseen common test set to compare their performance objectively. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and the Silver Corpus, generated by fusing the participant algorithms' outputs on a larger set of non-manually-annotated medical images, are available to the research community.
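
    The sketch below shows one overlap metric commonly used in segmentation benchmarks of this kind, the Dice coefficient, computed on toy label volumes with NumPy; the label values and volumes are illustrative, not VISCERAL data.

```python
# Sketch: per-structure Dice coefficient between an automatic segmentation and
# a reference annotation. Array shapes and label values are illustrative.
import numpy as np

def dice(seg, ref, label):
    seg_mask, ref_mask = (seg == label), (ref == label)
    denom = seg_mask.sum() + ref_mask.sum()
    return 2.0 * np.logical_and(seg_mask, ref_mask).sum() / denom if denom else 1.0

reference = np.zeros((4, 4, 4), dtype=int)
reference[1:3, 1:3, 1:3] = 7          # hypothetical organ label
automatic = np.zeros_like(reference)
automatic[1:3, 1:4, 1:3] = 7          # slightly over-segmented result

print(f"Dice for label 7: {dice(automatic, reference, 7):.3f}")
```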

  14. Using Spoken Language Benchmarks to Characterize the Expressive Language Skills of Young Children With Autism Spectrum Disorders

    PubMed Central

    Weismer, Susan Ellis

    2015-01-01

    Purpose Spoken language benchmarks proposed by Tager-Flusberg et al. (2009) were used to characterize communication profiles of toddlers with autism spectrum disorders and to investigate if there were differences in variables hypothesized to influence language development at different benchmark levels. Method The communication abilities of a large sample of toddlers with autism spectrum disorders (N = 105) were characterized in terms of spoken language benchmarks. The toddlers were grouped according to these benchmarks to investigate whether there were differences in selected variables across benchmark groups at a mean age of 2.5 years. Results The majority of children in the sample presented with uneven communication profiles with relative strengths in phonology and significant weaknesses in pragmatics. When children were grouped according to one expressive language domain, across-group differences were observed in response to joint attention and gestures but not cognition or restricted and repetitive behaviors. Conclusion The spoken language benchmarks are useful for characterizing early communication profiles and investigating features that influence expressive language growth. PMID:26254475

  15. Anthropogenic organic compounds in source water of nine community water systems that withdraw from streams, 2002-05

    USGS Publications Warehouse

    Kingsbury, James A.; Delzer, Gregory C.; Hopple, Jessica A.

    2008-01-01

    Source water, herein defined as stream water collected at a water-system intake prior to water treatment, was sampled at nine community water systems, ranging in size from a system serving about 3,000 people to one that serves about 2 million people. As many as 17 source-water samples were collected at each site over about a 12-month period between 2002 and 2004 for analysis of 258 anthropogenic organic compounds. Most of these compounds are unregulated in drinking water, and the compounds analyzed include pesticides and selected pesticide degradates, gasoline hydrocarbons, personal-care and domestic-use compounds, and solvents. The laboratory analytical methods used in this study have relatively low detection levels - commonly 100 to 1,000 times lower than State and Federal standards and guidelines for protecting water quality. Detections, therefore, do not necessarily indicate a concern to human health but rather help to identify emerging issues and to track changes in occurrence and concentrations over time. About one-half (134) of the compounds were detected at least once in source-water samples. Forty-seven compounds were detected commonly (in 10 percent or more of the samples), and six compounds (chloroform, atrazine, simazine, metolachlor, deethylatrazine, and hexahydrohexamethylcyclopentabenzopyran (HHCB)) were detected in more than one-half of the samples. Chloroform was the most commonly detected compound - in every sample (year round) at five sites. Findings for chloroform and the fragrances HHCB and acetyl hexamethyl tetrahydronaphthalene (AHTN) indicate an association between occurrence and the presence of large upstream wastewater discharges in the watersheds. The herbicides atrazine, simazine, and metolachlor also were among the most commonly detected compounds. Degradates of these herbicides, as well as those of a few other commonly occurring herbicides, generally were detected at concentrations similar to or greater than concentrations of the parent compound. Samples typically contained mixtures of two or more compounds. The total number of compounds and their total concentration in samples generally increased with the amount of urban and agricultural land use in a watershed. Annual mean concentrations of all compounds were less than human-health benchmarks. Single-sample concentrations of anthropogenic organic compounds in source water generally were less than 0.1 microgram per liter and less than established human-health benchmarks. Human-health benchmarks used for comparison were U.S. Environmental Protection Agency (USEPA) Maximum Contaminant Levels (MCLs) for regulated compounds and U.S. Geological Survey Health-Based Screening Levels for unregulated compounds. About one-half of all detected compounds do not have human-health benchmarks or adequate toxicity information for evaluating results in a human-health context. During a second sampling phase (2004-05), source water and finished water (water that has passed through all the treatment processes but prior to distribution) were sampled at eight of the nine community water systems. Water-treatment processes differ among the systems. Specifically, treatment at five of the systems is conventional, typically including steps of coagulation, flocculation, sedimentation, filtration, and disinfection.
One water system uses slow sand filtration and disinfection, a second system uses ozone as a preliminary treatment step to conventional treatment, and a third system is a direct filtration treatment plant that uses many of the steps employed in conventional treatment. Most of these treatment steps are not designed specifically to remove the compounds monitored in this study. About two-thirds of the compounds detected commonly in source water were detected at similar frequencies in finished water. Although the water-treatment steps differ somewhat among the eight water systems, the amount of change in concentration of the compounds from source- to finish
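
    A minimal sketch of the benchmark comparison described above: each measured concentration is divided by its human-health benchmark to form a benchmark quotient. The concentrations and benchmark values below are placeholders, not the study's data.

```python
# Sketch: benchmark quotients (measured concentration / human-health benchmark).
# All values are illustrative placeholders; the 0.1 screening cutoff is one
# commonly used convention, not a regulatory limit.
measurements_ug_per_L = {"atrazine": 0.05, "simazine": 0.02, "chloroform": 0.9}
benchmarks_ug_per_L   = {"atrazine": 3.0, "simazine": 4.0, "chloroform": 80.0}

for compound, conc in measurements_ug_per_L.items():
    quotient = conc / benchmarks_ug_per_L[compound]
    flag = "screening-level concern" if quotient >= 0.1 else "well below benchmark"
    print(f"{compound:>10}: quotient = {quotient:.4f} ({flag})")
```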

  16. Can Youth Sport Build Character?

    ERIC Educational Resources Information Center

    Shields, David Light; Bredemeier, Brenda Light; Power, F. Clark

    2001-01-01

    Participation and competition in some sports are associated with lower stages of moral reasoning. Coaches can foster moral development by starting with the right mental model, holding benchmark meetings about team values, setting goals for physical and character skills, making time for guided discussion sessions, building community, modeling…

  17. Active transportation measurement and benchmarking development : New Orleans pedestrian and bicycle count report, 2010-2011.

    DOT National Transportation Integrated Search

    2012-01-01

    Over the last decade, there has been a surge in bicycle and pedestrian use in communities that have invested in active transportation infrastructure and programming. While these increases show potentially promising trends, many of the cities that hav...

  18. Active transportation measurement and benchmarking development : New Orleans state of active transportation report 2010.

    DOT National Transportation Integrated Search

    2012-01-01

    Over the last decade, there has been a surge in bicycle and pedestrian use in communities that have invested in active transportation infrastruc-ture and programming. While these increases show potentially promising trends, many of the cities that ha...

  19. Quality Applications to the Classroom of Tomorrow.

    ERIC Educational Resources Information Center

    Branson, Robert K.; Buckner, Terrelle

    1995-01-01

    Discusses the concept of quality in relation to educational programs. Highlights include quality as a process rather than as excellence; education's relationship to the community and to business and industry; the need for a mission statement, including desired outcomes; horizontal and vertical integration; and benchmarking. (LRW)

  20. A proposed aquatic plant community biotic index for Wisconsin lakes

    USGS Publications Warehouse

    Nichols, S.; Weber, S.; Shaw, B.

    2000-01-01

    The Aquatic Macrophyte Community Index (AMCI) is a multipurpose tool developed to assess the biological quality of aquatic plant communities in lakes. It can be used to specifically analyze aquatic plant communities or as part of a multimetric system to assess overall lake quality for regulatory, planning, management, educational, or research purposes. The components of the index are maximum depth of plant growth; percentage of the littoral zone vegetated; Simpson's diversity index; the relative frequencies of submersed, sensitive, and exotic species; and taxa number. Each parameter was scaled based on data distributions from a statewide database, and scaled values were totaled for the AMCI value. AMCI values were grouped and tested by ecoregion and lake type (natural lakes and impoundments) to define quality on a regional basis. This analysis suggested that aquatic plant communities are divided into four groups: (1) Northern Lakes and Forests lakes and impoundments, (2) North-Central Hardwood Forests lakes and impoundments, (3) Southeastern Wisconsin Till Plains lakes, and (4) Southeastern Wisconsin Till Plains impoundments, Driftless Area Lakes, and Mississippi River Backwater lakes. AMCI values decline from group 1 to group 4 and reflect general water quality and human use trends in Wisconsin. The upper quartile of AMCI values in any region are the highest quality or benchmark plant communities. The interquartile range consists of normally impacted communities for the region and the lower quartile contains severely impacted or degraded plant communities. When AMCI values were applied to case studies, the values reflected known impacts to the lakes. However, quality criteria cannot be used uncritically, especially in lakes that initially have low nutrient levels.
In Wisconsin, the Aquatic Macrophyte Community Index (AMCI) was developed and used to define the quality of aquatic macrophyte communities in northern Wisconsin flowages. In this study, the AMCI concept was expanded to lakes and impoundments on a statewide basis. The parameters selected were the maximum depth of plant growth, percentage of littoral area vegetated, Simpson's diversity index, relative frequen
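
    The sketch below illustrates the general construction of such a multimetric index: raw metrics are scaled against distribution breakpoints, summed, and the total classified by regional quartiles. The breakpoints, quartile cutoffs, and the subset of metrics used are invented placeholders, not the published AMCI scaling.

```python
# Sketch of a multimetric index: scale each metric against distribution
# breakpoints, sum the scaled scores, and classify by regional quartiles.
# Breakpoints, scores, and cutoffs are invented placeholders.
import numpy as np

def scale_metric(value, breakpoints, scores=(1, 3, 5, 7, 9)):
    """Map a raw metric value to a scaled score using distribution breakpoints."""
    return scores[int(np.searchsorted(breakpoints, value))]

lake_metrics = {                                    # value, breakpoints
    "max_depth_of_growth_m":      (4.5,  [1, 2, 4, 6]),
    "percent_littoral_vegetated": (60,   [20, 40, 60, 80]),
    "simpsons_diversity":         (0.85, [0.5, 0.7, 0.8, 0.9]),
    "relative_freq_submersed":    (55,   [20, 40, 60, 80]),
    "taxa_number":                (18,   [5, 10, 20, 30]),
}

index_value = sum(scale_metric(v, b) for v, b in lake_metrics.values())
lower_q, upper_q = 25, 35                           # hypothetical regional quartiles
if index_value > upper_q:
    quality = "benchmark (highest-quality) community"
elif index_value >= lower_q:
    quality = "normally impacted community"
else:
    quality = "severely impacted community"
print(f"index = {index_value}: {quality}")
```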

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    The systems resilience research community has developed methods to manually insert additional source-program level assertions to trap errors, and also devised tools to conduct fault injection studies for scalar program codes. In this work, we contribute the first vector-oriented LLVM-level fault injector, VULFI, to help study the effects of faults in vector architectures that are of growing importance, especially for vectorizing loops. Using VULFI, we conduct a resiliency study of nine real-world vector benchmarks using Intel's AVX and SSE extensions as the target vector instruction sets, and offer the first reported understanding of how faults affect vector instruction sets. We take this work further toward automating the insertion of resilience assertions during compilation. This is based on our observation that during intermediate (e.g., LLVM-level) code generation to handle full and partial vectorization, modern compilers exploit (and explicate in their code-documentation) critical invariants. These invariants are turned into error-checking code. We confirm the efficacy of these automatically inserted low-overhead error detectors for vectorized for-loops.
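
    The sketch below illustrates the basic fault-injection idea in a much simpler setting than VULFI (which operates at the LLVM level): a single bit is flipped in the output of a vectorized computation and a range-invariant assertion serves as a low-overhead detector. Everything here is a toy stand-in, not the authors' tool.

```python
# Toy fault-injection sketch: flip one bit in a vectorized result and check
# whether a simple range invariant catches the corruption.
import random
import struct
import numpy as np

def flip_random_bit(x):
    """Flip one random bit in a 64-bit float (single-bit soft-error model)."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    bits ^= 1 << random.randrange(64)
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

data = np.linspace(0.0, 1.0, 16)
result = np.sqrt(data)                      # vectorized "loop" under test

idx = random.randrange(result.size)
result[idx] = flip_random_bit(result[idx])  # inject the fault

# Invariant-style detector: sqrt of values in [0, 1] must stay finite and in [0, 1].
corrupted = ~((result >= 0.0) & (result <= 1.0)) | ~np.isfinite(result)
print("fault detected" if corrupted.any() else "fault escaped the detector")
```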

  2. Benchmark concentrations for methyl mercury obtained from the 9-year follow-up of the Seychelles Child Development Study.

    PubMed

    van Wijngaarden, Edwin; Beck, Christopher; Shamlaye, Conrad F; Cernichiari, Elsa; Davidson, Philip W; Myers, Gary J; Clarkson, Thomas W

    2006-09-01

    Methyl mercury (MeHg) is highly toxic to the developing nervous system. Human exposure is mainly from fish consumption since small amounts are present in all fish. Findings of developmental neurotoxicity following high-level prenatal exposure to MeHg raised the question of whether children whose mothers consumed fish contaminated with background levels during pregnancy are at an increased risk of impaired neurological function. Benchmark doses determined from studies in New Zealand, the Faroe Islands, and the Seychelles indicate that a level of 4-25 parts per million (ppm) measured in maternal hair may carry a risk to the infant. However, there are numerous sources of uncertainty that could affect the derivation of benchmark doses, and it is crucial to continue to investigate the most appropriate derivation of safe consumption levels. Earlier, we published the findings from benchmark analyses applied to the data collected on the Seychelles main cohort at the 66-month follow-up period. Here, we expand on the main cohort analyses by determining the benchmark doses (BMD) of MeHg level in maternal hair based on 643 Seychellois children for whom 26 different neurobehavioral endpoints were measured at 9 years of age. Dose-response models applied to these continuous endpoints incorporated a variety of covariates and included the k-power model, the Weibull model, and the logistic model. The average 95% lower confidence limit of the BMD (BMDL) across all 26 endpoints varied from 20.1 ppm (range=17.2-22.5) for the logistic model to 20.4 ppm (range=17.9-23.0) for the k-power model. These estimates are somewhat lower than those obtained after 66 months of follow-up. The Seychelles Child Development Study continues to provide a firm scientific basis for the derivation of safe levels of MeHg consumption.
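
    A minimal sketch of a benchmark-dose calculation for a continuous endpoint, assuming the k-power model named above; the data, benchmark response, and starting values are invented, and the lower confidence limit (BMDL) computation is omitted.

```python
# Sketch: fit a k-power dose-response model, response = b0 + b1 * dose**k, and
# solve for the dose whose modeled decrement equals the benchmark response.
# Data and BMR are illustrative; a real BMDL would need confidence limits
# (profile likelihood or bootstrap), which are omitted here.
import numpy as np
from scipy.optimize import curve_fit

def k_power(dose, b0, b1, k):
    return b0 + b1 * dose ** k

# Hypothetical maternal-hair MeHg (ppm) vs. test-score data.
dose = np.array([0.5, 2.0, 5.0, 8.0, 12.0, 18.0, 25.0])
score = np.array([102.0, 101.9, 101.75, 101.6, 101.4, 101.1, 100.75])

(b0, b1, k), _ = curve_fit(k_power, dose, score, p0=[102.0, -0.1, 1.0], maxfev=10000)

bmr = 1.0                            # illustrative benchmark response (1-point decrement)
bmd = ((-bmr) / b1) ** (1.0 / k)     # dose at which the modeled decrement equals BMR
print(f"fitted k = {k:.2f}, BMD = {bmd:.1f} ppm (illustrative data)")
```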

  3. Using a complex audit tool to measure workload, staffing and quality in district nursing.

    PubMed

    Kirby, Esther; Hurst, Keith

    2014-05-01

    This major community workload, staffing and quality study is thought to be the most comprehensive community staffing project in England. It involved over 400 staff from 46 teams in 6 localities and is unique because it ties community staffing activity to workload and quality. Scotland was used as a benchmark because the same evidence-based Safer Nursing Care Tool methodology, developed by the second-named author, was used (apart from quality) and because of population and geographical similarities. The data collection method tested quality standards, acuity, dependency and nursing interventions by looking at caseloads, staff activity and service quality, and at funded, actual, temporary and recommended staffing. Key findings showed that 4 out of 6 localities had a heavy workload index that stretched staffing numbers and time spent with patients. The acuity and dependency of patients leaned heavily towards the most dependent and acute categories requiring more face-to-face care. Some areas across the localities had high levels of temporary staff, which affected quality and increased cost. Skill and competency shortages meant that a small number of staff had to travel significantly across the county to deliver complex care to some patients.

  4. Harmonizing lipidomics: NIST interlaboratory comparison exercise for lipidomics using SRM 1950-Metabolites in Frozen Human Plasma.

    PubMed

    Bowden, John A; Heckert, Alan; Ulmer, Candice Z; Jones, Christina M; Koelmel, Jeremy P; Abdullah, Laila; Ahonen, Linda; Alnouti, Yazen; Armando, Aaron M; Asara, John M; Bamba, Takeshi; Barr, John R; Bergquist, Jonas; Borchers, Christoph H; Brandsma, Joost; Breitkopf, Susanne B; Cajka, Tomas; Cazenave-Gassiot, Amaury; Checa, Antonio; Cinel, Michelle A; Colas, Romain A; Cremers, Serge; Dennis, Edward A; Evans, James E; Fauland, Alexander; Fiehn, Oliver; Gardner, Michael S; Garrett, Timothy J; Gotlinger, Katherine H; Han, Jun; Huang, Yingying; Neo, Aveline Huipeng; Hyötyläinen, Tuulia; Izumi, Yoshihiro; Jiang, Hongfeng; Jiang, Houli; Jiang, Jiang; Kachman, Maureen; Kiyonami, Reiko; Klavins, Kristaps; Klose, Christian; Köfeler, Harald C; Kolmert, Johan; Koal, Therese; Koster, Grielof; Kuklenyik, Zsuzsanna; Kurland, Irwin J; Leadley, Michael; Lin, Karen; Maddipati, Krishna Rao; McDougall, Danielle; Meikle, Peter J; Mellett, Natalie A; Monnin, Cian; Moseley, M Arthur; Nandakumar, Renu; Oresic, Matej; Patterson, Rainey; Peake, David; Pierce, Jason S; Post, Martin; Postle, Anthony D; Pugh, Rebecca; Qiu, Yunping; Quehenberger, Oswald; Ramrup, Parsram; Rees, Jon; Rembiesa, Barbara; Reynaud, Denis; Roth, Mary R; Sales, Susanne; Schuhmann, Kai; Schwartzman, Michal Laniado; Serhan, Charles N; Shevchenko, Andrej; Somerville, Stephen E; St John-Williams, Lisa; Surma, Michal A; Takeda, Hiroaki; Thakare, Rhishikesh; Thompson, J Will; Torta, Federico; Triebl, Alexander; Trötzmüller, Martin; Ubhayasekera, S J Kumari; Vuckovic, Dajana; Weir, Jacquelyn M; Welti, Ruth; Wenk, Markus R; Wheelock, Craig E; Yao, Libin; Yuan, Min; Zhao, Xueqing Heather; Zhou, Senlin

    2017-12-01

    As the lipidomics field continues to advance, self-evaluation within the community is critical. Here, we performed an interlaboratory comparison exercise for lipidomics using Standard Reference Material (SRM) 1950-Metabolites in Frozen Human Plasma, a commercially available reference material. The interlaboratory study comprised 31 diverse laboratories, with each laboratory using a different lipidomics workflow. A total of 1,527 unique lipids were measured across all laboratories and consensus location estimates and associated uncertainties were determined for 339 of these lipids measured at the sum composition level by five or more participating laboratories. These evaluated lipids detected in SRM 1950 serve as community-wide benchmarks for intra- and interlaboratory quality control and method validation. These analyses were performed using nonstandardized laboratory-independent workflows. The consensus locations were also compared with a previous examination of SRM 1950 by the LIPID MAPS consortium. While the central theme of the interlaboratory study was to provide values to help harmonize lipids, lipid mediators, and precursor measurements across the community, it was also initiated to stimulate a discussion regarding areas in need of improvement. Copyright © 2017 by the American Society for Biochemistry and Molecular Biology, Inc.
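
    The sketch below shows one generic way consensus locations and uncertainties can be summarized across laboratories (median with a MAD-based uncertainty); it is a robust-statistics illustration with invented values, not the NIST consensus procedure.

```python
# Sketch: robust consensus location and uncertainty across laboratories for a
# single lipid. Values are invented placeholders.
import numpy as np

lab_reports_nmol_per_mL = {          # hypothetical per-lab values for one lipid
    "PC 34:1": [210.0, 195.0, 240.0, 205.0, 188.0, 330.0, 215.0],
}

for lipid, values in lab_reports_nmol_per_mL.items():
    v = np.asarray(values)
    consensus = np.median(v)
    mad = np.median(np.abs(v - consensus))
    robust_sd = 1.4826 * mad                      # MAD scaled to a normal SD
    uncertainty = robust_sd / np.sqrt(len(v))     # approximate standard uncertainty
    print(f"{lipid}: consensus = {consensus:.0f} +/- {uncertainty:.0f} nmol/mL "
          f"(n = {len(v)} labs)")
```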

  5. Benchmark matrix and guide: Part III.

    PubMed

    1992-01-01

    The final article in the "Benchmark Matrix and Guide" series developed by Headquarters Air Force Logistics Command completes the discussion of the last three categories that are essential ingredients of a successful total quality management (TQM) program. Detailed behavioral objectives are listed in the areas of recognition, process improvement, and customer focus. These vertical categories are meant to be applied to the levels of the matrix that define the progressive stages of the TQM: business as usual, initiation, implementation, expansion, and integration. By charting the horizontal progress level and the vertical TQM category, the quality management professional can evaluate the current state of TQM in any given organization. As each category is completed, new goals can be defined in order to advance to a higher level. The benchmarking process is integral to quality improvement efforts because it focuses on the highest possible standards to evaluate quality programs.

  6. 76 FR 14670 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-17

    ... Home Visiting Program Needs Assessment and Plan for Responding to Identified Needs. OMB No.: New... Section 511(c)), and include conducting a needs assessment and establishing benchmarks. The Administration..., grantees must (1) conduct a comprehensive community needs assessment and (2) develop a plan and begin to...

  7. 24 CFR 597.403 - Revocation of designation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... (Continued) OFFICE OF ASSISTANT SECRETARY FOR COMMUNITY PLANNING AND DEVELOPMENT, DEPARTMENT OF HOUSING AND... area; (2) Has failed to make progress in achieving the benchmarks set forth in the strategic plan; or (3) Has not complied substantially with the strategic plan. (b) Letter of warning. Before revoking...

  8. 29 CFR 1952.103 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., DEPARTMENT OF LABOR (CONTINUED) APPROVED STATE PLANS FOR ENFORCEMENT OF STATE STANDARDS Oregon § 1952.103... State operating an approved State plan. In October 1992, Oregon completed, in conjunction with OSHA, a... of 28 health compliance officers. Oregon elected to retain the safety benchmark level established in...

  9. 29 CFR 1952.103 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., DEPARTMENT OF LABOR (CONTINUED) APPROVED STATE PLANS FOR ENFORCEMENT OF STATE STANDARDS Oregon § 1952.103... State operating an approved State plan. In October 1992, Oregon completed, in conjunction with OSHA, a... of 28 health compliance officers. Oregon elected to retain the safety benchmark level established in...

  10. Student Interactives--A new Tool for Exploring Science.

    NASA Astrophysics Data System (ADS)

    Turner, C.

    2005-05-01

    Science NetLinks (SNL), a national program that provides online teacher resources created by the American Association for the Advancement of Science (AAAS), has proven to be a leader among educational resource providers in bringing free, high-quality, grade-appropriate materials to the national teaching community in a format that facilitates classroom integration. Now in its ninth year on the Web, Science NetLinks is part of the MarcoPolo Consortium of Web sites and associated state-based training initiatives that help teachers integrate Internet content into the classroom. SNL is a national presence in the K-12 science education community serving over 700,000 teachers each year, who visit the site at least three times a month. SNL features: High-quality, innovative, original lesson plans aligned to Project 2061 Benchmarks for Science Literacy, Original Internet-based interactives and learning challenges, Reviewed Web resources and demonstrations, Award winning, 60-second audio news features (Science Updates). Science NetLinks has an expansive and growing library of this educational material, aligned and sortable by grade band or benchmark. The program currently offers over 500 lessons, covering 72% of the Benchmarks for Science Literacy content areas in grades K-12. Over the past several years, there has been a strong movement to create online resources that support earth and space science education. Funding for various online educational materials has been available from many sources and has produced a variety of useful products for the education community. Teachers, through the Internet, potentially have access to thousands of activities, lessons and multimedia interactive applications for use in the classroom. But, with so many resources available, it is increasingly more difficult for educators to locate quality resources that are aligned to standards and learning goals. To ensure that the education community utilizes the resources, the material must conform to a format that allows easy understanding, evaluation and integration. Science NetLinks' material has been proven to satisfy these criteria and serve thousands of teachers every year. All online interactive materials that are created by AAAS are aligned to AAAS Project 2061 Benchmarks, which mirror National Science Standards, and are developed based on a rigorous set of criteria. For the purpose of this forum we will provide an overview that explains the need for more of these materials in the earth and space education, a review of the criteria for creating these materials and show examples of online materials created by AAAS that support earth and space science.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David H.

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage over vector supercomputers, and, if so, which of the parallel offerings would be most useful in real-world scientific computation. In part to draw attention to some of the performance reporting abuses prevalent at the time, the present author wrote a humorous essay 'Twelve Ways to Fool the Masses,' which described in a light-hearted way a number of the questionable ways in which both vendor marketing people and scientists were inflating and distorting their performance results. All of this underscored the need for an objective and scientifically defensible measure to compare performance on these systems.

  12. Setting Evidence-Based Language Goals

    ERIC Educational Resources Information Center

    Goertler, Senta; Kraemer, Angelika; Schenker, Theresa

    2016-01-01

    The purpose of this project was to identify target language benchmarks for the German program at Michigan State University (MSU) based on national and international guidelines and previous research, to assess language skills across course levels and class sections in the entire German program, and to adjust the language benchmarks as needed based…

  13. Benchmarking Universities' Efficiency Indicators in the Presence of Internal Heterogeneity

    ERIC Educational Resources Information Center

    Agasisti, Tommaso; Bonomi, Francesca

    2014-01-01

    When benchmarking its performance, a university is usually considered as a single strategic unit. According to the evidence, however, lower levels within an organisation (such as faculties, departments and schools) play a significant role in institutional governance, affecting the overall performance. In this article, an empirical analysis was…

  14. School-Based Cognitive-Behavioral Therapy for Adolescent Depression: A Benchmarking Study

    ERIC Educational Resources Information Center

    Shirk, Stephen R.; Kaplinski, Heather; Gudmundsen, Gretchen

    2009-01-01

    The current study evaluated cognitive-behavioral therapy (CBT) for adolescent depression delivered in health clinics and counseling centers in four high schools. Outcomes were benchmarked to results from prior efficacy trials. Fifty adolescents diagnosed with depressive disorders were treated by eight doctoral-level psychologists who followed a…

  15. Benchmarking initiatives in the water industry.

    PubMed

    Parena, R; Smeets, E

    2001-01-01

    Customer satisfaction and service care push professionals in the water industry every day to improve their performance, lowering costs and raising the level of service provided. Process Benchmarking is generally recognised as a systematic mechanism of comparing one's own utility with other utilities or businesses, with the intent of self-improvement by adopting structures or methods used elsewhere. The IWA Task Force on Benchmarking, operating inside the Statistics and Economics Committee, has been committed to developing a generally accepted concept of Process Benchmarking to support water decision-makers in addressing issues of efficiency. As a first step, the Task Force disseminated among the Committee members a questionnaire seeking information on the kinds, degree of evolution and main concepts of benchmarking adopted in the countries represented. A comparison of the guidelines adopted in The Netherlands and Scandinavia has recently challenged the Task Force to draft a methodology for worldwide process benchmarking in the water industry. The paper provides an overview of the most interesting benchmarking experiences in the water sector and describes in detail both the final results of the survey and the methodology, which focuses on identifying possible areas for improvement.

  16. The National Practice Benchmark for Oncology: 2015 Report for 2014 Data

    PubMed Central

    Balch, Carla; Ogle, John D.

    2016-01-01

    The National Practice Benchmark (NPB) is a unique tool used to measure oncology practices against others across the country in a meaningful way despite variations in practice demographics, size, and setting. In today’s challenging economic environment, each practice positions service offerings and competitive advantages to attract patients. Although the data in the NPB report are primarily reported by community oncology practices, the business structure and arrangements with regional health care systems are also reflected in the benchmark report. The ability to produce detailed metrics is an accomplishment of excellence in business and clinical management. With these metrics, a practice should be able to measure and analyze its current business practices and make appropriate changes, if necessary. In this report, we build on the foundation initially established by Oncology Metrics (acquired by Flatiron Health in 2014) over years of data collection and refine definitions to deliver the NPB, which is uniquely meaningful in the oncology market. PMID:27006357

  17. Ubiquitousness of link-density and link-pattern communities in real-world networks

    NASA Astrophysics Data System (ADS)

    Šubelj, L.; Bajec, M.

    2012-01-01

    Community structure appears to be an intrinsic property of many complex real-world networks. However, recent work shows that real-world networks reveal even more sophisticated modules than classical cohesive (link-density) communities. In particular, networks can also be naturally partitioned according to similar patterns of connectedness among the nodes, revealing link-pattern communities. Here we propose a propagation-based algorithm that can extract both link-density and link-pattern communities, without any prior knowledge of the true structure. The algorithm was first validated on different classes of synthetic benchmark networks with community structure, and also on random networks. We have further applied the algorithm to different social, information, technological and biological networks, where it indeed reveals meaningful (composites of) link-density and link-pattern communities. The results thus seem to imply that, similarly to their link-density counterparts, link-pattern communities appear ubiquitous in nature and design.
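
    For readers who want a concrete starting point, the sketch below runs an off-the-shelf label-propagation community detector from NetworkX on a toy two-clique graph; this is classical link-density propagation, not the authors' extended algorithm for link-pattern communities.

```python
# Sketch: label-propagation community detection on a toy benchmark graph.
import networkx as nx
from networkx.algorithms.community import asyn_lpa_communities

G = nx.caveman_graph(2, 6)   # two disjoint 6-node cliques
G.add_edge(0, 6)             # a single bridge between them

communities = list(asyn_lpa_communities(G, seed=42))
for i, comm in enumerate(communities):
    print(f"community {i}: {sorted(comm)}")
```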

  18. Report on the 1999 ONR Shallow-Water Reverberation Focus Workshop

    DTIC Science & Technology

    1999-12-31

    Pseudo Spectral models. • Develop reverberation and scattering benchmarks accepted by the scientific community. (The ASA penetrable wedge problem has... Paul C. Hines, W. Cary Risley, and Martin P. O'Connor, "A Wide-Band Sonar for underwater acoustics measurements in shallow water," in Oceans...

  19. Evolving the Role of Campus Security

    ERIC Educational Resources Information Center

    May, Vern

    2008-01-01

    One of the problems security professionals face is that there are few benchmarks to quantify the effectiveness of proactive security initiatives. This hurts funding support and also makes it harder to ensure community buy-in outside of crisis situations. The reactive nature of many institutions makes it difficult to move forward with…

  20. The Principals as Literacy Leaders with Indigenous Communities: Professional Learning and Research

    ERIC Educational Resources Information Center

    Johnson, Greer; Dempster, Neil; McKenzie, Lynanne

    2013-01-01

    The vast proportion of Australia's Indigenous students are represented persistently as well below the national benchmarks for literacy and numeracy. Recent national school-based research and development projects, funded by the Australian Government's "Closing the Gap" strategy, have again targeted improving Indigenous students' literacy…

  1. Curriculum Model for Medical Technology: Lessons from International Benchmarking

    ERIC Educational Resources Information Center

    Pring-Valdez, Anacleta

    2012-01-01

    Curriculum is a crucial component of any educational process. Curriculum development and instructional management serve as effective tools for meeting the present and future needs of the local and national communities. In trying to strengthen the quality assurance system in Philippine higher education, institutions of higher learning were mandated…

  2. Contra-Rotating Open Rotor Tone Noise Prediction

    NASA Technical Reports Server (NTRS)

    Envia, Edmane

    2014-01-01

    Reliable prediction of contra-rotating open rotor (CROR) noise is an essential element of any strategy for the development of low-noise open rotor propulsion systems that can meet both the community noise regulations and the cabin noise limits. Since CROR noise spectra typically exhibit a preponderance of tones, significant efforts have been directed towards predicting their tone spectra. To that end, there has been an ongoing effort at NASA to assess various in-house open rotor tone noise prediction tools using a benchmark CROR blade set for which significant aerodynamic and acoustic data had been acquired in wind tunnel tests. In the work presented here, the focus is on the near-field noise of the benchmark open rotor blade set at the cruise condition. Using an analytical CROR tone noise model with input from high-fidelity aerodynamic simulations, detailed tone noise spectral predictions have been generated and compared with the experimental data. Comparisons indicate that the theoretical predictions are in good agreement with the data, especially for the dominant CROR tones and their overall sound pressure level. The results also indicate that, whereas individual rotor tones are well predicted by the linear sources (i.e., thickness and loading), for the interaction tones it is essential that the quadrupole sources be included in the analysis.

  3. Contra-Rotating Open Rotor Tone Noise Prediction

    NASA Technical Reports Server (NTRS)

    Envia, Edmane

    2014-01-01

    Reliable prediction of contra-rotating open rotor (CROR) noise is an essential element of any strategy for the development of low-noise open rotor propulsion systems that can meet both the community noise regulations and cabin noise limits. Since CROR noise spectra exhibit a preponderance of tones, significant efforts have been directed towards predicting their tone content. To that end, there has been an ongoing effort at NASA to assess various in-house open rotor tone noise prediction tools using a benchmark CROR blade set for which significant aerodynamic and acoustic data have been acquired in wind tunnel tests. In the work presented here, the focus is on the nearfield noise of the benchmark open rotor blade set at the cruise condition. Using an analytical CROR tone noise model with input from high-fidelity aerodynamic simulations, tone noise spectra have been predicted and compared with the experimental data. Comparisons indicate that the theoretical predictions are in good agreement with the data, especially for the dominant tones and for the overall sound pressure level of tones. The results also indicate that, whereas the individual rotor tones are well predicted by the combination of the thickness and loading sources, for the interaction tones it is essential that the quadrupole source is also included in the analysis.

  4. Review of pathogen treatment reductions for onsite non ...

    EPA Pesticide Factsheets

    Communities face a challenge when implementing onsite reuse of collected waters for non-potable purposes given the lack of national microbial standards. Quantitative Microbial Risk Assessment (QMRA) can be used to predict the pathogen risks associated with the non-potable reuse of onsite-collected waters; the present work reviewed the relevant QMRA literature to prioritize knowledge gaps and identify health-protective pathogen treatment reduction targets. The review indicated that ingestion of untreated, onsite-collected graywater, rainwater, seepage water and stormwater from a variety of exposure routes resulted in gastrointestinal infection risks greater than the traditional acceptable level of risk. We found no QMRAs that estimated the pathogen risks associated with onsite, non-potable reuse of blackwater. Pathogen treatment reduction targets for non-potable, onsite reuse that included a suite of reference pathogens (i.e., including relevant bacterial, protozoan, and viral hazards) were limited to graywater (for a limited set of domestic uses) and stormwater (for domestic and municipal uses). These treatment reductions corresponded with the health benchmark of a probability of infection or illness of 10^-3 per person per year or less. The pathogen treatment reduction targets varied depending on the target health benchmark, reference pathogen, source water, and water reuse application. Overall, there remains a need for pathogen reduction targets that are heal

  5. How do organisational characteristics influence teamwork and service delivery in lung cancer diagnostic assessment programmes? A mixed-methods study

    PubMed Central

    Honein-AbouHaidar, Gladys N; Stuart-McEwan, Terri; Waddell, Tom; Salvarrey, Alexandra; Smylie, Jennifer; Dobrow, Mark J; Brouwers, Melissa C; Gagliardi, Anna R

    2017-01-01

    Objectives Diagnostic assessment programmes (DAPs) can reduce wait times for cancer diagnosis, but optimal DAP design is unknown. This study explored how organisational characteristics influenced multidisciplinary teamwork and diagnostic service delivery in lung cancer DAPs. Design A mixed-methods approach integrated data from descriptive qualitative interviews and medical record abstraction at 4 lung cancer DAPs. Findings were analysed with the Integrated Team Effectiveness Model. Setting 4 DAPs at 2 teaching and 2 community hospitals in Canada. Participants 22 staff were interviewed about organisational characteristics, target service benchmarks, and teamwork processes, determinants and outcomes; 314 medical records were reviewed for actual service benchmarks. Results Formal, informal and asynchronous team processes enabled service delivery and yielded many perceived benefits at the patient, staff and service levels. However, several DAP characteristics challenged teamwork and service delivery: referral volume/workload, time since launch, days per week of operation, rural–remote population, number and type of full-time/part-time human resources, staff colocation, information systems. As a result, all sites failed to meet target benchmarks (from referral to consultation median 4.0 visits, median wait time 35.0 days). Recommendations included improved information systems, more staff in all specialties, staff colocation and expanded roles for patient navigators. Findings were captured in a conceptual framework of lung cancer DAP teamwork determinants and outcomes. Conclusions This study identified several DAP characteristics that could be improved to facilitate teamwork and enhance service delivery, thereby contributing to knowledge of organisational determinants of teamwork and associated outcomes. Findings can be used to update existing DAP guidelines, and by managers to plan or evaluate lung cancer DAPs. Ongoing research is needed to identify ideal roles for navigators, and staffing models tailored to case volumes. PMID:28235969

  6. A Benchmarking Initiative for Reactive Transport Modeling Applied to Subsurface Environmental Applications

    NASA Astrophysics Data System (ADS)

    Steefel, C. I.

    2015-12-01

    Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed form solutions are necessary and useful, but limited to situations that are far simpler than typical applications that combine many physical and chemical processes, in many cases in coupled form. In the absence of closed form and yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable to qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code--the reactive transport codes play a supporting role in this regard—but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally-relevant benchmark problem that tests the conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include 1) microbially-mediated reactions, 2) isotopes, 3) multi-component diffusion, 4) uranium fate and transport, 5) metal mobility in mining affected systems, and 6) waste repositories and related aspects.

  7. Performance benchmark of LHCb code on state-of-the-art x86 architectures

    NASA Astrophysics Data System (ADS)

    Campora Perez, D. H.; Neufeld, N.; Schwemmer, R.

    2015-12-01

    For Run 2 of the LHC, LHCb is replacing a significant part of its event filter farm with new compute nodes. For the evaluation of the best performing solution, we have developed a method to convert our high level trigger application into a stand-alone, bootable benchmark image. With additional instrumentation we turned it into a self-optimising benchmark which explores techniques such as late forking, NUMA balancing and optimal number of threads, i.e. it automatically optimises box-level performance. We have run this procedure on a wide range of Haswell-E CPUs and numerous other architectures from both Intel and AMD, including also the latest Intel micro-blade servers. We present results in terms of performance, power consumption, overheads and relative cost.
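
    The sketch below is a toy version of the scan-and-pick-the-best idea: measure throughput of a stand-in CPU-bound kernel at several worker counts and report the optimum. The kernel, event counts, and worker counts are placeholders, not the LHCb trigger application.

```python
# Toy sketch: scan worker counts and keep the configuration with the highest
# event throughput. The per-event "work" is an invented stand-in.
import time
from multiprocessing import Pool

def process_event(seed):
    acc = 0
    for i in range(50_000):          # stand-in for per-event trigger work
        acc += (seed * i) % 7
    return acc

def throughput(workers, n_events=200):
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(process_event, range(n_events))
    return n_events / (time.perf_counter() - start)

if __name__ == "__main__":
    results = {w: throughput(w) for w in (1, 2, 4, 8)}
    for w, rate in results.items():
        print(f"{w:2d} workers: {rate:7.1f} events/s")
    print("optimal worker count on this box:", max(results, key=results.get))
```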

  8. From politics to policy: a new payment approach in Medicare Advantage.

    PubMed

    Berenson, Robert A

    2008-01-01

    While the Medicare Advantage program's future remains contentious politically, the Medicare Payment Advisory Commission's (MedPAC's) recommended policy of financial neutrality at the local level between private plans and traditional Medicare ignores local market dynamics in important ways. An analysis correlating plan bids against traditional Medicare's local spending levels likely would provide an alternative method of setting benchmarks, by producing a blend of local and national rates. A result would be that the rural and lower-cost urban "floor counties" would have benchmarks below currently inflated levels but above what financial neutrality at the local level--MedPAC's approach--would produce.
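
    The blended-benchmark idea can be illustrated with simple arithmetic: a county benchmark set as a weighted blend of local traditional-Medicare spending and a national average. The weights and dollar figures below are invented placeholders, not MedPAC's or CMS's formula.

```python
# Sketch: county benchmark as a weighted blend of local and national
# fee-for-service spending. Weights and dollar figures are illustrative.
def blended_benchmark(local_ffs_spend, national_ffs_spend, local_weight=0.5):
    return local_weight * local_ffs_spend + (1.0 - local_weight) * national_ffs_spend

counties = {                      # hypothetical per-member-per-month spending
    "rural floor county":     650.0,
    "low-cost urban county":  700.0,
    "high-cost urban county": 980.0,
}
national = 800.0

for name, local in counties.items():
    print(f"{name:>22}: local ${local:.0f} -> blended benchmark "
          f"${blended_benchmark(local, national):.0f}")
```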

  9. Simulation-based comprehensive benchmarking of RNA-seq aligners

    PubMed Central

    Baruzzo, Giacomo; Hayer, Katharina E; Kim, Eun Ji; Di Camillo, Barbara; FitzGerald, Garret A; Grant, Gregory R

    2018-01-01

    Alignment is the first step in most RNA-seq analysis pipelines, and the accuracy of downstream analyses depends heavily on it. Unlike most steps in the pipeline, alignment is particularly amenable to benchmarking with simulated data. We performed a comprehensive benchmarking of 14 common splice-aware aligners for base, read, and exon junction-level accuracy and compared default with optimized parameters. We found that performance varied by genome complexity, and accuracy and popularity were poorly correlated. The most widely cited tool underperforms for most metrics, particularly when using default settings. PMID:27941783
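
    A minimal sketch of read-level accuracy scoring against simulated truth: a read counts as correctly aligned if it maps to the true chromosome within a small positional tolerance. The records below are invented; a real evaluation would parse the simulator's truth file and the aligner's BAM output.

```python
# Sketch: read-level accuracy against simulated truth. Records are placeholders.
TOLERANCE = 5  # bases

truth = {                       # read id -> (chrom, true position)
    "r1": ("chr1", 10_000), "r2": ("chr1", 20_050),
    "r3": ("chr2", 5_500),  "r4": ("chr3", 7_200),
}
aligned = {                     # read id -> (chrom, reported position)
    "r1": ("chr1", 10_002), "r2": ("chr1", 23_000),
    "r3": ("chr2", 5_499),  # r4 unaligned
}

correct = sum(
    1 for rid, (chrom, pos) in truth.items()
    if rid in aligned
    and aligned[rid][0] == chrom
    and abs(aligned[rid][1] - pos) <= TOLERANCE
)
print(f"read-level accuracy: {correct}/{len(truth)} = {correct / len(truth):.2f}")
```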

  10. Community-based benchmarking of the CMIP DECK experiments

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2015-12-01

    A diversity of community-based efforts are independently developing "diagnostic packages" with little or no coordination between them. A short list of examples includes NCAR's Climate Variability Diagnostics Package (CVDP), ORNL's International Land Model Benchmarking (ILAMB), LBNL's Toolkit for Extreme Climate Analysis (TECA), PCMDI's Metrics Package (PMP), the EU EMBRACE ESMValTool, the WGNE MJO diagnostics package, and CFMIP diagnostics. The full value of these efforts cannot be realized without some coordination. As a first step, a WCRP effort has initiated a catalog to document candidate packages that could potentially be applied in a "repeat-use" fashion to all simulations contributed to the CMIP DECK (Diagnostic, Evaluation and Characterization of Klima) experiments. Some coordination of community-based diagnostics has the additional potential to improve how CMIP modeling groups analyze their simulations during model-development. The fact that most modeling groups now maintain a "CMIP compliant" data stream means that in principle, without much effort, they could readily adopt a set of well organized diagnostic capabilities specifically designed to operate on CMIP DECK experiments. Ultimately, a detailed listing of and access to analysis codes that are demonstrated to work "out of the box" with CMIP data could enable model developers (and others) to select those codes they wish to implement in-house, potentially enabling more systematic evaluation during the model development process.

  11. Comparative Benchmark Dose Modeling as a Tool to Make the First Estimate of Safe Human Exposure Levels to Lunar Dust

    NASA Technical Reports Server (NTRS)

    James, John T.; Lam, Chiu-wing; Scully, Robert R.

    2013-01-01

    Brief exposures of Apollo Astronauts to lunar dust occasionally elicited upper respiratory irritation; however, no limits were ever set for prolonged exposure to lunar dust. Habitats for exploration, whether mobile or fixed, must be designed to limit human exposure to lunar dust to safe levels. We have used a new technique we call Comparative Benchmark Dose Modeling to estimate safe exposure limits for lunar dust collected during the Apollo 14 mission.

  12. Performance Against WELCOA's Worksite Health Promotion Benchmarks Across Years Among Selected US Organizations.

    PubMed

    Weaver, GracieLee M; Mendenhall, Brandon N; Hunnicutt, David; Picarella, Ryan; Leffelman, Brittanie; Perko, Michael; Bibeau, Daniel L

    2018-05-01

    The purpose of this study was to quantify the performance of organizations' worksite health promotion (WHP) activities against the benchmarking criteria included in the Well Workplace Checklist (WWC). The Wellness Council of America (WELCOA) developed a tool to assess WHP with its 100-item WWC, which represents WELCOA's 7 performance benchmarks. The setting was US workplaces. This study includes a convenience sample of organizations that completed the checklist from 2008 to 2015. The sample size was 4643 entries from US organizations. The WWC includes demographic questions, general questions about WHP programs, and scales to measure performance against the WELCOA 7 benchmarks. Descriptive analyses of WWC items were completed separately for each year of the study period. The majority of the organizations represented each year were multisite, multishift, medium- to large-sized companies, mostly in the services industry. Despite yearly changes in participating organizations, results across the WELCOA 7 benchmark scores were consistent from year to year. Across all years, the benchmarks on which organizations performed lowest were senior-level support, data collection, and programming; wellness teams and supportive environments were the highest-scoring benchmarks. In an era marked by economic swings and health-care reform, it appears that organizations are staying consistent in their performance across these benchmarks. The WWC could be useful for organizations, practitioners, and researchers in assessing the quality of WHP programs.

  13. Community detection in complex networks using link prediction

    NASA Astrophysics Data System (ADS)

    Cheng, Hui-Min; Ning, Yi-Zi; Yin, Zhao; Yan, Chao; Liu, Xin; Zhang, Zhong-Yuan

    2018-01-01

    Community detection and link prediction are both of great significance in network analysis, as they provide valuable insights into the topological structure of a network from different perspectives. In this paper, we propose a novel community detection algorithm that incorporates link prediction, motivated by the question of whether link prediction can improve the accuracy of community partitioning. For link prediction, we propose two novel indices to compute the similarity between each pair of nodes: one aims to add missing links, and the other tries to remove spurious edges. Extensive experiments are conducted on benchmark data sets, and the results of our proposed algorithm are compared with two classes of baselines. In conclusion, our proposed algorithm is competitive, revealing that link prediction does improve the precision of community detection.
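    As a rough illustration of the general idea (not the authors' similarity indices), the sketch below scores non-adjacent node pairs with a standard link-prediction index, adds the top-scoring pairs as presumed missing links, and then runs an off-the-shelf community detection algorithm on the augmented graph. The choice of the Jaccard coefficient and the number of added edges are arbitrary assumptions.

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      def detect_with_link_prediction(G, n_added=10):
          # Score all non-adjacent node pairs and add the highest-scoring ones as "missing" links.
          scores = sorted(nx.jaccard_coefficient(G), key=lambda t: t[2], reverse=True)
          H = G.copy()
          H.add_edges_from((u, v) for u, v, s in scores[:n_added] if s > 0)
          # Detect communities on the augmented graph.
          return list(greedy_modularity_communities(H))

      G = nx.karate_club_graph()
      for community in detect_with_link_prediction(G):
          print(sorted(community))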

  14. Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system

    NASA Astrophysics Data System (ADS)

    Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2017-05-01

    We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulation and/or experimental measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the non-linear element in the problem with a priori knowledge about its position.

  15. A Collaborative Recommend Algorithm Based on Bipartite Community

    PubMed Central

    Fu, Yuchen; Liu, Quan; Cui, Zhiming

    2014-01-01

    The recommendation algorithm based on bipartite networks is superior to traditional methods in accuracy and diversity, which shows that considering the network topology of recommendation systems can help improve recommendation results. However, existing algorithms mainly focus on the overall topological structure, while local characteristics can also play an important role in collaborative recommendation. Therefore, taking into account the data characteristics and application requirements of collaborative recommendation systems, we propose a link community partitioning algorithm based on label propagation and a collaborative recommendation algorithm based on bipartite communities. We then designed numerical experiments to verify the validity of the algorithms on benchmark and real-world databases. PMID:24955393

  16. An impatient evolutionary algorithm with probabilistic tabu search for unified solution of some NP-hard problems in graph and set theory via clique finding.

    PubMed

    Guturu, Parthasarathy; Dantu, Ram

    2008-06-01

    Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by the researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) to the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of the maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.

  17. Organisational aspects and benchmarking of e-learning initiatives: a case study with South African community health workers.

    PubMed

    Reisach, Ulrike; Weilemann, Mitja

    2016-06-01

    South Africa desperately needs a comprehensive approach to fight HIV/AIDS. Education is crucial to reach this goal and Internet and e-learning could offer huge opportunities to broaden and deepen the knowledge basis. But due to the huge societal and digital divide between rich and poor areas, e-learning is difficult to realize in the townships. Community health workers often act as mediators and coaches for people seeking medical and personal help. They could give good advice regarding hygiene, nutrition, protection of family members in case of HIV/AIDS and finding legal ways to earn one's living if they were trained to do so. Therefore they need to have a broader general knowledge. Since learning opportunities in the townships are scarce, a system for e-learning has to be created in order to overcome the lack of experience with computers or the Internet and to enable them to implement a network of expertise. The article describes how the best international resources on basic medical knowledge, HIV/AIDS as well as on basic economic and entrepreneurial skills were benchmarked to be integrated into an e-learning system. After tests with community health workers, researchers developed recommendations on building a self-sustaining system for learning, including a network of expertise and best practice sharing. The article explains the opportunities and challenges for community health workers, which could provide information for other parts of the world with similar preconditions of rural poverty. © The Author(s) 2015.

  18. Predicting College Readiness in STEM: A Longitudinal Study of Iowa Students

    NASA Astrophysics Data System (ADS)

    Rickels, Heather Anne

    The demand for STEM college graduates is increasing. However, recent studies show there are not enough STEM majors to fulfill this need. This deficiency can be partially attributed to a gender discrepancy in the number of female STEM graduates and to the high rate of attrition of STEM majors. As STEM attrition has been associated with students being unprepared for STEM coursework, it is important to understand how STEM graduates change in achievement levels from middle school through high school and to have accurate readiness indicators for first-year STEM coursework. This study aimed to address these issues by comparing the achievement growth of STEM majors to non-STEM majors by gender in Science, Math, and Reading from Grade 6 to Grade 11 through latent growth models (LGMs). Then STEM Readiness Benchmarks were established in Science and Math on the Iowas (IAs) for typical first-year STEM courses, and validity evidence was provided for the benchmarks. Results from the LGM analyses indicated that STEM graduates start at higher achievement levels in Grade 6 and maintain higher achievement levels through Grade 11 in all subjects. In addition, gender differences were examined. The findings indicate that students with high achievement levels self-select as STEM majors, regardless of gender. In addition, they suggest that students who are not on-track for a STEM degree may need to begin remediation prior to high school. Results from the benchmark analyses indicate that STEM coursework is more demanding and that students need to be better prepared academically in science and math if planning to pursue a STEM degree. In addition, the STEM Readiness Benchmarks were more accurate in predicting success in STEM courses than if general college readiness benchmarks were utilized. Also, students who met the STEM Readiness Benchmarks were more likely to graduate with a STEM degree. This study provides valuable information on STEM readiness to students, educators, and college admissions officers. Findings from this study can be used to better understand the level of academic achievement necessary to be successful as a STEM major and to provide guidance for students considering STEM majors in college. If students are being encouraged to pursue STEM majors, it is important they have accurate information regarding their chances of success in STEM coursework.

  19. High-Level Ab Initio Calculations of Intermolecular Interactions: Heavy Main-Group Element π-Interactions.

    PubMed

    Krasowska, Małgorzata; Schneider, Wolfgang B; Mehring, Michael; Auer, Alexander A

    2018-05-02

    This work reports high-level ab initio calculations and a detailed analysis of the nature of intermolecular interactions of heavy main-group element compounds and π systems. For this purpose we have chosen a set of benchmark molecules of the form MR3, in which M = As, Sb, or Bi, and R = CH3, OCH3, or Cl. Several methods for the description of weak intermolecular interactions are benchmarked, including DFT-D, DFT-SAPT, MP2, and high-level coupled cluster methods in the DLPNO-CCSD(T) approximation. Using local energy decomposition (LED) and an analysis of the electron density, details of the nature of this interaction are unraveled. The results yield insight into the nature of dispersion and donor-acceptor interactions in this type of system, including systematic trends in the periodic table, and also provide a benchmark for dispersion interactions in heavy main-group element compounds. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Toward community standards in the quest for orthologs

    PubMed Central

    Dessimoz, Christophe; Gabaldón, Toni; Roos, David S.; Sonnhammer, Erik L. L.; Herrero, Javier; Altenhoff, Adrian; Apweiler, Rolf; Ashburner, Michael; Blake, Judith; Boeckmann, Brigitte; Bridge, Alan; Bruford, Elspeth; Cherry, Mike; Conte, Matthieu; Dannie, Durand; Datta, Ruchira; Dessimoz, Christophe; Domelevo Entfellner, Jean-Baka; Ebersberger, Ingo; Gabaldón, Toni; Galperin, Michael; Herrero, Javier; Joseph, Jacob; Koestler, Tina; Kriventseva, Evgenia; Lecompte, Odile; Leunissen, Jack; Lewis, Suzanna; Linard, Benjamin; Livstone, Michael S.; Lu, Hui-Chun; Martin, Maria; Mazumder, Raja; Messina, David; Miele, Vincent; Muffato, Matthieu; Perrière, Guy; Punta, Marco; Roos, David; Rouard, Mathieu; Schmitt, Thomas; Schreiber, Fabian; Silva, Alan; Sjölander, Kimmen; Škunca, Nives; Sonnhammer, Erik; Stanley, Eleanor; Szklarczyk, Radek; Thomas, Paul; Uchiyama, Ikuo; Van Bel, Michiel; Vandepoele, Klaas; Vilella, Albert J.; Yates, Andrew; Zdobnov, Evgeny

    2012-01-01

    The identification of orthologs—gene pairs descended from a common ancestor through speciation, rather than duplication—has emerged as an essential component of many bioinformatics applications, ranging from the annotation of new genomes to experimental target prioritization. Yet the development and application of orthology inference methods is hampered by the lack of consensus on source proteomes, file formats, and benchmarks. The second ‘Quest for Orthologs’ meeting brought together stakeholders from various communities to address these challenges. We report on achievements and outcomes of this meeting, focusing on topics of particular relevance to the research community at large. The Quest for Orthologs consortium is an open community that welcomes contributions from all researchers interested in orthology research and applications. Contact: dessimoz@ebi.ac.uk PMID:22332236

  1. Complete graph model for community detection

    NASA Astrophysics Data System (ADS)

    Sun, Peng Gang; Sun, Xiya

    2017-04-01

    Community detection raises a number of challenging problems and has attracted considerable attention for many years. This paper develops a new framework that measures the interior and the exterior of a community with the same metric, the complete graph model. In particular, the exterior is modeled as a complete bipartite graph. We partition a network into subnetworks by maximizing the difference between the interior and the exterior of the subnetworks. In addition, we compare our approach with state-of-the-art methods on computer-generated networks based on the LFR benchmark as well as on real-world networks. The experimental results indicate that our approach obtains better results for community detection, is capable of splitting irregular networks, and achieves perfect results on the karate network and the dolphin network.

  2. "It's More than Stick and Rudder Skills": An Aviation Professional Development Community of Practice

    ERIC Educational Resources Information Center

    Bates, P.; O'Brien, W.

    2013-01-01

    In Australian higher education institutions, benchmarks have been directed at developing key competencies and attributes to facilitate students' transition into the workforce. However, for those students whose degree has a specific vocational focus, it is also necessary for them to commence their professional development whilst undergraduates.…

  3. Establishing a Baseline for School Readiness of Washington County Children Entering Kindergarten.

    ERIC Educational Resources Information Center

    Severeide, Rebecca

    The assessment of school readiness needs to include all aspects of children's early learning and indicators of family/community activities that support children's development. This study used a holistic approach to set baseline benchmarks on factors related to school readiness for entering kindergarten children, and to engage schools in Washington…

  4. Individual and community responses in stream mesocosms with different ionic compositions of conductivity and compared to a field-based benchmark

    EPA Science Inventory

    Several anthropogenic activities cause excess total dissolved solids (TDS) content and its correlate, specific conductivity, in surface waters due to increases in the major geochemical ions (e.g., Na, Ca, Cl, SO4). However, the relative concentrations of major ions varies with t...

  5. Preparing Students for Education, Work, and Community: Activity Theory in Task-Based Curriculum Design

    ERIC Educational Resources Information Center

    Campbell, Chris; MacPherson, Seonaigh; Sawkins, Tanis

    2014-01-01

    This case study describes how sociocultural and activity theory were applied in the design of a publicly funded, Canadian Language Benchmark (CLB)-based English as a Second Language (ESL) credential program and curriculum for immigrant and international students in postsecondary institutions in British Columbia, Canada. The ESL Pathways Project…

  6. Stochastic fluctuations and the detectability limit of network communities.

    PubMed

    Floretta, Lucio; Liechti, Jonas; Flammini, Alessandro; De Los Rios, Paolo

    2013-12-01

    We have analyzed the detectability limits of network communities in the framework of the popular Girvan and Newman benchmark. By carefully taking into account the inevitable stochastic fluctuations that affect the construction of each and every instance of the benchmark, we come to the conclusion that the native, putative partition of the network is completely lost even before the in-degree/out-degree ratio becomes equal to that of a structureless Erdős-Rényi network. We develop a simple iterative scheme, analytically well described by an infinite branching process, to provide an estimate of the true detectability limit. Using various algorithms based on modularity optimization, we show that all of them behave (semiquantitatively) in the same way, with the same functional form of the detectability threshold as a function of the network parameters. Because the same behavior has also been found with further modularity-optimization methods and with methods based on different heuristic implementations, we conclude that a correct definition of the detectability limit must indeed take into account the stochastic fluctuations of the network construction.

  7. Clinical effectiveness of a cognitive behavioral group treatment program for anxiety disorders: a benchmarking study.

    PubMed

    Oei, Tian P S; Boschen, Mark J

    2009-10-01

    Previous research has established efficacy of cognitive behavioral therapy (CBT) for anxiety disorders, yet it has not been widely assessed in routine community clinic practices. Efficacy research sacrifices external validity to achieve maximum internal validity. Recently, effectiveness research has been advocated as more ecologically valid for assessing routine clinical work in community clinics. Furthermore, there is a lack of effectiveness research in group CBT. This study aims to extend existing research on the effectiveness of CBT from individual therapy into group therapy delivery. It aimed also to examine outcome using not only symptom measures, but also measures of related symptoms, cognitions, and life quality and satisfaction. Results from a cohort of patients with various anxiety disorders demonstrated that treatment was effective in reducing anxiety symptoms to an extent comparable with other effectiveness studies. Despite this, only 43% of individuals showed reliable change, and 17% were 'recovered' from their anxiety symptoms, and the post-treatment measures were still significantly different from the level of anxiety symptoms observed in the general population.

  8. An Exploration of the Gap between Highest and Lowest Ability Readers across 20 Countries

    ERIC Educational Resources Information Center

    Alivernini, Fabio

    2013-01-01

    The aim of the present study, based on data from 20 countries, is to identify the pattern of variables (at country, school and student levels), which are typical of students performing below the low international benchmark compared to students performing at the advanced performance benchmark, in the Progress in International Reading Literacy Study…

  9. Using Kentucky State Standards as Benchmarks: Quantifying Incoming Ed.S. Students' Knowledge as They Journey toward Principalship

    ERIC Educational Resources Information Center

    Hearn, Jessica E.

    2015-01-01

    Principal preparation programs in Kentucky can use the items in the Dispositions, Dimensions, and Functions for School Leaders (EPSB, 2008) as mastery benchmarks to quantify incoming Educational Specialist (Ed.S) students' perceived level of mastery. This can serve both internal and external purposes by providing diagnostic feedback to students…

  10. A cooperative game framework for detecting overlapping communities in social networks

    NASA Astrophysics Data System (ADS)

    Jonnalagadda, Annapurna; Kuppusamy, Lakshmanan

    2018-02-01

    Community detection in social networks is a challenging and complex task, which has received much attention from researchers across multiple domains in recent years. The evolution of communities in social networks is driven solely by the self-interest of the nodes. An interesting feature of community structure in social networks is the multi-membership of nodes, resulting in overlapping communities. Treating the nodes of the social network as self-interested players, the dynamics of community formation can be captured in the form of a game. In this paper, we propose a greedy algorithm, namely the Weighted Graph Community Game (WGCG), to model the interactions among the self-interested nodes of the social network. The proposed algorithm employs the Shapley value mechanism to discover the inherent communities of the underlying social network. Experimental evaluation on real-world and synthetic benchmark networks demonstrates that the performance of the proposed algorithm is superior to state-of-the-art overlapping community detection algorithms.
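    For readers unfamiliar with the mechanism the algorithm builds on, the sketch below computes exact Shapley values for a tiny coalitional game by averaging marginal contributions over all join orders. The characteristic function is a toy stand-in, not the WGCG community-utility function, which is not reproduced here.

      from itertools import permutations

      def shapley_values(players, v):
          """Average marginal contribution of each player over all join orders."""
          values = {p: 0.0 for p in players}
          orders = list(permutations(players))
          for order in orders:
              coalition = set()
              for p in order:
                  before = v(coalition)
                  coalition.add(p)
                  values[p] += v(coalition) - before
          return {p: total / len(orders) for p, total in values.items()}

      # Toy game: a coalition's value is the number of edges it covers in a small graph.
      edges = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}
      v = lambda S: sum(1 for (u, w) in edges if u in S and w in S)
      print(shapley_values(["a", "b", "c", "d"], v))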

  11. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.

  12. LipidQC: Method Validation Tool for Visual Comparison to SRM 1950 Using NIST Interlaboratory Comparison Exercise Lipid Consensus Mean Estimate Values.

    PubMed

    Ulmer, Candice Z; Ragland, Jared M; Koelmel, Jeremy P; Heckert, Alan; Jones, Christina M; Garrett, Timothy J; Yost, Richard A; Bowden, John A

    2017-12-19

    As advances in analytical separation techniques, mass spectrometry instrumentation, and data processing platforms continue to spur growth in the lipidomics field, more structurally unique lipid species are detected and annotated. The lipidomics community is in need of benchmark reference values to assess the validity of various lipidomics workflows in providing accurate quantitative measurements across the diverse lipidome. LipidQC addresses the harmonization challenge in lipid quantitation by providing a semiautomated process, independent of analytical platform, for visual comparison of experimental results of National Institute of Standards and Technology Standard Reference Material (SRM) 1950, "Metabolites in Frozen Human Plasma", against benchmark consensus mean concentrations derived from the NIST Lipidomics Interlaboratory Comparison Exercise.

  13. Network clustering and community detection using modulus of families of loops.

    PubMed

    Shakeri, Heman; Poggi-Corradini, Pietro; Albin, Nathan; Scoglio, Caterina

    2017-01-01

    We study the structure of loops in networks using the notion of modulus of loop families. We introduce an alternate measure of network clustering by quantifying the richness of families of (simple) loops. Modulus tries to minimize the expected overlap among loops by spreading the expected link usage optimally. We propose weighting networks using these expected link usages to improve classical community detection algorithms. We show that the proposed method enhances the performance of certain algorithms, such as spectral partitioning and modularity maximization heuristics, on standard benchmarks.

  14. Benchmarking successional progress in a quantitative food web.

    PubMed

    Boit, Alice; Gaedke, Ursula

    2014-01-01

    Central to ecology and ecosystem management, succession theory aims to mechanistically explain and predict the assembly and development of ecological communities. Yet processes at lower hierarchical levels, e.g. at the species and functional group level, are rarely mechanistically linked to the under-investigated system-level processes which drive changes in ecosystem properties and functioning and are comparable across ecosystems. As a model system for secondary succession, seasonal plankton succession during the growing season is readily observable and largely driven autogenically. We used a long-term dataset from large, deep Lake Constance comprising biomasses, auto- and heterotrophic production, food quality, functional diversity, and mass-balanced food webs of the energy and nutrient flows between functional guilds of plankton and partly fish. Extracting population- and system-level indices from this dataset, we tested current hypotheses about the directionality of successional progress which are rooted in ecosystem theory, the metabolic theory of ecology, quantitative food web theory, thermodynamics, and information theory. Our results indicate that successional progress in Lake Constance is quantifiable, passing through predictable stages. Mean body mass, functional diversity, predator-prey weight ratios, trophic positions, system residence times of carbon and nutrients, and the complexity of the energy flow patterns increased during succession. In contrast, both the mass-specific metabolic activity and the system export decreased, while the succession rate exhibited a bimodal pattern. The weighted connectance introduced here represents a suitable index for assessing the evenness and interconnectedness of energy flows during succession. Diverging from earlier predictions, ascendency and eco-exergy did not increase during succession. Linking aspects of functional diversity to metabolic theory and food web complexity, we reconcile previously disjoint bodies of ecological theory to form a complete picture of successional progress within a pelagic food web. This comprehensive synthesis may be used as a benchmark for quantifying successional progress in other ecosystems.

  15. Benchmarking Successional Progress in a Quantitative Food Web

    PubMed Central

    Boit, Alice; Gaedke, Ursula

    2014-01-01

    Central to ecology and ecosystem management, succession theory aims to mechanistically explain and predict the assembly and development of ecological communities. Yet processes at lower hierarchical levels, e.g. at the species and functional group level, are rarely mechanistically linked to the under-investigated system-level processes which drive changes in ecosystem properties and functioning and are comparable across ecosystems. As a model system for secondary succession, seasonal plankton succession during the growing season is readily observable and largely driven autogenically. We used a long-term dataset from large, deep Lake Constance comprising biomasses, auto- and heterotrophic production, food quality, functional diversity, and mass-balanced food webs of the energy and nutrient flows between functional guilds of plankton and partly fish. Extracting population- and system-level indices from this dataset, we tested current hypotheses about the directionality of successional progress which are rooted in ecosystem theory, the metabolic theory of ecology, quantitative food web theory, thermodynamics, and information theory. Our results indicate that successional progress in Lake Constance is quantifiable, passing through predictable stages. Mean body mass, functional diversity, predator-prey weight ratios, trophic positions, system residence times of carbon and nutrients, and the complexity of the energy flow patterns increased during succession. In contrast, both the mass-specific metabolic activity and the system export decreased, while the succession rate exhibited a bimodal pattern. The weighted connectance introduced here represents a suitable index for assessing the evenness and interconnectedness of energy flows during succession. Diverging from earlier predictions, ascendency and eco-exergy did not increase during succession. Linking aspects of functional diversity to metabolic theory and food web complexity, we reconcile previously disjoint bodies of ecological theory to form a complete picture of successional progress within a pelagic food web. This comprehensive synthesis may be used as a benchmark for quantifying successional progress in other ecosystems. PMID:24587353

  16. A shallow water table fluctuation model in response to precipitation with consideration of unsaturated gravitational flow

    NASA Astrophysics Data System (ADS)

    Park, E.; Jeong, J.

    2017-12-01

    A precise estimation of groundwater fluctuation is studied by considering delayed recharge flux (DRF) and unsaturated zone drainage (UZD). Both DRF and UZD arise from gravitational flow impeded in the unsaturated zone, which may have a non-negligible effect on groundwater level changes. For validation, a previous model that does not consider unsaturated flow is used as a benchmark, with the observed groundwater level and precipitation data divided into three periods based on climatic conditions. The estimation capability of the new model is superior to that of the benchmarked model, as indicated by its significantly improved representation of the groundwater level with physically interpretable model parameters.

  17. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse, and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is spent on particle-particle force calculations, drag force calculations, and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments of the code as well as a preliminary indicator of where to best focus performance optimizations.
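    As a reminder of how the weak-scaling figures quoted above are typically computed (the runtimes below are invented, not taken from the report): with the per-core workload held fixed, weak-scaling efficiency on P cores is the single-core runtime divided by the P-core runtime.

      # Hypothetical runtimes (seconds) for a fixed per-core workload.
      runtimes = {1: 120.0, 8: 131.0, 64: 150.0, 512: 190.0, 1024: 260.0}

      t1 = runtimes[1]
      for cores, t in sorted(runtimes.items()):
          print(f"{cores:>5} cores: weak-scaling efficiency = {t1 / t:.2f}")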

  18. Employing Nested OpenMP for the Parallelization of Multi-Zone Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Jost, Gabriele

    2004-01-01

    In this paper we describe the parallelization of the multi-zone code versions of the NAS Parallel Benchmarks employing multi-level OpenMP parallelism. For our study we use the NanosCompiler, which supports nesting of OpenMP directives and provides clauses to control the grouping of threads, load balancing, and synchronization. We report the benchmark results, compare the timings with those of different hybrid parallelization paradigms, and discuss OpenMP implementation issues that affect the performance of multi-level parallel applications.

  19. Benchmark Problems for Space Mission Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Folta, David C.; Burns, Richard

    2003-01-01

    To provide a high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested for space mission formation flying. The problems cover formation flying in low altitude, near-circular Earth orbit, high altitude, highly elliptical Earth orbits, and large amplitude lissajous trajectories about co-linear libration points of the Sun-Earth/Moon system. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions that are of interest to various agencies.

  20. Evaluation of triclosan in Minnesota lakes and rivers: Part II - human health risk assessment.

    PubMed

    Yost, Lisa J; Barber, Timothy R; Gentry, P Robinan; Bock, Michael J; Lyndall, Jennifer L; Capdevielle, Marie C; Slezak, Brian P

    2017-08-01

    Triclosan, an antimicrobial compound found in consumer products, has been detected in low concentrations in Minnesota municipal wastewater treatment plant (WWTP) effluent. This assessment evaluates potential health risks for exposure of adults and children to triclosan in Minnesota surface water, sediments, and fish. Potential exposures via fish consumption are considered for recreational or subsistence-level consumers. This assessment uses two chronic oral toxicity benchmarks, which bracket other available toxicity values. The first benchmark is a lower bound on a benchmark dose associated with a 10% risk (BMDL10) of 47 mg per kilogram per day (mg/kg-day) for kidney effects in hamsters. This value was identified as the most sensitive endpoint and species in a review by Rodricks et al. (2010) and is used herein to derive an estimated reference dose (RfD(Rodricks)) of 0.47 mg/kg-day. The second benchmark is a reference dose (RfD) of 0.047 mg/kg-day derived from a no observed adverse effect level (NOAEL) of 10 mg/kg-day for hepatic and hematopoietic effects in mice (Minnesota Department of Health [MDH] 2014). Based on conservative assumptions regarding human exposures to triclosan, calculated risk estimates are far below levels of concern. These estimates are likely to overestimate risks for potential receptors, particularly because sample locations were generally biased towards known discharges (i.e., WWTP effluent). Copyright © 2017 Elsevier Inc. All rights reserved.
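    A minimal sketch of the arithmetic behind such a screening assessment follows. The 100-fold uncertainty factor is implied by the ratio of the two stated values (47 to 0.47 mg/kg-day), and the exposure dose is a hypothetical placeholder, not a value from the assessment.

      bmdl10 = 47.0                      # mg/kg-day, kidney effects in hamsters (Rodricks et al. 2010)
      uncertainty_factor = 100.0         # implied by 47 / 0.47; an assumption, not stated in the abstract
      rfd_rodricks = bmdl10 / uncertainty_factor   # 0.47 mg/kg-day
      rfd_mdh = 0.047                    # mg/kg-day, from a NOAEL of 10 mg/kg-day (MDH 2014)

      estimated_dose = 1.0e-4            # mg/kg-day, hypothetical intake estimate
      for name, rfd in [("RfD (Rodricks)", rfd_rodricks), ("RfD (MDH)", rfd_mdh)]:
          print(f"{name}: hazard quotient = {estimated_dose / rfd:.2e}")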

  1. Establishing objective benchmarks in robotic virtual reality simulation at the level of a competent surgeon using the RobotiX Mentor simulator.

    PubMed

    Watkinson, William; Raison, Nicholas; Abe, Takashige; Harrison, Patrick; Khan, Shamim; Van der Poel, Henk; Dasgupta, Prokar; Ahmed, Kamran

    2018-05-01

    To establish objective benchmarks at the level of a competent robotic surgeon, across different exercises and metrics, for the RobotiX Mentor virtual reality (VR) simulator, suitable for use within a robotic surgical training curriculum. This retrospective observational study analysed results from multiple data sources, all of which used the RobotiX Mentor VR simulator. 123 participants with experience ranging from novice to expert completed the exercises: 84 novices, 26 beginner intermediates, 9 advanced intermediates, and 4 experts. Competency was established as the 25th centile of the mean advanced-intermediate score. Three basic skill exercises and two advanced skill exercises were used, and the study was conducted at King's College London. Objective benchmarks derived from the 25th centile of the mean scores of the advanced intermediates provided suitably challenging yet achievable targets for training surgeons. The disparity in scores was greatest for the advanced exercises. Novice surgeons were able to achieve the benchmarks in the majority of metrics across all exercises. We have successfully created this proof-of-concept study, which requires validation in a larger cohort. Objective benchmarks obtained from the 25th centile of the mean scores of advanced intermediates provide clinically relevant benchmarks at the standard of a competent robotic surgeon that are challenging yet attainable. They can be used within a VR training curriculum, allowing participants to track and monitor their progress in a structured and progressive manner through five exercises, and they provide clearly defined targets, ensuring that a universal training standard is achieved across training surgeons. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
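    A simplified sketch of the benchmark-setting rule follows: the pass mark for each exercise or metric is taken as the 25th percentile of the advanced-intermediate scores. The scores below are hypothetical, and the handling of per-metric means is simplified relative to the study.

      import numpy as np

      # Hypothetical per-exercise scores for the nine advanced intermediates.
      advanced_intermediate_scores = {
          "basic_1":    [62, 70, 75, 78, 80, 83, 85, 88, 91],
          "advanced_1": [40, 48, 55, 58, 63, 66, 70, 74, 81],
      }

      benchmarks = {ex: float(np.percentile(s, 25)) for ex, s in advanced_intermediate_scores.items()}

      def meets_benchmark(exercise, score):
          return score >= benchmarks[exercise]

      print(benchmarks)
      print(meets_benchmark("advanced_1", 60))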

  2. Benchmark and Framework for Encouraging Research on Multi-Threaded Testing Tools

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Stoller, Scott D.; Ur, Shmuel

    2003-01-01

    A problem that has been gaining prominence in testing is that of looking for intermittent bugs. Multi-threaded code is becoming very common, mostly on the server side. As there is no silver-bullet solution, research focuses on a variety of partial solutions. In this paper (invited by PADTAD 2003) we outline a proposed project to facilitate such research. The project goals are as follows. The first goal is to create a benchmark that can be used to evaluate different solutions. The benchmark, apart from containing programs with documented bugs, will include other artifacts, such as traces, that are useful for evaluating some of the technologies. The second goal is to create a set of tools with open APIs that can be used to check ideas without building a large system. For example, an instrumentor will be available that could be used to test temporal noise-making heuristics. The third goal is to create a focus for research in this area around which a community of people who try to solve similar problems with different techniques could congregate.

  3. Saturn Dynamo Model (Invited)

    NASA Astrophysics Data System (ADS)

    Glatzmaier, G. A.

    2010-12-01

    There has been considerable interest during the past few years about the banded zonal winds and global magnetic field on Saturn (and Jupiter). Questions regarding the depth to which the intense winds extend below the surface and the role they play in maintaining the dynamo continue to be debated. The types of computer models employed to address these questions fall into two main classes: general circulation models (GCMs) based on hydrostatic shallow-water assumptions from the atmospheric and ocean modeling communities and global non-hydrostatic deep convection models from the geodynamo and solar dynamo communities. The latter class can be further divided into Boussinesq models, which do not account for density stratification, and anelastic models, which do. Recent efforts to convert GCMs to deep circulation anelastic models have succeeded in producing fluid flows similar to those obtained from the original deep convection anelastic models. We describe results from one of the original anelastic convective dynamo simulations and compare them to a recent anelastic dynamo benchmark for giant gas planets. This benchmark is based on a polytropic reference state that spans five density scale heights with a radius and rotation rate similar to those of our solar system gas giants. The resulting magnetic Reynolds number is about 3000. Better spatial resolution will be required to produce more realistic predictions that capture the effects of both the density and electrical conductivity stratifications and include enough of the turbulent kinetic energy spectrum. Important additional physics may also be needed in the models. However, the basic models used in all simulation studies of the global dynamics of giant planets will hopefully first be validated by doing these simpler benchmarks.

  4. Benchmarking: Another Attempt to Introduce Market-Oriented Policies into Irish Second-Level Education?

    ERIC Educational Resources Information Center

    Halton, Michael J.

    2003-01-01

    Teachers in Ireland fear that benchmarking in the context of the present review of pay and conditions for all public service workers camouflages a shift of concern away from the development of the individual student to concern for the quality of the educational process provided by schools. A recent dispute between secondary teachers and the Irish…

  5. Evaluating the Effectiveness of a State-Mandated Benchmark Reading Assessment: mClass Reading 3D (Text Reading and Comprehension)

    ERIC Educational Resources Information Center

    Snow, Amie B.; Morris, Darrell; Perney, Jan

    2018-01-01

    We examined which of two instruments (Text Reading and Comprehension inventory [TRC] or a traditional informal reading inventory [IRI]) provides the more valid assessment of a primary-grade student's reading instructional level. The TRC is currently the required, benchmark reading assessment for students in grades K-3 in the state of North…

  6. Social Studies: Grades 4, 8, & 11. Content Specifications for Statewide Assessment by Standard.

    ERIC Educational Resources Information Center

    Missouri State Dept. of Elementary and Secondary Education, Jefferson City.

    This state of Missouri guide to content specifications for social studies assessment is designed to give teachers direction for assessment at the benchmark levels of grades 4, 8, and 11 for each standard that is appropriate for a statewide assessment. The guide includes specifications of what students are expected to know at the benchmark levels…

  7. Nonparametric estimation of benchmark doses in environmental risk assessment

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
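    The sketch below illustrates the nonparametric idea with made-up quantal data: fit a monotone dose-response curve by isotonic regression (weighted by group size), then read off the smallest dose whose fitted extra risk reaches the benchmark response (BMR). It omits the bootstrap confidence limits discussed in the paper.

      import numpy as np
      from sklearn.isotonic import IsotonicRegression

      doses     = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
      affected  = np.array([1,   2,   3,   6,  11,  15])
      n_animals = np.array([20, 20,  20,  20,  20,  20])
      prop = affected / n_animals

      iso = IsotonicRegression(increasing=True)
      iso.fit(doses, prop, sample_weight=n_animals)   # monotone nondecreasing fit

      bmr = 0.10                                      # 10% benchmark response (extra risk)
      background = iso.predict([doses[0]])[0]
      target = background + bmr * (1.0 - background)  # extra-risk definition
      dose_grid = np.linspace(doses[0], doses[-1], 1001)
      risk_grid = iso.predict(dose_grid)
      bmd = dose_grid[np.argmax(risk_grid >= target)]
      print(f"Estimated BMD at {bmr:.0%} extra risk: {bmd:.2f}")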

  8. Taking the Lead in Science Education: Forging Next-Generation Science Standards. International Science Benchmarking Report

    ERIC Educational Resources Information Center

    Achieve, Inc., 2010

    2010-01-01

    In response to concerns over the need for a scientifically literate workforce, increasing the STEM pipeline, and aging science standards documents, the scientific and science education communities are embarking on the development of a new conceptual framework for science, led by the National Research Council (NRC), and aligned next generation…

  9. Assessing Proficiencies in Higher Education: Benchmarking Knowledge and ICT Skills of Students at an Urban Community College

    ERIC Educational Resources Information Center

    McManus, Teresa L.

    2005-01-01

    Colleges and universities seeking to assess proficiencies in information and communications technology may wish to learn more about new assessment tools developed by the Educational Testing Service (ETS), in collaboration with higher education partners. This article describes the administration of the Information and Communication Technology (ICT)…

  10. In-Home Toxic Exposures and the Community of Individuals Who Are Developmentally Disabled

    ERIC Educational Resources Information Center

    Trousdale, Kristie A.; Martin, Joyce; Abulafia, Laura; Del Bene Davis, Allison

    2010-01-01

    Chemicals are ubiquitous in the environment, and human exposure to them is inevitable. A benchmark investigation of industrial chemicals, pollutants, and pesticides in umbilical cord blood indicated that humans are born with an average of 200 pollutants already present in their bodies. The study found a total of 287 chemicals, of which, 180 are…

  11. Effect of Guided Collaboration on General and Special Educators' Perceptions of Collaboration and Student Achievement

    ERIC Educational Resources Information Center

    Laine, Sandra

    2013-01-01

    This study investigated the effects of a guided collaboration approach during professional learning community meetings (PLC's) on the perceptions of general and special educators as well as the effect on student performance as measured by benchmark evaluation. A mixed methodology approach was used to collect data through surveys, weekly…

  12. Benchmark duration of work hours for development of fatigue symptoms in Japanese workers with adjustment for job-related stress.

    PubMed

    Suwazono, Yasushi; Dochi, Mirei; Kobayashi, Etsuko; Oishi, Mitsuhiro; Okubo, Yasushi; Tanaka, Kumihiko; Sakata, Kouichi

    2008-12-01

    The objective of this study was to calculate benchmark durations and lower 95% confidence limits for benchmark durations of working hours associated with subjective fatigue symptoms by applying the benchmark dose approach while adjusting for job-related stress using multiple logistic regression analyses. A self-administered questionnaire was completed by 3,069 male and 412 female daytime workers (age 18-67 years) in a Japanese steel company. The eight dependent variables in the Cumulative Fatigue Symptoms Index were decreased vitality, general fatigue, physical disorders, irritability, decreased willingness to work, anxiety, depressive feelings, and chronic tiredness. Independent variables were daily working hours, four subscales (job demand, job control, interpersonal relationship, and job suitability) of the Brief Job Stress Questionnaire, and other potential covariates. Using significant parameters for working hours and those for other covariates, the benchmark durations of working hours were calculated for the corresponding Index property. Benchmark response was set at 5% or 10%. Assuming a condition of worst job stress, the benchmark duration/lower 95% confidence limit for benchmark duration of working hours per day with a benchmark response of 5% or 10% were 10.0/9.4 or 11.7/10.7 (irritability) and 9.2/8.9 or 10.4/9.8 (chronic tiredness) in men and 8.9/8.4 or 9.8/8.9 (chronic tiredness) in women. The threshold amounts of working hours for fatigue symptoms under the worst job-related stress were very close to the standard daily working hours in Japan. The results strongly suggest that special attention should be paid to employees whose working hours exceed threshold amounts based on individual levels of job-related stress.
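    A hedged sketch of the benchmark-duration calculation follows; the coefficients are invented, not the study's estimates. Given a fitted logistic model for a fatigue symptom, logit(p) = b0 + b_hours * hours + covariates, the benchmark duration is the number of daily working hours at which the extra risk over a baseline level reaches the benchmark response.

      import math

      def benchmark_duration(b0, b_hours, covariate_term, bmr=0.05, baseline_hours=8.0):
          def risk(hours):
              z = b0 + b_hours * hours + covariate_term
              return 1.0 / (1.0 + math.exp(-z))
          p0 = risk(baseline_hours)                     # background risk at standard hours
          target = p0 + bmr * (1.0 - p0)                # extra-risk definition of the BMR
          z_target = math.log(target / (1.0 - target))  # invert the logistic function
          return (z_target - b0 - covariate_term) / b_hours

      # Hypothetical coefficients; covariate_term stands in for a "worst job stress" profile.
      print(benchmark_duration(b0=-4.0, b_hours=0.25, covariate_term=1.0))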

  13. Assessing Ecosystem Model Performance in Semiarid Systems

    NASA Astrophysics Data System (ADS)

    Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.

    2017-12-01

    In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects as well as models themselves have largely been focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, or the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and the tendency for models to create much higher carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.
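    The benchmark statistics mentioned above reduce to familiar formulas; the sketch below computes root mean square error and the Pearson correlation between modeled and observed NEE for a handful of hypothetical values.

      import numpy as np

      observed = np.array([-1.2, -0.8, 0.1, 0.6, 1.4, 0.9, -0.3])   # hypothetical NEE observations
      modeled  = np.array([-2.0, -1.5, -0.4, 0.2, 0.8, 0.5, -0.9])  # hypothetical model output

      rmse = np.sqrt(np.mean((modeled - observed) ** 2))
      corr = np.corrcoef(modeled, observed)[0, 1]
      print(f"RMSE = {rmse:.2f}, r = {corr:.2f}")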

  14. A Study of Fixed-Order Mixed Norm Designs for a Benchmark Problem in Structural Control

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Calise, Anthony J.; Hsu, C. C.

    1998-01-01

    This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full-order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of the H2 design to unmodelled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full-order compensators that are robust to both unmodelled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed-norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed-norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.

  15. Organic Compounds in Clackamas River Water Used for Public Supply near Portland, Oregon, 2003-05

    USGS Publications Warehouse

    Carpenter, Kurt D.; McGhee, Gordon

    2009-01-01

    Organic compounds studied in this U.S. Geological Survey (USGS) assessment generally are man-made, including pesticides, gasoline hydrocarbons, solvents, personal care and domestic-use products, disinfection by-products, and manufacturing additives. In all, 56 compounds were detected in samples collected approximately monthly during 2003-05 at the intake for the Clackamas River Water plant, one of four community water systems on the lower Clackamas River. The diversity of compounds detected suggests a variety of different sources and uses (including wastewater discharges, industrial, agricultural, domestic, and others) and different pathways to drinking-water supplies (point sources, precipitation, overland runoff, ground-water discharge, and formation during water treatment). A total of 20 organic compounds were commonly detected (in at least 20 percent of the samples) in source water and (or) finished water. Fifteen compounds were commonly detected in source water, and five of these compounds (benzene, m- and p-xylene, diuron, simazine, and chloroform) also were commonly detected in finished water. With the exception of gasoline hydrocarbons, disinfection by-products, chloromethane, and the herbicide diuron, concentrations in source and finished water were less than 0.1 microgram per liter and always less than human-health benchmarks, which are available for about 60 percent of the compounds detected. On the basis of this screening-level assessment, adverse effects to human health are assumed to be negligible (subject to limitations of available human-health benchmarks).

  16. ASIS healthcare security benchmarking study.

    PubMed

    2001-01-01

    Effective security has aligned itself into the everyday operations of a healthcare organization. This is evident in every regional market segment, regardless of size, location, and provider clinical expertise or organizational growth. This research addresses key security issues from an acute care provider to freestanding facilities, from rural hospitals and community hospitals to large urban teaching hospitals. Security issues and concerns are identified and addressed daily by senior and middle management. As provider campuses become larger and more diverse, the hospitals surveyed have identified critical changes and improvements that are proposed or pending. Mitigating liabilities and improving patient, visitor, and/or employee safety are consequential to the performance and viability of all healthcare providers. Healthcare organizations have identified the requirement to compete for patient volume and revenue. The facility that can deliver high-quality healthcare in a comfortable, safe, secure, and efficient atmosphere will have a significant competitive advantage over a facility where patient or visitor security and safety is deficient. Continuing changes in healthcare organizations' operating structure and healthcare geographic layout mean changes in leadership and direction. These changes have led to higher levels of corporate responsibility. As a result, each organization participating in this benchmark study has added value and will derive value for the overall benefit of the healthcare providers throughout the nation. This study provides a better understanding of how the fundamental security needs of security in healthcare organizations are being addressed and its solutions identified and implemented.

  17. Evaluating the Quantitative Capabilities of Metagenomic Analysis Software.

    PubMed

    Kerepesi, Csaba; Grolmusz, Vince

    2016-05-01

    DNA sequencing technologies are applied widely and frequently today to describe metagenomes, i.e., microbial communities in environmental or clinical samples, without the need to culture them. These technologies usually return short (100-300 base-pair) DNA reads, which are processed by metagenomic analysis software that assigns phylogenetic composition information to the dataset. Here we evaluate three metagenomic analysis tools (AmphoraNet--a webserver implementation of AMPHORA2--, MG-RAST, and MEGAN5) for their ability to assign quantitative phylogenetic information to the data, describing how frequently microorganisms of the same taxa appear in the sample. The difficulty of the task arises from the fact that longer genomes produce more reads from the same organism than shorter genomes, and some tools assign higher frequencies to species with longer genomes than to those with shorter ones. This phenomenon is called the "genome length bias." Dozens of complex artificial metagenome benchmarks can be found in the literature. Because of their complexity, it is usually difficult to judge how resistant a metagenomic tool is to this genome length bias. Therefore, we have made a simple benchmark for evaluating "taxon counting" in a metagenomic sample: we took the same number of copies of three full bacterial genomes of different lengths, broke them up randomly into short reads of average length 150 bp, and mixed the reads, creating our simple benchmark. Because of its simplicity, the benchmark is not intended to serve as a mock metagenome, but if a tool fails on this simple task, it will surely fail on most real metagenomes. We applied the three tools to the benchmark. The ideal quantitative solution would assign the same proportion to each of the three bacterial taxa. We found that AMPHORA2/AmphoraNet gave the most accurate results while the other two tools underperformed: they assigned each short read to its respective taxon quite reliably, thereby producing the typical genome length bias. The benchmark dataset is available at http://pitgroup.org/static/3RandomGenome-100kavg150bps.fna.
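
    A minimal sketch of how such an equal-copy-number benchmark can be assembled is shown below (Python; the genome file names, copy number, and read-length parameters are illustrative placeholders rather than the authors' actual script). Because every genome contributes the same number of copies, longer genomes yield proportionally more reads, which is exactly the property that exposes genome length bias in taxon counting.

      import random

      def load_fasta(path):
          # Tiny FASTA reader: concatenate all sequence lines of a single-genome file.
          with open(path) as handle:
              return "".join(line.strip() for line in handle if not line.startswith(">"))

      def fragment(sequence, n_reads, avg_len=150, spread=30):
          # Cut n_reads random fragments of roughly avg_len bases from one genome.
          reads = []
          for _ in range(n_reads):
              length = max(50, int(random.gauss(avg_len, spread)))
              start = random.randrange(0, max(1, len(sequence) - length))
              reads.append(sequence[start:start + length])
          return reads

      genome_files = ["genome_A.fna", "genome_B.fna", "genome_C.fna"]  # placeholder names
      copies = 100                      # same number of copies of every genome
      all_reads = []
      for path in genome_files:
          seq = load_fasta(path)
          n_reads = copies * len(seq) // 150   # longer genome -> more reads
          all_reads.extend(fragment(seq, n_reads))
      random.shuffle(all_reads)

      with open("simple_benchmark.fna", "w") as out:
          for i, read in enumerate(all_reads):
              out.write(">read_{}\n{}\n".format(i, read))

      # An ideal taxon-counting tool applied to simple_benchmark.fna would report
      # the three source taxa in equal proportions, despite the unequal read counts.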

  18. A health risk benchmark for the neurologic effects of styrene: comparison with NOAEL/LOAEL approach.

    PubMed

    Rabovsky, J; Fowles, J; Hill, M D; Lewis, D C

    2001-02-01

    Benchmark dose (BMD) analysis was used to estimate an inhalation benchmark concentration for styrene neurotoxicity. Quantal data on neuropsychologic test results from styrene-exposed workers [Mutti et al. (1984). American Journal of Industrial Medicine, 5, 275-286] were used to quantify neurotoxicity, defined as the percent of tested workers who responded abnormally to ≥1, ≥2, or ≥3 out of a battery of eight tests. Exposure was based on previously published results on mean urinary mandelic- and phenylglyoxylic acid levels in the workers, converted to air styrene levels (15, 44, 74, or 115 ppm). Nonstyrene-exposed workers from the same region served as a control group. Maximum-likelihood estimates (MLEs) and BMDs at 5 and 10% response levels of the exposed population were obtained from log-normal analysis of the quantal data. The highest MLE was 9 ppm (BMD = 4 ppm) styrene and represents abnormal responses to ≥3 tests by 10% of the exposed population. The most health-protective MLE was 2 ppm styrene (BMD = 0.3 ppm) and represents abnormal responses to ≥1 test by 5% of the exposed population. A no observed adverse effect level/lowest observed adverse effect level (NOAEL/LOAEL) analysis of the same quantal data showed workers in all styrene exposure groups responded abnormally to ≥1, ≥2, or ≥3 tests, compared to controls, and the LOAEL was 15 ppm. A comparison of the BMD and NOAEL/LOAEL analyses suggests that at air styrene levels below the LOAEL, a segment of the worker population may be adversely affected. The benchmark approach will be useful for styrene noncancer risk assessment purposes by providing a more accurate estimate of potential risk that should, in turn, help to reduce the uncertainty that is a common problem in setting exposure levels.
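
    For reference, one common formulation of the benchmark dose under a log-normal (probit) dose-response model, which is not necessarily the exact formulation used in the study above, defines it through the extra risk over background:

      \[
        P(d) = \Phi\left(a + b \log d\right),
        \qquad
        \frac{P(\mathrm{BMD}) - P(0)}{1 - P(0)} = \mathrm{BMR},
      \]

    where BMR is the benchmark response (5 or 10% in this study). The BMD values quoted above are lower than the corresponding MLEs and thus appear to be lower confidence limits on the dose satisfying this condition.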

  19. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    deWit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  20. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  1. RASSP Benchmark 4 Technical Description.

    DTIC Science & Technology

    1998-01-09

    be carried out. Based on results of the study, an implementation of all, or part, of the system described in this benchmark technical description...validate interface and timing constraints. The ISA level of modeling defines the limit of detail expected in the VHDL virtual prototype. It does not...develop a set of candidate architectures and perform an architecture trade-off study. Candidate proces- sor implementations must then be examined for

  2. Middle Level Teachers' Perceptions of Interim Reading Assessments: An Exploratory Study of Data-Based Decision Making

    ERIC Educational Resources Information Center

    Reed, Deborah K.

    2015-01-01

    This study explored the data-based decision making of 12 teachers in grades 6-8 who were asked about their perceptions and use of three required interim measures of reading performance: oral reading fluency (ORF), retell, and a benchmark comprised of released state test items. Focus group participants reported they did not believe the benchmark or…

  3. Integrated Sensing Processor, Phase 2

    DTIC Science & Technology

    2005-12-01

    performance analysis for several baseline classifiers including neural nets, linear classifiers, and kNN classifiers. Use of CCDR as a preprocessing step...below the level of the benchmark non-linear classifier for this problem ( kNN ). Furthermore, the CCDR preconditioned kNN achieved a 10% improvement over...the benchmark kNN without CCDR. Finally, we found an important connection between intrinsic dimension estimation via entropic graphs and the optimal

  4. Impact of the Minimum Pricing Policy and introduction of brand (generic) substitution into the Pharmaceutical Benefits Scheme in Australia.

    PubMed

    McManus, P; Birkett, D J; Dudley, J; Stevens, A

    2001-01-01

    To describe the effects of introducing the Minimum Pricing Policy (MPP) and generic (brand) substitution in 1990 and 1994 respectively on the dispensing of Pharmaceutical Benefits Scheme (PBS) prescriptions both at the aggregate and individual patient level. The relative proportion of prescriptions with a brand premium and those at benchmark was examined 4 years after introduction of the MPP and again 5 years later after generic substitution by pharmacists was permitted. To determine the impact of a price signal at the individual level, case studies involving a patient tracking methodology were conducted on two drugs (fluoxetine and ranitidine) that received a brand premium. From a zero base when the MPP was introduced in 1990, there were 5.4 million prescriptions (17%) dispensed for benchmark products 4 years later in 1994. At this stage generic (brand) substitution by pharmacists was then permitted and the market share of benchmark brands increased to 45% (25.2 million) by 1999. In the patient tracking studies, a significantly lower proportion of patients was still taking the premium brand of fluoxetine 3 months after the introduction of a price signal compared with patients taking paroxetine which did not have a generic competitor. This was also the case for the premium brand of ranitidine when compared to famotidine. The size of the price signal also had a marked effect on dispensing behaviour with the drug with the larger premium (fluoxetine) showing a significantly greater switch away from the premium brand to the benchmark product. The introduction in 1990 of the Minimum Pricing Policy without allowing generic substitution had a relatively small impact on the selection of medicines within the Pharmaceutical Benefits Scheme. However the effect of generic substitution at the pharmacist level, which was introduced in December 1994, resulted in a marked increase in the percentage of eligible PBS items dispensed at benchmark. Case studies showed a larger premium resulted in a greater shift of patients from drugs with a brand premium to the benchmark alternative.

  5. Improving Seismic Data Accessibility and Performance Using HDF Containers

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Wang, J.; Yang, R.

    2017-12-01

    The performance of computational geophysical data processing and forward modelling depends on both computation and data access. Significant efforts to develop new data formats and libraries have been made by the community, such as IRIS/PASSCAL and ASDF on the data side, and programs and utilities such as ObsPy and SPECFEM. The National Computational Infrastructure hosts a nationally significant geophysical data collection that is co-located with a high-performance computing facility, providing an opportunity to investigate how to improve the data formats from both a data-management and a performance point of view. This paper investigates how to enhance data usability from several perspectives: 1) propose a convention for the seismic (both active and passive) community to improve data accessibility and interoperability; 2) recommend the convention to be used in the HDF container when data are made available in PH5 or ASDF formats; 3) provide tools to convert between various seismic data formats; 4) provide performance benchmark cases using the ObsPy library and SPECFEM3D to demonstrate how different data organization, in terms of chunking size and compression, impacts performance, by comparing new data formats such as PH5 and ASDF to traditional formats such as SEGY, SEED, and SAC. In this work we apply our knowledge and experience of data standards and conventions, such as CF and ACDD from the climate community, to the seismology community. The generic global attributes widely used in the climate community are combined with existing conventions in the seismology community, such as CMT, QuakeML, StationXML, and the SEGY header convention. We also extend the convention by including provenance and benchmarking records so that the user can learn the footprint of the data together with its baseline performance. In practice, we convert example wide-angle reflection seismic data from SEGY to PH5 or ASDF using the ObsPy and pyasdf libraries, and quantitatively demonstrate how accessibility can be improved when the seismic data are stored in an HDF container.
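
    As an illustration of the kind of conversion described in point 3, the sketch below reads a legacy SEG-Y file with ObsPy and stores the waveforms in an ASDF (HDF5) container with pyasdf. The file names are hypothetical placeholders, and the specific tags, global attributes, and provenance records recommended by the proposed convention are not reproduced here.

      import obspy
      import pyasdf

      # Read a legacy-format file; ObsPy can also read SAC, MiniSEED, SEED, etc.
      stream = obspy.read("line01_shot_gathers.segy", format="SEGY")

      # Create an ASDF/HDF5 container and store the waveforms under a tag.
      ds = pyasdf.ASDFDataSet("line01.h5", compression="gzip-3")
      ds.add_waveforms(stream, tag="raw_recording")
      # Station metadata (StationXML) and provenance documents can be attached to
      # the same container, e.g. ds.add_stationxml("stations.xml"), so that data,
      # metadata, and processing history travel together.
      del ds  # flush and close the HDF5 file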

  6. Hypothesis generation using network structures on community health center cancer-screening performance.

    PubMed

    Carney, Timothy Jay; Morgan, Geoffrey P; Jones, Josette; McDaniel, Anna M; Weaver, Michael T; Weiner, Bryan; Haggstrom, David A

    2015-10-01

    Nationally sponsored cancer-care quality-improvement efforts have been deployed in community health centers to increase breast, cervical, and colorectal cancer-screening rates among vulnerable populations. Despite several immediate and short-term gains, screening rates remain below national benchmark objectives. Overall improvement has been difficult to sustain over time in some organizational settings and challenging to diffuse to other settings as repeatable best practices. Reasons for this include facility-level changes, which typically occur in dynamic organizational environments that are complex, adaptive, and unpredictable. This study seeks to understand the factors that shape community health center facility-level cancer-screening performance over time. It applies a computational-modeling approach, combining principles of health-services research, health informatics, network theory, and systems science. To investigate the roles of knowledge acquisition, retention, and sharing within the setting of the community health center and to examine their effects on the relationship between clinical decision support capabilities and improvement in cancer-screening rates, we employed Construct-TM to create simulated community health centers using previously collected point-in-time survey data. Construct-TM is a multi-agent model of network evolution. Because social, knowledge, and belief networks co-evolve, groups and organizations are treated as complex systems to capture the variability of human and organizational factors. In Construct-TM, individuals and groups interact by communicating, learning, and making decisions in a continuous cycle. Data from the survey were used to differentiate high-performing simulated community health centers from low-performing ones based on computer-based decision support usage and self-reported cancer-screening improvement. This virtual experiment revealed that patterns of overall network symmetry, agent cohesion, and connectedness varied by community health center performance level. Visual assessment of both the agent-to-agent knowledge sharing network and agent-to-resource knowledge use network diagrams demonstrated that community health centers labeled as high performers typically showed higher levels of collaboration and cohesiveness among agent classes, faster knowledge-absorption rates, and fewer agents that were unconnected to key knowledge resources. Conclusions and research implications: Using the point-in-time survey data outlining community health center cancer-screening practices, our computational model successfully distinguished between high and low performers. Results indicated that high-performance environments displayed distinctive network characteristics in patterns of interaction among agents, as well as in the access and utilization of key knowledge resources. Our study demonstrated how non-network-specific data obtained from a point-in-time survey can be employed to forecast community health center performance over time, thereby enhancing the sustainability of long-term strategic-improvement efforts. Our results revealed a strategic profile for community health center cancer-screening improvement via simulation over a projected 10-year period. The use of computational modeling allows additional inferential knowledge to be drawn from existing data when examining organizational performance in increasingly complex environments. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Using a visual plate waste study to monitor menu performance.

    PubMed

    Connors, Priscilla L; Rozell, Sarah B

    2004-01-01

    Two visual plate waste studies were conducted in 1-week phases over a 1-year period in an acute care hospital. A total of 383 trays were evaluated in the first phase and 467 in the second. Food items were ranked for consumption from a low (1) to high (6) score, with a score of 4.0 set as the benchmark denoting a minimum level of acceptable consumption. In the first phase two entrees, four starches, all of the vegetables, sliced white bread, and skim milk scored below the benchmark. As a result six menu items were replaced and one was modified. In the second phase all entrees scored at or above 4.0, as did seven vegetables, and a dinner roll that replaced sliced white bread. Skim milk continued to score below the benchmark. A visual plate waste study assists in benchmarking performance, planning menu changes, and assessing effectiveness.

  8. Results of the GABLS3 diurnal-cycle benchmark for wind energy applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodrigo, J. Sanz; Allaerts, D.; Avila, M.

    We present results of the GABLS3 model intercomparison benchmark revisited for wind energy applications. The case consists of a diurnal cycle, measured at the 200-m tall Cabauw tower in the Netherlands, including a nocturnal low-level jet. The benchmark includes a sensitivity analysis of WRF simulations using two input meteorological databases and five planetary boundary-layer schemes. A reference set of mesoscale tendencies is used to drive microscale simulations using RANS k-ϵ and LES turbulence models. The validation is based on rotor-based quantities of interest. Cycle-integrated mean absolute errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Altogether, all the microscale simulations produce a consistent coupling with mesoscale forcings.

  9. Results of the GABLS3 diurnal-cycle benchmark for wind energy applications

    DOE PAGES

    Rodrigo, J. Sanz; Allaerts, D.; Avila, M.; ...

    2017-06-13

    We present results of the GABLS3 model intercomparison benchmark revisited for wind energy applications. The case consists of a diurnal cycle, measured at the 200-m tall Cabauw tower in the Netherlands, including a nocturnal low-level jet. The benchmark includes a sensitivity analysis of WRF simulations using two input meteorological databases and five planetary boundary-layer schemes. A reference set of mesoscale tendencies is used to drive microscale simulations using RANS k-ϵ and LES turbulence models. The validation is based on rotor-based quantities of interest. Cycle-integrated mean absolute errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Altogether, all the microscale simulations produce a consistent coupling with mesoscale forcings.
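
    Assuming the standard definition, the cycle-integrated mean absolute error used to score a rotor-based quantity of interest \(\phi\) is

      \[
        \mathrm{MAE}(\phi) = \frac{1}{N} \sum_{i=1}^{N}
        \left| \phi^{\mathrm{model}}(t_i) - \phi^{\mathrm{obs}}(t_i) \right|,
      \]

    with the sum taken over the output times \(t_i\) spanning the full diurnal cycle; the precise set of rotor-based quantities is defined by the benchmark itself.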

  10. Anthropogenic Organic Compounds in Source and Finished Groundwater of Community Water Systems in the Piedmont Physiographic Province, Potomac River Basin, Maryland and Virginia, 2003-04

    USGS Publications Warehouse

    Banks, William S.L.; Reyes, Betzaida

    2009-01-01

    A source- and finished-water-quality assessment of groundwater was conducted in the Piedmont Physiographic Province of Maryland and Virginia in the Potomac River Basin during 2003-04 as part of the U.S. Geological Survey's National Water-Quality Assessment Program. This assessment used a two-phased approach to sampling that allowed investigators to evaluate the occurrence of more than 280 anthropogenic organic compounds (volatile organic compounds, pesticides and pesticide degradates, and other anthropogenic organic compounds). Analyses of water from 15 of the largest community water systems in the study area were included in the assessment. Source-water samples (raw-water samples collected prior to treatment) were collected at the well head. Finished-water samples (raw water that had been treated and disinfected) were collected after treatment and prior to distribution. Phase one samples, collected in August and September 2003, focused on source water. Phase two analyzed both source and finished water, and samples were collected in August and October of 2004. The results from phase one showed that samples collected from the source water for 15 community water systems contained 92 anthropogenic organic compounds (41 volatile organic compounds, 37 pesticides and pesticide degradates, and 14 other anthropogenic organic compounds). The 5 most frequently occurring anthropogenic organic compounds were detected in 11 of the 15 source-water samples. Deethylatrazine, a degradate of atrazine, was present in all 15 samples and metolachlor ethanesulfonic acid, a degradate of metolachlor, and chloroform were present in 13 samples. Atrazine and metolachlor were present in 12 and 11 samples, respectively. All samples contained a mixture of compounds with an average of about 14 compounds per sample. Phase two sampling focused on 10 of the 15 community water systems that were selected for resampling on the basis of occurrence of anthropogenic organic compounds detected most frequently during the first phase. A total of 48 different anthropogenic organic compounds were detected in samples collected from source and finished water. There were a similar number of compounds detected in finished water (41) and in source water (39). The most commonly detected group of anthropogenic organic compounds in finished water was trihalomethanes - compounds associated with the disinfection of drinking water. This group of compounds accounted for 30 percent of the detections in source water and 44 percent of the detections in finished water, and these compounds were generally found in higher concentrations in finished water. Excluding trihalomethanes, the number of total detections was about the same in source-water samples (33) as it was in finished-water samples (35). During both phases of the study, two measurements for human-health assessment were used. The first, the Maximum Contaminant Level for drinking water, is set by the U.S. Environmental Protection Agency and represents a legally enforceable maximum concentration of a contaminant permitted in drinking water. The second, the Health-Based Screening Level, was developed by the U.S. Geological Survey, is not legally enforceable, and represents a limit for more chronic exposures. Maximum concentrations for each detected compound were compared with either the Maximum Contaminant Level or the Health-Based Screening Level when available. More than half of the compounds detected had either a Maximum Contaminant Level or a Health-Based Screening Level. 
A benchmark quotient was defined as the ratio of the detected concentration of a particular compound to its Maximum Contaminant Level or Health-Based Screening Level; a quotient of at least 10 percent (greater than or equal to 0.1) was considered a threshold for further monitoring. During phase one, when only source water was sampled, seven compounds (chloroform, benzene, acrylonitrile, methylene chloride, atrazine, alachlor, and dieldrin) met or exceeded the benchmark-quotient threshold. No de
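
    The screening step described above amounts to a simple ratio test; a minimal sketch, using made-up placeholder concentrations and benchmark values rather than data from the study, is:

      # Benchmark quotient = detected concentration / (MCL or HBSL); values >= 0.1
      # (10 percent) are flagged as candidates for further monitoring.
      detected_ug_per_L = {"compound_a": 12.0, "compound_b": 0.4}    # placeholders
      benchmark_ug_per_L = {"compound_a": 80.0, "compound_b": 3.0}   # placeholders

      for name, conc in detected_ug_per_L.items():
          quotient = conc / benchmark_ug_per_L[name]
          status = "candidate for continued monitoring" if quotient >= 0.1 else "below screening threshold"
          print("{}: benchmark quotient {:.2f} -> {}".format(name, quotient, status))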

  11. Assessing Student Understanding of the "New Biology": Development and Evaluation of a Criterion-Referenced Genomics and Bioinformatics Assessment

    NASA Astrophysics Data System (ADS)

    Campbell, Chad Edward

    Over the past decade, hundreds of studies have introduced genomics and bioinformatics (GB) curricula and laboratory activities at the undergraduate level. While these publications have facilitated the teaching and learning of cutting-edge content, there has yet to be an evaluation of these assessment tools to determine if they are meeting the quality control benchmarks set forth by the educational research community. An analysis of these assessment tools indicated that <10% referenced any quality control criteria and that none of the assessments met more than one of the quality control benchmarks. In the absence of evidence that these benchmarks had been met, it is unclear whether these assessment tools are capable of generating valid and reliable inferences about student learning. To remedy this situation the development of a robust GB assessment aligned with the quality control benchmarks was undertaken in order to ensure evidence-based evaluation of student learning outcomes. Content validity is a central piece of construct validity, and it must be used to guide instrument and item development. This study reports on: (1) the correspondence of content validity evidence gathered from independent sources; (2) the process of item development using this evidence; (3) the results from a pilot administration of the assessment; (4) the subsequent modification of the assessment based on the pilot administration results and; (5) the results from the second administration of the assessment. Twenty-nine different subtopics within GB (Appendix B: Genomics and Bioinformatics Expert Survey) were developed based on preliminary GB textbook analyses. These subtopics were analyzed using two methods designed to gather content validity evidence: (1) a survey of GB experts (n=61) and (2) a detailed content analyses of GB textbooks (n=6). By including only the subtopics that were shown to have robust support across these sources, 22 GB subtopics were established for inclusion in the assessment. An expert panel subsequently developed, evaluated, and revised two multiple-choice items to align with each of the 22 subtopics, producing a final item pool of 44 items. These items were piloted with student samples of varying content exposure levels. Both Classical Test Theory (CTT) and Item Response Theory (IRT) methodologies were used to evaluate the assessment's validity, reliability and ability inferences, and its ability to differentiate students with different magnitudes of content exposure. A total of 18 items were subsequently modified and reevaluated by an expert panel. The 26 original and 18 modified items were once again piloted with student samples of varying content exposure levels. Both CTT and IRT methodologies were once again used to evaluate student responses in order to evaluate the assessment's validity and reliability inferences as well as its ability to differentiate students with different magnitudes of content exposure. Interviews with students from different content exposure levels were also performed in order to gather convergent validity evidence (external validity evidence) as well as substantive validity evidence. Also included are the limitations of the assessment and a set of guidelines on how the assessment can best be used.

  12. School Leaders as both Colonized and Colonizers: Understanding Professional Identity in an Era of No Child Left Behind

    ERIC Educational Resources Information Center

    Lewis, Alisha Lauren

    2010-01-01

    This study positioned the federal No Child Left Behind (NCLB) Act of 2002 as a reified colonizing entity, inscribing its hegemonic authority upon the professional identity and work of school principals within their school communities of practice. Pressure on educators and students intensifies each year as the benchmark for Adequate Yearly Progress…

  13. Validation and Verification of Operational Land Analysis Activities at the Air Force Weather Agency

    NASA Technical Reports Server (NTRS)

    Shaw, Michael; Kumar, Sujay V.; Peters-Lidard, Christa D.; Cetola, Jeffrey

    2012-01-01

    The NASA-developed Land Information System (LIS) is the Air Force Weather Agency's (AFWA) operational Land Data Assimilation System (LDAS), combining real-time precipitation observations and analyses, global forecast model data, and vegetation, terrain, and soil parameters with the community Noah land surface model, along with other hydrology module options, to generate profile analyses of global soil moisture, soil temperature, and other important land surface characteristics. The land analysis products are generated from a range of satellite data products and surface observations, at global 1/4-degree spatial resolution, with model analyses produced at 3-hour intervals. AFWA recognizes the importance of operational benchmarking and uncertainty characterization for land surface modeling and is developing standard methods, software, and metrics to verify and/or validate LIS output products. To facilitate this and other needs for land analysis activities at AFWA, the Model Evaluation Toolkit (MET) -- a joint product of the National Center for Atmospheric Research Developmental Testbed Center (NCAR DTC), AFWA, and the user community -- and the Land surface Verification Toolkit (LVT), developed at the Goddard Space Flight Center (GSFC), have been adapted to the operational benchmarking needs of AFWA's land characterization activities.

  14. Uncertainty in Earth System Models: Benchmarks for Ocean Model Performance and Validation

    NASA Astrophysics Data System (ADS)

    Ogunro, O. O.; Elliott, S.; Collier, N.; Wingenter, O. W.; Deal, C.; Fu, W.; Hoffman, F. M.

    2017-12-01

    The mean ocean CO2 sink is a major component of the global carbon budget, with marine reservoirs holding about fifty times more carbon than the atmosphere. Phytoplankton play a significant role in the net carbon sink through photosynthesis and drawdown, such that about a quarter of anthropogenic CO2 emissions end up in the ocean. Biology greatly increases the efficiency of marine environments in CO2 uptake and ultimately reduces the impact of the persistent rise in atmospheric concentrations. However, a number of challenges remain in appropriate representation of marine biogeochemical processes in Earth System Models (ESM). These threaten to undermine the community effort to quantify seasonal to multidecadal variability in ocean uptake of atmospheric CO2. In a bid to improve analyses of marine contributions to climate-carbon cycle feedbacks, we have developed new analysis methods and biogeochemistry metrics as part of the International Ocean Model Benchmarking (IOMB) effort. Our intent is to meet the growing diagnostic and benchmarking needs of ocean biogeochemistry models. The resulting software package has been employed to validate DOE ocean biogeochemistry results by comparison with observational datasets. Several other international ocean models contributing results to the fifth phase of the Coupled Model Intercomparison Project (CMIP5) were analyzed simultaneously. Our comparisons suggest that the biogeochemical processes determining CO2 entry into the global ocean are not well represented in most ESMs. Polar regions continue to show notable biases in many critical biogeochemical and physical oceanographic variables. Some of these disparities could have first order impacts on the conversion of atmospheric CO2 to organic carbon. In addition, single forcing simulations show that the current ocean state can be partly explained by the uptake of anthropogenic emissions. Combined effects of two or more of these forcings on ocean biogeochemical cycles and ecosystems are challenging to predict since additive or antagonistic effects may occur. A benchmarking tool for accurate assessment and validation of marine biogeochemical outputs will be indispensable as the model community continues to improve ESM developments. It will provide a first order tool in understanding climate-carbon cycle feedbacks.

  15. Experimental validation benchmark data for CFD of transient convection from forced to natural with flow reversal on a vertical flat plate

    DOE PAGES

    Lance, Blake W.; Smith, Barton L.

    2016-06-23

    Transient convection has been investigated experimentally for the purpose of providing Computational Fluid Dynamics (CFD) validation benchmark data. A specialized facility for validation benchmark experiments called the Rotatable Buoyancy Tunnel was used to acquire thermal and velocity measurements of flow over a smooth, vertical heated plate. The initial condition was forced convection downward with subsequent transition to mixed convection, ending with natural convection upward after a flow reversal. Data acquisition through the transient was repeated for ensemble-averaged results. With simple flow geometry, validation data were acquired at the benchmark level. All boundary conditions (BCs) were measured and their uncertainties quantified. Temperature profiles on all four walls and the inlet were measured, as well as as-built test section geometry. Inlet velocity profiles and turbulence levels were quantified using Particle Image Velocimetry. System Response Quantities (SRQs) were measured for comparison with CFD outputs and include velocity profiles, wall heat flux, and wall shear stress. Extra effort was invested in documenting and preserving the validation data. Details about the experimental facility, instrumentation, experimental procedure, materials, BCs, and SRQs are made available through this paper. As a result, the latter two are available for download and the other details are included in this work.

  16. Groundwater-quality data in the Santa Cruz, San Gabriel, and Peninsular Ranges Hard Rock Aquifers study unit, 2011-2012: results from the California GAMA program

    USGS Publications Warehouse

    Davis, Tracy A.; Shelton, Jennifer L.

    2014-01-01

    Results for constituents with nonregulatory benchmarks set for aesthetic concerns showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in samples from 19 grid wells. Manganese concentrations greater than the SMCL-CA of 50 μg/L were detected in 27 grid wells. Chloride was detected at a concentration greater than the SMCL-CA upper benchmark of 500 mg/L in one grid well. TDS concentrations in three grid wells were greater than the SMCL-CA upper benchmark of 1,000 mg/L.

  17. Subgroup Benchmark Calculations for the Intra-Pellet Nonuniform Temperature Cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Jung, Yeon Sang; Liu, Yuxuan

    A benchmark suite has been developed by Seoul National University (SNU) for intrapellet nonuniform temperature distribution cases based on the practical temperature profiles according to the thermal power levels. Though a new subgroup capability for nonuniform temperature distribution was implemented in MPACT, no validation calculation has been performed for the new capability. This study focuses on benchmarking the new capability through a code-to-code comparison. Two continuous-energy Monte Carlo codes, McCARD and CE-KENO, are engaged in obtaining reference solutions, and the MPACT results are compared to the SNU nTRACER using a similar cross section library and subgroup method to obtain self-shielded cross sections.

  18. Anthropogenic Organic Compounds in Ground Water and Finished Water of Community Water Systems near Dayton, Ohio, 2002-04

    USGS Publications Warehouse

    Thomas, Mary Ann

    2007-01-01

    Source water for 15 community-water-system (CWS) wells in the vicinity of Dayton, Ohio, was sampled to evaluate the occurrence of 258 anthropogenic organic compounds (AOCs). At least one AOC was detected in 12 of the 15 samples. Most samples contained a mixture of compounds (average of four compounds per sample). The compounds that were detected in more than 30 percent of the samples included three volatile organic compounds (VOCs) (trichloroethene, chloroform, and 1,1,1-trichloroethane) and four pesticides or pesticide breakdown products (prometon, simazine, atrazine, and deethylatrazine). In general, VOCs were detected at higher concentrations than pesticides were; among the VOCs, the maximum detected concentration was 4.8 μg/L (for trichloroethene), whereas among the pesticides, the maximum detected concentration was 0.041 μg/L (for atrazine). During a later phase of the study, samples of source water from five CWS wells were compared to samples of finished water associated with each well. In general, VOC detections were higher in finished water than in source water, primarily due to the occurrence of trihalomethanes, which are compounds that can form during the treatment process. In contrast, pesticide detections were relatively similar between source- and finished-water samples. To assess the human-health relevance of the data, concentrations of AOCs were compared to their respective human-health benchmarks. For pesticides, the maximum detected concentrations were at least 2 orders of magnitude less than the benchmark values. However, three VOCs - trichloroethene, carbon tetrachloride, and tetrachloromethane - were detected at concentrations that approach human-health benchmarks and therefore may warrant inclusion in a low-concentration, trends monitoring program.

  19. Benchmarking novel approaches for modelling species range dynamics

    PubMed Central

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.

    2016-01-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. PMID:26872305

  20. Benchmarking novel approaches for modelling species range dynamics.

    PubMed

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. © 2016 John Wiley & Sons Ltd.

  1. How do organisational characteristics influence teamwork and service delivery in lung cancer diagnostic assessment programmes? A mixed-methods study.

    PubMed

    Honein-AbouHaidar, Gladys N; Stuart-McEwan, Terri; Waddell, Tom; Salvarrey, Alexandra; Smylie, Jennifer; Dobrow, Mark J; Brouwers, Melissa C; Gagliardi, Anna R

    2017-02-23

    Diagnostic assessment programmes (DAPs) can reduce wait times for cancer diagnosis, but optimal DAP design is unknown. This study explored how organisational characteristics influenced multidisciplinary teamwork and diagnostic service delivery in lung cancer DAPs. A mixed-methods approach integrated data from descriptive qualitative interviews and medical record abstraction at 4 lung cancer DAPs. Findings were analysed with the Integrated Team Effectiveness Model. The study included 4 DAPs at 2 teaching and 2 community hospitals in Canada; 22 staff were interviewed about organisational characteristics, target service benchmarks, and teamwork processes, determinants and outcomes, and 314 medical records were reviewed for actual service benchmarks. Formal, informal and asynchronous team processes enabled service delivery and yielded many perceived benefits at the patient, staff and service levels. However, several DAP characteristics challenged teamwork and service delivery: referral volume/workload, time since launch, days per week of operation, rural-remote population, number and type of full-time/part-time human resources, staff colocation, and information systems. As a result, all sites failed to meet target benchmarks (from referral to consultation median 4.0 visits, median wait time 35.0 days). Recommendations included improved information systems, more staff in all specialties, staff colocation and expanded roles for patient navigators. Findings were captured in a conceptual framework of lung cancer DAP teamwork determinants and outcomes. This study identified several DAP characteristics that could be improved to facilitate teamwork and enhance service delivery, thereby contributing to knowledge of organisational determinants of teamwork and associated outcomes. Findings can be used to update existing DAP guidelines, and by managers to plan or evaluate lung cancer DAPs. Ongoing research is needed to identify ideal roles for navigators, and staffing models tailored to case volumes. Published by the BMJ Publishing Group Limited.

  2. Organic contaminants, trace and major elements, and nutrients in water and sediment sampled in response to the Deepwater Horizon oil spill

    USGS Publications Warehouse

    Nowell, Lisa H.; Ludtke, Amy S.; Mueller, David K.; Scott, Jonathon C.

    2012-01-01

    Beach water and sediment samples were collected along the Gulf of Mexico coast to assess differences in contaminant concentrations before and after landfall of Macondo-1 well oil released into the Gulf of Mexico from the sinking of the British Petroleum Corporation's Deepwater Horizon drilling platform. Samples were collected at 70 coastal sites between May 7 and July 7, 2010, to document baseline, or "pre-landfall" conditions. A subset of 48 sites was resampled during October 4 to 14, 2010, after oil had made landfall on the Gulf of Mexico coast, called the "post-landfall" sampling period, to determine if actionable concentrations of oil were present along shorelines. Few organic contaminants were detected in water; their detection frequencies generally were low and similar in pre-landfall and post-landfall samples. Only one organic contaminant--toluene--had significantly higher concentrations in post-landfall than pre-landfall water samples. No water samples exceeded any human-health benchmarks, and only one post-landfall water sample exceeded an aquatic-life benchmark--the toxic-unit benchmark for polycyclic aromatic hydrocarbons (PAH) mixtures. In sediment, concentrations of 3 parent PAHs and 17 alkylated PAH groups were significantly higher in post-landfall samples than pre-landfall samples. One pre-landfall sample from Texas exceeded the sediment toxic-unit benchmark for PAH mixtures; this site was not sampled during the post-landfall period. Empirical upper screening-value benchmarks for PAHs in sediment were exceeded at 37 percent of post-landfall samples and 22 percent of pre-landfall samples, but there was no significant difference in the proportion of samples exceeding benchmarks between paired pre-landfall and post-landfall samples. Seven sites had the largest concentration differences between post-landfall and pre-landfall samples for 15 alkylated PAHs. Five of these seven sites, located in Louisiana, Mississippi, and Alabama, had diagnostic geochemical evidence of Macondo-1 oil in post-landfall sediments and tarballs. For trace and major elements in water, analytical reporting levels for several elements were high and variable. No human-health benchmarks were exceeded, although these were available for only two elements. Aquatic-life benchmarks for trace elements were exceeded in 47 percent of water samples overall. The elements responsible for the most exceedances in post-landfall samples were boron, copper, and manganese. Benchmark exceedances in water could be substantially underestimated because some samples had reporting levels higher than the applicable benchmarks (such as cobalt, copper, lead and zinc) and some elements (such as boron and vanadium) were analyzed in samples from only one sampling period. For trace elements in whole sediment, empirical upper screening-value benchmarks were exceeded in 57 percent of post-landfall samples and 40 percent of pre-landfall samples, but there was no significant difference in the proportion of samples exceeding benchmarks between paired pre-landfall and post-landfall samples. Benchmark exceedance frequencies could be conservatively high because they are based on measurements of total trace-element concentrations in sediment. In the less than 63-micrometer sediment fraction, one or more trace or major elements were anthropogenically enriched relative to national baseline values for U.S. streams for all sediment samples except one. 
Sixteen percent of sediment samples exceeded upper screening-value benchmarks for, and were enriched in, one or more of the following elements: barium, vanadium, aluminum, manganese, arsenic, chromium, and cobalt. These samples were evenly divided between the sampling periods. Aquatic-life benchmarks were frequently exceeded along the Gulf of Mexico coast by trace elements in both water and sediment and by PAHs in sediment. For the most part, however, significant differences between pre-landfall and post-landfall samples were limited to concentrations of PAHs in sediment. At five sites along the coast, the higher post-landfall concentrations of PAHs were associated with diagnostic geochemical evidence of Deepwater Horizon Macondo-1 oil.
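
    For context, a toxic-unit screen for a contaminant mixture such as the PAH benchmark referred to above generally takes the form

      \[
        \mathrm{TU} = \sum_{i} \frac{C_i}{B_i},
      \]

    where \(C_i\) is the measured concentration of mixture component \(i\) and \(B_i\) its single-compound benchmark concentration, with the mixture benchmark exceeded when \(\mathrm{TU} \ge 1\). The specific PAH benchmark values applied in this study are not reproduced here.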

  3. Stylized facts in social networks: Community-based static modeling

    NASA Astrophysics Data System (ADS)

    Jo, Hang-Hyun; Murase, Yohsuke; Török, János; Kertész, János; Kaski, Kimmo

    2018-06-01

    The past analyses of datasets of social networks have enabled us to make empirical findings of a number of aspects of human society, which are commonly featured as stylized facts of social networks, such as broad distributions of network quantities, existence of communities, assortative mixing, and intensity-topology correlations. Since the understanding of the structure of these complex social networks is far from complete, for deeper insight into human society more comprehensive datasets and modeling of the stylized facts are needed. Although the existing dynamical and static models can generate some stylized facts, here we take an alternative approach by devising a community-based static model with heterogeneous community sizes and larger communities having smaller link density and weight. With these few assumptions we are able to generate realistic social networks that show most stylized facts for a wide range of parameters, as demonstrated numerically and analytically. Since our community-based static model is simple to implement and easily scalable, it can be used as a reference system, benchmark, or testbed for further applications.
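
    A schematic sketch of the general construction is given below, using networkx for the graph container and, for brevity, treating communities as non-overlapping (the cited model assigns nodes to several communities); all parameter values are illustrative rather than those of the paper.

      import random
      import networkx as nx

      def sample_community_size(s_min=3, s_max=150, exponent=2.5):
          # Draw a community size from an approximate power-law (Pareto) distribution.
          while True:
              s = int(s_min * (1.0 - random.random()) ** (-1.0 / (exponent - 1.0)))
              if s <= s_max:
                  return s

      def build_network(n_nodes=2000, density_exponent=1.0):
          g = nx.Graph()
          g.add_nodes_from(range(n_nodes))
          unassigned = list(range(n_nodes))
          random.shuffle(unassigned)
          while unassigned:
              size = min(sample_community_size(), len(unassigned))
              members = [unassigned.pop() for _ in range(size)]
              p_link = min(1.0, 1.5 / size ** density_exponent)  # larger community -> sparser
              for i, u in enumerate(members):
                  for v in members[i + 1:]:
                      if random.random() < p_link:
                          g.add_edge(u, v, weight=1.0 / size)    # larger community -> weaker ties
          return g

      net = build_network()
      print(net.number_of_nodes(), "nodes,", net.number_of_edges(), "edges")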

  4. A Gravimetric Geoid Model for Vertical Datum in Canada

    NASA Astrophysics Data System (ADS)

    Veronneau, M.; Huang, J.

    2004-05-01

    The need to realize a new vertical datum for Canada dates back to 1976 when a study group at Geodetic Survey Division (GSD) investigated problems related to the existing vertical system (CGVD28) and recommended a redefinition of the vertical datum. The US National Geodetic Survey and GSD cooperated in the development of a new North American Vertical Datum (NAVD88). Although the USA adopted NAVD88 in 1993 as its datum, Canada declined to do so as a result of unexplained discrepancies of about 1.5 m from east to west coasts (likely due to systematic errors). The high cost of maintaining the vertical datum by the traditional spirit leveling technique coupled with budgetary constraints has forced GSD to modify its approach. A new attempt (project) to modernize the vertical datum is currently in process in Canada. The advance in space-based technologies (e.g. GPS, satellite radar altimetry, satellite gravimetry) and new developments in geoid modeling offer an alternative to spirit leveling. GSD is planning to implement, after stakeholder consultations, a geoid model as the new vertical datum for Canada, which will allow space-based technology users access to an accurate and uniform datum all across the Canadian landmass and surrounding oceans. CGVD28 is only accessible through a limited number of benchmarks, primarily located in southern Canada. The new vertical datum would be less sensitive to geodynamic activities (post-glacial rebound and earthquake), local uplift and subsidence, and deterioration of the benchmarks. The adoption of a geoid model as a vertical datum does not mean that GSD is neglecting the current benchmarks. New heights will be given to the benchmarks by a new adjustment of the leveling observations, which will be constrained to the geoid model at selected stations of the Active Control System (ACS) and Canadian Base Network (CBN). This adjustment will not correct vertical motion at benchmarks, which has occurred since the last leveling observations. The presentation provides an overview of the "Height Modernization" project, and discusses the accuracy of the existing geoid models in Canada.

  5. Cloud flexibility using DIRAC interware

    NASA Astrophysics Data System (ADS)

    Fernandez Albor, Víctor; Seco Miguelez, Marcos; Fernandez Pena, Tomas; Mendez Muñoz, Victor; Saborido Silva, Juan Jose; Graciani Diaz, Ricardo

    2014-06-01

    Communities in different locations run their computing jobs on dedicated infrastructures without needing to worry about software, hardware, or even the site where their programs will be executed. Nevertheless, this usually means they are restricted to certain types or versions of an operating system, either because their software needs a specific version of a system library or because a specific platform is required by the collaboration to which they belong. In this scenario, if a data center wants to serve incompatible communities, it has to split its physical resources among them. This splitting inevitably leads to underuse of resources, because the data center is bound to have periods when one or more of its subclusters are idle. It is in this situation that cloud computing provides the flexibility and reduction in computational cost that data centers are looking for. This paper describes a set of realistic tests that we ran on one such implementation. The tests comprise software from three different HEP communities (Auger, LHCb and QCD phenomenologists) and the Parsec Benchmark Suite running on one or more of three Linux flavors (SL5, Ubuntu 10.04 and Fedora 13). The implemented infrastructure has, at the cloud level, CloudStack, which manages the virtual machines (VMs) and the hosts on which they run, and, at the user level, the DIRAC framework along with a VM extension that submits, monitors, and keeps track of user jobs and also requests CloudStack to start or stop the necessary VMs. In this infrastructure, the community software is distributed via CernVM-FS, which has proven to be a reliable and scalable software distribution system. With the resulting infrastructure, users can send their jobs transparently to the data center. The main purpose of this system is the creation of a flexible, multiplatform cluster with a scalable method for software distribution for several VOs. Users from different communities do not need to care about the installation of the standard software available at the nodes, nor about the operating system of the host machine, which is transparent to the user.
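
    For orientation, user jobs in such a setup are typically submitted through the DIRAC job API along the lines sketched below. This is a generic illustration: the job name and executable are placeholders, and exact import paths and options can differ between DIRAC releases.

      from DIRAC.Core.Base import Script
      Script.parseCommandLine(ignoreErrors=True)   # initialise the DIRAC configuration

      from DIRAC.Interfaces.API.Dirac import Dirac
      from DIRAC.Interfaces.API.Job import Job

      job = Job()
      job.setName("parsec_benchmark_run")      # placeholder job name
      job.setExecutable("run_benchmark.sh")    # placeholder user script
      job.setCPUTime(3600)

      result = Dirac().submitJob(job)
      print(result)   # S_OK/S_ERROR dictionary with the job ID on success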

  6. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thrower, A.W.; Patric, J.; Keister, M.

    2008-07-01

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in safely and efficiently shipping spent nuclear fuel and other radioactive materials. Additional business processes may be examined in this phase. The findings of these benchmarking efforts will help determine the organizational structure and requirements of the national transportation system. (authors)

  7. TRECVID: the utility of a content-based video retrieval evaluation

    NASA Astrophysics Data System (ADS)

    Hauptmann, Alexander G.

    2006-01-01

    TRECVID, an annual retrieval evaluation benchmark organized by NIST, encourages research in information retrieval from digital video. TRECVID benchmarking covers both interactive and manual searching by end users, as well as the benchmarking of some supporting technologies including shot boundary detection, extraction of semantic features, and the automatic segmentation of TV news broadcasts. Evaluations done in the context of the TRECVID benchmarks show that generally, speech transcripts and annotations provide the single most important clue for successful retrieval. However, automatically finding the individual images is still a tremendous and unsolved challenge. The evaluations repeatedly found that none of the multimedia analysis and retrieval techniques provide a significant benefit over retrieval using only textual information such as from automatic speech recognition transcripts or closed captions. In interactive systems, we do find significant differences among the top systems, indicating that interfaces can make a huge difference for effective video/image search. For interactive tasks, efficient interfaces require few key clicks, but display large numbers of images for visual inspection by the user. The text search finds the right context region in the video in general, but to select specific relevant images we need good interfaces to easily browse the storyboard pictures. In general, TRECVID has motivated the video retrieval community to be honest about what we don't know how to do well (sometimes through painful failures), and has focused us on the actual task of video retrieval, as opposed to flashy demos based on technological capabilities.

  8. Nonlinear viscoplasticity in ASPECT: benchmarking and applications to subduction

    NASA Astrophysics Data System (ADS)

    Glerum, Anne; Thieulot, Cedric; Fraters, Menno; Blom, Constantijn; Spakman, Wim

    2018-03-01

    ASPECT (Advanced Solver for Problems in Earth's ConvecTion) is a massively parallel finite element code originally designed for modeling thermal convection in the mantle with a Newtonian rheology. The code is characterized by modern numerical methods, high-performance parallelism and extensibility. This last characteristic is illustrated in this work: we have extended the use of ASPECT from global thermal convection modeling to upper-mantle-scale applications of subduction.

    Subduction modeling generally requires the tracking of multiple materials with different properties and with nonlinear viscous and viscoplastic rheologies. To this end, we implemented a frictional plasticity criterion that is combined with a viscous diffusion and dislocation creep rheology. Because ASPECT uses compositional fields to represent different materials, all material parameters are made dependent on a user-specified number of fields.

    The goal of this paper is primarily to describe and verify our implementations of complex, multi-material rheology by reproducing the results of four well-known two-dimensional benchmarks: the indentor benchmark, the brick experiment, the sandbox experiment and the slab detachment benchmark. Furthermore, we aim to provide hands-on examples for prospective users by demonstrating the use of multi-material viscoplasticity with three-dimensional, thermomechanical models of oceanic subduction, putting ASPECT on the map as a community code for high-resolution, nonlinear rheology subduction modeling.
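
    The composite viscoplastic rheology described above can be summarized, in generic form, by capping a harmonically averaged creep viscosity with a Drucker-Prager yield stress. The expressions below are a common formulation of this kind and are given for orientation only; the exact form and parameter choices in ASPECT's implementation may differ in detail.

```latex
% Generic composite viscoplastic effective viscosity (illustrative).
% \dot{\varepsilon}_{II}: second invariant of the deviatoric strain rate,
% C: cohesion, \phi: friction angle, P: pressure, A, n, E, V: creep parameters
% (diffusion creep takes the same Arrhenius form with n = 1).
\[
  \eta_{\mathrm{visc}}
    = \left( \frac{1}{\eta_{\mathrm{diff}}} + \frac{1}{\eta_{\mathrm{disl}}} \right)^{-1},
  \qquad
  \eta_{\mathrm{disl}}
    = \tfrac{1}{2}\, A^{-1/n}\, \dot{\varepsilon}_{II}^{\,(1-n)/n}
      \exp\!\left( \frac{E + P V}{n R T} \right),
\]
\[
  \sigma_{\mathrm{yield}} = C \cos\phi + P \sin\phi,
  \qquad
  \eta_{\mathrm{eff}}
    = \min\!\left( \eta_{\mathrm{visc}},\;
                   \frac{\sigma_{\mathrm{yield}}}{2\,\dot{\varepsilon}_{II}} \right).
\]
```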

  9. The Paucity Problem: Where Have All the Space Reactor Experiments Gone?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Marshall, Margaret A.

    2016-10-01

    The Handbooks of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) together contain a plethora of documented and evaluated experiments essential in the validation of nuclear data, neutronics codes, and modeling of various nuclear systems. Unfortunately, only a minute selection of handbook data (twelve evaluations) is of actual experimental facilities and mockups designed specifically for space nuclear research. There is a paucity problem, such that the multitude of space nuclear experimental activities performed in the past several decades has yet to be recovered and made available in such detail that the international community could benefit from these valuable historical research efforts. Those experiments represent extensive investments in infrastructure, expertise, and cost, as well as constitute significantly valuable resources of data supporting past, present, and future research activities. The ICSBEP and IRPhEP were established to identify and verify comprehensive sets of benchmark data; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data.

  10. Principal Angle Enrichment Analysis (PAEA): Dimensionally Reduced Multivariate Gene Set Enrichment Analysis Tool

    PubMed Central

    Clark, Neil R.; Szymkiewicz, Maciej; Wang, Zichen; Monteiro, Caroline D.; Jones, Matthew R.; Ma’ayan, Avi

    2016-01-01

    Gene set analysis of differential expression, which identifies collectively differentially expressed gene sets, has become an important tool for biology. The power of this approach lies in its reduction of the dimensionality of the statistical problem and its incorporation of biological interpretation by construction. Many approaches to gene set analysis have been proposed, but benchmarking their performance in the setting of real biological data is difficult due to the lack of a gold standard. In a previously published work we proposed a geometrical approach to differential expression which performed highly in benchmarking tests and compared well to the most popular methods of differential gene expression. As reported, this approach has a natural extension to gene set analysis which we call Principal Angle Enrichment Analysis (PAEA). PAEA employs dimensionality reduction and a multivariate approach for gene set enrichment analysis. However, the performance of this method had not previously been assessed, nor had it been implemented as a web-based tool. Here we describe new benchmarking protocols for gene set analysis methods and find that PAEA performs highly. The PAEA method is implemented as a user-friendly web-based tool, which contains 70 gene set libraries and is freely available to the community. PMID:26848405
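
    The geometric quantity underlying PAEA is the set of principal angles between subspaces. The sketch below illustrates only that quantity with made-up matrices; it is not the authors' implementation, and the "expression signature" and "gene set" matrices are hypothetical.

```python
# Minimal illustration of principal angles between two gene-expression
# subspaces (not the PAEA code). Columns of D span a toy differential-
# expression signature subspace; G is a toy gene-set indicator vector.
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians, ascending) between the column spaces of A and B."""
    qa, _ = np.linalg.qr(A)
    qb, _ = np.linalg.qr(B)
    sigma = np.linalg.svd(qa.T @ qb, compute_uv=False)
    return np.arccos(np.clip(sigma, -1.0, 1.0))

rng = np.random.default_rng(0)
n_genes = 500
D = rng.standard_normal((n_genes, 3))                       # toy signature subspace
G = np.zeros((n_genes, 1))
G[rng.choice(n_genes, size=40, replace=False), 0] = 1.0     # toy gene-set indicator

angles = principal_angles(D, G)
print("smallest principal angle (rad):", angles.min())
# Smaller angles indicate stronger alignment between the gene set and the
# expression subspace; an enrichment score can be built from such angles.
```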

  11. Principal Angle Enrichment Analysis (PAEA): Dimensionally Reduced Multivariate Gene Set Enrichment Analysis Tool.

    PubMed

    Clark, Neil R; Szymkiewicz, Maciej; Wang, Zichen; Monteiro, Caroline D; Jones, Matthew R; Ma'ayan, Avi

    2015-11-01

    Gene set analysis of differential expression, which identifies collectively differentially expressed gene sets, has become an important tool for biology. The power of this approach lies in its reduction of the dimensionality of the statistical problem and its incorporation of biological interpretation by construction. Many approaches to gene set analysis have been proposed, but benchmarking their performance in the setting of real biological data is difficult due to the lack of a gold standard. In a previously published work we proposed a geometrical approach to differential expression which performed highly in benchmarking tests and compared well to the most popular methods of differential gene expression. As reported, this approach has a natural extension to gene set analysis which we call Principal Angle Enrichment Analysis (PAEA). PAEA employs dimensionality reduction and a multivariate approach for gene set enrichment analysis. However, the performance of this method had not previously been assessed, nor had it been implemented as a web-based tool. Here we describe new benchmarking protocols for gene set analysis methods and find that PAEA performs highly. The PAEA method is implemented as a user-friendly web-based tool, which contains 70 gene set libraries and is freely available to the community.

  12. AmWeb: a novel interactive web tool for antimicrobial resistance surveillance, applicable to both community and hospital patients.

    PubMed

    Ironmonger, Dean; Edeghere, Obaghe; Gossain, Savita; Bains, Amardeep; Hawkey, Peter M

    2013-10-01

    Antimicrobial resistance (AMR) is recognized as one of the most significant threats to human health. Local and regional AMR surveillance enables the monitoring of temporal changes in susceptibility to antibiotics and can provide prescribing guidance to healthcare providers to improve patient management and help slow the spread of antibiotic resistance in the community. There is currently a paucity of routine community-level AMR surveillance information. The HPA in England sponsored the development of an AMR surveillance system (AmSurv) to collate local laboratory reports. In the West Midlands region of England, routine reporting of AMR data has been established via the AmSurv system from all diagnostic microbiology laboratories. The HPA Regional Epidemiology Unit developed a web-enabled database application (AmWeb) to provide microbiologists, pharmacists and other stakeholders with timely access to AMR data using user-configurable reporting tools. AmWeb was launched in the West Midlands in January 2012 and is used by microbiologists and pharmacists to monitor resistance profiles, perform local benchmarking and compile data for infection control reports. AmWeb is now being rolled out to all English regions. It is expected that AmWeb will become a valuable tool for monitoring the threat from newly emerging or currently circulating resistant organisms and helping antibiotic prescribers to select the best treatment options for their patients.

  13. Benchmarking viromics: an in silico evaluation of metagenome-enabled estimates of viral community composition and diversity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roux, Simon; Emerson, Joanne B.; Eloe-Fadrosh, Emiley A.

    Background: Viral metagenomics (viromics) is increasingly used to obtain uncultivated viral genomes, evaluate community diversity, and assess ecological hypotheses. While viromic experimental methods are relatively mature and widely accepted by the research community, robust bioinformatics standards remain to be established. Here we used in silico mock viral communities to evaluate the viromic sequence-to-ecological-inference pipeline, including (i) read pre-processing and metagenome assembly, (ii) thresholds applied to estimate viral relative abundances based on read mapping to assembled contigs, and (iii) normalization methods applied to the matrix of viral relative abundances for alpha and beta diversity estimates. Results: Tools specifically designed for metagenomes, namely metaSPAdes, MEGAHIT, and IDBA-UD, were the most effective at assembling viromes. Read pre-processing, such as partitioning, had virtually no impact on assembly output, but may be useful when hardware is limited. Viral populations with 2–5 × coverage typically assembled well, whereas lesser coverage led to fragmented assembly. Strain heterogeneity within populations hampered assembly, especially when strains were closely related (average nucleotide identity, or ANI ≥97%) and when the most abundant strain represented <50% of the population. Viral community composition assessments based on read recruitment were generally accurate when the following thresholds for detection were applied: (i) ≥10 kb contig lengths to define populations, (ii) coverage defined from reads mapping at ≥90% identity, and (iii) ≥75% of contig length with ≥1 × coverage. Finally, although data are limited to the most abundant viruses in a community, alpha and beta diversity patterns were robustly estimated (±10%) when comparing samples of similar sequencing depth, but more divergent (up to 80%) when sequencing depth was uneven across the dataset. In the latter cases, the use of normalization methods specifically developed for metagenomes provided the best estimates. Conclusions: These simulations provide benchmarks for selecting analysis cut-offs and establish that an optimized sample-to-ecological-inference viromics pipeline is robust for making ecological inferences from natural viral communities. Continued development to better access RNA, rare, and/or diverse viral populations, together with improved reference viral genome availability, will alleviate many of viromics' remaining limitations.
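
    A small sketch of how the detection thresholds quoted above could be applied to per-contig mapping summaries. The field names and example contigs are hypothetical; in practice the coverage values would come from a read-mapping tool run at ≥90% identity.

```python
# Apply the benchmark's detection rule: contigs >= 10 kb define populations,
# and >= 75% of the contig length must have >= 1x coverage.

def population_detected(contig_length_bp, fraction_covered,
                        min_length=10_000, min_fraction=0.75):
    """Return True if a contig counts as a detected viral population."""
    return contig_length_bp >= min_length and fraction_covered >= min_fraction

contigs = [
    {"name": "vir_contig_001", "length": 42_300, "fraction_covered": 0.92},
    {"name": "vir_contig_002", "length": 8_700,  "fraction_covered": 0.99},  # too short
    {"name": "vir_contig_003", "length": 15_100, "fraction_covered": 0.60},  # too sparse
]
for c in contigs:
    status = "detected" if population_detected(c["length"], c["fraction_covered"]) else "not detected"
    print(c["name"], status)
```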

  14. Benchmarking viromics: an in silico evaluation of metagenome-enabled estimates of viral community composition and diversity

    DOE PAGES

    Roux, Simon; Emerson, Joanne B.; Eloe-Fadrosh, Emiley A.; ...

    2017-09-21

    Background: Viral metagenomics (viromics) is increasingly used to obtain uncultivated viral genomes, evaluate community diversity, and assess ecological hypotheses. While viromic experimental methods are relatively mature and widely accepted by the research community, robust bioinformatics standards remain to be established. Here we used in silico mock viral communities to evaluate the viromic sequence-to-ecological-inference pipeline, including (i) read pre-processing and metagenome assembly, (ii) thresholds applied to estimate viral relative abundances based on read mapping to assembled contigs, and (iii) normalization methods applied to the matrix of viral relative abundances for alpha and beta diversity estimates. Results: Tools specifically designed for metagenomes, namely metaSPAdes, MEGAHIT, and IDBA-UD, were the most effective at assembling viromes. Read pre-processing, such as partitioning, had virtually no impact on assembly output, but may be useful when hardware is limited. Viral populations with 2–5 × coverage typically assembled well, whereas lesser coverage led to fragmented assembly. Strain heterogeneity within populations hampered assembly, especially when strains were closely related (average nucleotide identity, or ANI ≥97%) and when the most abundant strain represented <50% of the population. Viral community composition assessments based on read recruitment were generally accurate when the following thresholds for detection were applied: (i) ≥10 kb contig lengths to define populations, (ii) coverage defined from reads mapping at ≥90% identity, and (iii) ≥75% of contig length with ≥1 × coverage. Finally, although data are limited to the most abundant viruses in a community, alpha and beta diversity patterns were robustly estimated (±10%) when comparing samples of similar sequencing depth, but more divergent (up to 80%) when sequencing depth was uneven across the dataset. In the latter cases, the use of normalization methods specifically developed for metagenomes provided the best estimates. Conclusions: These simulations provide benchmarks for selecting analysis cut-offs and establish that an optimized sample-to-ecological-inference viromics pipeline is robust for making ecological inferences from natural viral communities. Continued development to better access RNA, rare, and/or diverse viral populations, together with improved reference viral genome availability, will alleviate many of viromics' remaining limitations.

  15. Microbiological effectiveness of disinfecting water by boiling in rural Guatemala.

    PubMed

    Rosa, Ghislaine; Miller, Laura; Clasen, Thomas

    2010-03-01

    Boiling is the most common means of treating water in the home and the benchmark against which alternative point-of-use water treatment options must be compared. In a 5-week study in rural Guatemala among 45 households who claimed they always or almost always boiled their drinking water, boiling was associated with an 86.2% reduction in geometric mean thermotolerant coliforms (TTC) (N = 206, P < 0.0001). Despite consistent levels of fecal contamination in source water, 71.2% of stored water samples from self-reported boilers met the World Health Organization guidelines for safe drinking water (0 TTC/100 mL), and 10.7% fell within the commonly accepted low-risk category (1-10 TTC/100 mL). As actually practiced in the study community, boiling significantly improved the microbiological quality of drinking water, though boiled and stored drinking water is not always free of fecal contamination.

  16. Linear antenna array optimization using flower pollination algorithm.

    PubMed

    Saxena, Prerna; Kothari, Ashwin

    2016-01-01

    The flower pollination algorithm (FPA) is a new nature-inspired evolutionary algorithm used to solve multi-objective optimization problems. The aim of this paper is to introduce FPA to the electromagnetics and antenna community for the optimization of linear antenna arrays. FPA is applied for the first time to linear arrays so as to obtain optimized antenna positions in order to achieve an array pattern with minimum side lobe level along with placement of deep nulls in desired directions. Various design examples are presented that illustrate the use of FPA for linear antenna array optimization, and subsequently the results are validated by benchmarking along with results obtained using other state-of-the-art, nature-inspired evolutionary algorithms such as particle swarm optimization, ant colony optimization and cat swarm optimization. The results suggest that in most cases, FPA outperforms the other evolutionary algorithms and at times it yields a similar performance.
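
    A toy version of the kind of objective such optimizers minimize: given element positions of a symmetric linear array, compute the array factor and a peak sidelobe level. This is an illustrative formulation only, not the paper's exact cost function, and the example positions are made up.

```python
# Array factor and peak sidelobe level for a symmetric linear array of
# isotropic elements; positions are given in wavelengths.
import numpy as np

def sidelobe_level_db(positions_wl, mainlobe_halfwidth, n_points=2000):
    """Peak sidelobe level (dB, relative to the broadside main beam).
    mainlobe_halfwidth (in u = cos(theta)) should cover the main beam out to
    its first nulls for the aperture at hand."""
    x = np.concatenate([-positions_wl[::-1], positions_wl])   # mirror for symmetry
    u = np.linspace(-1.0, 1.0, n_points)
    af = np.abs(np.exp(2j * np.pi * np.outer(u, x)).sum(axis=1))
    af_db = 20 * np.log10(af / af.max() + 1e-12)
    sidelobes = af_db[np.abs(u) > mainlobe_halfwidth]          # exclude main beam at u = 0
    return sidelobes.max()

# Example: 10 elements total, roughly half-wavelength spacing.
positions = np.array([0.25, 0.75, 1.25, 1.75, 2.25])
print(f"peak sidelobe level: {sidelobe_level_db(positions, mainlobe_halfwidth=0.25):.1f} dB")
# An optimizer such as FPA would perturb `positions` to minimize this value,
# optionally adding penalty terms that force deep nulls at chosen angles.
```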

  17. Microbiological Effectiveness of Disinfecting Water by Boiling in Rural Guatemala

    PubMed Central

    Rosa, Ghislaine; Miller, Laura; Clasen, Thomas

    2010-01-01

    Boiling is the most common means of treating water in the home and the benchmark against which alternative point-of-use water treatment options must be compared. In a 5-week study in rural Guatemala among 45 households who claimed they always or almost always boiled their drinking water, boiling was associated with an 86.2% reduction in geometric mean thermotolerant coliforms (TTC) (N = 206, P < 0.0001). Despite consistent levels of fecal contamination in source water, 71.2% of stored water samples from self-reported boilers met the World Health Organization guidelines for safe drinking water (0 TTC/100 mL), and 10.7% fell within the commonly accepted low-risk category (1–10 TTC/100 mL). As actually practiced in the study community, boiling significantly improved the microbiological quality of drinking water, though boiled and stored drinking water is not always free of fecal contamination. PMID:20207876

  18. The adenosine triphosphate test is a rapid and reliable audit tool to assess manual cleaning adequacy of flexible endoscope channels.

    PubMed

    Alfa, Michelle J; Fatima, Iram; Olson, Nancy

    2013-03-01

    The study objective was to verify that the adenosine triphosphate (ATP) benchmark of <200 relative light units (RLUs) was achievable in a busy endoscopy clinic that followed the manufacturer's manual cleaning instructions. All channels from patient-used colonoscopes (20) and duodenoscopes (20) in a tertiary care hospital endoscopy clinic were sampled after manual cleaning and tested for residual ATP. The ATP test benchmark for adequate manual cleaning was set at <200 RLUs. The benchmark for protein was <6.4 μg/cm², and, for bioburden, it was <4 log10 colony-forming units/cm². Our data demonstrated that 96% (115/120) of channels from 20 colonoscopes and 20 duodenoscopes evaluated met the ATP benchmark of <200 RLUs. The 5 channels that exceeded 200 RLUs were all elevator guide-wire channels. All 120 of the manually cleaned endoscopes tested had protein and bioburden levels that were compliant with accepted benchmarks for manual cleaning for suction-biopsy, air-water, and auxiliary water channels. Our data confirmed that, by following the endoscope manufacturer's manual cleaning recommendations, 96% of channels in gastrointestinal endoscopes would have <200 RLUs for the ATP test kit evaluated and would meet the accepted clean benchmarks for protein and bioburden. Copyright © 2013 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
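
    A simple illustration of checking a manually cleaned channel against the benchmarks quoted above (<200 RLUs for ATP, <6.4 μg/cm² protein, <4 log10 CFU/cm² bioburden). The thresholds come from the abstract; the sample records themselves are made up.

```python
# Compare per-channel measurements against the manual-cleaning benchmarks.
BENCHMARKS = {"atp_rlu": 200, "protein_ug_per_cm2": 6.4, "bioburden_log10_cfu_per_cm2": 4.0}

def channel_passes(sample):
    """Return True if every measured marker is below its cleaning benchmark."""
    return all(sample[key] < limit for key, limit in BENCHMARKS.items())

samples = [
    {"channel": "suction-biopsy",      "atp_rlu": 57,  "protein_ug_per_cm2": 1.2, "bioburden_log10_cfu_per_cm2": 1.8},
    {"channel": "elevator guide-wire", "atp_rlu": 420, "protein_ug_per_cm2": 2.0, "bioburden_log10_cfu_per_cm2": 2.5},
]
for s in samples:
    print(s["channel"], "passes" if channel_passes(s) else "fails manual-cleaning benchmark")
```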

  19. An approach to radiation safety department benchmarking in academic and medical facilities.

    PubMed

    Harvey, Richard P

    2015-02-01

    Based on anecdotal evidence and networking with colleagues at other facilities, it has become evident that some radiation safety departments are not adequately staffed and radiation safety professionals need to increase their staffing levels. Discussions with management regarding radiation safety department staffing often lead to similar conclusions. Management acknowledges the Radiation Safety Officer (RSO) or Director of Radiation Safety's concern but asks the RSO to provide benchmarking and justification for additional full-time equivalents (FTEs). The RSO must determine a method to benchmark and justify additional staffing needs while struggling to maintain a safe and compliant radiation safety program. Benchmarking and justification are extremely important tools that are commonly used to demonstrate the need for increased staffing in other disciplines and are tools that can be used by radiation safety professionals. Parameters that most RSOs would expect to be positive predictors of radiation safety staff size generally are and can be emphasized in benchmarking and justification report summaries. Facilities with large radiation safety departments tend to have large numbers of authorized users, be broad-scope programs, be subject to increased controls regulations, have large clinical operations, have significant numbers of academic radiation-producing machines, and have laser safety responsibilities.

  20. North Dakota Dance Performance Standards.

    ERIC Educational Resources Information Center

    Anderson, Sue; Farrell, Renee; Robbins, Susan; Stanley, Melissa

    This document outlines the performance standards for dance in North Dakota public schools, grades K-12. Four levels of performance are provided for each benchmark by North Dakota educators for K-4, 5-8, and 9-12 grade levels. Level 4 describes advanced proficiency; Level 3, proficiency; Level 2, partial proficiency; and Level 1, novice. Each grade…

  1. Benchmarking routine psychological services: a discussion of challenges and methods.

    PubMed

    Delgadillo, Jaime; McMillan, Dean; Leach, Chris; Lucock, Mike; Gilbody, Simon; Wood, Nick

    2014-01-01

    Policy developments in recent years have led to important changes in the level of access to evidence-based psychological treatments. Several methods have been used to investigate the effectiveness of these treatments in routine care, with different approaches to outcome definition and data analysis. To present a review of challenges and methods for the evaluation of evidence-based treatments delivered in routine mental healthcare. This is followed by a case example of a benchmarking method applied in primary care. High, average and poor performance benchmarks were calculated through a meta-analysis of published data from services working under the Improving Access to Psychological Therapies (IAPT) Programme in England. Pre-post treatment effect sizes (ES) and confidence intervals were estimated to illustrate a benchmarking method enabling services to evaluate routine clinical outcomes. High, average and poor performance ES for routine IAPT services were estimated to be 0.91, 0.73 and 0.46 for depression (using PHQ-9) and 1.02, 0.78 and 0.52 for anxiety (using GAD-7). Data from one specific IAPT service exemplify how to evaluate and contextualize routine clinical performance against these benchmarks. The main contribution of this report is to summarize key recommendations for the selection of an adequate set of psychometric measures, the operational definition of outcomes, and the statistical evaluation of clinical performance. A benchmarking method is also presented, which may enable a robust evaluation of clinical performance against national benchmarks. Some limitations concerned significant heterogeneity among data sources, and wide variations in ES and data completeness.
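
    The benchmarking comparison described above can be sketched as a pre-post effect size for one service set against the published IAPT benchmarks. The exact effect-size formula and pooling used in the paper may differ; this sketch uses the pre-treatment standard deviation as the denominator, and the PHQ-9 scores are invented for illustration.

```python
# Uncontrolled pre-post effect size for a routine service, compared against
# the high/average/poor benchmarks quoted in the abstract.
import numpy as np

def pre_post_effect_size(pre_scores, post_scores):
    pre, post = np.asarray(pre_scores, float), np.asarray(post_scores, float)
    return (pre.mean() - post.mean()) / pre.std(ddof=1)

def classify_against_benchmarks(es, high, average, poor):
    if es >= high:
        return "at or above the high-performance benchmark"
    if es >= average:
        return "around the average benchmark"
    if es >= poor:
        return "between the poor and average benchmarks"
    return "below the poor-performance benchmark"

# Hypothetical PHQ-9 scores for one service (not real data).
phq9_pre  = [18, 15, 20, 12, 17, 14, 19, 16]
phq9_post = [10,  9, 15,  8, 11,  9, 12, 10]
es = pre_post_effect_size(phq9_pre, phq9_post)
print(f"depression ES = {es:.2f}:",
      classify_against_benchmarks(es, high=0.91, average=0.73, poor=0.46))
```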

  2. Staff confidence in dealing with aggressive patients: a benchmarking exercise.

    PubMed

    McGowan, S; Wynaden, D; Harding, N; Yassine, A; Parker, J

    1999-09-01

    Interacting with potentially aggressive patients is a common occurrence for nurses working in psychiatric intensive care units. Although the literature highlights the need to educate staff in the prevention and management of aggression, often little, or no, training is provided by employers. This article describes a benchmarking exercise conducted in psychiatric intensive care units at two Western Australian hospitals to assess staff confidence in coping with patient aggression. Results demonstrated that staff in the hospital where regular training was undertaken were significantly more confident in dealing with aggression. Following the completion of a safe physical restraint module at the other hospital, staff reported a significant increase in their level of confidence that either matched or bettered the results of their benchmark colleagues.

  3. Open Rotor - Analysis of Diagnostic Data

    NASA Technical Reports Server (NTRS)

    Envia, Edmane

    2011-01-01

    NASA is researching open rotor propulsion as part of its technology research and development plan for addressing the subsonic transport aircraft noise, emission and fuel burn goals. The low-speed wind tunnel test for investigating the aerodynamic and acoustic performance of a benchmark blade set at the approach and takeoff conditions has recently concluded. A high-speed wind tunnel diagnostic test campaign has begun to investigate the performance of this benchmark open rotor blade set at the cruise condition. Databases from both speed regimes will comprise a comprehensive collection of benchmark open rotor data for use in assessing/validating aerodynamic and noise prediction tools (component & system level) as well as providing insights into the physics of open rotors to help guide the development of quieter open rotors.

  4. Chemotherapy Extravasation: Establishing a National Benchmark for Incidence Among Cancer Centers.

    PubMed

    Jackson-Rose, Jeannette; Del Monte, Judith; Groman, Adrienne; Dial, Linda S; Atwell, Leah; Graham, Judy; O'Neil Semler, Rosemary; O'Sullivan, Maryellen; Truini-Pittman, Lisa; Cunningham, Terri A; Roman-Fischetti, Lisa; Costantinou, Eileen; Rimkus, Chris; Banavage, Adrienne J; Dietz, Barbara; Colussi, Carol J; Catania, Kimberly; Wasko, Michelle; Schreffler, Kevin A; West, Colleen; Siefert, Mary Lou; Rice, Robert David

    2017-08-01

    Given the high-risk nature and nurse sensitivity of chemotherapy infusion and extravasation prevention, as well as the absence of an industry benchmark, a group of nurses studied oncology-specific nursing-sensitive indicators. The purpose was to establish a benchmark for the incidence of chemotherapy extravasation with vesicants, irritants, and irritants with vesicant potential. Infusions with actual or suspected extravasations of vesicant and irritant chemotherapies were evaluated. Extravasation events were reviewed by type of agent, occurrence by drug category, route of administration, level of harm, follow-up, and patient referrals to surgical consultation. A total of 739,812 infusions were evaluated, with 673 extravasation events identified. Incidence for all extravasation events was 0.09%.

  5. Benchmarking a Visual-Basic based multi-component one-dimensional reactive transport modeling tool

    NASA Astrophysics Data System (ADS)

    Torlapati, Jagadish; Prabhakar Clement, T.

    2013-01-01

    We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does not require any additional software tools. The code can be easily adapted by others for simulating different types of laboratory-scale reactive transport experiments. We illustrate the capabilities of the tool by solving five benchmark problems with varying levels of reaction complexity. These literature-derived benchmarks are used to highlight the versatility of the code for solving a variety of practical reactive transport problems. The benchmarks are described in detail to provide a comprehensive database, which can be used by model developers to test other numerical codes. The VBA code presented in the study is a practical tool that can be used by laboratory researchers for analyzing both batch and column datasets within an EXCEL platform.
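
    RT1D itself is a Visual Basic tool running inside EXCEL; the short Python sketch below only illustrates the class of problem it solves, namely one-dimensional advection-dispersion of a solute with a first-order reaction, using simple explicit operator splitting. It is not the RT1D code, and the parameter values are arbitrary.

```python
# 1-D advection-dispersion with first-order decay (explicit, for illustration).
import numpy as np

def transport_1d(c0, v, D, k, dx, dt, n_steps):
    """Explicit upwind advection + central dispersion + first-order decay."""
    c = np.array(c0, float)
    for _ in range(n_steps):
        adv = np.zeros_like(c)
        adv[1:] = -v * (c[1:] - c[:-1]) / dx            # upwind differencing (v > 0)
        disp = np.zeros_like(c)
        disp[1:-1] = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        c = c + dt * (adv + disp - k * c)
        c[0] = c0[0]                                     # fixed inlet concentration
    return c

nx = 101
c_init = np.zeros(nx)
c_init[0] = 1.0                                          # continuous source at the inlet
profile = transport_1d(c_init, v=0.5, D=0.02, k=0.01, dx=0.1, dt=0.05, n_steps=400)
print("concentration at mid-column:", round(profile[nx // 2], 4))
```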

  6. Sensitivity Analysis of OECD Benchmark Tests in BISON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
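
    A minimal sketch of the correlation-based part of such a sensitivity study: given sampled inputs and one response, rank inputs by Pearson and Spearman coefficients. The parameter names and the toy response model below are hypothetical; the actual study used Dakota with 17 BISON inputs and 24 responses.

```python
# Correlation-based sensitivity screening over a sampled input/response set.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_samples = 300
inputs = {
    "fuel_conductivity": rng.normal(1.0, 0.05, n_samples),
    "gap_thickness":     rng.normal(1.0, 0.10, n_samples),
    "linear_power":      rng.normal(1.0, 0.03, n_samples),
}
# Toy response standing in for a fuel centerline temperature.
response = (0.6 * inputs["linear_power"] - 0.3 * inputs["fuel_conductivity"]
            + 0.1 * inputs["gap_thickness"] + rng.normal(0, 0.02, n_samples))

for name, values in inputs.items():
    pearson, _ = stats.pearsonr(values, response)
    spearman, _ = stats.spearmanr(values, response)
    print(f"{name:>18s}  Pearson {pearson:+.2f}  Spearman {spearman:+.2f}")
```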

  7. Development and Applications of Benchmark Examples for Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2013-01-01

    The development and application of benchmark examples for the assessment of quasistatic delamination propagation capabilities was demonstrated for ANSYS (TradeMark) and Abaqus/Standard (TradeMark). The examples selected were based on finite element models of Double Cantilever Beam (DCB) and Mixed-Mode Bending (MMB) specimens. First, quasi-static benchmark results were created based on an approach developed previously. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in ANSYS (TradeMark) and Abaqus/Standard (TradeMark). Input control parameters were varied to study the effect on the computed delamination propagation. Overall, the benchmarking procedure proved valuable by highlighting the issues associated with choosing the appropriate input parameters for the VCCT implementations in ANSYS® and Abaqus/Standard®. However, further assessment for mixed-mode delamination fatigue onset and growth is required. Additionally, studies should include the assessment of the propagation capabilities in more complex specimens and on a structural level.

  8. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.

  9. Linking user and staff perspectives in the evaluation of innovative transition projects for youth with disabilities.

    PubMed

    McAnaney, Donal F; Wynne, Richard F

    2016-06-01

    A key challenge in formative evaluation is to gather appropriate evidence to inform the continuous improvement of initiatives. In the absence of outcome data, the programme evaluator often must rely on the perceptions of beneficiaries and staff in generating insight into what is making a difference. The article describes the approach adopted in an evaluation of 15 innovative projects supporting school-leavers with disabilities in making the transition to education, work and life in community settings. Two complementary processes provided an insight into what project staff and leadership viewed as the key project activities and features that facilitated successful transition as well as the areas of quality of life (QOL) that participants perceived as having been impacted positively by the projects. A comparison was made between participants' perceptions of QOL impact with the views of participants in services normally offered by the wider system. This revealed that project participants were significantly more positive in their views than participants in traditional services. In addition, the processes and activities of the more highly rated projects were benchmarked against less highly rated projects and also with usually available services. Even in the context of a range of intervening variables such as level and complexity of participant needs and variations in the stage of development of individual projects, the benchmarking process indicated a number of project characteristics that were highly valued by participants. © The Author(s) 2016.

  10. Organic Compounds in Potomac River Water Used for Public Supply near Washington, D.C., 2003-05

    USGS Publications Warehouse

    Brayton, Michael J.; Denver, Judith M.; Delzer, Gregory C.; Hamilton, Pixie A.

    2008-01-01

    Organic compounds studied in this U.S. Geological Survey (USGS) assessment generally are man-made, including, in part, pesticides, solvents, gasoline hydrocarbons, personal care and domestic-use products, and refrigerants and propellants. A total of 85 of 277 compounds were detected at least once among the 25 samples collected approximately monthly during 2003-05 at the intake of the Washington Aqueduct, one of several community water systems on the Potomac River upstream from Washington, D.C. The diversity of compounds detected indicates a variety of different sources and uses (including wastewater discharge, industrial, agricultural, domestic, and others) and different pathways (including treated wastewater outfalls located upstream, overland runoff, and ground-water discharge) to drinking-water supplies. Seven compounds were detected year-round in source-water intake samples, including selected herbicide compounds commonly used in the Potomac River Basin and in other agricultural areas across the United States. Two-thirds of the 26 compounds detected most commonly in source water (in at least 20 percent of the samples) also were detected most commonly in finished water (after treatment but prior to distribution). Concentrations for all detected compounds in source and finished water generally were less than 0.1 microgram per liter and always less than human-health benchmarks, which are available for about one-half of the detected compounds. On the basis of this screening-level assessment, adverse effects to human health are expected to be negligible (subject to limitations of available human-health benchmarks).

  11. Exposures to volatile organic compounds (VOCs) and associated health risks of socio-economically disadvantaged population in a "hot spot" in Camden, New Jersey

    NASA Astrophysics Data System (ADS)

    Wu, Xiangmei (May); Fan, Zhihua (Tina); Zhu, Xianlei; Jung, Kyung Hwa; Ohman-Strickland, Pamela; Weisel, Clifford P.; Lioy, Paul J.

    2012-09-01

    To address disparities in health risks associated with ambient air pollution for racial/ethnic minority groups, this study characterized personal and ambient concentrations of volatile organic compounds (VOCs) in a suspected hot spot of air pollution - the Village of Waterfront South (WFS), and an urban reference community - the Copewood/Davis Streets (CDS) neighborhood in Camden, New Jersey. Both are minority-dominant, impoverished communities. We collected 24-h integrated personal air samples from 54 WFS residents and 53 CDS residents, with one sample on a weekday and one on a weekend day during the summer and winter seasons of 2004-2006. Ambient air samples from the center of each community were also collected simultaneously during personal air sampling. Toluene, ethylbenzene, and xylenes (TEX) presented higher (p < 0.05) ambient levels in WFS than in CDS, particularly during weekdays. A stronger association between personal and ambient concentrations of MTBE and TEX was found in WFS than in CDS. Fourteen to forty-two percent of the variation in personal MTBE, hexane, benzene, and TEX was explained by local outdoor air pollution. These observations indicated that local sources impacted the community air pollution and personal exposure in WFS. The estimated cancer risks resulting from two locally emitted VOCs, benzene and ethylbenzene, and non-cancer neurological and respiratory effects resulting from hexane, benzene, toluene, and xylenes exceeded the US EPA risk benchmarks in both communities. These findings emphasized the need to address disparity in health risks associated with ambient air pollution for socio-economically disadvantaged groups. This study also demonstrated that air pollution hot spots similar to WFS can provide a robust setting to investigate health effects of ambient air pollution.

  12. Addiction to sugar and its link to health morbidity: a primer for newer primary care and public health initiatives in Malaysia.

    PubMed

    Swarna Nantha, Yogarabindranath

    2014-10-01

    The average consumption of sugar in the Malaysian population has reached an alarming rate, exceeding the benchmark recommended by experts. This article argues the need of a paradigm shift in the management of sugar consumption in the country through evidence derived from addiction research. "Food addiction" could lead to high levels of sugar consumption. This probable link could accelerate the development of diabetes and obesity in the community. A total of 94 reports and studies that describe the importance of addiction theory-based interventions were found through a search on PubMed, Google Scholar, and Academic Search Complete. Research in the field of addiction medicine has revealed the addictive potential of high levels of sugar intake. Preexisting health promotion strategies could benefit from the integration of the concept of sugar addiction. A targeted intervention could yield more positive results in health outcomes within the country. Current literature seems to support food environment changes, targeted health policies, and special consultation skills as cost-effective remedies to curb the rise of sugar-related health morbidities. © The Author(s) 2014.

  13. The use of quality benchmarking in assessing web resources for the dermatology virtual branch library of the National electronic Library for Health (NeLH).

    PubMed

    Kamel Boulos, M N; Roudsari, A V; Gordon, C; Muir Gray, J A

    2001-01-01

    In 1998, the U.K. National Health Service Information for Health Strategy proposed the implementation of a National electronic Library for Health to provide clinicians, healthcare managers and planners, patients and the public with easy, round the clock access to high quality, up-to-date electronic information on health and healthcare. The Virtual Branch Libraries are among the most important components of the National electronic Library for Health. They aim at creating online knowledge based communities, each concerned with some specific clinical and other health-related topics. This study is about the envisaged Dermatology Virtual Branch Libraries of the National electronic Library for Health. It aims at selecting suitable dermatology Web resources for inclusion in the forthcoming Virtual Branch Libraries after establishing preliminary quality benchmarking rules for this task. Psoriasis, being a common dermatological condition, has been chosen as a starting point. Because quality is a principal concern of the National electronic Library for Health, the study includes a review of the major quality benchmarking systems available today for assessing health-related Web sites. The methodology of developing a quality benchmarking system has been also reviewed. Aided by metasearch Web tools, candidate resources were hand-selected in light of the reviewed benchmarking systems and specific criteria set by the authors. Over 90 professional and patient-oriented Web resources on psoriasis and dermatology in general are suggested for inclusion in the forthcoming Dermatology Virtual Branch Libraries. The idea of an all-in knowledge-hallmarking instrument for the National electronic Library for Health is also proposed based on the reviewed quality benchmarking systems. Skilled, methodical, organized human reviewing, selection and filtering based on well-defined quality appraisal criteria seems likely to be the key ingredient in the envisaged National electronic Library for Health service. Furthermore, by promoting the application of agreed quality guidelines and codes of ethics by all health information providers and not just within the National electronic Library for Health, the overall quality of the Web will improve with time and the Web will ultimately become a reliable and integral part of the care space.

  14. Enabling the High Level Synthesis of Data Analytics Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minutoli, Marco; Castellana, Vito G.; Tumeo, Antonino

    Conventional High Level Synthesis (HLS) tools mainly target compute-intensive kernels typical of digital signal processing applications. We are developing techniques and architectural templates to enable HLS of data analytics applications. These applications are memory intensive, present fine-grained, unpredictable data accesses, and irregular, dynamic task parallelism. We discuss an architectural template based around a distributed controller to efficiently exploit thread-level parallelism. We present a memory interface that supports parallel memory subsystems and enables implementing atomic memory operations. We introduce a dynamic task scheduling approach to efficiently execute heavily unbalanced workloads. The templates are validated by synthesizing queries from the Lehigh University Benchmark (LUBM), a well-known SPARQL benchmark.

  15. The Royal Australian and New Zealand College of Radiologists (RANZCR) relative value unit workload model, its limitations and the evolution to a safety, quality and performance framework.

    PubMed

    Pitman, A; Jones, D N; Stuart, D; Lloydhope, K; Mallitt, K; O'Rourke, P

    2009-10-01

    The study reports on the evolution of the Australian radiologist relative value unit (RVU) model of measuring radiologist reporting workloads in teaching hospital departments, and aims to outline a way forward for the development of a broad national safety, quality and performance framework that enables value mapping, measurement and benchmarking. The Radiology International Benchmarking Project of Queensland Health provided a suitable high-level national forum where the existing Pitman-Jones RVU model was applied to contemporaneous data, and its shortcomings and potential avenues for future development were analysed. Application of the Pitman-Jones model to Queensland data and also a Victorian benchmark showed that the original recommendation of 40,000 crude RVU per full-time equivalent consultant radiologist (97-98 baseline level) has risen only moderately, to now lie around 45,000 crude RVU/full-time equivalent. Notwithstanding this, the model has a number of weaknesses and is becoming outdated, as it cannot capture newer time-consuming examinations particularly in CT. A significant re-evaluation of the value of medical imaging is required, and is now occurring. We must rethink how we measure, benchmark, display and continually improve medical imaging safety, quality and performance, throughout the imaging care cycle and beyond. It will be necessary to ensure alignment with patient needs, as well as clinical and organisational objectives. Clear recommendations for the development of an updated national reporting workload RVU system are available, and an opportunity now exists for developing a much broader national model. A more sophisticated and balanced multidimensional safety, quality and performance framework that enables measurement and benchmarking of all important elements of health-care service is needed.

  16. Quantifying ecological impacts of mass extinctions with network analysis of fossil communities

    PubMed Central

    Muscente, A. D.; Prabhu, Anirudh; Zhong, Hao; Eleish, Ahmed; Meyer, Michael B.; Fox, Peter; Hazen, Robert M.; Knoll, Andrew H.

    2018-01-01

    Mass extinctions documented by the fossil record provide critical benchmarks for assessing changes through time in biodiversity and ecology. Efforts to compare biotic crises of the past and present, however, encounter difficulty because taxonomic and ecological changes are decoupled, and although various metrics exist for describing taxonomic turnover, no methods have yet been proposed to quantify the ecological impacts of extinction events. To address this issue, we apply a network-based approach to exploring the evolution of marine animal communities over the Phanerozoic Eon. Network analysis of fossil co-occurrence data enables us to identify nonrandom associations of interrelated paleocommunities. These associations, or evolutionary paleocommunities, dominated total diversity during successive intervals of relative community stasis. Community turnover occurred largely during mass extinctions and radiations, when ecological reorganization resulted in the decline of one association and the rise of another. Altogether, we identify five evolutionary paleocommunities at the generic and familial levels in addition to three ordinal associations that correspond to Sepkoski’s Cambrian, Paleozoic, and Modern evolutionary faunas. In this context, we quantify magnitudes of ecological change by measuring shifts in the representation of evolutionary paleocommunities over geologic time. Our work shows that the Great Ordovician Biodiversification Event had the largest effect on ecology, followed in descending order by the Permian–Triassic, Cretaceous–Paleogene, Devonian, and Triassic–Jurassic mass extinctions. Despite its taxonomic severity, the Ordovician extinction did not strongly affect co-occurrences of taxa, affirming its limited ecological impact. Network paleoecology offers promising approaches to exploring ecological consequences of extinctions and radiations. PMID:29686079
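
    A conceptual sketch of the network step described above: build a taxon co-occurrence graph from fossil collections and extract groups of taxa that tend to occur together. The toy data and the choice of modularity-based community detection are illustrative only, not necessarily the authors' exact method.

```python
# Taxon co-occurrence graph from fossil "collections" (localities), followed
# by community detection to recover candidate associations of taxa.
import networkx as nx
from itertools import combinations
from networkx.algorithms.community import greedy_modularity_communities

# Each collection lists taxa reported from one locality (made-up names).
collections = [
    {"Trilobita_A", "Brachiopoda_A", "Echinodermata_A"},
    {"Trilobita_A", "Brachiopoda_A"},
    {"Bivalvia_A", "Gastropoda_A", "Echinoidea_A"},
    {"Bivalvia_A", "Gastropoda_A"},
    {"Brachiopoda_A", "Echinodermata_A"},
]

G = nx.Graph()
for taxa in collections:
    for a, b in combinations(sorted(taxa), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)            # edge weight = co-occurrence count

for i, community in enumerate(greedy_modularity_communities(G, weight="weight"), 1):
    print(f"association {i}: {sorted(community)}")
```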

  17. Quantifying ecological impacts of mass extinctions with network analysis of fossil communities.

    PubMed

    Muscente, A D; Prabhu, Anirudh; Zhong, Hao; Eleish, Ahmed; Meyer, Michael B; Fox, Peter; Hazen, Robert M; Knoll, Andrew H

    2018-05-15

    Mass extinctions documented by the fossil record provide critical benchmarks for assessing changes through time in biodiversity and ecology. Efforts to compare biotic crises of the past and present, however, encounter difficulty because taxonomic and ecological changes are decoupled, and although various metrics exist for describing taxonomic turnover, no methods have yet been proposed to quantify the ecological impacts of extinction events. To address this issue, we apply a network-based approach to exploring the evolution of marine animal communities over the Phanerozoic Eon. Network analysis of fossil co-occurrence data enables us to identify nonrandom associations of interrelated paleocommunities. These associations, or evolutionary paleocommunities, dominated total diversity during successive intervals of relative community stasis. Community turnover occurred largely during mass extinctions and radiations, when ecological reorganization resulted in the decline of one association and the rise of another. Altogether, we identify five evolutionary paleocommunities at the generic and familial levels in addition to three ordinal associations that correspond to Sepkoski's Cambrian, Paleozoic, and Modern evolutionary faunas. In this context, we quantify magnitudes of ecological change by measuring shifts in the representation of evolutionary paleocommunities over geologic time. Our work shows that the Great Ordovician Biodiversification Event had the largest effect on ecology, followed in descending order by the Permian-Triassic, Cretaceous-Paleogene, Devonian, and Triassic-Jurassic mass extinctions. Despite its taxonomic severity, the Ordovician extinction did not strongly affect co-occurrences of taxa, affirming its limited ecological impact. Network paleoecology offers promising approaches to exploring ecological consequences of extinctions and radiations. Copyright © 2018 the Author(s). Published by PNAS.

  18. Dispersion Forces of Solids under Stress. Chemisorption under Stress.

    DTIC Science & Technology

    1984-08-01

    The objective of the research summarized here was to determine the stress dependence of the chemical potential of atoms chemisorbed to metal...received by the scientific-engineering community. Interest was shown in carrying out the experiments suggested in our paper and we hope that this phase...out several benchmark theoretical investigations on our chemostress effect. These papers were well received by both scientific and engineering communities.

  19. Food Recognition: A New Dataset, Experiments, and Results.

    PubMed

    Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo

    2017-05-01

    We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts for each region the corresponding food class. We have experimented with three different classification strategies using also several visual descriptors. We achieve about 79% of food and tray recognition accuracy using convolutional-neural-networks-based features. The dataset, as well as the benchmark framework, are available to the research community.

  20. SkData: data sets and algorithm evaluation protocols in Python

    NASA Astrophysics Data System (ADS)

    Bergstra, James; Pinto, Nicolas; Cox, David D.

    2015-01-01

    Machine learning benchmark data sets come in all shapes and sizes, whereas classification algorithms assume sanitized input, such as (x, y) pairs with vector-valued input x and integer class label y. Researchers and practitioners know all too well how tedious it can be to get from the URL of a new data set to a NumPy ndarray suitable for e.g. pandas or sklearn. The SkData library handles that work for a growing number of benchmark data sets (small and large) so that one-off in-house scripts for downloading and parsing data sets can be replaced with library code that is reliable, community-tested, and documented. The SkData library also introduces an open-ended formalization of training and testing protocols that facilitates direct comparison with published research. This paper describes the usage and architecture of the SkData library.

  1. Benchmarking In-Flight Icing Detection Products for Future Upgrades

    NASA Technical Reports Server (NTRS)

    Politovich, M. K.; Minnis, P.; Johnson, D. B.; Wolff, C. A.; Chapman, M.; Heck, P. W.; Haggerty, J. A.

    2004-01-01

    This paper summarizes the results of a benchmarking exercise conducted as part of the NASA supported Advanced Satellite Aviation-Weather Products (ASAP) Program. The goal of ASAP is to increase and optimize the use of satellite data sets within the existing FAA Aviation Weather Research Program (AWRP) Product Development Team (PDT) structure and to transfer advanced satellite expertise to the PDTs. Currently, ASAP fosters collaborative efforts between NASA Laboratories, the University of Wisconsin Cooperative Institute for Meteorological Satellite Studies (UW-CIMSS), the University of Alabama in Huntsville (UAH), and the AWRP PDTs. This collaboration involves the testing and evaluation of existing satellite algorithms developed or proposed by AWRP teams, the introduction of new techniques and data sets to the PDTs from the satellite community, and enhanced access to new satellite data sets available through CIMSS and NASA Langley Research Center for evaluation and testing.

  2. Feedback on the Surveillance 8 challenge: Vibration-based diagnosis of a Safran aircraft engine

    NASA Astrophysics Data System (ADS)

    Antoni, Jérôme; Griffaton, Julien; André, Hugo; Avendaño-Valencia, Luis David; Bonnardot, Frédéric; Cardona-Morales, Oscar; Castellanos-Dominguez, German; Daga, Alessandro Paolo; Leclère, Quentin; Vicuña, Cristián Molina; Acuña, David Quezada; Ompusunggu, Agusmian Partogi; Sierra-Alonso, Edgar F.

    2017-12-01

    This paper presents the content and outcomes of the Safran contest organized during the International Conference Surveillance 8, October 20-21, 2015, at the Roanne Institute of Technology, France. The contest dealt with the diagnosis of a civil aircraft engine based on vibration data measured in a transient operating mode and provided by Safran. Based on two independent exercises, the contest offered the possibility to benchmark current diagnostic methods on real data supplemented with several challenges. Outcomes of seven competing teams are reported and discussed. The object of the paper is twofold. It first aims at giving a picture of the current state-of-the-art in vibration-based diagnosis of rolling-element bearings in nonstationary operating conditions. Second, it aims at providing the scientific community with a benchmark and some baseline solutions. In this respect, the data used in the contest are made available as supplementary material.

  3. A review of genomic data warehousing systems.

    PubMed

    Triplet, Thomas; Butler, Gregory

    2014-07-01

    To facilitate the integration and querying of genomics data, a number of generic data warehousing frameworks have been developed. They differ in their design and capabilities, as well as their intended audience. We provide a comprehensive and quantitative review of those genomic data warehousing frameworks in the context of large-scale systems biology. We reviewed in detail four genomic data warehouses (BioMart, BioXRT, InterMine and PathwayTools) freely available to the academic community. We quantified 20 aspects of the warehouses, covering the accuracy of their responses, their computational requirements and development efforts. Performance of the warehouses was evaluated under various hardware configurations to help laboratories optimize hardware expenses. Each aspect of the benchmark may be dynamically weighted by scientists using our online tool BenchDW (http://warehousebenchmark.fungalgenomics.ca/benchmark/) to build custom warehouse profiles and tailor our results to their specific needs.

  4. Modeling Urban Scenarios & Experiments: Fort Indiantown Gap Data Collections Summary and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Daniel E.; Bandstra, Mark S.; Davidson, Gregory G.

    This report summarizes experimental radiation detector, contextual sensor, weather, and global positioning system (GPS) data collected to inform and validate a comprehensive, operational radiation transport modeling framework to evaluate radiation detector system and algorithm performance. This framework will be used to study the influence of systematic effects (such as geometry, background activity, background variability, environmental shielding, etc.) on detector responses and algorithm performance using synthetic time series data. This work consists of performing data collection campaigns at a canonical, controlled environment for complete radiological characterization to help construct and benchmark a high-fidelity model with quantified system geometries, detector response functions, and source terms for background and threat objects. This data also provides an archival, benchmark dataset that can be used by the radiation detection community. The data reported here spans four data collection campaigns conducted between May 2015 and September 2016.

  5. Savanna elephant numbers are only a quarter of their expected values

    PubMed Central

    Robson, Ashley S.; Trimble, Morgan J.; Purdon, Andrew; Young-Overton, Kim D.; Pimm, Stuart L.; van Aarde, Rudi J.

    2017-01-01

    Savannas once constituted the range of many species that human encroachment has now reduced to a fraction of their former distribution. Many survive only in protected areas. Poaching reduces savanna elephant numbers even where they are protected, likely to the detriment of savanna ecosystems. While resources go into estimating elephant populations, an ecological benchmark by which to assess counts is lacking. Knowing how many elephants there are and how many poachers kill is important, but on their own, such data lack context. We collated savanna elephant count data from 73 protected areas across the continent estimated to hold ~50% of Africa’s elephants and extracted densities from 18 broadly stable population time series. We modeled these densities using primary productivity, water availability, and an index of poaching as predictors. We then used the model to predict stable densities given current conditions and poaching for all 73 populations. Next, to generate ecological benchmarks, we predicted such densities for a scenario of zero poaching. Where historical data are available, they corroborate or exceed benchmarks. According to recent counts, collectively, the 73 savanna elephant populations are at 75% of the size predicted based on current conditions and poaching levels. However, populations are at <25% of ecological benchmarks given a scenario of zero poaching (~967,000)—a total deficit of ~730,000 elephants. Populations in 30% of the 73 protected areas were <5% of their benchmarks, and the median current density as a percentage of ecological benchmark across protected areas was just 13%. The ecological context provided by these benchmark values, in conjunction with ongoing census projects, allows efficient targeting of conservation efforts. PMID:28414784

  6. Neutron Deep Penetration Calculations in Light Water with Monte Carlo TRIPOLI-4® Variance Reduction Techniques

    NASA Astrophysics Data System (ADS)

    Lee, Yi-Kang

    2017-09-01

    Nuclear decommissioning takes place in several stages due to the radioactivity in the reactor structure materials. A good estimate of the neutron activation products distributed in the reactor structure materials has an obvious impact on decommissioning planning and low-level radioactive waste management. The continuous-energy Monte Carlo radiation transport code TRIPOLI-4 has been applied to radiation protection and shielding analyses. To enhance the application of TRIPOLI-4 in nuclear decommissioning activities, both experimental and computational benchmarks are being performed. To calculate the neutron activation of the shielding and structure materials of nuclear facilities, the 3D neutron flux map and energy spectra must first be determined. To perform this type of neutron deep-penetration calculation with a Monte Carlo transport code, variance reduction techniques are necessary in order to reduce the uncertainty of the neutron activation estimate. In this study, variance reduction options of the TRIPOLI-4 code were used on the NAIADE 1 light water shielding benchmark. This benchmark document is available from the OECD/NEA SINBAD shielding benchmark database. From this benchmark, a simplified NAIADE 1 water shielding model was first proposed in this work to make code validation easier. Fission neutron transport was determined in light water for penetration up to 50 cm for fast neutrons and up to about 180 cm for thermal neutrons. Measurement and calculation results were benchmarked, and variance reduction options and their performance were discussed and compared.

  7. Alternative industrial carbon emissions benchmark based on input-output analysis

    NASA Astrophysics Data System (ADS)

    Han, Mengyao; Ji, Xi

    2016-12-01

    Some problems exist in current carbon emissions benchmark-setting systems. Industrial carbon emissions standards are based primarily on direct carbon emissions (power-related emissions), and only a portion of indirect emissions is considered in current carbon emissions accounting, a practice that is insufficient and may cause some double counting because of mixed emission sources. To better integrate and quantify direct and indirect carbon emissions, an embodied industrial carbon emissions benchmark-setting method based on input-output analysis is proposed to guide the establishment of carbon emissions benchmarks. This method links direct carbon emissions with inter-industrial economic exchanges and systematically quantifies the carbon emissions embodied in total product delivery chains. The purpose of this study is to design a practical new set of embodied intensity-based benchmarks for both direct and indirect carbon emissions. Beijing, among the first carbon emissions trading pilot schemes in China, plays a significant role in the establishment of these schemes and is chosen as the example in this study. The proposed method attributes emissions directly to the responsible parties in a practical way, through the measurement of complex production and supply chains, and aims to reduce carbon emissions at their original sources. It is expected to remain applicable under uncertain internal and external conditions and to be generalized to guide the establishment of industrial benchmarks for carbon emissions trading schemes in China and other countries.
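
    Embodied (direct plus indirect) emission intensities of the kind this benchmark-setting method relies on are conventionally obtained from input-output analysis through the Leontief inverse. The sketch below illustrates that standard calculation with a made-up technical coefficient matrix and direct intensity vector; it does not reproduce the authors' Beijing data.

      # Embodied emission intensities from input-output analysis (illustrative data).
      import numpy as np

      # Technical coefficients A[i, j]: input from sector i per unit output of sector j (assumed).
      A = np.array([[0.10, 0.20, 0.05],
                    [0.15, 0.05, 0.10],
                    [0.05, 0.10, 0.15]])
      # Direct emission intensities per unit of sectoral output (assumed values).
      f_direct = np.array([2.0, 0.5, 1.2])

      # Leontief inverse (I - A)^-1 captures all upstream, indirect requirements.
      L = np.linalg.inv(np.eye(A.shape[0]) - A)
      f_embodied = f_direct @ L            # direct + indirect intensity per sector

      print(f_embodied)                    # a possible basis for embodied-intensity benchmarks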

  8. A survey assessment of the level of preparedness for domestic terrorism and mass casualty incidents among Eastern Association for the Surgery of Trauma members.

    PubMed

    Ciraulo, David L; Frykberg, Eric R; Feliciano, David V; Knuth, Thomas E; Richart, Charles M; Westmoreland, Christy D; Williams, Kathryn A

    2004-05-01

    The goal of this survey was to establish a benchmark for trauma surgeons' level of operational understanding of the command structure for a pre-hospital incident, a mass casualty incident (MCI), and weapons of mass destruction (WMD). The survey was distributed before the World Trade Center destruction on September 11, 2001. The survey was developed by the authors and reviewed by a statistician for clarity and performance. The survey was sent to the membership in the 2000 Eastern Association for the Surgery of Trauma spring mailing, with two subsequent mailings and a final sampling at the Eastern Association for the Surgery of Trauma 2001 meeting. Of 723 surveys mailed, 243 were returned and statistically analyzed (significance indicated by p < 0.05). No statistical difference existed between level of designation of a trauma center (state or American College of Surgeons) and a facility's level of preparedness for MCIs or WMD. Physicians in communities with chemical plants, railways, and waterway traffic were statistically more likely to work at facilities with internal disaster plans addressing chemical and biological threats. Across all variables, physicians with military training were significantly better prepared for response to catastrophic events. With the exception of cyanide (50%), less than 30% of the membership was prepared to manage exposure to a nerve agent, less than 50% was prepared to manage illness from intentional biological exposure, and only 73% understood and were prepared to manage blast injury. Mobile medical response teams were present in 46% of the respondents' facilities, but only 30% of those teams deployed a trauma surgeon. Approximately 70% of the membership had been involved in an MCI, although only 60% understood the command structure for a prehospital incident. Only 33% of the membership had training regarding hazardous materials. Of interest, 76% and 65%, respectively, felt that education about MCIs and WMD should be included in residency training. A facility's level of preparedness for MCIs or WMD was not related to level of designation as a trauma center, but may be positively influenced by local physicians with prior military background. Benchmark information from this survey will provide the architecture for the development and implementation of further training in these areas for trauma surgeons.

  9. Benchmarking government action for obesity prevention--an innovative advocacy strategy.

    PubMed

    Martin, J; Peeters, A; Honisett, S; Mavoa, H; Swinburn, B; de Silva-Sanigorski, A

    2014-01-01

    Successful obesity prevention will require a leading role for governments, but internationally they have been slow to act. League tables of benchmark indicators of action can be a valuable advocacy and evaluation tool. The aim was to develop a benchmarking tool for government action on obesity prevention, implement it across Australian jurisdictions, and publicly award the best and worst performers. A framework was developed which encompassed nine domains, reflecting best practice government action on obesity prevention: whole-of-government approaches; marketing restrictions; access to affordable, healthy food; school food and physical activity; food in public facilities; urban design and transport; leisure and local environments; health services; and social marketing. A scoring system was used by non-government key informants to rate the performance of their government. National rankings were generated and the results were communicated to all Premiers/Chief Ministers, the media and the national obesity research and practice community. Evaluation of the initial tool in 2010 showed it to be feasible to implement and able to discriminate the better and worse performing governments. Evaluation of the rubric in 2011 confirmed this to be a robust and useful method. In relation to government action, the best performing governments were those that took whole-of-government approaches, extended common initiatives, and demonstrated innovation and strong political will. This new benchmarking tool, the Obesity Action Award, has enabled identification of leading government action on obesity prevention and the key characteristics associated with their success. We recommend this tool for other multi-state/country comparisons. Copyright © 2013 Asian Oceanian Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.

  10. Agroterrorism: where are we in the ongoing war on terrorism?

    PubMed

    Crutchley, Tamara M; Rodgers, Joel B; Whiteside, Heustis P; Vanier, Marty; Terndrup, Thomas E

    2007-03-01

    The U.S. agricultural infrastructure is one of the most productive and efficient food-producing systems in the world. Many of the characteristics that contribute to its high productivity and efficiency also make this infrastructure extremely vulnerable to a terrorist attack by a biological weapon. Several experts have repeatedly stated that taking advantage of these vulnerabilities would not require a significant undertaking and that the nation's agricultural infrastructure remains highly vulnerable. As a result of continuing criticism, many initiatives at all levels of government and within the private sector have been undertaken to improve our ability to detect and respond to an agroterrorist attack. However, outbreaks, such as the 1999 West Nile outbreak, the 2001 anthrax attacks, the 2003 monkeypox outbreak, and the 2004 Escherichia coli O157:H7 outbreak, have demonstrated the need for improvements in the areas of communication, emergency response and surveillance efforts, and education for all levels of government, the agricultural community, and the private sector. We recommend establishing an interdisciplinary advisory group that consists of experts from public health, human health, and animal health communities to prioritize improvement efforts in these areas. The primary objective of this group would include establishing communication, surveillance, and education benchmarks to determine current weaknesses in preparedness and activities designed to mitigate weaknesses. We also recommend broader utilization of current food and agricultural preparedness guidelines, such as those developed by the U.S. Department of Agriculture and the U.S. Food and Drug Administration.

  11. Benchmark solutions for the galactic ion transport equations: Energy and spatially dependent problems

    NASA Technical Reports Server (NTRS)

    Ganapol, Barry D.; Townsend, Lawrence W.; Wilson, John W.

    1989-01-01

    Nontrivial benchmark solutions are developed for the galactic ion transport (GIT) equations in the straight-ahead approximation. These equations are used to predict potential radiation hazards in the upper atmosphere and in space. Two levels of difficulty are considered: (1) energy independent, and (2) spatially independent. The analysis emphasizes analytical methods never before applied to the GIT equations. Most of the representations derived have been numerically implemented and compared to more approximate calculations. Accurate ion fluxes are obtained (3 to 5 digits) for nontrivial sources. For monoenergetic beams, both accurate doses and fluxes are found. The benchmarks presented are useful in assessing the accuracy of transport algorithms designed to accommodate more complex radiation protection problems. In addition, these solutions can provide fast and accurate assessments of relatively simple shield configurations.

  12. A suite of benchmark and challenge problems for enhanced geothermal systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark; Fu, Pengcheng; McClure, Mark

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study represented U.S. national laboratories, universities, and industry, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study: benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research: stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners. We present the suite of benchmark and challenge problems developed for the GTO-CCS, providing problem descriptions and sample solutions.

  13. New features and improved uncertainty analysis in the NEA nuclear data sensitivity tool (NDaST)

    NASA Astrophysics Data System (ADS)

    Dyrda, J.; Soppera, N.; Hill, I.; Bossant, M.; Gulliford, J.

    2017-09-01

    Following the release and initial testing period of the NEA's Nuclear Data Sensitivity Tool [1], new features have been designed and implemented in order to expand its uncertainty analysis capabilities. The aim is to provide a free online tool for integral benchmark testing that is both efficient and comprehensive, meeting the needs of the nuclear data and benchmark testing communities. New features include access to P1 sensitivities for neutron scattering angular distribution [2] and constrained Chi sensitivities for the prompt fission neutron energy sampling. Both of these are compatible with covariance data accessed via the JANIS nuclear data software, enabling propagation of the resultant uncertainties in keff to a large series of integral experiment benchmarks. These capabilities are available using a number of different covariance libraries, e.g., ENDF/B, JEFF, JENDL and TENDL, allowing comparison of the broad range of results it is possible to obtain. The IRPhE database of reactor physics measurements is now also accessible within the tool, in addition to the criticality benchmarks from ICSBEP. Other improvements include the ability to determine and visualise the energy dependence of a given calculated result in order to better identify specific regions of importance or high uncertainty contribution. Sorting and statistical analysis of the selected benchmark suite is now also provided. Examples of the plots generated by the software are included to illustrate such capabilities. Finally, a number of analytical expressions, for example Maxwellian and Watt fission spectra, will be included. This will allow the analyst to determine the impact of varying such distributions within the data evaluation, either through adjustment of parameters within the expressions, or by comparison to a more general probability distribution fitted to measured data. The impact of such changes is verified through calculations which are compared to a 'direct' measurement found by adjustment of the original ENDF format file.
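
    Propagating a nuclear data covariance matrix through sensitivity coefficients to an uncertainty in keff, as tools of this kind do, follows the usual first-order "sandwich" rule. The numbers, group structure, and reaction in the sketch below are invented for illustration and are not NDaST output.

      # Sandwich-rule propagation of nuclear data uncertainty to keff (illustrative numbers).
      import numpy as np

      # Relative sensitivities of keff to a cross section in a coarse 3-group structure (assumed).
      S = np.array([0.012, 0.045, 0.080])
      # Relative covariance matrix of that cross section in the same groups (assumed).
      C = np.array([[4.0e-4, 1.0e-4, 0.0],
                    [1.0e-4, 2.5e-4, 5.0e-5],
                    [0.0,    5.0e-5, 1.0e-4]])

      var_k = S @ C @ S                    # sandwich rule: S^T C S
      print(np.sqrt(var_k))                # relative keff uncertainty from this reaction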

  14. Validation of numerical codes for impact and explosion cratering: Impacts on strengthless and metal targets

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.

    2008-12-01

    Over the last few decades, rapid improvement of computer capabilities has allowed impact cratering to be modeled with increasing complexity and realism, and has paved the way for a new era of numerical modeling of the impact process, including full, three-dimensional (3D) simulations. When properly benchmarked and validated against observation, computer models offer a powerful tool for understanding the mechanics of impact crater formation. This work presents results from the first phase of a project to benchmark and validate shock codes. A variety of 2D and 3D codes were used in this study, from commercial products like AUTODYN, to codes developed within the scientific community like SOVA, SPH, ZEUS-MP, iSALE, and codes developed at U.S. National Laboratories like CTH, SAGE/RAGE, and ALE3D. Benchmark calculations of shock wave propagation in aluminum-on-aluminum impacts were performed to examine the agreement between codes for simple idealized problems. The benchmark simulations show that variability in code results is to be expected due to differences in the underlying solution algorithm of each code, artificial stability parameters, spatial and temporal resolution, and material models. Overall, the inter-code variability in peak shock pressure as a function of distance is around 10 to 20%. In general, if the impactor is resolved by at least 20 cells across its radius, the underestimation of peak shock pressure due to spatial resolution is less than 10%. In addition to the benchmark tests, three validation tests were performed to examine the ability of the codes to reproduce the time evolution of crater radius and depth observed in vertical laboratory impacts in water and two well-characterized aluminum alloys. Results from these calculations are in good agreement with experiments. There appears to be a general tendency of shock physics codes to underestimate the radius of the forming crater. Overall, the discrepancy between the model and experiment results is between 10 and 20%, similar to the inter-code variability.

  15. SU-D-BRD-03: A Gateway for GPU Computing in Cancer Radiotherapy Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jia, X; Folkerts, M; Shi, F

    Purpose: Graphics processing units (GPUs) have become increasingly important in radiotherapy. However, it is still difficult for general clinical researchers to access GPU codes developed by other researchers, and for developers to objectively benchmark their codes. Moreover, repeated effort is often spent on developing low-quality GPU codes. The goal of this project is to establish an infrastructure for testing GPU codes, cross-comparing them, and facilitating code distribution in the radiotherapy community. Methods: We developed a system called Gateway for GPU Computing in Cancer Radiotherapy Research (GCR2). A number of GPU codes developed by our group and other developers can be accessed via a web interface. To use the services, researchers first upload their test data or use the standard data provided by our system. Then they can select the GPU device on which the code will be executed. Our system offers all mainstream GPU hardware for code benchmarking purposes. After the code run is complete, the system automatically summarizes and displays the computing results. We also released an SDK to allow developers to build their own algorithm implementations and submit their binary codes to the system. The submitted code is then systematically benchmarked using a variety of GPU hardware and representative data provided by our system. The developers can also compare their codes with others and generate benchmarking reports. Results: The developed system is fully functioning. Through a user-friendly web interface, researchers are able to test various GPU codes. Developers also benefit from this platform by comprehensively benchmarking their codes on various GPU platforms and representative clinical data sets. Conclusion: We have developed an open platform allowing clinical researchers and developers to access GPUs and GPU codes. This development will facilitate the utilization of GPUs in the radiation therapy field.

  16. Role of the standard deviation in the estimation of benchmark doses with continuous data.

    PubMed

    Gaylor, David W; Slikker, William

    2004-12-01

    For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
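
    A small numerical illustration of this point, under assumptions not taken from the article (a linear mean shift with dose and normal distributions), is sketched below: basing the benchmark dose on the overall standard deviation, which mixes in measurement error, yields a larger (overestimated) benchmark dose than basing it on the among-animal standard deviation alone.

      # Illustration: basing the benchmark dose (BMD) on the overall SD (animal variation
      # plus measurement error) instead of the among-animal SD overestimates the BMD.
      # Linear mean response and normal distributions are assumed purely for illustration.
      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import brentq

      s_a, s_m = 1.0, 0.6                  # among-animal SD and measurement-error SD (assumed)
      s_total = np.hypot(s_a, s_m)         # overall SD seen when measurement error is not removed
      slope = 0.5                          # assumed shift in mean response per unit dose
      bmr = 0.10                           # benchmark risk: extra 10% of animals above the cutoff

      def bmd(sd):
          cutoff = norm.ppf(0.99, loc=0.0, scale=sd)     # "abnormal" = above the control 99th percentile
          def extra_risk(dose):
              p0 = norm.sf(cutoff, loc=0.0, scale=sd)
              p = norm.sf(cutoff, loc=slope * dose, scale=sd)
              return (p - p0) / (1.0 - p0)
          return brentq(lambda d: extra_risk(d) - bmr, 0.0, 100.0)

      print(bmd(s_a), bmd(s_total))        # BMD from the overall SD is larger, so risk is underestimated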

  17. Processes of Compression-Expansion and Subsidence-Uplift detected by the Spatial Inclinometer (IESHI) in the El Hierro Island eruption (October, 2011)

    NASA Astrophysics Data System (ADS)

    Prates, G.; Berrocoso, M.; Fernández-Ros, A.; García, A.; Ortiz, R.

    2012-04-01

    El Hierro Island (Canary Islands, Spain) has undergone a submarine eruption a few kilometers to its southeast, detected on October 10, on the rift alignment that cuts across the island. However, the seismicity level had suddenly increased around July 17 and ground deformation was detected by the only continuously observed GNSS-GPS (Global Navigation Satellite Systems - Global Positioning System) benchmark, FRON, in the El Golfo area. Based on that information several other GNSS-GPS benchmarks were installed, some of which were also continuously observed. A normal vector analysis was applied to the collected data. The variation of the normal vector magnitude identified local extension-compression regimes, while the normal vector inclination showed the relative height variation between the three benchmarks that define the plane whose normal vector is analyzed. For this analysis the data were first processed to obtain positioning solutions every 30 minutes using the Bernese GPS Software 5.0, further enhanced by a discrete Kalman filter, giving an overall millimeter-level precision. These solutions were obtained using the IGS (International GNSS Service) ultra-rapid orbits and the double-differenced ionosphere-free combination. With this strategy the positioning solutions were attained in near real-time. Later, with the IGS rapid orbits, the data were reprocessed to provide added confidence in the solutions. Two triangles were then considered: a smaller one located in the El Golfo area within the historically collapsed caldera, and a larger one defined by benchmarks placed in Valverde, El Golfo and La Restinga, the town closest to the eruption's location, covering almost the entire island's surface above sea level. With these two triangles the pre-eruption and post-eruption deformation of El Hierro's surface will be further analyzed.
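
    The normal-vector analysis described here amounts to tracking, between epochs, the vector normal to the plane through three GNSS benchmarks: its magnitude (twice the triangle area) reflects areal extension or compression, and its tilt from the vertical reflects relative height changes among the benchmarks. A minimal sketch of that geometry, with invented local coordinates, follows.

      # Normal vector of the plane defined by three GNSS benchmarks (invented coordinates).
      import numpy as np

      def triangle_normal(p1, p2, p3):
          # Cross product of two edge vectors; its norm equals twice the triangle area.
          n = np.cross(p2 - p1, p3 - p1)
          area2 = np.linalg.norm(n)
          # Inclination: angle between the normal and the local vertical (z axis).
          inclination = np.degrees(np.arccos(abs(n[2]) / area2))
          return area2, inclination

      # Local east/north/up coordinates in metres for two epochs (made-up values).
      epoch_a = [np.array([0.0, 0.0, 10.0]), np.array([5000.0, 0.0, 12.0]), np.array([0.0, 4000.0, 11.0])]
      epoch_b = [np.array([0.0, 0.0, 10.0]), np.array([5000.1, 0.0, 12.3]), np.array([0.0, 3999.9, 11.1])]

      for label, pts in (("epoch A", epoch_a), ("epoch B", epoch_b)):
          area2, inc = triangle_normal(*pts)
          print(label, area2, inc)   # changes track extension/compression and relative uplift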

  18. A European benchmarking system to evaluate in-hospital mortality rates in acute coronary syndrome: the EURHOBOP project.

    PubMed

    Dégano, Irene R; Subirana, Isaac; Torre, Marina; Grau, María; Vila, Joan; Fusco, Danilo; Kirchberger, Inge; Ferrières, Jean; Malmivaara, Antti; Azevedo, Ana; Meisinger, Christa; Bongard, Vanina; Farmakis, Dimitros; Davoli, Marina; Häkkinen, Unto; Araújo, Carla; Lekakis, John; Elosua, Roberto; Marrugat, Jaume

    2015-03-01

    Hospital performance models in acute myocardial infarction (AMI) are useful to assess patient management. While models are available for individual countries, mainly the US, cross-European performance models are lacking. Thus, we aimed to develop a system to benchmark European hospitals in AMI and percutaneous coronary intervention (PCI), based on predicted in-hospital mortality. We used the EURopean HOspital Benchmarking by Outcomes in ACS Processes (EURHOBOP) cohort to develop the models, which included 11,631 AMI patients and 8276 acute coronary syndrome (ACS) patients who underwent PCI. Models were validated with a cohort of 55,955 European ACS patients. Multilevel logistic regression was used to predict in-hospital mortality in European hospitals for AMI and PCI. Administrative and clinical models were constructed with patient- and hospital-level covariates, as well as hospital- and country-based random effects. Internal cross-validation and external validation showed good discrimination at the patient level and good calibration at the hospital level, based on the C-index (0.736-0.819) and the concordance correlation coefficient (55.4%-80.3%). Mortality ratios (MRs) showed excellent concordance between administrative and clinical models (97.5% for AMI and 91.6% for PCI). Exclusion of transfers and hospital stays ≤1 day did not affect in-hospital mortality prediction in sensitivity analyses, as shown by MR concordance (80.9%-85.4%). The models were used to develop a benchmarking system to compare in-hospital mortality rates of European hospitals with similar characteristics. The developed system, based on the EURHOBOP models, is a simple and reliable tool to compare in-hospital mortality rates between European hospitals in AMI and PCI. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
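
    Benchmarking systems of this kind typically compare each hospital's observed in-hospital mortality with the mortality expected from a risk model fitted across all hospitals. The sketch below uses a plain logistic regression from scikit-learn on invented covariates as a stand-in for the multilevel administrative and clinical models described; it illustrates the observed-to-expected mortality ratio, not the EURHOBOP models themselves.

      # Observed-vs-expected in-hospital mortality ratio per hospital (invented data).
      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 5000
      df = pd.DataFrame({
          "age": rng.normal(68, 12, n),               # assumed patient-level covariates
          "shock": rng.integers(0, 2, n),
          "hospital": rng.integers(0, 20, n),         # 20 hospitals
      })
      logit = -8.0 + 0.08 * df["age"] + 1.5 * df["shock"]
      df["died"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

      risk_model = LogisticRegression(max_iter=1000).fit(df[["age", "shock"]], df["died"])
      df["expected"] = risk_model.predict_proba(df[["age", "shock"]])[:, 1]

      # Mortality ratio per hospital: observed deaths / expected deaths (near 1 when performance is typical).
      totals = df.groupby("hospital")[["died", "expected"]].sum()
      print((totals["died"] / totals["expected"]).sort_values())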

  19. Overlapping community detection based on link graph using distance dynamics

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Zhang, Jing; Cai, Li-Jun

    2018-01-01

    The distance dynamics model was recently proposed to detect the disjoint community of a complex network. To identify the overlapping structure of a network using the distance dynamics model, an overlapping community detection algorithm, called L-Attractor, is proposed in this paper. The process of L-Attractor mainly consists of three phases. In the first phase, L-Attractor transforms the original graph to a link graph (a new edge graph) to assure that one node has multiple distances. In the second phase, using the improved distance dynamics model, a dynamic interaction process is introduced to simulate the distance dynamics (shrink or stretch). Through the dynamic interaction process, all distances converge, and the disjoint community structure of the link graph naturally manifests itself. In the third phase, a recovery method is designed to convert the disjoint community structure of the link graph to the overlapping community structure of the original graph. Extensive experiments are conducted on the LFR benchmark networks as well as real-world networks. Based on the results, our algorithm demonstrates higher accuracy and quality than other state-of-the-art algorithms.
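
    The recovery step, converting a disjoint partition of the link (edge) graph back into overlapping node communities, can be sketched independently of the distance-dynamics model itself. In the sketch below, networkx's line_graph builds the edge graph and a generic modularity-based partitioner stands in for the distance-dynamics phase, so this illustrates only the transform-and-recover idea, not the L-Attractor algorithm.

      # Overlap recovery via the link (line) graph: partition edges, then map back to nodes.
      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      G = nx.karate_club_graph()
      L = nx.line_graph(G)                      # nodes of L are the edges of G

      # Stand-in for the distance-dynamics phase: any disjoint partition of the edge graph.
      edge_communities = greedy_modularity_communities(L)

      # Recovery: a node belongs to every community one of its incident edges was assigned to,
      # so nodes on community boundaries naturally end up in more than one community (overlap).
      node_communities = []
      for comm in edge_communities:
          members = set()
          for u, v in comm:
              members.update((u, v))
          node_communities.append(members)

      overlapping = [n for n in G if sum(n in c for c in node_communities) > 1]
      print(len(node_communities), overlapping)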

  20. Public Interest Energy Research (PIER) Program Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Tengfang; Flapper, Joris; Ke, Jing

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry, covering four dairy products: cheese, fluid milk, butter, and milk powder. The BEST-Dairy tool developed in this project provides three options for the user to benchmark each of the dairy products included in the tool, with each option differentiated by the level of detail of the process or plant, i.e., 1) plant level; 2) process-group level, and 3) process-step level. For each detail level, the tool accounts for differences in production and other variables affecting energy use in dairy processes. The BEST-Dairy tool can be applied to a wide range of dairy facilities to provide energy and water savings estimates, which are based upon comparisons with the best available reference cases that were established through reviewing information from international and national samples. We have performed and completed alpha- and beta-testing (field testing) of the BEST-Dairy tool, through which feedback from voluntary users in the U.S. dairy industry was gathered to validate and improve the tool's functionality. BEST-Dairy v1.2 was formally published in May 2011, and has been made available for free download from the internet (i.e., http://best-dairy.lbl.gov). A user's manual has been developed and published as the companion documentation for use with the BEST-Dairy tool. In addition, we also carried out technology transfer activities by engaging the dairy industry in the process of tool development and testing, including field testing, technical presentations, and technical assistance throughout the project. To date, users from more than ten countries in addition to those in the U.S. have downloaded the BEST-Dairy tool from the LBNL website. It is expected that the use of the BEST-Dairy tool will advance understanding of energy and water usage in individual dairy plants, augment benchmarking activities in the marketplace, and facilitate implementation of efficiency measures and strategies to reduce energy and water usage in the dairy industry. Industrial adoption of this emerging tool and technology in the market is expected to benefit dairy plants, which are important customers of California utilities. Further demonstration of this benchmarking tool is recommended to facilitate its commercialization and the expansion of its functions. Wider use of the BEST-Dairy tool and its continuous expansion in functionality will help to reduce the actual consumption of energy and water in the dairy industry sector. The outcomes align well with the goals set by AB 1250 for the PIER program.

  1. An international land-biosphere model benchmarking activity for the IPCC Fifth Assessment Report (AR5)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, Forrest M; Randerson, James T; Thornton, Peter E

    2009-12-01

    The need to capture important climate feedbacks in general circulation models (GCMs) has resulted in efforts to include atmospheric chemistry and land and ocean biogeochemistry into the next generation of production climate models, called Earth System Models (ESMs). While many terrestrial and ocean carbon models have been coupled to GCMs, recent work has shown that such models can yield a wide range of results (Friedlingstein et al., 2006). This work suggests that a more rigorous set of global offline and partially coupled experiments, along with detailed analyses of processes and comparisons with measurements, are needed. The Carbon-Land Model Intercomparison Project (C-LAMP) was designed to meet this need by providing a simulation protocol and model performance metrics based upon comparisons against best-available satellite- and ground-based measurements (Hoffman et al., 2007). Recently, a similar effort in Europe, called the International Land Model Benchmark (ILAMB) Project, was begun to assess the performance of European land surface models. These two projects will now serve as prototypes for a proposed international land-biosphere model benchmarking activity for those models participating in the IPCC Fifth Assessment Report (AR5). Initially used for model validation for terrestrial biogeochemistry models in the NCAR Community Land Model (CLM), C-LAMP incorporates a simulation protocol for both offline and partially coupled simulations using a prescribed historical trajectory of atmospheric CO2 concentrations. Models are confronted with data through comparisons against AmeriFlux site measurements, MODIS satellite observations, NOAA Globalview flask records, TRANSCOM inversions, and Free Air CO2 Enrichment (FACE) site measurements. Both sets of experiments have been performed using two different terrestrial biogeochemistry modules coupled to the CLM version 3 in the Community Climate System Model version 3 (CCSM3): the CASA model of Fung, et al., and the carbon-nitrogen (CN) model of Thornton. Comparisons of the CLM3 offline results against observational datasets have been performed and are described in Randerson et al. (2009). CLM version 4 has been evaluated using C-LAMP, showing improvement in many of the metrics. Efforts are now underway to initiate a Nitrogen-Land Model Intercomparison Project (N-LAMP) to better constrain the effects of the nitrogen cycle in biosphere models. Presented will be new results from C-LAMP for CLM4, initial N-LAMP developments, and the proposed land-biosphere model benchmarking activity.

  2. Revisiting the PLUMBER Experiments from a Process-Diagnostics Perspective

    NASA Astrophysics Data System (ADS)

    Nearing, G. S.; Ruddell, B. L.; Clark, M. P.; Nijssen, B.; Peters-Lidard, C. D.

    2017-12-01

    The PLUMBER benchmarking experiments [1] showed that some of the most sophisticated land models (CABLE, CH-TESSEL, COLA-SSiB, ISBA-SURFEX, JULES, Mosaic, Noah, ORCHIDEE) were outperformed - in simulations of half-hourly surface energy fluxes - by instantaneous, out-of-sample, and globally-stationary regressions with no state memory. One criticism of PLUMBER is that the benchmarking methodology was not derived formally, so that applying a similar methodology with different performance metrics can result in qualitatively different results. Another common criticism of model intercomparison projects in general is that they offer little insight into process-level deficiencies in the models, and therefore are of marginal value for helping to improve the models. We address both of these issues by proposing a formal benchmarking methodology that also yields a formal and quantitative method for process-level diagnostics. We apply this to the PLUMBER experiments to show that (1) the PLUMBER conclusions were generally correct - the models use only a fraction of the information available to them from met forcing data (<50% by our analysis), and (2) all of the land models investigated by PLUMBER have similar process-level error structures, and therefore together do not represent a meaningful sample of structural or epistemic uncertainty. We conclude by suggesting two ways to improve the experimental design of model intercomparison and/or model benchmarking studies like PLUMBER. First, PLUMBER did not report model parameter values, and it is necessary to know these values to separate parameter uncertainty from structural uncertainty. This is a first order requirement if we want to use intercomparison studies to provide feedback to model development. Second, technical documentation of land models is inadequate. Future model intercomparison projects should begin with a collaborative effort by model developers to document specific differences between model structures. This could be done in a reproducible way using a unified, process-flexible system like SUMMA [2]. [1] Best, M.J. et al. (2015) 'The plumbing of land surface models: benchmarking model performance', J. Hydrometeor. [2] Clark, M.P. et al. (2015) 'A unified approach for process-based hydrologic modeling: 1. Modeling concept', Water Resour. Res.
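
    The empirical benchmarks at issue in PLUMBER are simple regressions that map instantaneous meteorological forcing to a surface flux, fitted out of sample with no state memory. A hedged sketch of such a benchmark is below; the variable names and synthetic data are stand-ins for half-hourly tower forcing and an observed latent heat flux, not the PLUMBER datasets.

      # PLUMBER-style empirical benchmark: instantaneous regression from met forcing to a flux.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(1)
      n = 17520                                  # one year of half-hourly records
      swdown = rng.uniform(0, 1000, n)           # downward shortwave radiation (synthetic)
      tair = rng.normal(288, 8, n)               # air temperature (synthetic)
      qle_obs = 0.4 * swdown + 2.0 * (tair - 283) + rng.normal(0, 25, n)   # latent heat flux (synthetic)

      X = np.column_stack([swdown, tair])
      half = n // 2                              # out-of-sample split: train on one half, test on the other
      bench = LinearRegression().fit(X[:half], qle_obs[:half])
      pred = bench.predict(X[half:])

      print(np.sqrt(mean_squared_error(qle_obs[half:], pred)))   # benchmark RMSE a land model should beat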

  3. Information dynamics algorithm for detecting communities in networks

    NASA Astrophysics Data System (ADS)

    Massaro, Emanuele; Bagnoli, Franco; Guazzini, Andrea; Lió, Pietro

    2012-11-01

    The problem of community detection is relevant in many scientific disciplines, from social science to statistical physics. Given the impact of community detection in many areas, such as psychology and the social sciences, we have addressed the issue of modifying existing well-performing algorithms by incorporating elements of the application domain, i.e., making them domain-inspired. We have focused on a psychology- and social network-inspired approach which may be useful for further strengthening the link between social network studies and the mathematics of community detection. Here we introduce a community-detection algorithm derived from van Dongen's Markov Cluster (MCL) algorithm [4] by considering a network's nodes as agents capable of taking decisions. In this framework we have introduced a memory factor to mimic a typical human behavior such as the oblivion effect. The method is based on information diffusion and it includes a non-linear processing phase. We test our method on two classical community benchmarks and on computer-generated networks with known community structure. Our approach has three important features: the capacity to detect overlapping communities, the capability of identifying communities from an individual point of view, and the fine tuning of community detectability with respect to prior knowledge of the data. Finally we discuss how to use a Shannon entropy measure for parameter estimation in complex networks.
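
    For context, the baseline the authors start from, van Dongen's Markov Cluster (MCL) algorithm, alternates expansion (matrix powering) and inflation (element-wise powering with column renormalization) of a column-stochastic matrix until it converges to a clustering. The sketch below is plain MCL on a toy graph; it omits the memory factor and information-diffusion modifications introduced in the paper.

      # Plain Markov Cluster (MCL) sketch: the baseline that the domain-inspired method modifies.
      import numpy as np

      def mcl(adj, expansion=2, inflation=2.0, iters=50, tol=1e-6):
          M = adj + np.eye(adj.shape[0])          # add self-loops
          M = M / M.sum(axis=0)                   # column-stochastic transition matrix
          for _ in range(iters):
              prev = M.copy()
              M = np.linalg.matrix_power(M, expansion)   # expansion: flow along longer paths
              M = M ** inflation                         # inflation: strengthen strong flows
              M = M / M.sum(axis=0)                      # renormalize columns
              if np.abs(M - prev).max() < tol:
                  break
          # Attractor rows with non-negligible mass define the clusters (their supports).
          clusters = [frozenset(np.flatnonzero(row > 1e-4)) for row in M if row.max() > 1e-4]
          return sorted(set(clusters), key=min)

      adj = np.array([[0, 1, 1, 0, 0, 0],
                      [1, 0, 1, 0, 0, 0],
                      [1, 1, 0, 1, 0, 0],
                      [0, 0, 1, 0, 1, 1],
                      [0, 0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 1, 0]], dtype=float)
      print(mcl(adj))                             # typically two clusters, one per triangle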

  4. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users of the expected amplitude of prediction errors attached to these methods. We show that, because the distributions of model errors are neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of error one can expect with a chosen high confidence level. These statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful for assessing the statistical reliability of benchmarking conclusions.
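
    Both statistics advocated here are direct reads from the empirical cumulative distribution of absolute errors. The sketch below uses a small invented error vector; the threshold and confidence level are arbitrary choices for illustration.

      # ECDF-based benchmarking statistics for absolute errors (invented error values).
      import numpy as np

      errors = np.array([-1.2, 0.3, 2.5, -0.1, 0.8, -3.0, 1.1, 0.05, -0.6, 1.9])   # model - reference
      abs_err = np.abs(errors)

      threshold = 1.0                                    # accuracy threshold (assumed)
      p_below = np.mean(abs_err <= threshold)            # (1) P(|error| <= threshold) from the ECDF

      confidence = 0.95
      q95 = np.quantile(abs_err, confidence)             # (2) error amplitude not exceeded 95% of the time

      print(p_below, q95)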

  5. Optimization of a solid-state electron spin qubit using Gate Set Tomography

    DOE PAGES

    Dehollain, Juan P.; Muhonen, Juha T.; Blume-Kohout, Robin J.; ...

    2016-10-13

    Here, state-of-the-art qubit systems are reaching the gate fidelities required for scalable quantum computation architectures. Further improvements in the fidelity of quantum gates demand characterization and benchmarking protocols that are efficient, reliable and extremely accurate. Ideally, a benchmarking protocol should also provide information on how to rectify residual errors. Gate Set Tomography (GST) is one such protocol, designed to give a detailed characterization of as-built qubits. We implemented GST on a high-fidelity electron-spin qubit confined by a single 31P atom in 28Si. The results reveal systematic errors that a randomized benchmarking analysis could measure but not identify, whereas GST indicated the need for improved calibration of the length of the control pulses. After introducing this modification, we measured a new benchmark average gate fidelity of 99.942(8)%, an improvement on the previous value of 99.90(2)%. Furthermore, GST revealed high levels of non-Markovian noise in the system, which will need to be understood and addressed when the qubit is used within a fault-tolerant quantum computation scheme.

  6. Benchmarking NNWSI flow and transport codes: COVE 1 results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayden, N.K.

    1985-06-01

    The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs.

  7. Verification of cardiac mechanics software: benchmark problems and solutions for testing active and passive material behaviour.

    PubMed

    Land, Sander; Gurev, Viatcheslav; Arens, Sander; Augustin, Christoph M; Baron, Lukas; Blake, Robert; Bradley, Chris; Castro, Sebastian; Crozier, Andrew; Favino, Marco; Fastl, Thomas E; Fritz, Thomas; Gao, Hao; Gizzi, Alessio; Griffith, Boyce E; Hurtado, Daniel E; Krause, Rolf; Luo, Xiaoyu; Nash, Martyn P; Pezzuto, Simone; Plank, Gernot; Rossi, Simone; Ruprecht, Daniel; Seemann, Gunnar; Smith, Nicolas P; Sundnes, Joakim; Rice, J Jeremy; Trayanova, Natalia; Wang, Dafang; Jenny Wang, Zhinuo; Niederer, Steven A

    2015-12-08

    Models of cardiac mechanics are increasingly used to investigate cardiac physiology. These models are characterized by a high level of complexity, including the particular anisotropic material properties of biological tissue and the actively contracting material. A large number of independent simulation codes have been developed, but a consistent way of verifying the accuracy and replicability of simulations is lacking. To aid in the verification of current and future cardiac mechanics solvers, this study provides three benchmark problems for cardiac mechanics. These benchmark problems test the ability to accurately simulate pressure-type forces that depend on the deformed object's geometry, anisotropic and spatially varying material properties similar to those seen in the left ventricle, and active contractile forces. The benchmark was solved by 11 different groups to generate consensus solutions, with typical differences in higher-resolution solutions of approximately 0.5%, and consistent results between linear, quadratic and cubic finite elements as well as different approaches to simulating incompressible materials. Online tools and solutions are made available to allow these tests to be effectively used in verification of future cardiac mechanics software.

  8. Optimization of Deep Drilling Performance - Development and Benchmark Testing of Advanced Diamond Product Drill Bits & HP/HT Fluids to Significantly Improve Rates of Penetration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alan Black; Arnis Judzis

    2005-09-30

    This document details progress to date on the Optimization of Deep Drilling Performance--Development and Benchmark Testing of Advanced Diamond Product Drill Bits and HP/HT Fluids to Significantly Improve Rates of Penetration contract for the year from October 2004 through September 2005. The industry cost-shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit-fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark "best in class" diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit-fluid concepts, modify as necessary and commercialize products. As of the report date, TerraTek has concluded all Phase 1 testing and is planning Phase 2 development.

  9. A global perspective of the richness and evenness of traditional crop-variety diversity maintained by farming communities

    PubMed Central

    Jarvis, Devra I.; Brown, Anthony H. D.; Cuong, Pham Hung; Collado-Panduro, Luis; Latournerie-Moreno, Luis; Gyawali, Sanjaya; Tanto, Tesema; Sawadogo, Mahamadou; Mar, Istvan; Sadiki, Mohammed; Hue, Nguyen Thi-Ngoc; Arias-Reyes, Luis; Balma, Didier; Bajracharya, Jwala; Castillo, Fernando; Rijal, Deepak; Belqadi, Loubna; Rana, Ram; Saidi, Seddik; Ouedraogo, Jeremy; Zangre, Roger; Rhrib, Keltoum; Chavez, Jose Luis; Schoen, Daniel; Sthapit, Bhuwon; De Santis, Paola; Fadda, Carlo; Hodgkin, Toby

    2008-01-01

    Varietal data from 27 crop species from five continents were drawn together to determine overall trends in crop varietal diversity on farm. Measurements of richness, evenness, and divergence showed that considerable crop genetic diversity continues to be maintained on farm, in the form of traditional crop varieties. Major staples had higher richness and evenness than nonstaples. Variety richness for clonal species was much higher than that of other breeding systems. A close linear relationship between traditional variety richness and evenness (both transformed), empirically derived from data spanning a wide range of crops and countries, was found both at household and community levels. Fitting a neutral “function” to traditional variety diversity relationships, comparable to a species abundance distribution of “neutral ecology,” provided a benchmark to assess the standing diversity on farm. In some cases, high dominance occurred, with much of the variety richness held at low frequencies. This suggested that diversity may be maintained as an insurance to meet future environmental changes or social and economic needs. In other cases, a more even frequency distribution of varieties was found, possibly implying that farmers are selecting varieties to service a diversity of current needs and purposes. Divergence estimates, measured as the proportion of community evenness displayed among farmers, underscore the importance of a large number of small farms adopting distinctly diverse varietal strategies as a major force that maintains crop genetic diversity on farm. PMID:18362337

  10. Perspectives of the optical coherence tomography community on code and data sharing

    NASA Astrophysics Data System (ADS)

    Lurie, Kristen L.; Mistree, Behram F. T.; Ellerbee, Audrey K.

    2015-03-01

    As optical coherence tomography (OCT) grows to be a mature and successful field, it is important for the research community to develop a stronger practice of sharing code and data. A prolific culture of sharing can enable new and emerging laboratories to enter the field, allow research groups to gain new exposure and notoriety, and enable benchmarking of new algorithms and methods. Our long-term vision is to build tools to facilitate a stronger practice of sharing within this community. In line with this goal, our first aim was to understand the perceptions and practices of the community with respect to sharing research contributions (i.e., as code and data). We surveyed 52 members of the OCT community using an online polling system. Our main findings indicate that while researchers infrequently share their code and data, they are willing to contribute their research resources to a shared repository, and they believe that such a repository would benefit both their research and the OCT community at large. We plan to use the results of this survey to design a platform targeted to the OCT research community - an effort that ultimately aims to facilitate a more prolific culture of sharing.

  11. Protein Models Docking Benchmark 2

    PubMed Central

    Anishchenko, Ivan; Kundrotas, Petras J.; Tuzikov, Alexander V.; Vakser, Ilya A.

    2015-01-01

    Structural characterization of protein-protein interactions is essential for our ability to understand life processes. However, only a fraction of known proteins have experimentally determined structures. Such structures provide templates for modeling of a large part of the proteome, where individual proteins can be docked by template-free or template-based techniques. Still, the sensitivity of the docking methods to the inherent inaccuracies of protein models, as opposed to the experimentally determined high-resolution structures, remains largely untested, primarily due to the absence of appropriate benchmark set(s). Structures in such a set should have pre-defined inaccuracy levels and, at the same time, resemble actual protein models in terms of structural motifs/packing. The set should also be large enough to ensure statistical reliability of the benchmarking results. We present a major update of the previously developed benchmark set of protein models. For each interactor, six models were generated with the model-to-native Cα RMSD in the 1 to 6 Å range. The models in the set were generated by a new approach, which corresponds to the actual modeling of new protein structures in the “real case scenario,” as opposed to the previous set, where a significant number of structures were model-like only. In addition, the larger number of complexes (165 vs. 63 in the previous set) increases the statistical reliability of the benchmarking. We estimated the highest accuracy of the predicted complexes (according to CAPRI criteria), which can be attained using the benchmark structures. The set is available at http://dockground.bioinformatics.ku.edu. PMID:25712716
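
    The model-to-native Cα RMSD used to define the six accuracy levels is the root-mean-square distance between corresponding Cα atoms after optimal superposition, commonly computed with the Kabsch algorithm. The sketch below applies it to made-up coordinates; it is a generic illustration, not the procedure used to build the benchmark set.

      # Model-to-native RMSD after optimal (Kabsch) superposition, on made-up Ca coordinates.
      import numpy as np

      def kabsch_rmsd(P, Q):
          # Center both coordinate sets.
          P = P - P.mean(axis=0)
          Q = Q - Q.mean(axis=0)
          # Optimal rotation via SVD of the covariance matrix (Kabsch algorithm).
          V, S, Wt = np.linalg.svd(P.T @ Q)
          d = np.sign(np.linalg.det(V @ Wt))
          D = np.diag([1.0, 1.0, d])            # avoid improper rotations (reflections)
          R = V @ D @ Wt
          diff = P @ R - Q
          return np.sqrt((diff ** 2).sum() / len(P))

      native = np.random.default_rng(2).normal(size=(100, 3)) * 10     # stand-in Ca coordinates
      model = native + np.random.default_rng(3).normal(scale=1.5, size=native.shape)
      print(kabsch_rmsd(model, native))        # the benchmark set targets the 1 to 6 Angstrom range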

  12. Applying ILAMB to data from several generations of the Community Land Model to assess the relative contribution of model improvements and forcing uncertainty to model-data agreement

    NASA Astrophysics Data System (ADS)

    Lawrence, D. M.; Fisher, R.; Koven, C.; Oleson, K. W.; Swenson, S. C.; Hoffman, F. M.; Randerson, J. T.; Collier, N.; Mu, M.

    2017-12-01

    The International Land Model Benchmarking (ILAMB) project is a model-data intercomparison and integration project designed to assess and help improve land models. The current package includes assessment of more than 25 land variables across more than 60 global, regional, and site-level (e.g., FLUXNET) datasets. ILAMB employs a broad range of metrics including RMSE, mean error, spatial distributions, interannual variability, and functional relationships. Here, we apply ILAMB for the purpose of assessment of several generations of the Community Land Model (CLM4, CLM4.5, and CLM5). Encouragingly, CLM5, which is the result of model development over the last several years by more than 50 researchers from 15 different institutions, shows broad improvements across many ILAMB metrics including LAI, GPP, vegetation carbon stocks, and the historical net ecosystem carbon balance among others. We will also show that considerable uncertainty arises from the historical climate forcing data used (GSWP3v1 and CRUNCEPv7). ILAMB score variations due to forcing data can be as large for many variables as that due to model structural differences. Strengths and weaknesses and persistent biases across model generations will also be presented.

  13. Organic contaminants, trace and major elements, and nutrients in water and sediment sampled in response to the Deepwater Horizon oil spill

    USGS Publications Warehouse

    Nowell, Lisa H.; Ludtke, Amy S.; Mueller, David K.; Scott, Jonathon C.

    2011-01-01

    Considering all the information evaluated in this report, there were significant differences between pre-landfall and post-landfall samples for PAH concentrations in sediment. Pre-landfall and post-landfall samples did not differ significantly in concentrations or benchmark exceedances for most organics in water or trace elements in sediment. For trace elements in water, aquatic-life benchmarks were exceeded in almost 50 percent of samples, but the high and variable analytical reporting levels precluded statistical comparison of benchmark exceedances between sampling periods. Concentrations of several PAH compounds in sediment were significantly higher in post-landfall samples than pre-landfall samples, and five of seven sites with the largest differences in PAH concentrations also had diagnostic geochemical evidence of Deepwater Horizon Macondo-1 oil from Rosenbauer and others (2010).

  14. The Alpha consensus meeting on cryopreservation key performance indicators and benchmarks: proceedings of an expert meeting.

    PubMed

    2012-08-01

    This proceedings report presents the outcomes from an international workshop designed to establish consensus on: definitions for key performance indicators (KPIs) for oocyte and embryo cryopreservation, using either slow freezing or vitrification; minimum performance level values for each KPI, representing basic competency; and aspirational benchmark values for each KPI, representing best practice goals. This report includes general presentations about current practice and factors for consideration in the development of KPIs. A total of 14 KPIs were recommended and benchmarks for each are presented. No recommendations were made regarding specific cryopreservation techniques or devices, or whether vitrification is 'better' than slow freezing, or vice versa, for any particular stage or application, as this was considered to be outside the scope of this workshop. Copyright © 2012 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  15. EVALUATION OF LITERATURE ESTABLISHING SCREENING LEVELS FOR TERRESTRIAL PLANTS/INVERTEBRATES

    EPA Science Inventory

    Scientific publications often lack key information on experimental design or do not follow appropriate test methods and therefore cannot be used in deriving reliable benchmarks. Risk based soil screening levels (Eco-SSLs) are being established for chemicals of concern to terrestr...

  16. Performance Evaluation and Benchmarking of Intelligent Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madhavan, Raj; Messina, Elena; Tunstel, Edward

    To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; and furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. The book presents a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems.

  17. Separating homeologs by phasing in the tetraploid wheat transcriptome.

    PubMed

    Krasileva, Ksenia V; Buffalo, Vince; Bailey, Paul; Pearce, Stephen; Ayling, Sarah; Tabbita, Facundo; Soria, Marcelo; Wang, Shichen; Akhunov, Eduard; Uauy, Cristobal; Dubcovsky, Jorge

    2013-06-25

    The high level of identity among duplicated homoeologous genomes in tetraploid pasta wheat presents substantial challenges for de novo transcriptome assembly. To solve this problem, we develop a specialized bioinformatics workflow that optimizes transcriptome assembly and separation of merged homoeologs. To evaluate our strategy, we sequence and assemble the transcriptome of one of the diploid ancestors of pasta wheat, and compare both assemblies with a benchmark set of 13,472 full-length, non-redundant bread wheat cDNAs. A total of 489 million 100 bp paired-end reads from tetraploid wheat assemble into 140,118 contigs, including 96% of the benchmark cDNAs. We used a comparative genomics approach to annotate 66,633 open reading frames. The multiple k-mer assembly strategy increases the proportion of cDNAs assembled full-length in a single contig by 22% relative to the best single k-mer size. Homoeologs are separated using a post-assembly pipeline that includes polymorphism identification, phasing of SNPs, read sorting, and re-assembly of phased reads. Using a reference set of genes, we determine that 98.7% of SNPs analyzed are correctly separated by phasing. Our study shows that de novo transcriptome assembly of tetraploid wheat benefits from multiple k-mer assembly strategies more than that of diploid wheat. Our results also demonstrate that phasing approaches originally designed for heterozygous diploid organisms can be used to separate the close homoeologous genomes of tetraploid wheat. The predicted tetraploid wheat proteome and gene models provide a valuable tool for the wheat research community and for those interested in comparative genomic studies.
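
    The homoeolog separation step rests on sorting reads by the alleles they carry at phased SNP positions. The fragment below is a simplified, hypothetical illustration of that idea rather than the authors' pipeline: given one phased pair of haplotypes and reads covering known SNP positions, each read is assigned to the homoeolog whose alleles it matches best.

```python
# Minimal sketch (not the published pipeline): assign reads to one of two
# phased haplotypes by counting matching alleles at known SNP positions.

# phased alleles at each SNP position (position -> (haplotype A, haplotype B))
phased_snps = {12: ("A", "G"), 57: ("C", "T"), 101: ("G", "G"), 144: ("T", "C")}

def assign_read(read_alleles):
    """read_alleles: dict of SNP position -> base observed in the read."""
    score_a = score_b = 0
    for pos, base in read_alleles.items():
        if pos not in phased_snps:
            continue
        a, b = phased_snps[pos]
        score_a += (base == a)
        score_b += (base == b)
    if score_a > score_b:
        return "homoeolog_A"
    if score_b > score_a:
        return "homoeolog_B"
    return "ambiguous"

reads = [
    {12: "A", 57: "C"},   # matches haplotype A
    {57: "T", 144: "C"},  # matches haplotype B
    {101: "G"},           # uninformative position -> ambiguous
]
for r in reads:
    print(r, "->", assign_read(r))
```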

  18. Separating homeologs by phasing in the tetraploid wheat transcriptome

    PubMed Central

    2013-01-01

    Background The high level of identity among duplicated homoeologous genomes in tetraploid pasta wheat presents substantial challenges for de novo transcriptome assembly. To solve this problem, we develop a specialized bioinformatics workflow that optimizes transcriptome assembly and separation of merged homoeologs. To evaluate our strategy, we sequence and assemble the transcriptome of one of the diploid ancestors of pasta wheat, and compare both assemblies with a benchmark set of 13,472 full-length, non-redundant bread wheat cDNAs. Results A total of 489 million 100 bp paired-end reads from tetraploid wheat assemble into 140,118 contigs, including 96% of the benchmark cDNAs. We used a comparative genomics approach to annotate 66,633 open reading frames. The multiple k-mer assembly strategy increases the proportion of cDNAs assembled full-length in a single contig by 22% relative to the best single k-mer size. Homoeologs are separated using a post-assembly pipeline that includes polymorphism identification, phasing of SNPs, read sorting, and re-assembly of phased reads. Using a reference set of genes, we determine that 98.7% of SNPs analyzed are correctly separated by phasing. Conclusions Our study shows that de novo transcriptome assembly of tetraploid wheat benefits from multiple k-mer assembly strategies more than that of diploid wheat. Our results also demonstrate that phasing approaches originally designed for heterozygous diploid organisms can be used to separate the close homoeologous genomes of tetraploid wheat. The predicted tetraploid wheat proteome and gene models provide a valuable tool for the wheat research community and for those interested in comparative genomic studies. PMID:23800085

  19. 76 FR 54739 - Pacific Halibut Fishery; Guideline Harvest Levels for the Guided Sport Fishery for Pacific...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-02

    ... Halibut Fishery; Guideline Harvest Levels for the Guided Sport Fishery for Pacific Halibut in...) for the guided sport fishery in International Pacific Halibut Commission (IPHC) Regulatory Areas 2C... sport fishery for halibut. The GHLs are benchmark harvest levels for participants in the guided sport...

  20. 78 FR 18323 - Pacific Halibut Fishery; Guideline Harvest Levels for the Guided Sport Fishery for Pacific...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... Halibut Fishery; Guideline Harvest Levels for the Guided Sport Fishery for Pacific Halibut in...) for the guided sport fishery in International Pacific Halibut Commission (IPHC) Regulatory Areas 2C... the guided sport fishery for halibut. The GHLs are benchmark harvest levels for participants in the...

  1. Benchmarking Problems Used in Second Year Level Organic Chemistry Instruction

    ERIC Educational Resources Information Center

    Raker, Jeffrey R.; Towns, Marcy H.

    2010-01-01

    Investigations of the problem types used in college-level general chemistry examinations have been reported in this Journal and were first reported in the "Journal of Chemical Education" in 1924. This study extends the findings from general chemistry to the problems of four college-level organic chemistry courses. Three problem…

  2. NASA Indexing Benchmarks: Evaluating Text Search Engines

    NASA Technical Reports Server (NTRS)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing indexing and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the search engines studied. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.
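
    A toolkit of this kind typically wraps each engine behind a common interface and records indexing and query timings. The sketch below is a generic, hypothetical harness of that shape; the engine shown is a trivial in-memory inverted index standing in for the engines actually evaluated in the study.

```python
import time
from collections import defaultdict

class ToyEngine:
    """Trivial in-memory inverted index standing in for a real search engine."""
    def __init__(self):
        self.index = defaultdict(set)

    def add(self, doc_id, text):
        for term in text.lower().split():
            self.index[term].add(doc_id)

    def search(self, term):
        return self.index.get(term.lower(), set())

def benchmark(engine, docs, queries):
    t0 = time.perf_counter()
    for doc_id, text in docs:
        engine.add(doc_id, text)       # time the indexing pass
    index_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    hits = [len(engine.search(q)) for q in queries]   # time the query pass
    query_time = time.perf_counter() - t0
    return {"index_s": index_time, "query_s": query_time, "hits": hits}

docs = [(i, f"document {i} about indexing and search benchmarks") for i in range(1000)]
queries = ["indexing", "search", "missing"]
print(benchmark(ToyEngine(), docs, queries))
```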

  3. Benchmarks of fairness for health care reform: a policy tool for developing countries.

    PubMed Central

    Daniels, N.; Bryant, J.; Castano, R. A.; Dantes, O. G.; Khan, K. S.; Pannarunothai, S.

    2000-01-01

    Teams of collaborators from Colombia, Mexico, Pakistan, and Thailand have adapted a policy tool originally developed for evaluating health insurance reforms in the United States into "benchmarks of fairness" for assessing health system reform in developing countries. We describe briefly the history of the benchmark approach, the tool itself, and the uses to which it may be put. Fairness is a wide term that includes exposure to risk factors, access to all forms of care, and to financing. It also includes efficiency of management and resource allocation, accountability, and patient and provider autonomy. The benchmarks standardize the criteria for fairness. Reforms are then evaluated by scoring according to the degree to which they improve the situation, i.e. on a scale of -5 to 5, with zero representing the status quo. The object is to promote discussion about fairness across the disciplinary divisions that keep policy analysts and the public from understanding how trade-offs between different effects of reforms can affect the overall fairness of the reform. The benchmarks can be used at both national and provincial or district levels, and we describe plans for such uses in the collaborating sites. A striking feature of the adaptation process is that there was wide agreement on this ethical framework among the collaborating sites despite their large historical, political and cultural differences. PMID:10916911
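
    Scoring on the -5 to +5 scale (with zero as the status quo) lends itself to a very simple aggregation across benchmarks. The sketch below is a hypothetical illustration only; the benchmark names and scores are placeholders, not the instrument's actual wording or any published assessment.

```python
# Hypothetical scores for one reform proposal on a -5 (worse) .. +5 (better)
# scale, 0 = status quo; benchmark names are illustrative only.
scores = {
    "exposure_to_risk_factors": 2,
    "access_to_care": 3,
    "financing_fairness": -1,
    "efficiency_of_management": 1,
    "accountability": 0,
    "patient_provider_autonomy": 2,
}

assert all(-5 <= s <= 5 for s in scores.values()), "scores must lie in [-5, 5]"

improved = [k for k, v in scores.items() if v > 0]
worsened = [k for k, v in scores.items() if v < 0]
print(f"mean score: {sum(scores.values()) / len(scores):+.2f}")
print("improved :", ", ".join(improved))
print("worsened :", ", ".join(worsened))
```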

  4. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D₂O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  5. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leal, L.C.; Deen, J.R.; Woodruff, W.L.

    1995-02-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure to generate cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly-enriched heavy-water moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  6. Model evaluation using a community benchmarking system for land surface models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Kluzek, E. B.; Koven, C. D.; Randerson, J. T.

    2014-12-01

    Evaluation of atmosphere, ocean, sea ice, and land surface models is an important step in identifying deficiencies in Earth system models and developing improved estimates of future change. For the land surface and carbon cycle, the design of an open-source system has been an important objective of the International Land Model Benchmarking (ILAMB) project. Here we evaluated CMIP5 and CLM models using a benchmarking system that enables users to specify models, data sets, and scoring systems so that results can be tailored to specific model intercomparison projects. Our scoring system used information from four different aspects of global datasets, including climatological mean spatial patterns, seasonal cycle dynamics, interannual variability, and long-term trends. Variable-to-variable comparisons enable investigation of the mechanistic underpinnings of model behavior, and allow for some control of biases in model drivers. Graphics modules allow users to evaluate model performance at local, regional, and global scales. Use of modular structures makes it relatively easy for users to add new variables, diagnostic metrics, benchmarking datasets, or model simulations. Diagnostic results are automatically organized into HTML files, so users can conveniently share results with colleagues. We used this system to evaluate atmospheric carbon dioxide, burned area, global biomass and soil carbon stocks, net ecosystem exchange, gross primary production, ecosystem respiration, terrestrial water storage, evapotranspiration, and surface radiation from CMIP5 historical and ESM historical simulations. We found that the multi-model mean often performed better than many of the individual models for most variables. We plan to publicly release a stable version of the software during fall of 2014 that has land surface, carbon cycle, hydrology, radiation and energy cycle components.
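
    Benchmarking scores of this kind are typically built by converting an error measure against a reference dataset into a dimensionless score. The sketch below is a simplified, hypothetical version of that idea (a relative-RMSE score mapped through an exponential), not the ILAMB package's actual implementation.

```python
import numpy as np

def relative_rmse_score(model, obs):
    """Map relative RMSE to a (0, 1] score: 1 = perfect, ~0 = very poor.

    Simplified illustration of a benchmarking score; the ILAMB package
    uses more elaborate, variable-specific formulations.
    """
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    norm = np.sqrt(np.mean((obs - obs.mean()) ** 2))  # centred RMS of the obs
    return float(np.exp(-rmse / norm))

rng = np.random.default_rng(1)
obs = rng.normal(loc=5.0, scale=2.0, size=240)        # e.g. 20 years of monthly values
model_a = obs + rng.normal(scale=0.5, size=obs.size)  # closer to the observations
model_b = obs + rng.normal(scale=2.0, size=obs.size)  # larger structural error
print(f"score A: {relative_rmse_score(model_a, obs):.2f}")
print(f"score B: {relative_rmse_score(model_b, obs):.2f}")
```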

  7. Costs of a community-based glaucoma detection programme: analysis of the Philadelphia Glaucoma Detection and Treatment Project.

    PubMed

    Pizzi, Laura T; Waisbourd, Michael; Hark, Lisa; Sembhi, Harjeet; Lee, Paul; Crews, John E; Saaddine, Jinan B; Steele, Deon; Katz, L Jay

    2018-02-01

    Glaucoma is the foremost cause of irreversible blindness, and more than 50% of cases remain undiagnosed. Our objective was to report the costs of a glaucoma detection programme operationalised through Philadelphia community centres. The analysis was performed using a healthcare system perspective in 2013 US dollars. Costs of examination and educational workshops were captured. Measures were total programme costs, cost/case of glaucoma detected and cost/case of any ocular disease detected (including glaucoma). Diagnoses are reported at the individual level (therefore representing a diagnosis made in one or both eyes). Staff time was captured during site visits to 15 of 43 sites and included time to deliver examinations and workshops, supervision, training and travel. Staff time was converted to costs by applying wage and fringe benefit costs from the US Bureau of Labor Statistics. Non-staff costs (equipment and mileage) were collected using study logs. Participants with previously diagnosed glaucoma were excluded. 1649 participants were examined. Mean total per-participant examination time was 56 min (SD 4). Mean total examination cost/participant was $139. The cost/case of glaucoma newly identified (open-angle glaucoma, angle-closure glaucoma, glaucoma suspect, or primary angle closure) was $420 and cost/case for any ocular disease identified was $273. Glaucoma examinations delivered through this programme provided significant health benefit to hard-to-reach communities. On a per-person basis, examinations were fairly low cost, though opportunities exist to improve efficiency. Findings serve as an important benchmark for planning future community-based glaucoma examination programmes. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
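
    The cost-per-case figures are ratios of programme cost to cases detected. The sketch below reproduces that arithmetic; the per-participant cost and participant total are taken from the abstract, but the case counts are hypothetical placeholders, so the resulting ratios are illustrative only.

```python
# Worked example of the cost-per-case arithmetic (case counts below are
# hypothetical placeholders, not the study's actual counts).
cost_per_exam = 139.0        # mean examination cost per participant (from abstract)
participants = 1649          # participants examined (from abstract)
new_glaucoma_cases = 500     # hypothetical
any_ocular_disease = 800     # hypothetical

total_exam_cost = cost_per_exam * participants
print(f"total examination cost: ${total_exam_cost:,.0f}")
print(f"cost per glaucoma case detected: ${total_exam_cost / new_glaucoma_cases:,.0f}")
print(f"cost per ocular disease case detected: ${total_exam_cost / any_ocular_disease:,.0f}")
```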

  8. The GAAS metagenomic tool and its estimations of viral and microbial average genome size in four major biomes.

    PubMed

    Angly, Florent E; Willner, Dana; Prieto-Davó, Alejandra; Edwards, Robert A; Schmieder, Robert; Vega-Thurber, Rebecca; Antonopoulos, Dionysios A; Barott, Katie; Cottrell, Matthew T; Desnues, Christelle; Dinsdale, Elizabeth A; Furlan, Mike; Haynes, Matthew; Henn, Matthew R; Hu, Yongfei; Kirchman, David L; McDole, Tracey; McPherson, John D; Meyer, Folker; Miller, R Michael; Mundt, Egbert; Naviaux, Robert K; Rodriguez-Mueller, Beltran; Stevens, Rick; Wegley, Linda; Zhang, Lixin; Zhu, Baoli; Rohwer, Forest

    2009-12-01

    Metagenomic studies characterize both the composition and diversity of uncultured viral and microbial communities. BLAST-based comparisons have typically been used for such analyses; however, sampling biases, high percentages of unknown sequences, and the use of arbitrary thresholds to find significant similarities can decrease the accuracy and validity of estimates. Here, we present Genome relative Abundance and Average Size (GAAS), a complete software package that provides improved estimates of community composition and average genome length for metagenomes in both textual and graphical formats. GAAS implements a novel methodology to control for sampling bias via length normalization, to adjust for multiple BLAST similarities by similarity weighting, and to select significant similarities using relative alignment lengths. In benchmark tests, the GAAS method was robust to both high percentages of unknown sequences and to variations in metagenomic sequence read lengths. Re-analysis of the Sargasso Sea virome using GAAS indicated that standard methodologies for metagenomic analysis may dramatically underestimate the abundance and importance of organisms with small genomes in environmental systems. Using GAAS, we conducted a meta-analysis of microbial and viral average genome lengths in over 150 metagenomes from four biomes to determine whether genome lengths vary consistently between and within biomes, and between microbial and viral communities from the same environment. Significant differences between biomes and within aquatic sub-biomes (oceans, hypersaline systems, freshwater, and microbialites) suggested that average genome length is a fundamental property of environments driven by factors at the sub-biome level. The behavior of paired viral and microbial metagenomes from the same environment indicated that microbial and viral average genome sizes are independent of each other, but indicative of community responses to stressors and environmental conditions.
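
    The central idea of length normalization, which keeps organisms with small genomes from being under-counted, can be shown with a toy calculation. The sketch below is a hypothetical simplification with made-up genome lengths and hit counts, not the GAAS implementation.

```python
# Toy illustration of length-normalized relative abundance (simplified,
# not the actual GAAS algorithm): raw hit counts are divided by genome
# length before normalizing to fractions.
genomes = {
    # name: (genome length in bp, raw BLAST hit count) -- hypothetical values
    "large_microbe": (5_000_000, 500),
    "small_virus":   (50_000,    100),
}

raw_total = sum(hits for _, hits in genomes.values())
norm = {name: hits / length for name, (length, hits) in genomes.items()}
norm_total = sum(norm.values())

for name, (length, hits) in genomes.items():
    raw_frac = hits / raw_total
    len_frac = norm[name] / norm_total
    print(f"{name:13s} raw {raw_frac:5.1%}  length-normalized {len_frac:5.1%}")
```

    With these made-up numbers, the small-genome virus goes from a minority of raw hits to the majority of the length-normalized abundance, which is the under-counting effect the abstract describes.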

  9. Abiotic and biotic determinants of leaf carbon exchange capacity from tropical to high boreal biomes

    NASA Astrophysics Data System (ADS)

    Smith, N. G.; Dukes, J. S.

    2016-12-01

    Photosynthesis and respiration on land represent the two largest fluxes of carbon dioxide between the atmosphere and the Earth's surface. As such, the Earth System Models that are used to project climate change are highly sensitive to these processes. Studies have found that much of this uncertainty is due to the formulation and parameterization of plant photosynthetic and respiratory capacity. Here, we quantified the abiotic and biotic factors that determine photosynthetic and respiratory capacity at large spatial scales. Specifically, we measured the maximum rate of Rubisco carboxylation (Vcmax), the maximum rate of Ribulose-1,5-bisphosphate regeneration (Jmax), and leaf dark respiration (Rd) in >600 individuals of 98 plant species from the tropical to high boreal biomes of Northern and Central America. We also measured a suite of covariates including plant functional type, leaf nitrogen content, short- and long-term climate, leaf water potential, plant size, and leaf mass per area. We found that plant functional type and leaf nitrogen content were the primary determinants of Vcmax, Jmax, and Rd. Mean annual temperature and mean annual precipitation were not significant predictors of these rates. However, short-term climatic variables, specifically soil moisture and air temperature over the previous 25 days, were significant predictors and indicated that heat and soil moisture deficits combine to reduce photosynthetic capacity and increase respiratory capacity. Finally, these data were used as a model benchmarking tool for the Community Land Model version 4.5 (CLM 4.5). The benchmarking analyses identified errors in the leaf nitrogen allocation scheme of CLM 4.5. Under high leaf nitrogen levels within a plant type, the model overestimated Vcmax and Jmax. This result suggested that plants were altering their nitrogen allocation patterns when leaf nitrogen levels were high, an effect that was not being captured by the model. These data, taken with models in mind, provide paths forward for improving model structure and parameterization of leaf carbon exchange at large spatial scales.

  10. Family social support, community "social capital" and adolescents' mental health and educational outcomes: a longitudinal study in England.

    PubMed

    Rothon, Catherine; Goodwin, Laura; Stansfeld, Stephen

    2012-05-01

    To examine the associations between family social support, community "social capital" and mental health and educational outcomes. The data come from the Longitudinal Study of Young People in England, a multi-stage stratified nationally representative random sample. Family social support (parental relationships, evening meal with family, parental surveillance) and community social capital (parental involvement at school, sociability, involvement in activities outside the home) were measured at baseline (age 13-14), using a variety of instruments. Mental health was measured at age 14-15 (GHQ-12). Educational achievement was measured at age 15-16 by achievement at the General Certificate of Secondary Education. After adjustments, good paternal (OR = 0.70, 95% CI 0.56-0.86) and maternal (OR = 0.65, 95% CI 0.53-0.81) relationships, high parental surveillance (OR = 0.81, 95% CI 0.69-0.94) and frequency of evening meal with family (6 or 7 times a week: OR = 0.77, 95% CI 0.61-0.96) were associated with lower odds of poor mental health. A good paternal relationship (OR = 1.27, 95% CI 1.06-1.51), high parental surveillance (OR = 1.37, 95% CI 1.20-1.58), high frequency of evening meal with family (OR = 1.64, 95% CI 1.33-2.03) high involvement in extra-curricular activities (OR = 2.57, 95% CI 2.11-3.13) and parental involvement at school (OR = 1.60, 95% CI 1.37-1.87) were associated with higher odds of reaching the educational benchmark. Participating in non-directed activities was associated with lower odds of reaching the benchmark (OR = 0.79, 95% CI 0.70-0.89). Building social capital in deprived communities may be one way in which both mental health and educational outcomes could be improved. In particular, there is a need to focus on the family as a provider of support.

  11. The Use of Quality Benchmarking in Assessing Web Resources for the Dermatology Virtual Branch Library of the National electronic Library for Health (NeLH)

    PubMed Central

    Roudsari, AV; Gordon, C; Gray, JA Muir

    2001-01-01

    Background In 1998, the U.K. National Health Service Information for Health Strategy proposed the implementation of a National electronic Library for Health to provide clinicians, healthcare managers and planners, patients and the public with easy, round the clock access to high quality, up-to-date electronic information on health and healthcare. The Virtual Branch Libraries are among the most important components of the National electronic Library for Health. They aim at creating online knowledge based communities, each concerned with some specific clinical and other health-related topics. Objectives This study is about the envisaged Dermatology Virtual Branch Libraries of the National electronic Library for Health. It aims at selecting suitable dermatology Web resources for inclusion in the forthcoming Virtual Branch Libraries after establishing preliminary quality benchmarking rules for this task. Psoriasis, being a common dermatological condition, has been chosen as a starting point. Methods Because quality is a principal concern of the National electronic Library for Health, the study includes a review of the major quality benchmarking systems available today for assessing health-related Web sites. The methodology of developing a quality benchmarking system has also been reviewed. Aided by metasearch Web tools, candidate resources were hand-selected in light of the reviewed benchmarking systems and specific criteria set by the authors. Results Over 90 professional and patient-oriented Web resources on psoriasis and dermatology in general are suggested for inclusion in the forthcoming Dermatology Virtual Branch Libraries. The idea of an all-in knowledge-hallmarking instrument for the National electronic Library for Health is also proposed based on the reviewed quality benchmarking systems. Conclusions Skilled, methodical, organized human reviewing, selection and filtering based on well-defined quality appraisal criteria seems likely to be the key ingredient in the envisaged National electronic Library for Health service. Furthermore, by promoting the application of agreed quality guidelines and codes of ethics by all health information providers and not just within the National electronic Library for Health, the overall quality of the Web will improve with time and the Web will ultimately become a reliable and integral part of the care space. PMID:11720947

  12. Sea Level Variability in the Mediterranean

    NASA Astrophysics Data System (ADS)

    Zerbini, S.; Bruni, S.; del Conte, S.; Errico, M.; Petracca, F.; Prati, C.; Raicich, F.; Santi, E.

    2015-12-01

    Tide gauges measure local sea-level relative to a benchmark on land; therefore, the interpretation of these measurements can be limited by the lack of appropriate knowledge of vertical crustal motions. The oldest sea-level records date back to the 18th century; these observations are the only centuries-old data source enabling the estimate of historical sea-level trends/variations. In general, tide gauge benchmarks were not frequently levelled, except in those stations where natural and/or anthropogenic subsidence was a major concern. However, in most cases, it is difficult to retrieve the historical geodetic levelling data. Space geodetic techniques, such as GNSS, Doris and InSAR, are now providing measurements on a time- and space-continuous basis, giving rise to a large amount of different data sets. The vertical motions resulting from the various analyses need to be compared and best exploited for achieving reliable estimates of sea level variations. In the Mediterranean area, there are a few centennial tide gauge records; our study focuses, in particular, on the Italian time series of Genoa, Marina di Ravenna, Venice and Trieste. Two of these stations, Marina di Ravenna and Venice, are affected by both natural and anthropogenic subsidence; the latter was particularly intense during a few decades of the 20th century because of ground fluid withdrawal. We have retrieved levelling data of benchmarks at and/or close to the tide gauges from the end of the 1800s and, for the last couple of decades, also GPS and InSAR height time series in close proximity to the stations. By using an ensemble of these data, modelling of the long-period non-linear behavior of the subsidence was successfully accomplished. After removal of the land vertical motions, the linear long-period sea-level rates of all stations are in excellent agreement. Over the last two decades, the tide gauge rates were also compared with those obtained from satellite radar altimetry data.
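
    To first order, removing vertical land motion from a tide-gauge trend is an additive correction. The sketch below shows that arithmetic with illustrative rates; the numbers are placeholders, not results of the study.

```python
# First-order correction of a tide-gauge trend for vertical land motion
# (illustrative numbers only, not results from the study).
relative_sea_level_rate = 4.0   # mm/yr measured by the tide gauge (relative to the land)
vertical_land_motion = -2.8     # mm/yr from GPS/InSAR/levelling, up positive (negative = subsidence)

# relative rate = absolute rate - land motion, so the signed land motion is
# added back; a subsiding benchmark otherwise inflates the apparent rise.
absolute_sea_level_rate = relative_sea_level_rate + vertical_land_motion
print(f"absolute sea-level rate: {absolute_sea_level_rate:+.1f} mm/yr")
```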

  13. Hierarchical Artificial Bee Colony Algorithm for RFID Network Planning Optimization

    PubMed Central

    Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong

    2014-01-01

    This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization, called HABC, to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species are aggregated from the subpopulations at the lower level. At the bottom level, each subpopulation, employing the canonical ABC method, searches its part of the solution dimensions in parallel, and these partial solutions are combined into a complete solution for the upper level. At the same time, a comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. HABC is then used to solve the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP in terms of optimization accuracy and computational robustness. PMID:24592200
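
    The bottom level of HABC runs the canonical ABC search on each subpopulation. Below is a minimal, self-contained sketch of a canonical ABC loop applied to a standard benchmark function (the sphere function); it is a generic illustration of the employed/onlooker/scout phases, not the authors' HABC code.

```python
import random

def sphere(x):                       # standard benchmark function, minimum 0 at the origin
    return sum(v * v for v in x)

def abc_minimize(f, dim=10, n_food=20, limit=50, iters=500, lo=-5.0, hi=5.0, seed=0):
    """Minimal canonical Artificial Bee Colony (illustrative sketch only)."""
    rng = random.Random(seed)
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food

    def try_neighbour(i):
        # perturb one dimension towards/away from a random partner solution
        k = rng.randrange(n_food)
        while k == i:
            k = rng.randrange(n_food)
        j = rng.randrange(dim)
        cand = foods[i][:]
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        cand[j] = min(hi, max(lo, cand[j]))
        fc = f(cand)
        if fc < fits[i]:             # greedy selection
            foods[i], fits[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):      # employed bee phase
            try_neighbour(i)
        weights = [1.0 / (1.0 + fit) for fit in fits]   # onlookers favour fitter sources
        for _ in range(n_food):
            i = rng.choices(range(n_food), weights=weights)[0]
            try_neighbour(i)
        for i in range(n_food):      # scout phase: abandon exhausted sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i] = f(foods[i])
                trials[i] = 0

    best = min(range(n_food), key=lambda i: fits[i])
    return foods[best], fits[best]

best_x, best_f = abc_minimize(sphere)
print(f"best sphere value found: {best_f:.6f}")
```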

  14. Hierarchical artificial bee colony algorithm for RFID network planning optimization.

    PubMed

    Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong

    2014-01-01

    This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization, called HABC, to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species are aggregated from the subpopulations at the lower level. At the bottom level, each subpopulation, employing the canonical ABC method, searches its part of the solution dimensions in parallel, and these partial solutions are combined into a complete solution for the upper level. At the same time, a comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. HABC is then used to solve the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP in terms of optimization accuracy and computational robustness.

  15. Results of the first order leveling surveys in the Mexicali Valley and at the Cerro Prieto field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de la Pena L, A.

    1981-01-01

    The results obtained from the third leveling survey carried out by the Direccion General de Geografia del Territorio Nacional (previously DETENAL) during November and December 1979 are presented. Calculations of the changes in field elevation and plots showing comparisons of the 1977, 1978, and 1979 surveys are also presented. Results from a second order leveling survey performed to ascertain the extent of ground motion resulting from the 8 June 1980 earthquake are presented. This magnitude ML = 6.7 earthquake, with its epicenter located 15 km southeast of the village of Guadalupe Victoria, caused fissures on the surface, the formation of small sand volcanoes, and the ejection of ground water in the vicinity of the Cerro Prieto field. This leveling survey was carried out between benchmark BN-10067, at the intersection of the Solfatara canal and the Sonora-Baja California railroad, and benchmark BN-10055, located at the Delta station.

  16. Analytical Support to Defence Transformation (Le soutien analytique a la transformation de la Defense)

    DTIC Science & Technology

    2010-04-01

    analytical community. 5.1 Towards a Common Understanding of CD&E and CD&E Project Management Recent developments within NATO have contributed to the... project management purposes it is useful to distinguish four phases [P 21]: a) Preparation, Initiation and Structuring; b) Concept Development Planning...examined in more detail below. While the NATO CD&E policy provides a benchmark for a comprehensive, disciplined management of CD&E projects, it may

  17. Supporting Mentoring Relationships of Youth in Foster Care: Do Program Practices Predict Match Length?

    PubMed

    Stelter, Rebecca L; Kupersmidt, Janis B; Stump, Kathryn N

    2018-04-15

    Implementation of research- and safety-based program practices enhance the longevity of mentoring relationships, in general; however, little is known about how mentoring programs might support the relationships of mentees in foster care. Benchmark program practices and Standards in the Elements of Effective Practice for Mentoring, 3rd Edition (MENTOR, 2009) were assessed in the current study as predictors of match longevity. Secondary data analyses were conducted on a national agency information management database from 216 Big Brothers Big Sisters agencies serving 641 youth in foster care and 70,067 youth not in care from across the United States (Mean = 11.59 years old at the beginning of their matches) in one-to-one, community-based (55.06%) and school- or site-based (44.94%) matches. Mentees in foster care had shorter matches and matches that were more likely to close prematurely than mentees who were not in foster care. Agency leaders from 32 programs completed a web-based survey describing their policies and practices. The sum total numbers of Benchmark program practices and Standards were associated with match length for 208 mentees in foster care; however, neither predicted premature match closure. Results are discussed in terms of how mentoring programs and their staff can support the mentoring relationships of high-risk youth in foster care. © Society for Community Research and Action 2018.

  18. Bridging the Gap between Theory and Practice in Integrated Care: The Case of the Diabetic Foot Pathway in Tuscany

    PubMed Central

    Bini, Barbara; Ruggieri, Tommaso Grillo; Piaggesi, Alberto; Ricci, Lucia

    2016-01-01

    Introduction and Background: As diabetic foot (DF) care benefits from integration, monitoring geographic variation in lower-limb major amputation rates makes it possible to highlight potential gaps in Integrated Care. In Tuscany (Italy), these DF outcomes were good on average but varied within the region. In order to stimulate an improvement process towards integration, the project aimed to shift health professionals’ focus to the geographic variation issue, promote the Population Medicine approach, and engage professionals in a community of practice. Method: Three strategies were thus carried out: the use of a transparent performance evaluation system based on benchmarking; the use of patient stories and benchmarking analyses on outcomes, service utilization and costs that cross-checked delivery- and population-based perspectives; and the establishment of a stable community of professionals to discuss data and practices. Results: The project enabled professionals to shift their focus to geographic variation and towards joint accountability for outcomes and costs across the entire patient pathway. Organizational best practices and gaps in integration were identified, and improvement actions towards Integrated Care were implemented. Conclusion and Discussion: For the specific category of care pathways whose geographic variation is related to a lack of Integrated Care, a comprehensive strategy to improve outcomes and reduce equity gaps by diffusing integration should be carried out. PMID:29042842

  19. Modeling Memory Processes and Performance Benchmarks of AWACS Weapons Director Teams

    DTIC Science & Technology

    2006-01-31

    levels of processing generally lead to higher levels of performance than shallow levels of processing (Craik & Lockhart ...making. New York: John Wiley & Sons. Craik, F.I.M., & Lockhart, R.S. (1972). Levels of processing: A framework for memory research. Journal of Verbal...representation. The type of processing occurring at encoding has been demonstrated to result in differential levels of memory performance (Craik

  20. Reevaluation of health risk benchmark for sustainable water practice through risk analysis of rooftop-harvested rainwater.

    PubMed

    Lim, Keah-Ying; Jiang, Sunny C

    2013-12-15

    Health risk concerns associated with household use of rooftop-harvested rainwater (HRW) constitute one of the main impediments to exploiting the benefits of rainwater harvesting in the United States. However, the benchmark based on the U.S. EPA acceptable annual infection risk level of ≤1 case per 10,000 persons per year (≤10⁻⁴ pppy), developed to aid drinking water regulations, may be unnecessarily stringent for sustainable water practice. In this study, we challenge the current risk benchmark by quantifying the potential microbial risk associated with consumption of HRW-irrigated home produce and comparing it against the current risk benchmark. Microbial pathogen data for HRW and exposure rates reported in the literature are applied to assess the potential microbial risk posed to household consumers of their homegrown produce. A Quantitative Microbial Risk Assessment (QMRA) model based on a worst-case scenario (e.g. overhead irrigation, no pathogen inactivation) is applied to three crops that are most popular among home gardeners (lettuce, cucumbers, and tomatoes) and commonly consumed raw. The infection risk of household consumers attributed to consumption of this home produce varies with the type of produce: lettuce presents the highest risk, followed by tomato and cucumber, respectively. Results show that the 95th percentile values of infection risk per intake event of home produce are one to three orders of magnitude (10⁻⁷ to 10⁻⁵) lower than the U.S. EPA risk benchmark (≤10⁻⁴ pppy). However, annual infection risks under the same scenario (multiple intake events in a year) are very likely to exceed the risk benchmark by one order of magnitude in some cases. Estimated 95th percentile values of the annual risk are in the 10⁻⁴ to 10⁻³ pppy range, which is still lower than the 10⁻³ to 10⁻¹ pppy risk range of reclaimed-water-irrigated produce estimated in comparable studies. We further discuss the desirability of HRW for irrigating home produce based on the relative risk of HRW to reclaimed wastewater for irrigation of food crops. The appropriateness of the ≤10⁻⁴ pppy risk benchmark for assessing the safety level of HRW-irrigated fresh produce is questioned by considering the assumptions made for the QMRA model. Consequently, the need for an updated approach to assess the appropriateness of sustainable water practice for making guidelines and policies is proposed. Copyright © 2013 Elsevier Ltd. All rights reserved.
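
    The step from a per-event infection risk to an annual risk under multiple intake events follows from assuming independent exposures. The short sketch below shows that calculation; the per-event risk and number of events are illustrative placeholders, not values from the study.

```python
# Annual infection risk from repeated, independent exposure events
# (illustrative numbers, not values from the study).
p_event = 1e-6          # infection risk per intake event (hypothetical)
events_per_year = 100   # e.g. servings of HRW-irrigated produce per year (hypothetical)

p_annual = 1 - (1 - p_event) ** events_per_year
benchmark = 1e-4        # U.S. EPA acceptable annual infection risk (<= 1e-4 pppy)

print(f"annual risk: {p_annual:.2e}  (benchmark {benchmark:.0e}) "
      f"{'exceeds' if p_annual > benchmark else 'meets'} the benchmark")
```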

  1. Automatic Thread-Level Parallelization in the Chombo AMR Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christen, Matthias; Keen, Noel; Ligocki, Terry

    2011-05-26

    The increasing on-chip parallelism has some substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage the hardware's architectural features. In this paper, we present an approach that automatically introduces thread level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite difference type PDE solvers. In Chombo, core algorithms are specified in the ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language forms an already used target language for an automatic migration of the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to the auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.

  2. Technologies of polytechnic education in global benchmark higher education institutions

    NASA Astrophysics Data System (ADS)

    Kurushina, V. A.; Kurushina, E. V.; Zemenkova, M. Y.

    2018-05-01

    Russian polytechnic education is going through a sequence of transformations that started with the introduction of bachelor's and master's degrees in higher education in place of the previous “specialist” qualification. The next stage of reform in Russian polytechnic education should bring growth in the quality of the teaching and learning experience, which can be achieved by accumulating the best education practices of world-class universities using the benchmarking method. This paper gives an overview of some major distinctive features of a foreign benchmark higher education institution and a Russian university of polytechnic profile. The parameters that allowed the authors to select the foreign institution for comparison include the scope of the educational profile, industrial specialization, connections with leading regional corporations, the size of the city, and the number of students. When considering the possibilities of adopting relevant world-class higher education practices, the authors emphasize the importance of forming a new engineering mentality, the role of computer technologies in engineering education, the provision of licensed software for the educational process at a level exceeding that of a regional Russian university, and successful staffing practices (e.g., inviting “guest” lecturers or having 2-3 lecturers per course).

  3. The Safety Attitudes Questionnaire: psychometric properties, benchmarking data, and emerging research.

    PubMed

    Sexton, John B; Helmreich, Robert L; Neilands, Torsten B; Rowan, Kathy; Vella, Keryn; Boyden, James; Roberts, Peter R; Thomas, Eric J

    2006-04-03

    There is widespread interest in measuring healthcare provider attitudes about issues relevant to patient safety (often called safety climate or safety culture). Here we report the psychometric properties, establish benchmarking data, and discuss emerging areas of research with the University of Texas Safety Attitudes Questionnaire. Six cross-sectional surveys of health care providers (n = 10,843) in 203 clinical areas (including critical care units, operating rooms, inpatient settings, and ambulatory clinics) in three countries (USA, UK, New Zealand). Multilevel factor analyses yielded results at the clinical area level and the respondent nested within clinical area level. We report scale reliability, floor/ceiling effects, item factor loadings, inter-factor correlations, and percentage of respondents who agree with each item and scale. A six factor model of provider attitudes fit to the data at both the clinical area and respondent nested within clinical area levels. The factors were: Teamwork Climate, Safety Climate, Perceptions of Management, Job Satisfaction, Working Conditions, and Stress Recognition. Scale reliability was 0.9. Provider attitudes varied greatly both within and among organizations. Results are presented to allow benchmarking among organizations and emerging research is discussed. The Safety Attitudes Questionnaire demonstrated good psychometric properties. Healthcare organizations can use the survey to measure caregiver attitudes about six patient safety-related domains, to compare themselves with other organizations, to prompt interventions to improve safety attitudes and to measure the effectiveness of these interventions.

  4. ANAlyte: A modular image analysis tool for ANA testing with indirect immunofluorescence.

    PubMed

    Di Cataldo, Santa; Tonti, Simone; Bottino, Andrea; Ficarra, Elisa

    2016-05-01

    The automated analysis of indirect immunofluorescence images for Anti-Nuclear Autoantibody (ANA) testing is a fairly recent field that is receiving ever-growing interest from the research community. ANA testing leverages the categorization of the intensity level and fluorescent pattern of IIF images of HEp-2 cells to perform a differential diagnosis of important autoimmune diseases. Nevertheless, it suffers from a tremendous lack of repeatability due to subjectivity in the visual interpretation of the images. Automation of the analysis is seen as the only valid solution to this problem. Several works in the literature address individual steps of the workflow; nonetheless, integrating such steps and assessing their effectiveness as a whole is still an open challenge. We present a modular tool, ANAlyte, able to characterize an IIF image in terms of fluorescent intensity level and fluorescent pattern without any user interaction. For this purpose, ANAlyte integrates the following: (i) an Intensity Classifier module, which categorizes the intensity level of the input slide based on multi-scale contrast assessment; (ii) a Cell Segmenter module, which splits the input slide into individual HEp-2 cells; (iii) a Pattern Classifier module, which determines the fluorescent pattern of the slide based on the patterns of the individual cells. To demonstrate the accuracy and robustness of our tool, we experimentally validated ANAlyte on two different public benchmarks of IIF HEp-2 images with a rigorous leave-one-out cross-validation strategy. We obtained overall accuracies for fluorescent intensity and pattern classification of around 85% and above 90%, respectively. We assessed all results by comparison with some of the most representative state-of-the-art works. Unlike most other works in the recent literature, ANAlyte aims at the automation of all the major steps of ANA image analysis. Results on public benchmarks demonstrate that the tool can characterize HEp-2 slides in terms of intensity and fluorescent pattern with accuracy better than or comparable to state-of-the-art techniques, even when such techniques are run on manually segmented cells. Hence, ANAlyte can be proposed as a valid solution to the problem of ANA testing automation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
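
    Leave-one-out cross-validation, used here to validate the classifiers, can be summarized in a few lines. The sketch below applies it to a trivial nearest-centroid classifier on synthetic data; it is a generic illustration of the validation protocol, not the ANAlyte code.

```python
import numpy as np

def nearest_centroid_predict(X_train, y_train, x):
    """Predict the class whose training centroid is closest to x."""
    labels = np.unique(y_train)
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in labels])
    return labels[np.argmin(np.linalg.norm(centroids - x, axis=1))]

def leave_one_out_accuracy(X, y):
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i           # hold out sample i
        pred = nearest_centroid_predict(X[mask], y[mask], X[i])
        correct += (pred == y[i])
    return correct / len(X)

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])  # two synthetic classes
y = np.array([0] * 30 + [1] * 30)
print(f"leave-one-out accuracy: {leave_one_out_accuracy(X, y):.2%}")
```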

  5. Anthropogenic Organic Compounds in Ground Water and Finished Water of Community Water Systems in the Northern Tampa Bay Area, Florida, 2002-04

    USGS Publications Warehouse

    Metz, Patricia A.; Delzer, Gregory C.; Berndt, Marian P.; Crandall, Christy A.; Toccalino, Patricia L.

    2007-01-01

    As part of the U.S. Geological Survey's (USGS's) National Water-Quality Assessment (NAWQA) Program, a Source Water-Quality Assessment (SWQA) was conducted in the unconfined and semiconfined portions of the Upper Floridan aquifer system during 2002-04. SWQAs are two-phased sampling activities, wherein phase 1 was designed to evaluate the occurrence of 258 anthropogenic organic compounds (AOCs) in ground water used as source water for 30 of the largest-producing community water system (CWS) wells in the northern Tampa Bay area, Florida. The 258 AOCs included volatile organic compounds (VOCs), pesticides, and other anthropogenic organic compounds (OAOCs). Phase 2 was designed to monitor concentrations in the source water and also the finished water of CWSs for compounds most frequently detected during phase 1. During phase 1 of the SWQA study, 31 of the 258 AOCs were detected in source-water samples collected from CWS wells at low concentrations (less than 1.0 microgram per liter (ug/L)). Twelve AOCs were detected in at least 10 percent of samples. Concentrations from 16 of the 31 detected AOCs were about 2 to 5 orders of magnitude below human-health benchmarks indicating that concentrations were unlikely to be of potential human-health concern. The potential human-health relevance for the remaining 15 detected unregulated AOCs could not be evaluated because no human-health benchmarks were available for these compounds. Hydrogeology, population, and land use were examined to evaluate the effects of these variables on the source water monitored. Approximately three times as many detections of VOCs (27) and pesticides (34) occurred in unconfined areas than in the semiconfined areas (8 VOCs, 14 pesticides). In contrast, 1 OAOC was detected in unconfined areas, and 13 OAOCs were detected in semiconfined areas with 9 of the OAOC detections occurring in samples from two wells located near septic systems. Analyses of population and land use indicated that the number of compounds detected increased as the population surrounding each well increased. Detection frequencies and concentrations for VOCs (particularly chloroform) and pesticides were highest in residential land-use areas. The results of source-water samples from the 30 CWS wells monitored during phase 1 of this SWQA study were compared to four locally conducted studies. These general comparisons indicate that the occurrence of VOCs in other studies is similar to their occurrence in source water of CWSs monitored as part of this SWQA. However, pesticide compounds, especially atrazine and its breakdown products, occurred more frequently in the SWQA study than in the other four studies. Phase 2 of the SWQA assessed AOCs in samples from 11 of the 30 CWS wells and the associated finished water. Overall, 42 AOCs were detected in either source water or finished water and more compounds were detected in finished water than in source water. Specifically, 22 individual AOCs were detected in source water and 27 AOCs were detected in finished water. The total number of detections was greater in the finished water (80) than in the source water (49); however, this was largely due to the creation of disinfection by-products (DBPs) during water treatment. Excluding DBPs, about the same number of total detections was observed in source water (40) and finished water (44). During phase 2, AOC detected concentrations ranged from E0.003 (estimated) to 1,140 ug/L in the source water and from E0.003 to 36.3 ug/L in the finished water. 
Concentrations of 24 of the 42 compounds were compared to human-health benchmarks and were about 1 to 5 orders of magnitude below their human-health benchmarks indicating that concentrations are unlikely to be of potential human-health concern, excluding DBPs. Concentrations of carbon tetrachloride, however, were within 10 percent of its human-health benchmark, which is considered a level that may warrant inclusion of the compound in a low-concentration, t

  6. Benchmarking reference services: step by step.

    PubMed

    Buchanan, H S; Marshall, J G

    1996-01-01

    This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.

  7. A review of current practices to increase Chlamydia screening in the community--a consumer-centred social marketing perspective.

    PubMed

    Phillipson, Lyn; Gordon, Ross; Telenta, Joanne; Magee, Chris; Janssen, Marty

    2016-02-01

    Chlamydia trachomatis is one of the most frequently reported sexually transmitted infections (STI) in Australia, the UK and Europe. Yet, rates of screening for STIs remain low, especially in younger adults. To assess effectiveness of Chlamydia screening interventions targeting young adults in community-based settings, describe strategies utilized and assess them according to social marketing benchmark criteria. A systematic review of relevant literature between 2002 and 2012 in Medline, Web of Knowledge, PubMed, Scopus and the Cumulative Index to Nursing and Allied Health was undertaken. Of 18 interventions identified, quality of evidence was low. Proportional screening rates varied, ranging from: 30.9 to 62.5% in educational settings (n = 4), 4.8 to 63% in media settings (n = 6) and from 5.7 to 44.5% in other settings (n = 7). Assessment against benchmark criteria found that interventions incorporating social marketing principles were more likely to achieve positive results, yet few did this comprehensively. Most demonstrated customer orientation and addressed barriers to presenting to a clinic for screening. Only one addressed barriers to presenting for treatment after a positive result. Promotional messages typically focused on providing facts and accessing a testing kit. Risk assessment tools appeared to promote screening among higher risk groups. Few evaluated treatment rates following positive results; therefore, impact of screening on treatment rates remains unknown. Future interventions should consider utilizing a comprehensive social marketing approach, using formative research to increase insight and segmentation and tailoring of screening interventions. Easy community access to both screening and treatment should be prioritized. © 2015 John Wiley & Sons Ltd.

  8. A Multicenter Collaborative to Improve Care of Community Acquired Pneumonia in Hospitalized Children.

    PubMed

    Parikh, Kavita; Biondi, Eric; Nazif, Joanne; Wasif, Faiza; Williams, Derek J; Nichols, Elizabeth; Ralston, Shawn

    2017-03-01

    The Value in Inpatient Pediatrics Network sponsored the Improving Care in Community Acquired Pneumonia collaborative with the goal of increasing evidence-based management of children hospitalized with community acquired pneumonia (CAP). Project aims included increasing use of narrow-spectrum antibiotics, decreasing use of macrolides, and decreasing concurrent treatment of pneumonia and asthma. Data were collected through chart review across emergency department (ED), inpatient, and discharge settings. Sites reviewed up to 20 charts in each of six 3-month cycles. Analysis of means with 3-σ control limits was the primary method of assessment for change. The expert panel developed project measures, goals, and interventions. A change package of evidence-based tools to promote judicious use of antibiotics and raise awareness of asthma and pneumonia codiagnosis was disseminated through webinars. Peer coaching and periodic benchmarking were used to motivate change. Fifty-three hospitals enrolled and 48 (91%) completed the 1-year project (July 2014-June 2015). A total of 3802 charts were reviewed for the project: 1842 during baseline cycles and 1960 during postintervention cycles. Between the baseline and postintervention periods, median use of narrow-spectrum antibiotics increased by 67% in the ED, 43% in the inpatient setting, and 25% at discharge. Median use of macrolides decreased by 22% in the ED and 27% in the inpatient setting. A decrease in asthma and CAP codiagnosis was noted, but the change was not sustained. Low-cost strategies, including collaborative sharing, peer benchmarking, and coaching, increased judicious use of antibiotics in a diverse range of hospitals for pediatric CAP. Copyright © 2017 by the American Academy of Pediatrics.
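    As an illustration of the control-chart approach mentioned above, the sketch below computes a centre line and 3-σ limits for a per-cycle proportion, in the spirit of an analysis of means for binomial data. The function name and the counts are hypothetical and are not taken from the collaborative's data.

```python
import numpy as np

def proportion_control_limits(successes, totals):
    """Centre line and 3-sigma control limits for per-cycle proportions,
    in the spirit of an analysis of means for binomial data."""
    successes = np.asarray(successes, dtype=float)
    totals = np.asarray(totals, dtype=float)
    p_bar = successes.sum() / totals.sum()            # overall proportion (centre line)
    sigma = np.sqrt(p_bar * (1.0 - p_bar) / totals)   # per-cycle standard error
    return p_bar, p_bar - 3.0 * sigma, p_bar + 3.0 * sigma

# Hypothetical counts of narrow-spectrum antibiotic use per review cycle
used = np.array([40, 45, 60, 72, 80, 85])
charts = np.array([100, 100, 100, 100, 100, 100])
centre, lcl, ucl = proportion_control_limits(used, charts)
out_of_control = (used / charts < lcl) | (used / charts > ucl)
print(centre, out_of_control)
```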

  9. Radiation Detection Computational Benchmark Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL's ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for compilation. This is a report describing the details of the selected benchmarks and results from various transport codes.

  10. Optimal type 2 diabetes mellitus management: the randomised controlled OPTIMISE benchmarking study: baseline results from six European countries.

    PubMed

    Hermans, Michel P; Brotons, Carlos; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank

    2013-12-01

    Micro- and macrovascular complications of type 2 diabetes have an adverse impact on survival, quality of life and healthcare costs. The OPTIMISE (OPtimal Type 2 dIabetes Management Including benchmarking and Standard trEatment) trial compares physicians' individual performance with that of a peer group and evaluates the hypothesis that benchmarking, based on assessments of change in three critical quality indicators of vascular risk (glycated haemoglobin (HbA1c), low-density lipoprotein-cholesterol (LDL-C) and systolic blood pressure (SBP)), may improve quality of care in type 2 diabetes in the primary care setting. This was a randomised, controlled study of 3980 patients with type 2 diabetes. Six European countries participated in the OPTIMISE study (NCT00681850). Quality of care was assessed by the percentage of patients achieving pre-set targets for the three critical quality indicators over 12 months. Physicians were randomly assigned to receive either benchmarked or non-benchmarked feedback. All physicians received feedback on six of their patients' modifiable outcome indicators (HbA1c, fasting glycaemia, total cholesterol, high-density lipoprotein-cholesterol (HDL-C), LDL-C and triglycerides). Physicians in the benchmarking group additionally received information on the levels of control achieved for the three critical quality indicators compared with colleagues. At baseline, the percentage of evaluable patients (N = 3980) achieving pre-set targets was 51.2% (HbA1c; n = 2028/3964), 34.9% (LDL-C; n = 1350/3865), and 27.3% (systolic blood pressure; n = 911/3337). OPTIMISE confirms that target achievement in the primary care setting is suboptimal for all three critical quality indicators. This represents an unmet but modifiable need to revisit the mechanisms and management of improving care in type 2 diabetes. OPTIMISE will help to assess whether benchmarking is a useful clinical tool for improving outcomes in type 2 diabetes.

  11. Turbulent dissipation challenge: a community-driven effort

    NASA Astrophysics Data System (ADS)

    Parashar, Tulasi N.; Salem, Chadi; Wicks, Robert T.; Karimabadi, H.; Gary, S. Peter; Matthaeus, William H.

    2015-10-01

    Many naturally occurring and man-made plasmas are collisionless and turbulent. It is not yet well understood how the energy in fields and fluid motions is transferred into the thermal degrees of freedom of constituent particles in such systems. The debate at present primarily concerns proton heating. Multiple possible heating mechanisms have been proposed over the past few decades, including cyclotron damping, Landau damping, heating at intermittent structures and stochastic heating. Recently, a community-driven effort was proposed (Parashar & Salem, 2013, arXiv:1303.0204) to bring the community together and understand the relative contributions of these processes under given conditions. In this paper, we propose the first step of this challenge: a set of problems and diagnostics for benchmarking and comparing different types of 2.5D simulations. These comparisons will provide insights into the strengths and limitations of different types of numerical simulations and will help guide subsequent stages of the challenge.

  12. Parameterized centrality metric for network analysis

    NASA Astrophysics Data System (ADS)

    Ghosh, Rumi; Lerman, Kristina

    2011-06-01

    A variety of metrics have been proposed to measure the relative importance of nodes in a network. One of these, alpha-centrality [P. Bonacich, Am. J. Sociol. 92, 1170 (1987)], measures the number of attenuated paths that exist between nodes. We introduce a normalized version of this metric and use it to study network structure, for example, to rank nodes and find community structure of the network. Specifically, we extend the modularity-maximization method for community detection to use this metric as the measure of node connectivity. Normalized alpha-centrality is a powerful tool for network analysis, since it contains a tunable parameter that sets the length scale of interactions. Studying how rankings and discovered communities change when this parameter is varied allows us to identify locally and globally important nodes and structures. We apply the proposed metric to several benchmark networks and show that it leads to better insights into network structure than alternative metrics.
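    To make the quantity concrete, here is a minimal numpy sketch of alpha-centrality and one simple normalization (dividing by the sum of scores); the exact normalization used in the paper may differ, and the toy network is illustrative only.

```python
import numpy as np

def alpha_centrality(A, alpha, e=None):
    """Alpha-centrality: x = (I - alpha * A^T)^(-1) e (Bonacich 1987).
    alpha must be below 1/lambda_max of A for the underlying series to converge."""
    n = A.shape[0]
    if e is None:
        e = np.ones(n)
    return np.linalg.solve(np.eye(n) - alpha * A.T, e)

def normalized_alpha_centrality(A, alpha):
    """Scores rescaled to sum to one (a simple normalization; the paper's may differ)."""
    x = alpha_centrality(A, alpha)
    return x / x.sum()

# Toy 4-node directed network (illustrative only)
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
lam = max(abs(np.linalg.eigvals(A)))           # spectral radius
print(normalized_alpha_centrality(A, alpha=0.5 / lam))
```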

  13. Barriers and Facilitators to Retaining and Reengaging HIV Clients in Care: A Case Study of North Carolina.

    PubMed

    Berger, Miriam B; Sullivan, Kristen A; Parnell, Heather E; Keller, Jennifer; Pollard, Alice; Cox, Mary E; Clymore, Jacquelyn M; Quinlivan, Evelyn Byrd

    2016-11-01

    Retention in HIV care is critical to decrease disease-related mortality and morbidity and achieve national benchmarks. However, a myriad of barriers and facilitators impact retention in care; these can be understood within the social-ecological model. To elucidate the unique factors that impact consistent HIV care engagement, a qualitative case study was conducted in North Carolina to examine the barriers and facilitators to retain and reengage HIV clients in care. HIV professionals (n = 21) from a variety of health care settings across the state participated in interviews that were transcribed and analyzed for emergent themes. Respondents described barriers to care at all levels within the HIV prevention and care system including intrapersonal, interpersonal, institutional, community, and public policy. Participants also described recent statewide initiatives with the potential to improve care engagement. Results from this study may assist other states with similar challenges to identify needed programs and priorities to optimize client retention in HIV care. © The Author(s) 2015.

  14. The Italian corporate system in a network perspective (1952-1983)

    NASA Astrophysics Data System (ADS)

    Bargigli, L.; Giannetti, R.

    2018-03-01

    We study the Italian network of boards in four benchmark years covering different decades, when important economic structural shifts occurred. We find that the latter did not significantly disturb its structure as a small world. At the same time, we do not find a strong peculiarity of the Italian variety of capitalism and its corporate governance system. Typical properties of small world networks are at levels which are not dissimilar from those of other developed economies. Even the steady decrease of density that we observe is recurrent in many other national systems. The composition of the core of the most connected boards remains also quite stable over time. Among the most central boards we always find those of banks and insurances, as well as those of State Owned Enterprises (SOEs). At the same time, the system underwent two significant dynamic adjustments in the Sixties (nationalization of electrical industry) and Seventies (financial restructuring after the "big inflation") which are revealed by modifications in the core and in the community structure.

  15. Assessing fidelity to evidence-based practices in usual care: the example of family therapy for adolescent behavior problems.

    PubMed

    Hogue, Aaron; Dauber, Sarah

    2013-04-01

    This study describes a multimethod evaluation of treatment fidelity to the family therapy (FT) approach demonstrated by front-line therapists in a community behavioral health clinic that utilized FT as its routine standard of care. Study cases (N=50) were adolescents with conduct and/or substance use problems randomly assigned to routine family therapy (RFT) or to a treatment-as-usual clinic not aligned with the FT approach (TAU). Observational analyses showed that RFT therapists consistently achieved a level of adherence to core FT techniques comparable to the adherence benchmark established during an efficacy trial of a research-based FT. Analyses of therapist-report measures found that compared to TAU, RFT demonstrated strong adherence to FT and differentiation from three other evidence-based practices: cognitive-behavioral therapy, motivational interviewing, and drug counseling. Implications for rigorous fidelity assessments of evidence-based practices in usual care settings are discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Diet Diversity in Pastoral and Agro-pastoral Households in Ugandan Rangeland Ecosystems.

    PubMed

    Mayanja, Maureen; Rubaire-Akiiki, Chris; Morton, John; Young, Stephen; Greiner, Ted

    2015-01-01

    We explore how diet diversity differs with agricultural seasons and between households within pastoral and agro-pastoral livelihood systems, using the variety of foods consumed as a less complex proxy indicator of food insecurity than benchmark indicators such as anthropometry and serum nutrients. The study was conducted in the central part of the rangelands in Uganda. Seventy-nine households were monitored for three seasons, and eight food groups consumed during a 24-hour diet recall period were used to create a household diet diversity score (HDDS). Mean HDDS was 3.2 and varied significantly with gender, age, livelihood system and season (p<.001, F=15.04), but not with household size or household head's education level. Agro-pastoralists exhibited lower mean diet diversity than pastoralists (p<.01, F=7.84), and among agro-pastoralists, households headed by persons over 65 years were most vulnerable (mean HDDS 2.1). This exploratory study raises issues requiring further investigation to inform policies on nutrition security in the two communities.
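    The HDDS itself is a simple count of distinct food groups consumed during the recall period. The sketch below shows the idea; the eight group names are placeholders and may not match the study's groupings.

```python
# Illustrative list of 8 food groups (placeholders; the study's groups may differ)
FOOD_GROUPS = ["cereals", "roots_tubers", "legumes", "dairy",
               "meat_fish", "eggs", "fruits_vegetables", "fats_oils"]

def hdds(foods_consumed_24h):
    """Household diet diversity score: number of distinct food groups
    consumed by the household during the 24-hour recall period."""
    consumed = set(foods_consumed_24h)
    return sum(1 for group in FOOD_GROUPS if group in consumed)

print(hdds(["cereals", "dairy", "cereals", "fats_oils"]))  # -> 3
```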

  17. A Review of Mental Health and Mental Health Care Disparities Research: 2011-2014.

    PubMed

    Cook, Benjamin Lê; Hou, Sherry Shu-Yeu; Lee-Tauler, Su Yeon; Progovac, Ana Maria; Samson, Frank; Sanchez, Maria Jose

    2018-06-01

    Racial/ethnic minorities in the United States are more likely than Whites to have severe and persistent mental disorders and less likely to access mental health care. This comprehensive review evaluates studies of mental health and mental health care disparities funded by the National Institute of Mental Health (NIMH) to provide a benchmark for the 2015 NIMH revised strategic plan. A total of 615 articles were categorized into five pathways underlying mental health care and three pathways underlying mental health disparities. Identified studies demonstrate that socioeconomic mechanisms and demographic moderators of disparities in mental health status and treatment are well described, as are treatment options that support diverse patient needs. In contrast, there is a need for studies that focus on community- and policy-level predictors of mental health care disparities, link discrimination- and trauma-induced neurobiological pathways to disparities in mental illness, assess the cost effectiveness of disparities reduction programs, and scale up culturally adapted interventions.

  18. Evaluating the effectiveness of student assistance programs in Pennsylvania.

    PubMed

    Fertman, C I; Fichter, C; Schlesinger, J; Tarasevich, S; Wald, H; Zhang, X

    2001-01-01

    This article presents data from an evaluation of the Pennsylvania Student Assistance Program (SAP). Focusing on both program process and effectiveness, the evaluation was conducted to determine the overall efficacy of SAPs in Pennsylvania and, more specifically, how SAP is currently being implemented. Five data collection strategies were employed: statewide surveys of SAP team members and county administrators, focus groups, site visits, and the Pennsylvania Department of Education SAP Database. A total of 1204 individual team members from 154 school buildings completed the team member survey. Fifty-three county administrators completed the county administrator survey. Focus groups comprised SAP coordinators, school board personnel, and community agency staff. Site visits were conducted at five schools. The findings of the evaluation indicate that SAP in Pennsylvania is being implemented as designed. Recommended is the development of benchmarks and indicators that focus on the best SAP practices and the extent to which various indicators of SAP effectiveness are occurring at appropriate levels.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, J; Dossa, D; Gokhale, M

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared software-only performance with GPU-accelerated performance. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with an NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order of magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.

  20. Benchmarking Memory Performance with the Data Cube Operator

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Shabanov, Leonid V.

    2004-01-01

    Data movement across a computer memory hierarchy and across computational grids is known to be a limiting factor for applications processing large data sets. We use the Data Cube Operator on an Arithmetic Data Set, called ADC, to benchmark the capabilities of computers and of computational grids to handle large distributed data sets. We present a prototype implementation of a parallel algorithm for computation of the operator. The algorithm follows a known approach for computing views from the smallest parent. The ADC stresses all levels of grid memory and storage by producing some of the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of integers. We control the data intensity of the ADC by selecting the tuple parameters, the sizes of the views, and the number of realized views. Benchmarking results of memory performance of a number of computer architectures and of a small computational grid are presented.
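    The data cube over d attributes comprises 2^d group-by views. The sketch below computes all views of a tiny synthetic tuple set with a count measure; unlike the benchmark's parallel algorithm, it recomputes every view directly from the base data rather than from its smallest parent, and the data are illustrative, not the ADC itself.

```python
from itertools import combinations
from collections import defaultdict

def data_cube(tuples, d):
    """All 2^d group-by views of d-attribute tuples, with a count measure.
    Each view is recomputed from the base data (no smallest-parent optimization)."""
    views = {}
    for k in range(d + 1):
        for dims in combinations(range(d), k):
            view = defaultdict(int)
            for t in tuples:
                view[tuple(t[i] for i in dims)] += 1
            views[dims] = dict(view)
    return views

# Tiny synthetic set of 3-attribute tuples (not the ADC)
data = [(1, 2, 1), (1, 2, 2), (2, 1, 1), (1, 1, 1)]
cube = data_cube(data, d=3)
print(len(cube))      # 8 views = 2^3
print(cube[(0, 1)])   # counts grouped by the first two attributes
```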

  1. BIOREL: the benchmark resource to estimate the relevance of the gene networks.

    PubMed

    Antonov, Alexey V; Mewes, Hans W

    2006-02-06

    The progress of high-throughput methodologies in functional genomics has led to the development of statistical procedures to infer gene networks from various types of high-throughput data. However, due to the lack of common standards, the biological significance of the results of the different studies is hard to compare. To overcome this problem, we propose a benchmark procedure and have developed a web resource (BIOREL), which is useful for estimating the biological relevance of any genetic network by integrating different sources of biological information. The associations of each gene from the network are classified as biologically relevant or not. The proportion of genes in the network classified as "relevant" is used as the overall network relevance score. Employing synthetic data, we demonstrated that such a score ranks networks fairly with respect to their relevance level. Using BIOREL as the benchmark resource, we compared the quality of experimental and theoretically predicted protein interaction data.

  2. Anharmonic Vibrational Spectroscopy on Metal Transition Complexes

    NASA Astrophysics Data System (ADS)

    Latouche, Camille; Bloino, Julien; Barone, Vincenzo

    2014-06-01

    Advances in hardware performance and the availability of efficient and reliable computational models have made possible the application of computational spectroscopy to ever larger molecular systems. The systematic interpretation of experimental data and the full characterization of complex molecules can then be facilitated. Focusing on vibrational spectroscopy, several approaches have been proposed to simulate spectra beyond the double harmonic approximation, so that more details become available. However, routine use of such tools requires the preliminary definition of a valid protocol with the most appropriate combination of electronic structure and nuclear calculation models. Several benchmarks of anharmonic frequency calculations have been carried out on organic molecules. Nevertheless, benchmarks of organometallic or inorganic metal complexes at this level are sorely lacking, despite the interest in these systems due to their strong emission and vibrational properties. Herein we report a benchmark study of anharmonic calculations on simple metal complexes, along with some pilot applications on systems of direct technological or biological interest.

  3. Social significance of community structure: Statistical view

    NASA Astrophysics Data System (ADS)

    Li, Hui-Jia; Daniels, Jasmine J.

    2015-01-01

    Community structure analysis is a powerful tool for social networks that can simplify their topological and functional analysis considerably. However, since community detection methods involve random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a framework to analyze the significance of a social community. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of the nodes and their corresponding leaders. Then, using a log-likelihood score, the tightness of the community can be derived. Based on the distribution of community tightness, we establish a connection between p-value theory and network analysis, and then we obtain a significance measure in statistical form. Finally, the framework is applied to both benchmark networks and real social networks. Experimental results show that our work can be used in many fields, such as determining the optimal number of communities, analyzing the social significance of a given community, and comparing the performance among various algorithms.
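    As a rough sketch of the membership step described above (assigning each node to the leader with which it shares the most neighbors), the following uses networkx; leader identification and the log-likelihood tightness score are not reproduced, and the leaders chosen in the example are arbitrary.

```python
import networkx as nx

def assign_to_leaders(G, leaders):
    """Assign each node to the leader sharing the most common neighbors.
    Sketch of the membership step only, not the paper's full framework."""
    membership = {}
    for v in G.nodes():
        if v in leaders:
            membership[v] = v
            continue
        sims = {u: len(set(G.neighbors(v)) & set(G.neighbors(u))) for u in leaders}
        membership[v] = max(sims, key=sims.get)
    return membership

G = nx.karate_club_graph()                     # a standard benchmark network
print(assign_to_leaders(G, leaders=[0, 33]))   # nodes 0 and 33 taken as leaders for illustration
```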

  4. A clustering algorithm for determining community structure in complex networks

    NASA Astrophysics Data System (ADS)

    Jin, Hong; Yu, Wei; Li, ShiJun

    2018-02-01

    Clustering algorithms are attractive for the task of community detection in complex networks. DENCLUE is a representative density-based clustering algorithm which has a firm mathematical basis and good clustering properties, allowing for arbitrarily shaped clusters in high-dimensional datasets. However, this method cannot be directly applied to community discovery due to its inability to deal with network data. Moreover, it requires a careful selection of the density parameter and the noise threshold. To solve these issues, a new community detection method is proposed in this paper. First, we use a spectral analysis technique to map the network data into a low-dimensional Euclidean space which preserves node structural characteristics. Then, DENCLUE is applied to detect the communities in the network. A mathematical method named the Sheather-Jones plug-in is chosen to select the density parameter, which can describe the intrinsic clustering structure accurately. Moreover, every node in the network is meaningful, so there are no noise nodes and the noise threshold can be ignored. We test our algorithm on both benchmark and real-life networks, and the results demonstrate the effectiveness of our algorithm over other popular density-based clustering algorithms adapted to community detection.
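    A minimal sketch of the two-stage idea (spectral embedding followed by density-based clustering) is shown below. DBSCAN stands in for DENCLUE, and a fixed eps replaces the Sheather-Jones bandwidth selection, so this is an approximation of the workflow rather than the paper's algorithm.

```python
import networkx as nx
import numpy as np
from sklearn.cluster import DBSCAN

def spectral_density_communities(G, dim=2, eps=0.1, min_samples=2):
    """Embed nodes with low-order eigenvectors of the normalized Laplacian,
    then cluster the coordinates with a density-based method (DBSCAN here)."""
    nodes = list(G.nodes())
    L = nx.normalized_laplacian_matrix(G, nodelist=nodes).toarray()
    _, eigvecs = np.linalg.eigh(L)
    coords = eigvecs[:, 1:dim + 1]      # skip the trivial first eigenvector
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(coords)
    return dict(zip(nodes, labels))

print(spectral_density_communities(nx.karate_club_graph()))
```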

  5. ViSAPy: a Python tool for biophysics-based generation of virtual spiking activity for evaluation of spike-sorting algorithms.

    PubMed

    Hagen, Espen; Ness, Torbjørn V; Khosrowshahi, Amir; Sørensen, Christina; Fyhn, Marianne; Hafting, Torkel; Franke, Felix; Einevoll, Gaute T

    2015-04-30

    New, silicon-based multielectrodes comprising hundreds or more electrode contacts offer the possibility to record spike trains from thousands of neurons simultaneously. This potential cannot be realized unless accurate, reliable automated methods for spike sorting are developed, in turn requiring benchmarking data sets with known ground-truth spike times. We here present a general simulation tool for computing benchmarking data for evaluation of spike-sorting algorithms entitled ViSAPy (Virtual Spiking Activity in Python). The tool is based on a well-established biophysical forward-modeling scheme and is implemented as a Python package built on top of the neuronal simulator NEURON and the Python tool LFPy. ViSAPy allows for arbitrary combinations of multicompartmental neuron models and geometries of recording multielectrodes. Three example benchmarking data sets are generated, i.e., tetrode and polytrode data mimicking in vivo cortical recordings and microelectrode array (MEA) recordings of in vitro activity in salamander retinas. The synthesized example benchmarking data mimics salient features of typical experimental recordings, for example, spike waveforms depending on interspike interval. ViSAPy goes beyond existing methods as it includes biologically realistic model noise, synaptic activation by recurrent spiking networks, finite-sized electrode contacts, and allows for inhomogeneous electrical conductivities. ViSAPy is optimized to allow for generation of long time series of benchmarking data, spanning minutes of biological time, by parallel execution on multi-core computers. ViSAPy is an open-ended tool as it can be generalized to produce benchmarking data for arbitrary recording-electrode geometries and with various levels of complexity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  6. Benchmarking working conditions for health and safety in the frontline healthcare industry: Perspectives from Australia and Malaysia.

    PubMed

    McLinton, Sarven S; Loh, May Young; Dollard, Maureen F; Tuckey, Michelle M R; Idris, Mohd Awang; Morton, Sharon

    2018-04-06

    The aim was to present benchmarks for working conditions in healthcare industries as an initial effort towards international surveillance. The healthcare industry is fundamental to sustaining the health of Australians, yet it is under immense pressure. Budgets are limited, demands are increasing, as are workplace injuries, and all of these factors compromise patient care. Urgent attention is needed to reduce strains on workers and costs in health care; however, little work has been done to benchmark psychosocial factors in healthcare working conditions in the Asia-Pacific. Intercultural comparisons are important to provide an evidence base for public policy. A cross-sectional design was used (as in other studies of prevalence), including a mixed-methods approach with qualitative interviews to better contextualize the results. Data on psychosocial factors and other work variables were collected from healthcare workers in three hospitals in Australia (N = 1,258) and Malaysia (N = 1,125). Benchmarks for 2015 were calculated for each variable, and comparisons were conducted via independent samples t-tests. Healthcare samples were also compared with benchmarks for non-healthcare general working populations from their respective countries: Australia (N = 973) and Malaysia (N = 225). Our study benchmarks healthcare working conditions in Australia and Malaysia against the general working population, identifying trends that indicate the industry is in need of intervention strategies and job redesign initiatives that better support psychological health and safety. We move toward a better understanding of the precursors of psychosocial safety climate in a broader context, including similarities and differences between Australia and Malaysia in national culture, government occupational health and safety policies and top-level management practices. © 2018 John Wiley & Sons Ltd.
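    The benchmark comparison itself reduces to comparing group means; a minimal sketch with synthetic scale scores is below. Welch's variant of the independent samples t-test is used here, and the scale, means, and spreads are placeholders rather than the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic psychosocial scale scores (e.g., a 1-5 job-demands scale); not study data
healthcare = rng.normal(3.6, 0.7, 1258)   # healthcare workers
general = rng.normal(3.3, 0.7, 973)       # general working population benchmark

t_stat, p_value = stats.ttest_ind(healthcare, general, equal_var=False)
print(f"healthcare mean={healthcare.mean():.2f}, benchmark mean={general.mean():.2f}, "
      f"t={t_stat:.2f}, p={p_value:.3g}")
```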

  7. Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks

    NASA Astrophysics Data System (ADS)

    Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.

    2015-12-01

    A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real-world topography can be compared to recent real-world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES to correctly match early stages of the 2012-13 Tolbachik flow, Kamchatka, Russia, to 80%. We also evaluate model performance given uncertain input parameters using a Monte Carlo setup. This illuminates sensitivity to model uncertainty.
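    For binary inundation grids, the two posterior metrics and the Jaccard coefficient can be computed directly from cell counts. The sketch below assumes B is the simulated (forecast) footprint and A the observed one; the toy grids are illustrative and are not Tolbachik or Fogo data.

```python
import numpy as np

def inundation_scores(simulated, observed):
    """P(A|B): fraction of forecast-inundated cells that were truly inundated.
    P(notA|notB): fraction of forecast-dry cells that were truly dry.
    Also returns the Jaccard fitness coefficient for comparison."""
    sim = np.asarray(simulated, dtype=bool)
    obs = np.asarray(observed, dtype=bool)
    p_a_given_b = (sim & obs).sum() / sim.sum()
    p_nota_given_notb = (~sim & ~obs).sum() / (~sim).sum()
    jaccard = (sim & obs).sum() / (sim | obs).sum()
    return p_a_given_b, p_nota_given_notb, jaccard

# Toy 3x3 grids: 1 = inundated, 0 = dry
sim = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
obs = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 0]])
print(inundation_scores(sim, obs))
```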

  8. ELAPSE - NASA AMES LISP AND ADA BENCHMARK SUITE: EFFICIENCY OF LISP AND ADA PROCESSING - A SYSTEM EVALUATION

    NASA Technical Reports Server (NTRS)

    Davis, G. J.

    1994-01-01

    One area of research of the Information Sciences Division at NASA Ames Research Center is devoted to the analysis and enhancement of processors and advanced computer architectures, specifically in support of automation and robotic systems. To compare systems' abilities to efficiently process Lisp and Ada, scientists at Ames Research Center have developed a suite of non-parallel benchmarks called ELAPSE. The benchmark suite was designed to test a single computer's efficiency as well as alternate machine comparisons on Lisp, and/or Ada languages. ELAPSE tests the efficiency with which a machine can execute the various routines in each environment. The sample routines are based on numeric and symbolic manipulations and include two-dimensional fast Fourier transformations, Cholesky decomposition and substitution, Gaussian elimination, high-level data processing, and symbol-list references. Also included is a routine based on a Bayesian classification program sorting data into optimized groups. The ELAPSE benchmarks are available for any computer with a validated Ada compiler and/or Common Lisp system. Of the 18 routines that comprise ELAPSE, provided within this package are 14 developed or translated at Ames. The others are readily available through literature. The benchmark that requires the most memory is CHOLESKY.ADA. Under VAX/VMS, CHOLESKY.ADA requires 760K of main memory. ELAPSE is available on either two 5.25 inch 360K MS-DOS format diskettes (standard distribution) or a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The ELAPSE benchmarks were written in 1990. VAX and VMS are trademarks of Digital Equipment Corporation. MS-DOS is a registered trademark of Microsoft Corporation.

  9. The demographic impact and development benefits of meeting demand for family planning with modern contraceptive methods.

    PubMed

    Goodkind, Daniel; Lollock, Lisa; Choi, Yoonjoung; McDevitt, Thomas; West, Loraine

    2018-01-01

    Meeting demand for family planning can facilitate progress towards all major themes of the United Nations Sustainable Development Goals (SDGs): people, planet, prosperity, peace, and partnership. Many policymakers have embraced a benchmark goal that at least 75% of the demand for family planning in all countries be satisfied with modern contraceptive methods by the year 2030. This study examines the demographic impact (and development implications) of achieving the 75% benchmark in 13 developing countries that are expected to be the furthest from achieving that benchmark. Estimation of the demographic impact of achieving the 75% benchmark requires three steps in each country: 1) translate contraceptive prevalence assumptions (with and without intervention) into future fertility levels based on biometric models, 2) incorporate each pair of fertility assumptions into separate population projections, and 3) compare the demographic differences between the two population projections. Data are drawn from the United Nations, the US Census Bureau, and Demographic and Health Surveys. The demographic impact of meeting the 75% benchmark is examined via projected differences in fertility rates (average expected births per woman's reproductive lifetime), total population, growth rates, age structure, and youth dependency. On average, meeting the benchmark would imply a 16 percentage point increase in modern contraceptive prevalence by 2030 and a 20% decline in youth dependency, which portends a potential demographic dividend to spur economic growth. Improvements in meeting the demand for family planning with modern contraceptive methods can bring substantial benefits to developing countries. To our knowledge, this is the first study to show formally how such improvements can alter population size and age structure. Declines in youth dependency portend a demographic dividend, an added bonus to the already well-known benefits of meeting existing demands for family planning.

  10. Oncology practice trends from the national practice benchmark.

    PubMed

    Barr, Thomas R; Towle, Elaine L

    2012-09-01

    In 2011, we made predictions on the basis of data from the National Practice Benchmark (NPB) reports from 2005 through 2010. With the new 2011 data in hand, we have revised last year's predictions and projected for the next 3 years. In addition, we make some new predictions that will be tracked in future benchmarking surveys. We also outline a conceptual framework for contemplating these data based on an ecological model of the oncology delivery system. The 2011 NPB data are consistent with last year's prediction of a decrease in the operating margins necessary to sustain a community oncology practice. With the new data in, we now predict these reductions to occur more slowly than previously forecast. We note an ease to the squeeze observed in last year's trend analysis, which will allow more time for practices to adapt their business models for survival and offer the best of these practices an opportunity to invest earnings into operations to prepare for the inevitable shift away from historic payment methodology for clinical service. This year, survey respondents reported changes in business structure, first measured in the 2010 data, indicating an increase in the percentage of respondents who believe that change is coming soon, but the majority still have confidence in the viability of their existing business structure. Although oncology practices are in for a bumpy ride, things are looking less dire this year for practices participating in our survey.

  11. A benchmark for comparison of dental radiography analysis algorithms.

    PubMed

    Wang, Ching-Wei; Huang, Cheng-Ta; Lee, Jia-Hong; Li, Chung-Hsing; Chang, Sheng-Wei; Siao, Ming-Jhih; Lai, Tat-Ming; Ibragimov, Bulat; Vrtovec, Tomaž; Ronneberger, Olaf; Fischer, Philipp; Cootes, Tim F; Lindner, Claudia

    2016-07-01

    Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical usages. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray image and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/). Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  12. Phylogenetic Tools for Generalized HIV-1 Epidemics: Findings from the PANGEA-HIV Methods Comparison

    PubMed Central

    Ratmann, Oliver; Hodcroft, Emma B.; Pickles, Michael; Cori, Anne; Hall, Matthew; Lycett, Samantha; Colijn, Caroline; Dearlove, Bethany; Didelot, Xavier; Frost, Simon; Hossain, A.S. Md Mukarram; Joy, Jeffrey B.; Kendall, Michelle; Kühnert, Denise; Leventhal, Gabriel E.; Liang, Richard; Plazzotta, Giacomo; Poon, Art F.Y.; Rasmussen, David A.; Stadler, Tanja; Volz, Erik; Weis, Caroline; Leigh Brown, Andrew J.; Fraser, Christophe

    2017-01-01

    Viral phylogenetic methods contribute to understanding how HIV spreads in populations, and thereby help guide the design of prevention interventions. So far, most analyses have been applied to well-sampled concentrated HIV-1 epidemics in wealthy countries. To direct the use of phylogenetic tools to where the impact of HIV-1 is greatest, the Phylogenetics And Networks for Generalized HIV Epidemics in Africa (PANGEA-HIV) consortium generates full-genome viral sequences from across sub-Saharan Africa. Analyzing these data presents new challenges, since epidemics are principally driven by heterosexual transmission and a smaller fraction of cases is sampled. Here, we show that viral phylogenetic tools can be adapted and used to estimate epidemiological quantities of central importance to HIV-1 prevention in sub-Saharan Africa. We used a community-wide methods comparison exercise on simulated data, where participants were blinded to the true dynamics they were inferring. Two distinct simulations captured generalized HIV-1 epidemics, before and after a large community-level intervention that reduced infection levels. Five research groups participated. Structured coalescent modeling approaches were most successful: phylogenetic estimates of HIV-1 incidence, incidence reductions, and the proportion of transmissions from individuals in their first 3 months of infection correlated with the true values (Pearson correlation > 90%), with small bias. However, on some simulations, true values were markedly outside reported confidence or credibility intervals. The blinded comparison revealed current limits and strengths in using HIV phylogenetics in challenging settings, provided benchmarks for future methods’ development, and supports using the latest generation of phylogenetic tools to advance HIV surveillance and prevention. PMID:28053012

  13. Performance Evaluation and Benchmarking of Next Intelligent Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    del Pobil, Angel; Madhavan, Raj; Bonsignorio, Fabio

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  14. Fixism and conservation science.

    PubMed

    Robert, Alexandre; Fontaine, Colin; Veron, Simon; Monnet, Anne-Christine; Legrand, Marine; Clavel, Joanne; Chantepie, Stéphane; Couvet, Denis; Ducarme, Frédéric; Fontaine, Benoît; Jiguet, Frédéric; le Viol, Isabelle; Rolland, Jonathan; Sarrazin, François; Teplitsky, Céline; Mouchet, Maud

    2017-08-01

    The field of biodiversity conservation has recently been criticized as relying on a fixist view of the living world in which existing species constitute at the same time targets of conservation efforts and static states of reference, which is in apparent disagreement with evolutionary dynamics. We reviewed the prominent role of species as conservation units and the common benchmark approach to conservation that aims to use past biodiversity as a reference to conserve current biodiversity. We found that the species approach is justified by the discrepancy between the time scales of macroevolution and human influence and that biodiversity benchmarks are based on reference processes rather than fixed reference states. Overall, we argue that the ethical and theoretical frameworks underlying conservation research are based on macroevolutionary processes, such as extinction dynamics. Current species, phylogenetic, community, and functional conservation approaches constitute short-term responses to short-term human effects on these reference processes, and these approaches are consistent with evolutionary principles. © 2016 Society for Conservation Biology.

  15. Community-based benchmarking improves spike rate inference from two-photon calcium imaging data.

    PubMed

    Berens, Philipp; Freeman, Jeremy; Deneux, Thomas; Chenkov, Nikolay; McColgan, Thomas; Speiser, Artur; Macke, Jakob H; Turaga, Srinivas C; Mineault, Patrick; Rupprecht, Peter; Gerhard, Stephan; Friedrich, Rainer W; Friedrich, Johannes; Paninski, Liam; Pachitariu, Marius; Harris, Kenneth D; Bolte, Ben; Machado, Timothy A; Ringach, Dario; Stone, Jasmine; Rogerson, Luke E; Sofroniew, Nicolas J; Reimer, Jacob; Froudarakis, Emmanouil; Euler, Thomas; Román Rosón, Miroslav; Theis, Lucas; Tolias, Andreas S; Bethge, Matthias

    2018-05-01

    In recent years, two-photon calcium imaging has become a standard tool to probe the function of neural circuits and to study computations in neuronal populations. However, the acquired signal is only an indirect measurement of neural activity due to the comparatively slow dynamics of fluorescent calcium indicators. Different algorithms for estimating spike rates from noisy calcium measurements have been proposed in the past, but it is an open question how far performance can be improved. Here, we report the results of the spikefinder challenge, launched to catalyze the development of new spike rate inference algorithms through crowd-sourcing. We present ten of the submitted algorithms which show improved performance compared to previously evaluated methods. Interestingly, the top-performing algorithms are based on a wide range of principles from deep neural networks to generative models, yet provide highly correlated estimates of the neural activity. The competition shows that benchmark challenges can drive algorithmic developments in neuroscience.
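    Scoring in a challenge of this kind boils down to comparing inferred and ground-truth spike rates in time bins; a minimal sketch using Pearson correlation is below. The binning, resampling, and exact scoring rules of the spikefinder challenge are not reproduced, and the synthetic data are placeholders.

```python
import numpy as np

def evaluate_inference(true_rate, inferred_rate):
    """Pearson correlation between true and inferred spike rates in time bins,
    a common summary score for spike rate inference (details vary by benchmark)."""
    t = np.asarray(true_rate, dtype=float)
    i = np.asarray(inferred_rate, dtype=float)
    return np.corrcoef(t, i)[0, 1]

rng = np.random.default_rng(1)
true_rate = rng.poisson(2.0, size=1000).astype(float)    # synthetic ground-truth rates
inferred = true_rate + rng.normal(0.0, 1.0, size=1000)   # noisy estimate of those rates
print(round(evaluate_inference(true_rate, inferred), 3))
```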

  16. Diagnostic reference levels of paediatric computed tomography examinations performed at a dedicated Australian paediatric hospital.

    PubMed

    Bibbo, Giovanni; Brown, Scott; Linke, Rebecca

    2016-08-01

    Diagnostic reference levels (DRLs) for procedures involving ionizing radiation are important tools for optimizing radiation doses delivered to patients and for identifying cases where dose levels are unusually high. This is particularly important for paediatric patients undergoing computed tomography (CT) examinations, as these examinations are associated with relatively high doses. Paediatric CT studies performed at our institution from January 2010 to March 2014 were retrospectively analysed to determine the 75th and 95th percentiles of both the volume computed tomography dose index (CTDIvol) and the dose-length product (DLP) for the most commonly performed studies, in order to: establish local diagnostic reference levels for paediatric computed tomography examinations performed at our institution, benchmark our DRLs against national and international published paediatric values, and determine the compliance of CT radiographers with established protocols. The derived local 75th percentile DRLs were found to be acceptable when compared with those published by the Australian National Radiation Dose Register and two national children's hospitals, and at the international level with the National Reference Doses for the UK. The 95th percentiles of CTDIvol for the various CT examinations were found to be acceptable values for the CT scanner Dose-Check Notification. Benchmarking CT radiographers shows that they follow the set protocols for the various examinations without significant variations in the machine setting factors. The derivation of DRLs has given us the tool to evaluate and improve the performance of our CT service through improved compliance and a reduction in radiation dose to our paediatric patients. We have also been able to benchmark our performance against similar national and international institutions. © 2016 The Royal Australian and New Zealand College of Radiologists.
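    Deriving the local reference levels amounts to taking percentiles of the collected dose indices per examination type; a minimal sketch with synthetic doses is shown below (the distributions and sample size are placeholders, not hospital data).

```python
import numpy as np

def local_drls(ctdi_vol, dlp):
    """Local diagnostic reference levels as the 75th percentile of CTDIvol and DLP,
    with the 95th percentile as a notification-style threshold."""
    ctdi_vol = np.asarray(ctdi_vol, dtype=float)
    dlp = np.asarray(dlp, dtype=float)
    return {
        "CTDIvol_75": np.percentile(ctdi_vol, 75),
        "CTDIvol_95": np.percentile(ctdi_vol, 95),
        "DLP_75": np.percentile(dlp, 75),
        "DLP_95": np.percentile(dlp, 95),
    }

# Synthetic doses for one protocol/age band (mGy and mGy*cm); illustrative only
rng = np.random.default_rng(2)
print(local_drls(rng.lognormal(1.0, 0.4, 500), rng.lognormal(5.0, 0.5, 500)))
```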

  17. Paying Medicare Advantage Plans: To Level or Tilt the Playing Field

    PubMed Central

    Glazer, Jacob; McGuire, Thomas G.

    2017-01-01

    Medicare beneficiaries are eligible for health insurance through the public option of traditional Medicare (TM) or may join a private Medicare Advantage (MA) plan. Both are highly subsidized, but in different ways. Medicare pays for most costs directly in TM, and makes a subsidy payment to an MA plan based on a "benchmark" for each beneficiary choosing a private plan. The level of this benchmark is arguably the most important policy decision Medicare makes about the MA program. Presently, about 30% of beneficiaries are in MA, and Medicare subsidizes MA plans more on average than TM. Many analysts recommend equalizing Medicare's subsidy across the options, referred to in policy circles as a "level playing field." This paper studies the normative question of how to set the level of the benchmark, applying the versatile model of plan choice developed by Einav and Finkelstein (EF) to Medicare. The EF framework implies unequal subsidies to counteract risk selection across plan types. We also study other reasons to tilt the field: the relative efficiency of MA vs. TM, market power of MA plans, and institutional features of the way Medicare determines subsidies and premiums. After a review of the empirical and policy literature, we conclude that in areas where the MA market is competitive, the benchmark should be set below average costs in TM, but in areas characterized by imperfect competition in MA, it should be raised in order to offset output (enrollment) restrictions by plans with market power. We also recommend specific modifications of Medicare rules to make demand for MA more price elastic. PMID:28318667

  18. Compounded effects of heat waves and droughts over the Western Electricity Grid: spatio-temporal scales of impacts and predictability toward mitigation and adaptation.

    NASA Astrophysics Data System (ADS)

    Voisin, N.; Kintner-Meyer, M.; Skaggs, R.; Xie, Y.; Wu, D.; Nguyen, T. B.; Fu, T.; Zhou, T.

    2016-12-01

    Heat waves and droughts are projected to become more frequent and intense. We have seen in the past the effects of each of these extreme climate events on electricity demand and on constrained electricity generation, challenging power system operations. Our aim here is to understand their compounding effects under historical conditions. We present a benchmark of Western US grid performance under 55 years of historical climate, including droughts, using 2010-level water demand and water management infrastructure and 2010-level electricity grid infrastructure and operations. We leverage CMIP5 historical hydrology simulations and force a large-scale river-routing and reservoir model with 2010-level sectoral water demands. The regulated flow at each water-dependent generating plant is processed to adjust the water-dependent electricity generation parameterization in a production cost model that represents 2010-level power system operations with hourly energy demand of 2010. The resulting benchmark includes a risk distribution of several grid performance metrics (unserved energy, production cost, carbon emissions) as a function of inter-annual variability in regional water availability and predictability using large-scale climate oscillations. In the second part of the presentation, we describe an approach to map historical heat waves onto this benchmark grid performance using a building energy demand model. The impact of the heat waves, combined with the impact of droughts, is explored at multiple scales to understand the compounding effects. Vulnerabilities of the power generation and transmission systems are highlighted to guide future adaptation.

  19. Paradoxical ventilator associated pneumonia incidences among selective digestive decontamination studies versus other studies of mechanically ventilated patients: benchmarking the evidence base

    PubMed Central

    2011-01-01

    Introduction: Selective digestive decontamination (SDD) appears to have a more compelling evidence base than non-antimicrobial methods for the prevention of ventilator associated pneumonia (VAP). However, the striking variability in ventilator associated pneumonia-incidence proportion (VAP-IP) among the SDD studies remains unexplained and a postulated contextual effect remains untested for. Methods: Nine reviews were used to source 45 observational (benchmark) groups and 137 component (control and intervention) groups of studies of SDD and studies of three non-antimicrobial methods of VAP prevention. The logit VAP-IP data were summarized by meta-analysis using random effects methods and the associated heterogeneity (tau2) was measured. As group level predictors of logit VAP-IP, the mode of VAP diagnosis, the proportion of trauma admissions, the proportion receiving prolonged ventilation and the intervention method under study were examined in meta-regression models containing the benchmark groups together with either the control (models 1 to 3) or intervention (models 4 to 6) groups of the prevention studies. Results: The VAP-IP benchmark derived here is 22.1% (95% confidence interval; 95% CI; 19.2 to 25.5; tau2 0.34), whereas the mean VAP-IP of control groups from studies of SDD and of non-antimicrobial methods is 35.7 (29.7 to 41.8; tau2 0.63) versus 20.4 (17.2 to 24.0; tau2 0.41), respectively (P < 0.001). The disparity between the benchmark groups and the control groups of the SDD studies, which was most apparent for the highest quality studies, could not be explained in the meta-regression models after adjusting for various group level factors. The mean VAP-IP (95% CI) of intervention groups is 16.0 (12.6 to 20.3; tau2 0.59) and 17.1 (14.2 to 20.3; tau2 0.35) for SDD studies versus studies of non-antimicrobial methods, respectively. Conclusions: The VAP-IP among the intervention groups within the SDD evidence base is less variable and more similar to the benchmark than among the control groups. These paradoxical observations cannot readily be explained. The interpretation of the SDD evidence base cannot proceed without further consideration of this contextual effect. PMID:21214897
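    The pooling step described above (random-effects meta-analysis of logit-transformed incidence proportions) can be sketched with the DerSimonian-Laird estimator as follows; the study counts are hypothetical, and the review's exact model, moderators, and software are not reproduced.

```python
import numpy as np

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

def dersimonian_laird_logit(events, totals):
    """Random-effects pooling of logit-transformed incidence proportions
    (DerSimonian-Laird); returns pooled proportion, 95% CI, and tau^2."""
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    y = np.log(events / (totals - events))         # logit of each proportion
    v = 1.0 / events + 1.0 / (totals - events)     # approximate within-study variances
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance
    w_star = 1.0 / (v + tau2)
    y_pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return (inv_logit(y_pooled),
            inv_logit(y_pooled - 1.96 * se),
            inv_logit(y_pooled + 1.96 * se),
            tau2)

# Hypothetical VAP counts per study group: (events, ventilated patients)
events = [20, 35, 12, 28, 18]
totals = [90, 100, 60, 80, 75]
print(dersimonian_laird_logit(events, totals))
```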

  20. Looking Past Primary Productivity: Benchmarking System Processes that Drive Ecosystem Level Responses in Models

    NASA Astrophysics Data System (ADS)

    Cowdery, E.; Dietze, M.

    2017-12-01

    As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentration are highly variable and contain a considerable amount of uncertainty. Benchmarking model predictions against data is necessary not only to assess their ability to replicate observed patterns, but also to identify and evaluate the assumptions causing inter-model differences. We have implemented a novel benchmarking workflow as part of the Predictive Ecosystem Analyzer (PEcAn) that is automated, repeatable, and generalized to incorporate different sites and ecological models. Building on the recent Free-Air CO2 Enrichment Model Data Synthesis (FACE-MDS) project, we used observational data from the FACE experiments to test this flexible, extensible benchmarking approach aimed at providing repeatable tests of model process representation that can be performed quickly and frequently. Model performance assessments are often limited to traditional residual error analysis; however, this can result in a loss of critical information. Models that fail tests of relative measures of fit may still perform well under measures of absolute fit and mathematical similarity. This implies that models discounted as poor predictors of ecological productivity may still be capturing important patterns. Conversely, models found to be good predictors of productivity may be hiding error in their sub-processes, producing the right answers for the wrong reasons. Our suite of tests has not only highlighted process-based sources of uncertainty in model productivity calculations, it has also quantified the patterns and scale of this error. Combining these findings with PEcAn's model sensitivity analysis and variance decomposition strengthens our ability to identify which processes need further study and additional data constraints. This can be used to inform future experimental design and, in turn, can provide an informative starting point for data assimilation.
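
    The distinction between absolute and relative measures of fit can be illustrated with a minimal sketch: a model that reproduces the observed pattern perfectly but with a constant offset scores poorly on RMSE yet perfectly on correlation. The NPP values below are invented, and this is not the PEcAn benchmarking code.

```python
import numpy as np

def fit_metrics(obs, mod):
    """Compare absolute error against pattern-based (relative) agreement."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    rmse = np.sqrt(np.mean((mod - obs) ** 2))   # absolute fit
    bias = np.mean(mod - obs)                   # systematic offset
    r = np.corrcoef(obs, mod)[0, 1]             # relative fit (pattern)
    return rmse, bias, r

# Invented annual NPP observations and a model that captures the trend but is offset.
obs = np.array([550.0, 580.0, 610.0, 640.0, 700.0, 730.0])
mod = obs + 120.0                               # right pattern, wrong magnitude
rmse, bias, r = fit_metrics(obs, mod)
print(f"RMSE = {rmse:.0f}, bias = {bias:.0f}, r = {r:.2f}")  # poor absolute, perfect relative fit
```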

  1. Cancer and non-cancer health effects from food contaminant exposures for children and adults in California: a risk assessment

    PubMed Central

    2012-01-01

    Background In the absence of current cumulative dietary exposure assessments, this analysis was conducted to estimate exposure to multiple dietary contaminants for children, who are more vulnerable to toxic exposure than adults. Methods We estimated exposure to multiple food contaminants based on dietary data from preschool-age children (2–4 years, n=207), school-age children (5–7 years, n=157), parents of young children (n=446), and older adults (n=149). We compared exposure estimates for eleven toxic compounds (acrylamide, arsenic, lead, mercury, chlorpyrifos, permethrin, endosulfan, dieldrin, chlordane, DDE, and dioxin) based on self-reported food frequency data by age group. To determine if cancer and non-cancer benchmark levels were exceeded, chemical levels in food were derived from publicly available databases including the Total Diet Study. Results Cancer benchmark levels were exceeded by all children (100%) for arsenic, dieldrin, DDE, and dioxins. Non-cancer benchmarks were exceeded by >95% of preschool-age children for acrylamide and by 10% of preschool-age children for mercury. Preschool-age children had significantly higher estimated intakes of 6 of 11 compounds compared to school-age children (p<0.0001 to p=0.02). Based on self-reported dietary data, the foods included in this analysis that contributed the greatest pesticide exposure were tomatoes, peaches, apples, peppers, grapes, lettuce, broccoli, strawberries, spinach, dairy, pears, green beans, and celery. Conclusions Dietary strategies to reduce exposure to toxic compounds for which cancer and non-cancer benchmarks are exceeded by children vary by compound. These strategies include consuming organically produced dairy and selected fruits and vegetables to reduce pesticide intake, consuming fewer animal foods (meat, dairy, and fish) to reduce intake of persistent organic pollutants and metals, and consuming lower quantities of chips, cereal, crackers, and other processed carbohydrate foods to reduce acrylamide intake. PMID:23140444
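
    The basic exposure-versus-benchmark arithmetic behind this kind of screen can be sketched as follows. Every number here is an invented placeholder (not a value from the study or the Total Diet Study), and the single-food, single-compound calculation is far simpler than the cumulative assessment the authors performed.

```python
# Minimal sketch of a single-compound exposure screen; all values are invented
# placeholders, not data from the study or from the Total Diet Study.
residue_mg_per_kg_food = 0.02      # contaminant concentration in a food item
intake_kg_per_day = 0.15           # daily consumption of that food
body_weight_kg = 16.0              # preschool-age child
benchmark_mg_per_kg_day = 1.0e-4   # hypothetical non-cancer reference dose

dose = residue_mg_per_kg_food * intake_kg_per_day / body_weight_kg  # mg/kg-day
hazard_quotient = dose / benchmark_mg_per_kg_day
print(f"dose = {dose:.2e} mg/kg-day, hazard quotient = {hazard_quotient:.1f}")
# A hazard quotient above 1 flags an exceedance of the benchmark.
```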

  2. Three-dimensional viscous design methodology for advanced technology aircraft supersonic inlet systems

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.

    1983-01-01

    A broad program to develop advanced, reliable, and user oriented three-dimensional viscous design techniques for supersonic inlet systems, and encourage their transfer into the general user community is discussed. Features of the program include: (1) develop effective methods of computing three-dimensional flows within a zonal modeling methodology; (2) ensure reasonable agreement between said analysis and selective sets of benchmark validation data; (3) develop user orientation into said analysis; and (4) explore and develop advanced numerical methodology.

  3. General Aviation Aircraft Reliability Study

    NASA Technical Reports Server (NTRS)

    Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)

    2001-01-01

    This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.
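
    A minimal sketch of how a reliability figure might be derived from logbook totals is shown below, assuming a constant failure rate (exponential model). The counts are invented and this is not the study's actual estimator.

```python
import math

# Invented logbook totals for a sample of aircraft systems (not the study's data).
failures = 42
operating_hours = 15_000.0

failure_rate = failures / operating_hours        # failures per flight hour
mtbf = 1.0 / failure_rate                        # mean time between failures
# Reliability over a 100-hour mission, assuming a constant failure rate.
reliability_100h = math.exp(-failure_rate * 100.0)
print(f"MTBF = {mtbf:.0f} h, R(100 h) = {reliability_100h:.3f}")
```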

  4. Disaster metrics: quantitative benchmarking of hospital surge capacity in trauma-related multiple casualty events.

    PubMed

    Bayram, Jamil D; Zuabi, Shawki; Subbarao, Italo

    2011-06-01

    Hospital surge capacity in multiple casualty events (MCE) is the core of hospital medical response, and an integral part of the total medical capacity of the community affected. To date, however, there has been no consensus regarding the definition or quantification of hospital surge capacity. The first objective of this study was to quantitatively benchmark the various components of hospital surge capacity pertaining to the care of critically and moderately injured patients in trauma-related MCE. The second objective was to illustrate the applications of those quantitative parameters in local, regional, national, and international disaster planning; in the distribution of patients to various hospitals by prehospital medical services; and in the decision-making process for ambulance diversion. A 2-step approach was adopted in the methodology of this study. First, an extensive literature search was performed, followed by mathematical modeling. Quantitative studies on hospital surge capacity for trauma injuries were used as the framework for our model. The North Atlantic Treaty Organization triage categories (T1-T4) were used in the modeling process for simplicity purposes. Hospital Acute Care Surge Capacity (HACSC) was defined as the maximum number of critical (T1) and moderate (T2) casualties a hospital can adequately care for per hour, after recruiting all possible additional medical assets. HACSC was modeled to be equal to the number of emergency department beds (#EDB), divided by the emergency department time (EDT); HACSC = #EDB/EDT. In trauma-related MCE, the EDT was quantitatively benchmarked to be 2.5 (hours). Because most of the critical and moderate casualties arrive at hospitals within a 6-hour period requiring admission (by definition), the hospital bed surge capacity must match the HACSC at 6 hours to ensure coordinated care, and it was mathematically benchmarked to be 18% of the staffed hospital bed capacity. Defining and quantitatively benchmarking the different components of hospital surge capacity is vital to hospital preparedness in MCE. Prospective studies of our mathematical model are needed to verify its applicability, generalizability, and validity.
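
    A numerical illustration of the two benchmarks stated in the abstract (HACSC = #EDB/EDT with EDT benchmarked at 2.5 hours, and bed surge capacity at 18% of staffed beds) follows; the hospital figures are invented for illustration only.

```python
# Worked example of the surge-capacity benchmarks from the abstract;
# the hospital figures below are invented for illustration.
ed_beds = 30                 # emergency department beds (#EDB)
ed_time_hours = 2.5          # benchmarked emergency department time (EDT)
staffed_beds = 400           # staffed hospital beds

hacsc = ed_beds / ed_time_hours            # T1 + T2 casualties per hour
bed_surge_capacity = 0.18 * staffed_beds   # admissions absorbable over ~6 hours
print(f"HACSC = {hacsc:.0f} casualties/hour")
print(f"Hospital bed surge capacity = {bed_surge_capacity:.0f} beds")
```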

  5. The EB Factory: Fundamental Stellar Astrophysics with Eclipsing Binary Stars Discovered by Kepler

    NASA Astrophysics Data System (ADS)

    Stassun, Keivan

    Eclipsing binaries (EBs) are key laboratories for determining the fundamental properties of stars. EBs are therefore foundational objects for constraining stellar evolution models, which in turn are central to determinations of stellar mass functions, of exoplanet properties, and many other areas. The primary goal of this proposal is to mine the Kepler mission light curves for: (1) EBs that include a subgiant star, from which precise ages can be derived and which can thus serve as critically needed age benchmarks; and within these, (2) long-period EBs that include low-mass M stars or brown dwarfs, which are increasingly becoming the focus of exoplanet searches, but for which there are the fewest available fundamental mass-radius-age benchmarks. A secondary goal of this proposal is to develop an end-to-end computational pipeline -- the Kepler EB Factory -- that allows automatic processing of Kepler light curves for EBs, from period finding, to object classification, to determination of EB physical properties for the most scientifically interesting EBs, and finally to accurate modeling of these EBs for detailed tests and benchmarking of theoretical stellar evolution models. We will integrate the most successful algorithms into a single, cohesive workflow environment, and apply this 'Kepler EB Factory' to the full public Kepler dataset to find and characterize new "benchmark grade" EBs, and will disseminate both the enhanced data products from this pipeline and the pipeline itself to the broader NASA science community. The proposed work responds directly to two of the defined Research Areas of the NASA Astrophysics Data Analysis Program (ADAP), specifically Research Area #2 (Stellar Astrophysics) and Research Area #9 (Astrophysical Databases). To be clear, our primary goal is the fundamental stellar astrophysics that will be enabled by the discovery and analysis of relatively rare, benchmark-grade EBs in the Kepler dataset. At the same time, to enable this goal will require bringing a suite of extant and new custom algorithms to bear on the Kepler data, and thus our development of the Kepler EB Factory represents a value-added product that will allow the widest scientific impact of the information locked within the vast reservoir of the Kepler light curves.
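
    The period-finding step of such a pipeline can be sketched with one common choice, astropy's Lomb-Scargle periodogram, applied to a synthetic light curve; this is an illustration only and not necessarily the algorithm the EB Factory uses (eclipsing-binary pipelines often prefer box-fitting methods).

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic, irregularly sampled light curve with a 3.7-day periodic signal,
# standing in for a Kepler EB light curve (the real pipeline is more involved).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 90, 2000))                 # days
true_period = 3.7
flux = 1.0 - 0.01 * np.cos(2 * np.pi * t / true_period) + rng.normal(0, 0.002, t.size)

frequency, power = LombScargle(t, flux).autopower(maximum_frequency=2.0)
best_period = 1.0 / frequency[np.argmax(power)]
print(f"recovered period ~ {best_period:.2f} d (injected {true_period} d)")
```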

  6. Cascading failures in complex networks with community structure

    NASA Astrophysics Data System (ADS)

    Lin, Guoqiang; di, Zengru; Fan, Ying

    2014-12-01

    Much empirical evidence shows that when attacked with cascading failures, scale-free or even random networks tend to collapse more extensively when the initially deleted node has higher betweenness. Meanwhile, in networks with strong community structure, high-betweenness nodes tend to be bridge nodes that link different communities, and the removal of such nodes will reduce only the connections among communities, leaving the networks fairly stable. Understanding what will affect cascading failures and how to protect or attack networks with strong community structure is therefore of interest. In this paper, we have constructed scale-free Community Networks (SFCN) and Random Community Networks (RCN). We applied these networks, along with the Lancichinetti-Fortunato-Radicchi (LFR) benchmark, to the cascading-failure scenario to explore their vulnerability to attack and the relationship between cascading failures and the degree distribution and community structure of a network. The numerical results show that when the networks are of a power-law distribution, a stronger community structure will result in the failure of fewer nodes. In addition, the initial removal of the node with the highest betweenness will not lead to the worst cascading, i.e. the largest avalanche size. The Betweenness Overflow (BOF), an index that we developed, is an effective indicator of this tendency. The RCN, however, display a different result. In addition, the avalanche size of each node can be adopted as an index to evaluate the importance of the node.
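
    The experiment described here can be sketched with networkx: generate an LFR benchmark graph (parameters below follow the networkx documentation example) and run a generic Motter-Lai-style load-capacity cascade seeded at the highest-betweenness node. This is an illustrative stand-in, not the paper's exact cascade model, and the BOF index is not reproduced.

```python
import networkx as nx

def cascade(G, seed_node, alpha=0.25):
    """Motter-Lai-style cascade: load = betweenness, capacity = (1 + alpha) * initial load."""
    load = nx.betweenness_centrality(G)
    capacity = {n: (1 + alpha) * load[n] for n in G}
    failed = {seed_node}
    G = G.copy()
    G.remove_node(seed_node)
    while True:
        load = nx.betweenness_centrality(G)
        overloaded = [n for n in G if load[n] > capacity[n]]
        if not overloaded:
            return len(failed)              # avalanche size
        failed.update(overloaded)
        G.remove_nodes_from(overloaded)

# Small LFR benchmark graph; the generator can occasionally fail to converge,
# in which case different parameters or another seed are needed.
G = nx.LFR_benchmark_graph(250, tau1=3, tau2=1.5, mu=0.1,
                           average_degree=5, min_community=20, seed=10)
G.remove_edges_from(nx.selfloop_edges(G))

bc = nx.betweenness_centrality(G)
top = max(bc, key=bc.get)                   # highest-betweenness node as the initial failure
print("avalanche size:", cascade(G, top))
```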

  7. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilk, Todd

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms online: Trinity, with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing, and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL, and all were qualified at the highest level of the Green500 benchmark. This paper discusses supercomputing at LANL and the Green500 benchmark, and offers notes on our experience meeting the Green500's reporting requirements.
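
    The figure the Green500 ranks on is sustained HPL (LINPACK) performance per watt of average power during the run, which reduces to a simple ratio; the numbers below are invented placeholders, not LANL measurements.

```python
# The Green500 ranks systems by energy efficiency during the HPL (LINPACK) run.
# Both numbers below are invented placeholders, not LANL measurements.
rmax_gflops = 8_100_000.0        # sustained HPL performance (GFLOP/s)
avg_power_kw = 1_750.0           # average system power during the run (kW)

efficiency_gflops_per_watt = rmax_gflops / (avg_power_kw * 1000.0)
print(f"{efficiency_gflops_per_watt:.2f} GFLOPS/W")
```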

  8. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE PAGES

    Yilk, Todd

    2018-02-17

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms online: Trinity, with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing, and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL, and all were qualified at the highest level of the Green500 benchmark. This paper discusses supercomputing at LANL and the Green500 benchmark, and offers notes on our experience meeting the Green500's reporting requirements.

  9. Benchmarking the Collocation Stand-Alone Library and Toolkit (CSALT)

    NASA Technical Reports Server (NTRS)

    Hughes, Steven; Knittel, Jeremy; Shoan, Wendy; Kim, Youngkwang; Conway, Claire; Conway, Darrel J.

    2017-01-01

    This paper describes the processes and results of Verification and Validation (V&V) efforts for the Collocation Stand Alone Library and Toolkit (CSALT). We describe the test program and environments, the tools used for independent test data, and comparison results. The V&V effort employs classical problems with known analytic solutions, solutions from other available software tools, and comparisons to benchmarking data available in the public literature. Presenting all test results is beyond the scope of a single paper. Here we present high-level test results for a broad range of problems, and detailed comparisons for selected problems.

  10. Benchmarking the Collocation Stand-Alone Library and Toolkit (CSALT)

    NASA Technical Reports Server (NTRS)

    Hughes, Steven; Knittel, Jeremy; Shoan, Wendy (Compiler); Kim, Youngkwang; Conway, Claire (Compiler); Conway, Darrel

    2017-01-01

    This paper describes the processes and results of Verification and Validation (V&V) efforts for the Collocation Stand Alone Library and Toolkit (CSALT). We describe the test program and environments, the tools used for independent test data, and comparison results. The V&V effort employs classical problems with known analytic solutions, solutions from other available software tools, and comparisons to benchmarking data available in the public literature. Presenting all test results is beyond the scope of a single paper. Here we present high-level test results for a broad range of problems, and detailed comparisons for selected problems.

  11. Approaches to local climate action in Colorado

    NASA Astrophysics Data System (ADS)

    Huang, Y. D.

    2011-12-01

    Though climate change is a global problem, the impacts are felt on the local scale; it follows that the solutions must come at the local level. Fortunately, many cities and municipalities are implementing climate mitigation (or climate action) policies and programs. However, they face many procedural and institutional barriers to their efforts, such as lack of expertise or data, limited human and financial resources, and lack of community engagement (Krause 2011). To address the first obstacle, thirteen in-depth case studies were done of successful model practices ("best practices") of climate action programs carried out by various cities, counties, and organizations in Colorado, and one outside Colorado, and developed into "how-to guides" for other municipalities to use. Research was conducted by reading documents (e.g., annual reports, community guides, city websites), through email correspondence with program managers and city officials, and via phone interviews. The information gathered was then compiled into a series of reports containing a narrative description of the initiative; an overview of the plan elements (target audience and goals); implementation strategies and any indicators of success to date (e.g., GHG emissions reductions, cost savings); and the adoption or approval process, as well as community engagement efforts and marketing or messaging strategies. The types of programs covered were energy action plans, energy efficiency programs, renewable energy programs, and transportation and land use programs. Across the thirteen case studies, there was a range of approaches to implementing local climate action programs, examined along two dimensions: focus on climate change (whether it was direct/explicit or indirect/implicit) and extent of government authority. This benchmarking exercise affirmed the conventional wisdom propounded by Pitt (2010), that peer pressure (that is, the presence of neighboring jurisdictions with climate initiatives), the level of community engagement and enthusiasm, and most importantly staff members dedicated to the area of climate planning have a significant effect on climate mitigation policy adoption. In addition, it supported the claim asserted by Toly (2008) that an emphasis on economic co-benefits perpetuates the principle that economic growth need not be compromised when addressing climate change and weakens our capacity to shift toward a bolder paradigm in what is politically achievable in climate legislation.

  12. Finding Statistically Significant Communities in Networks

    PubMed Central

    Lancichinetti, Andrea; Radicchi, Filippo; Ramasco, José J.; Fortunato, Santo

    2011-01-01

    Community structure is one of the main structural features of networks, revealing both their internal organization and the similarity of their elementary units. Despite the large variety of methods proposed to detect communities in graphs, there is a great need for multi-purpose techniques able to handle different types of datasets and the subtleties of community structure. In this paper we present OSLOM (Order Statistics Local Optimization Method), the first method capable of detecting clusters in networks while accounting for edge directions, edge weights, overlapping communities, hierarchies and community dynamics. It is based on the local optimization of a fitness function expressing the statistical significance of clusters with respect to random fluctuations, which is estimated with tools of Extreme and Order Statistics. OSLOM can be used alone or as a refinement procedure of partitions/covers delivered by other techniques. We have also implemented sequential algorithms combining OSLOM with other fast techniques, so that the community structure of very large networks can be uncovered. Our method performs comparably to the best existing algorithms on artificial benchmark graphs. Several applications on real networks are shown as well. OSLOM is implemented in freely available software (http://www.oslom.org), and we believe it will be a valuable tool in the analysis of networks. PMID:21559480

  13. A Citizen Science Program for Monitoring Lake Stages in Northern Wisconsin

    NASA Astrophysics Data System (ADS)

    Kretschmann, A.; Drum, A.; Rubsam, J.; Watras, C. J.; Cellar-Rossler, A.

    2011-12-01

    Historical data indicate that surface water levels in northern Wisconsin are fluctuating more now than they did in the recent past. In the northern highland lake district of Vilas County, Wisconsin, concern about record low lake levels in 2008 spurred local citizens and lake associations to form a lake level monitoring network comprising citizen scientists. The network is administered by the North Lakeland Discovery Center (NLDC, a local NGO) and is supported by a grant from the Citizen Science Monitoring Program of the Wisconsin Department of Natural Resources (WDNR). With technical guidance from limnologists at neighboring UW-Madison Trout Lake Research Station, citizen scientists have installed geographic benchmarks and staff gauges on 26 area lakes. The project engages citizen and student science participants including homeowners, non-profit organization member-participants, and local schools. Each spring, staff gauges are installed and referenced to fixed benchmarks after ice off by NLDC and dedicated volunteers. Volunteers read and record staff gauges on a weekly basis during the ice-free season; and maintain log books recording lake levels to the nearest 0.5 cm. At the end of the season, before ice on, gauges are removed and log books are collected by the NLDC coordinator. Data is compiled and submitted to a database management system, coordinated within the Wisconsin Surface Water Integrated Monitoring System (SWIMS), a statewide information system managed by the WDNR in Madison. Furthermore, NLDC is collaborating with the SWIMS database manager to develop data entry screens based on records collected by citizen scientists. This program is the first of its kind in Wisconsin to utilize citizen scientists to collect lake level data. The retention rate for volunteers has been 100% over the three years since inception, and the program has expanded from four lakes in 2008 to twenty-six lakes in 2011. NLDC stresses the importance of long-term monitoring and the commitment that such monitoring takes. The volunteers recognize this importance and have fulfilled their monitoring commitments on an annual basis. All participating volunteers receive a summary report at the end of the year, and, if requested, a graph that is updated monthly. Recruitment has been through lake associations, town boards, word of mouth, newspaper articles, community events, and the NLDC citizen science webpage. Local interest and participation are high, perhaps due to the value that citizens place on lakes and the concern that they have about declining water levels.

  14. Groundwater-quality data in the Western San Joaquin Valley study unit, 2010 - Results from the California GAMA Program

    USGS Publications Warehouse

    Mathany, Timothy M.; Landon, Matthew K.; Shelton, Jennifer L.; Belitz, Kenneth

    2013-01-01

    Groundwater quality in the approximately 2,170-square-mile Western San Joaquin Valley (WSJV) study unit was investigated by the U.S. Geological Survey (USGS) from March to July 2010, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program's Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The WSJV study unit was the twenty-ninth study unit to be sampled as part of the GAMA-PBP. The GAMA Western San Joaquin Valley study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer system, and to facilitate statistically consistent comparisons of untreated groundwater quality throughout California. The primary aquifer system is defined as parts of aquifers corresponding to the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the WSJV study unit. Groundwater quality in the primary aquifer system may differ from the quality in the shallower or deeper water-bearing zones; shallow groundwater may be more vulnerable to surficial contamination. In the WSJV study unit, groundwater samples were collected from 58 wells in 2 study areas (Delta-Mendota subbasin and Westside subbasin) in Stanislaus, Merced, Madera, Fresno, and Kings Counties. Thirty-nine of the wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells), and 19 wells were selected to aid in the understanding of aquifer-system flow and related groundwater-quality issues (understanding wells). The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOCs], low-level fumigants, and pesticides and pesticide degradates), constituents of special interest (perchlorate, N-nitrosodimethylamine [NDMA], and 1,2,3-trichloropropane [1,2,3-TCP]), and naturally occurring inorganic constituents (trace elements, nutrients, dissolved organic carbon [DOC], major and minor ions, silica, total dissolved solids [TDS], alkalinity, total arsenic and iron [unfiltered] and arsenic, chromium, and iron species [filtered]). Isotopic tracers (stable isotopes of hydrogen, oxygen, and boron in water, stable isotopes of nitrogen and oxygen in dissolved nitrate, stable isotopes of sulfur in dissolved sulfate, isotopic ratios of strontium in water, stable isotopes of carbon in dissolved inorganic carbon, activities of tritium, and carbon-14 abundance), dissolved standard gases (methane, carbon dioxide, nitrogen, oxygen, and argon), and dissolved noble gases (argon, helium-4, krypton, neon, and xenon) were measured to help identify sources and ages of sampled groundwater. In total, 245 constituents and 8 water-quality indicators were measured. Quality-control samples (blanks, replicates, or matrix spikes) were collected at 16 percent of the wells in the WSJV study unit, and the results for these samples were used to evaluate the quality of the data from the groundwater samples. Blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples all were within acceptable limits of variability. 
Matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 87 percent of the compounds. This study did not evaluate the quality of water delivered to consumers. After withdrawal, groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is delivered to the consumer, not to untreated groundwater. However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH, and to non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. Most inorganic constituents detected in groundwater samples from the 39 grid wells were detected at concentrations less than health-based benchmarks. Detections of organic and special-interest constituents from grid wells sampled in the WSJV study unit also were less than health-based benchmarks. In total, VOCs were detected in 12 of the 39 grid wells sampled (approximately 31 percent), pesticides and pesticide degradates were detected in 9 grid wells (approximately 23 percent), and perchlorate was detected in 15 grid wells (approximately 38 percent). Trace elements, major and minor ions, and nutrients were sampled for at 39 grid wells; most concentrations were less than health-based benchmarks. Exceptions include two detections of arsenic greater than the USEPA maximum contaminant level (MCL-US) of 10 micrograms per liter (μg/L), 20 detections of boron greater than the CDPH notification level (NL-CA) of 1,000 μg/L, 2 detections of molybdenum greater than the USEPA lifetime health advisory level (HAL-US) of 40 μg/L, 1 detection of selenium greater than the MCL-US of 50 μg/L, 2 detections of strontium greater than the HAL-US of 4,000 μg/L, and 3 detections of nitrate greater than the MCL-US of 10 μg/L. Results for inorganic constituents with non-health-based benchmarks (iron, manganese, chloride, sulfate, and TDS) showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in five grid wells. Manganese concentrations greater than the SMCL-CA of 50 μg/L were detected in 16 grid wells. Chloride concentrations greater than the recommended SMCL-CA benchmark of 250 milligrams per liter (mg/L) were detected in 14 grid wells, and concentrations in 5 of these wells also were greater than the upper SMCL-CA benchmark of 500 mg/L. Sulfate concentrations greater than the recommended SMCL-CA benchmark of 250 mg/L were measured in 21 grid wells, and concentrations in 13 of these wells also were greater than the SMCL-CA upper benchmark of 500 mg/L. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 36 grid wells, and concentrations in 20 of these wells also were greater than the SMCL-CA upper benchmark of 1,000 mg/L.
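
    The benchmark comparisons tabulated above amount to counting, constituent by constituent, how many wells exceed a threshold concentration. A minimal pandas sketch of that tabulation is shown below; the well concentrations are invented, and only the benchmark values (e.g. the MCL-US for arsenic, the SMCL-CA for manganese) mirror figures cited in the abstract.

```python
import pandas as pd

# Invented concentrations (micrograms per liter) for a handful of hypothetical grid wells.
data = pd.DataFrame({
    "well": ["W1", "W2", "W3", "W4", "W5"],
    "arsenic_ug_L": [2.0, 12.5, 4.1, 9.8, 15.3],
    "manganese_ug_L": [10.0, 65.0, 48.0, 120.0, 30.0],
})
# Benchmark values quoted in the abstract (MCL-US for arsenic, SMCL-CA for manganese).
benchmarks_ug_L = {"arsenic_ug_L": 10.0, "manganese_ug_L": 50.0}

for constituent, limit in benchmarks_ug_L.items():
    n_exceed = int((data[constituent] > limit).sum())
    print(f"{constituent}: {n_exceed} of {len(data)} wells above {limit} ug/L")
```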

  15. Fish communities of benchmark streams in agricultural areas of eastern Wisconsin

    USGS Publications Warehouse

    Sullivan, D.J.; Peterson, E.M.

    1997-01-01

    Fish communities were surveyed at 20 stream sites in agricultural areas in eastern Wisconsin in 1993 and 1995 as part of the National Water-Quality Assessment (NAWQA) Program. These streams, designated "benchmark streams," were selected for study because of their potential use as regional references for healthy streams in agricultural areas, based on aquatic communities, habitat, and water chemistry. The agricultural benchmark streams were selected from four physical settings, or relatively homogeneous units (RHU's), that differ in bedrock type, texture of surficial deposits, and land use. Additional data were collected along with the fish-community data, including measures of habitat, water chemistry, and population surveys of algae and benthic invertebrates. Of the 20 sites, 19 are classified as trout (salmonid) streams. Fish species that require cold or cool water were the most commonly collected. At least one species of trout was collected at 18 sites, and trout were the most abundant species at 13 sites. The species with the greatest collective abundance, collected at 18 of the 20 sites, was the mottled sculpin (Cottus bairdi), a coldwater species. The next most abundant species were brown trout (Salmo trutta), followed by brook trout (Salvelinus fontinalis), creek chub (Semotilus atromaculatus), and longnose dace (Rhinichthys cataractae). In all, 31 species of fish were collected. The number of species per stream ranged from 2 to 14, and the number of individuals collected ranged from 19 to 264. According to Index of Biotic Integrity (IBI) scores, 5 sites were rated excellent, 10 sites rated good, 4 rated fair, and 1 rated poor. The ratings of the five sites in the fair to poor range were low for various reasons. Two sites appeared to have more warmwater species than was ideal for a high-quality coldwater stream. One was sampled during high flow and the results may not be valid for periods of normal flow; the other may have been populated by migrating warmwater species. Two sites had insufficient deep-water habitat to support large numbers of fish, especially top carnivores. Finally, one stream may be too cool to support enough warmwater species and too warm to support trout. In general, two methods of evaluating site habitat indicate that habitat is not a limiting factor for fish communities. However, two sites were rated as fair according to both habitat evaluation methods due to low base flow. Two sites rated below good according to one habitat evaluation method but rated good or excellent according to the other. Detrended correspondence analysis (DCA) of data for 17 sites showed three station groupings. These groupings fell along RHU divisions, and each group was associated with one of three trout species. A species-richness gradient was evident on the station-ordination diagram. Intolerant species were associated with each grouping, a reflection of the generally high water quality at the sites. However, no significant differences were found between IBI scores or habitat indices among the site groupings. The DCA axis 1 and 2 scores correlated with average velocity and percent pool, as well as the RHU factors of percent sandy surficial deposits, percent wetland, percent agriculture, and bedrock. Average velocity was highest at three sites, which also had among the highest measured flow and largest drainage areas. Percent pool was generally lower at sites with smaller percentages of sandy surficial deposits, with one exception. 
The usefulness of ordination methods in conjunction with more traditional methods of defining biotic integrity (IBI) has been noted in previous studies. In this study, however, perhaps because of the relative homogeneity of the benchmark streams, the IBI did not correlate with the same kinds of factors as the DCA axis scores did. 

  16. DEVELOPING ECOLOGICAL SOIL SCREENING LEVELS: BENCHMARK VALUES FOR SOIL INVERTEBRATES, PLANTS, AND MICROBIAL FUNCTIONS

    EPA Science Inventory

    Soils are repositories for environmental contaminants (COCs) in terrestrial ecosystems. Time, effort, and money repeatedly are invested in literature-based evaluations of potential soil-ecotoxicity...

  17. Benchmarking specialty hospitals, a scoping review on theory and practice.

    PubMed

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category; or those dealing with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed, including follow-up to check whether the benchmark study has led to improvements.

  18. The grout/glass performance assessment code system (GPACS) with verification and benchmarking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepho, M.G.; Sutherland, W.H.; Rittmann, P.D.

    1994-12-01

    GPACS is a computer code system for calculating water flow (unsaturated or saturated), solute transport, and human doses due to the slow release of contaminants from a waste form (in particular grout or glass) through an engineered system and through a vadose zone to an aquifer, well and river. This dual-purpose document is intended to serve as a user's guide and verification/benchmark document for the Grout/Glass Performance Assessment Code system (GPACS). GPACS can be used for low-level-waste (LLW) Glass Performance Assessment and many other applications including other low-level-waste performance assessments and risk assessments. Based on all the cases presented, GPACS is adequate (verified) for calculating water flow and contaminant transport in unsaturated-zone sediments and for calculating human doses via the groundwater pathway.

  19. Levelized cost of energy for a Backward Bent Duct Buoy

    DOE PAGES

    Bull, Diana; Jenne, D. Scott; Smith, Christopher S.; ...

    2016-07-18

    The Reference Model Project, supported by the U.S. Department of Energy, was developed to provide publicly available technical and economic benchmarks for a variety of marine energy converters. The methodology to achieve these benchmarks is to develop public domain designs that incorporate power performance estimates, structural models, anchor and mooring designs, power conversion chain designs, and estimates of the operations and maintenance, installation, and environmental permitting required. The reference model designs are intended to be conservative, robust, and experimentally verified. The Backward Bent Duct Buoy (BBDB) presented in this paper is one of three wave energy conversion devices studied within the Reference Model Project. Furthermore, comprehensive modeling of the BBDB in a Northern California climate has enabled a full levelized cost of energy (LCOE) analysis to be completed on this device.
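
    The LCOE figure referred to here is conventionally the ratio of discounted lifetime costs to discounted lifetime energy. The sketch below is the generic version of that ratio with invented inputs; it is not the Reference Model Project's cost model.

```python
# Generic levelized cost of energy: discounted lifetime costs divided by
# discounted lifetime energy. All inputs are invented placeholders.
capex = 12_000_000.0            # up-front capital cost ($)
annual_opex = 450_000.0         # yearly operations and maintenance ($)
annual_energy_mwh = 3_500.0     # yearly energy delivered (MWh)
discount_rate = 0.07
lifetime_years = 20

costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                    for t in range(1, lifetime_years + 1))
energy = sum(annual_energy_mwh / (1 + discount_rate) ** t
             for t in range(1, lifetime_years + 1))
print(f"LCOE = {costs / energy:,.0f} $/MWh")
```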

  20. Levelized cost of energy for a Backward Bent Duct Buoy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bull, Diana; Jenne, D. Scott; Smith, Christopher S.

    2016-12-01

    The Reference Model Project, supported by the U.S. Department of Energy, was developed to provide publicly available technical and economic benchmarks for a variety of marine energy converters. The methodology to achieve these benchmarks is to develop public domain designs that incorporate power performance estimates, structural models, anchor and mooring designs, power conversion chain designs, and estimates of the operations and maintenance, installation, and environmental permitting required. The reference model designs are intended to be conservative, robust, and experimentally verified. The Backward Bent Duct Buoy (BBDB) presented in this paper is one of three wave energy conversion devices studied within the Reference Model Project. Comprehensive modeling of the BBDB in a Northern California climate has enabled a full levelized cost of energy (LCOE) analysis to be completed on this device.
