Benchmarking the Importance and Use of Labor Market Surveys by Certified Rehabilitation Counselors
ERIC Educational Resources Information Center
Barros-Bailey, Mary; Saunders, Jodi L.
2013-01-01
The purpose of this research was to benchmark the importance and use of the labor market survey (LMS) among U.S. certified rehabilitation counselors (CRCs). A secondary post hoc analysis of data collected via the "Rehabilitation Skills Inventory--Revised" for the 2011 Commission on Rehabilitation Counselor Certification job analysis resulted in…
The State of PR Graduate Curriculum as We Know It: A Longitudinal Analysis
ERIC Educational Resources Information Center
Briones, Rowena L.; Toth, Elizabeth L.
2013-01-01
This longitudinal content analysis study uses the Commission on Public Relations Education's 2006 report as a benchmark to determine whether master's education in public relations has evolved over the past decade. Findings show a lack of uniformity across the 75 programs studied. In addition, there is a lack of adherence to the Commission on…
NASA Technical Reports Server (NTRS)
de Wit, A.; Cohn, N.
1999-01-01
The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.
The ISPRS Benchmark on Indoor Modelling
NASA Astrophysics Data System (ADS)
Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.
2017-09-01
Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in the literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-11
... Requirement R3.1 of MOD-001-1. C. Benchmarking 14. In the Final Rule, the Commission directed the ERO to develop benchmarking and updating requirements for the MOD Reliability Standards to measure modeled... requirements should specify the frequency for benchmarking and updating the available transfer and flowgate...
DOE Office of Scientific and Technical Information (OSTI.GOV)
EERE Office of Strategic Programs, Strategic Priorities and Impact Analysis Team
The Energy Department’s Office of Energy Efficiency and Renewable Energy (EERE) commissioned the Clean Energy Manufacturing Analysis Center to conduct the first-ever annual assessment of the economic state of global clean energy manufacturing. The report, Benchmarks of Global Clean Energy Manufacturing, makes economic data on clean energy technology widely available.
From politics to policy: a new payment approach in Medicare Advantage.
Berenson, Robert A
2008-01-01
While the Medicare Advantage program's future remains contentious politically, the Medicare Payment Advisory Commission's (MedPAC's) recommended policy of financial neutrality at the local level between private plans and traditional Medicare ignores local market dynamics in important ways. An analysis correlating plan bids against traditional Medicare's local spending levels likely would provide an alternative method of setting benchmarks, by producing a blend of local and national rates. A result would be that the rural and lower-cost urban "floor counties" would have benchmarks below currently inflated levels but above what financial neutrality at the local level--MedPAC's approach--would produce.
SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, T; Finlay, J; Mesina, C
Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6 – 15 MV include the percentage depth dose (PDD) measured at SSD = 90 cm and the output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. The off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data include dose per MU determined for 17 points for SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with large errors (up to 13%) observed in the buildup regions of the PDD and the penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. Relative comparison in the heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of a deterministic algorithm is better than that of a convolution algorithm in heterogeneous media.
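The acceptance test described above (agreement generally better than 2% outside buildup and penumbra regions) can be illustrated with a minimal sketch. The function name, tolerance handling, and sample PDD values below are assumptions for illustration, not the study's actual data.

```python
import numpy as np

# Hedged sketch of a PDD commissioning check: local percent difference between
# calculated and measured depth-dose values (both normalized at 10 cm depth),
# with a 2% tolerance mirroring the abstract's acceptance criterion.
def pdd_agreement(depth_cm, measured, calculated, tolerance_pct=2.0):
    measured = np.asarray(measured, dtype=float)
    calculated = np.asarray(calculated, dtype=float)
    diff_pct = 100.0 * (calculated - measured) / measured
    worst = int(np.argmax(np.abs(diff_pct)))
    return {
        "mean_diff_pct": float(diff_pct.mean()),
        "max_abs_diff_pct": float(abs(diff_pct[worst])),
        "worst_depth_cm": float(depth_cm[worst]),
        "within_tolerance": bool(np.all(np.abs(diff_pct) <= tolerance_pct)),
    }

depths = np.array([1.5, 5.0, 10.0, 20.0, 30.0])   # cm
meas = np.array([1.38, 1.23, 1.00, 0.62, 0.39])   # normalized PDD (made up)
calc = np.array([1.35, 1.22, 1.00, 0.63, 0.39])
print(pdd_agreement(depths, meas, calc))
```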
Bohl, Michael A; Goswami, Roopa; Strassner, Brett; Stanger, Paula
2016-08-01
The purpose of this investigation was to evaluate the potential of using the ACR's Dose Index Registry® to meet The Joint Commission's requirements to identify incidents in which the radiation dose index from diagnostic CT examinations exceeded the protocol's expected dose index range. In total, 10,970 records in the Dose Index Registry were statistically analyzed to establish both an upper and a lower expected dose index for each protocol. All 2015 studies to date were then retrospectively reviewed to identify examinations whose total examination dose index exceeded the protocol's defined upper threshold. Each dose incident was then logged and reviewed per the new Joint Commission requirements. Facilities may leverage their participation in the ACR's Dose Index Registry to fully meet The Joint Commission's dose incident identification, review, and external benchmarking requirements. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
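The abstract does not state which statistic defines the expected dose index range; a minimal sketch of one plausible approach (robust percentiles per protocol, with out-of-range exams flagged for review) follows. All names and numbers are illustrative.

```python
import numpy as np
from collections import defaultdict

# Illustrative approach (not the ACR's published method): derive a per-protocol
# expected dose index range from historical registry records using percentiles,
# then flag any exam whose dose index falls outside that range.
def protocol_thresholds(records, lo_pct=1.0, hi_pct=99.0):
    """records: iterable of (protocol_name, dose_index) tuples."""
    by_protocol = defaultdict(list)
    for protocol, dose_index in records:
        by_protocol[protocol].append(dose_index)
    return {p: (np.percentile(v, lo_pct), np.percentile(v, hi_pct))
            for p, v in by_protocol.items()}

def flag_exam(protocol, dose_index, thresholds):
    lo, hi = thresholds[protocol]
    return dose_index < lo or dose_index > hi   # True -> log and review

history = [("head_ct", x) for x in np.random.default_rng(0).normal(50, 5, 500)]
thr = protocol_thresholds(history)
print(thr["head_ct"], flag_exam("head_ct", 75.0, thr))
```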
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-05
... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 27 [WT Docket Nos. 12-69, 12-332; FCC 13-136] Promoting Interoperability in the 700 MHz Commercial Spectrum; Requests for Waiver and Extension of Lower 700 MHz Band Interim Construction Benchmark Deadlines AGENCY: Federal Communications Commission...
NASA Technical Reports Server (NTRS)
Lambert, Larry D.
1992-01-01
The study by the American Productivity & Quality Center (APQC) was commissioned by Loral Space Information Systems, Inc. and the National Aeronautics and Space Administration (NASA) to evaluate internal assessment systems. APQC benchmarked approaches to the internal assessment of quality management systems in three phases. The first phase included work conducted for the International Benchmarking Clearinghouse (IBC) and consisted of an in-depth analysis of the 1991 Malcolm Baldrige National Quality Award criteria. The second phase was also performed for the IBC and compared the 1991 award criteria among the following quality awards: Deming Prize, Malcolm Baldrige National Quality Award, The President's Award for Quality and Productivity Improvement, The NASA Excellence Award (The George M. Low Trophy) for Quality and Productivity Improvement and the Shigeo Shingo Award for Excellence in Manufacturing. The third phase compared the internal implementation approaches of 23 companies selected from American industry for their recognized, formal assessment systems.
ERIC Educational Resources Information Center
McCarthy, Sally A.
2009-01-01
Online learning is a complex undertaking that holds great potential as a teaching and learning mode that public colleges and universities may strategically employ to achieve broad institutional priorities and contribute to the attainment of national goals. The Association of Public and Land-grant Universities' (APLU) Sloan National Commission on…
Dmitrieva, Olga; Michalakidis, Georgios; Mason, Aaron; Jones, Simon; Chan, Tom; de Lusignan, Simon
2012-01-01
A new distributed model of health care management is being introduced in England. Family practitioners have new responsibilities for the management of health care budgets and commissioning of services. There are national datasets available about health care providers and the geographical areas they serve. These data could be better used to assist family practitioners turned health service commissioners; unfortunately, they are not in a form that is readily usable by these fledgling family commissioning groups. We therefore web-enabled all the national hospital dermatology treatment data in England, combining them with locality data to provide a smart commissioning tool for local communities. We used open-source software including the Ruby on Rails Web framework and MySQL. The system has a Web front-end, which uses hypertext markup language, cascading style sheets (HTML/CSS), and JavaScript to deliver and present data provided by the database. A combination of advanced caching and schema structures allows for faster data retrieval on every execution. The system provides an intuitive environment for data analysis and processing across a large health system dataset. Web-enablement has allowed data about inpatients, day cases, and outpatients to be readily grouped, viewed, and linked to other data. The combination of web-enablement, consistent data collection from all providers, readily available locality data, and a registration-based primary care system enables the creation of data that can be used to commission dermatology services in small areas. Standardized datasets collected across large health enterprises, when web-enabled, can readily benchmark local services and inform commissioning decisions.
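The grouping-and-benchmarking step described above was built in Ruby on Rails, but the idea can be sketched in a few lines of Python; the column names and figures below are invented for illustration.

```python
import pandas as pd

# Minimal sketch (hypothetical data) of small-area benchmarking: combine
# provider activity with locality populations, then compare each commissioning
# locality's rate per 1,000 population against the national rate.
activity = pd.DataFrame({
    "locality": ["A", "A", "B", "B", "C"],
    "setting":  ["outpatient", "day_case", "outpatient", "inpatient", "outpatient"],
    "episodes": [120, 40, 300, 25, 80],
})
population = pd.DataFrame({"locality": ["A", "B", "C"],
                           "pop": [25_000, 60_000, 18_000]})

rates = (activity.groupby("locality", as_index=False)["episodes"].sum()
         .merge(population, on="locality"))
rates["per_1000"] = 1000 * rates["episodes"] / rates["pop"]
national = 1000 * rates["episodes"].sum() / rates["pop"].sum()
rates["vs_national_pct"] = 100 * (rates["per_1000"] - national) / national
print(rates)
```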
INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom; Javier Ortensi; Sonat Sen
2013-09-01
The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn, and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of simulation problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results of all other international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.
SERVER DEVELOPMENT FOR NSLS-II PHYSICS APPLICATIONS AND PERFORMANCE ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, G.; Kraimer, M.
2011-03-28
The beam commissioning software framework of the NSLS-II project adopts a client/server based architecture to replace the more traditional monolithic high level application approach. The server software under development is available via an open source SourceForge project named epics-pvdata, which consists of the modules pvData, pvAccess, pvIOC, and pvService. Examples of two services that already exist in the pvService module are itemFinder and gather. Each service uses pvData to store in-memory transient data, pvAccess to transfer data over the network, and pvIOC as the service engine. The performance benchmarking for pvAccess and for both the gather service and the item finder service is presented in this paper, along with a performance comparison between pvAccess and Channel Access. For an ultra-low-emittance synchrotron radiation light source like NSLS-II, the control system requirements, especially for beam control, are tight. To control and manipulate the beam effectively, a use case study has been performed to satisfy the requirements, and a theoretical evaluation has been carried out. The analysis shows that model based control is indispensable for beam commissioning and routine operation. However, there are many challenges, such as how to re-use a design model for on-line model based control, and how to combine the numerical methods for modeling of a realistic lattice with the analytical techniques for analysis of its properties. To satisfy the requirements and challenges, an adequate system architecture for the software framework for beam commissioning and operation is critical. The existing traditional approaches are self-consistent and monolithic. Some of them have adopted a concept of a middle layer to separate low level hardware processing from numerical algorithm computing, physics modelling, data manipulation and plotting, and error handling. However, none of the existing approaches can satisfy the requirements. A new design has been proposed by introducing service oriented architecture technology, and the client interface is under development. The design and implementation adopt a new EPICS implementation, namely epics-pvdata [9], which is under active development. The Java implementation of this project is close to stable, and bindings to other languages such as C++ and/or Python are in progress. In this paper, we focus on the performance benchmarking and comparison for pvAccess and Channel Access, and on the performance evaluation for the two services, gather and item finder, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
With the recent introduction of heterogeneity correction algorithms for brachytherapy, the AAPM community is still unclear on how to commission and implement these into clinical practice. The recently-published AAPM TG-186 report discusses important issues for clinical implementation of these algorithms. A charge of the AAPM-ESTRO-ABG Working Group on MBDCA in Brachytherapy (WGMBDCA) is the development of a set of well-defined test case plans, available as references in the software commissioning process to be performed by clinical end-users. In this practical medical physics course, specific examples of how to perform the commissioning process are presented, as well as descriptions of the clinical impact from recent literature reporting comparisons of TG-43 and heterogeneity-based dosimetry. Learning Objectives: Identify key clinical applications needing advanced dose calculation in brachytherapy. Review TG-186 and WGMBDCA guidelines, the commissioning process, and dosimetry benchmarks. Evaluate clinical cases using commercially available systems and compare to TG-43 dosimetry.
Continuous quality improvement for the clinical decision unit.
Mace, Sharon E
2004-01-01
Clinical decision units (CDUs) are a relatively new and growing area of medicine in which patients undergo rapid evaluation and treatment. Continuous quality improvement (CQI) is important for the establishment and functioning of CDUs. CQI in CDUs has many advantages: better CDU functioning, fulfillment of Joint Commission on Accreditation of Healthcare Organizations mandates, greater efficiency/productivity, increased job satisfaction, better performance improvement, data availability, and benchmarking. Key elements include a database with volume indicators, operational policies, clinical practice protocols (diagnosis specific/condition specific), monitors, benchmarks, and clinical pathways. Examples of these important parameters are given. The CQI process should be individualized for each CDU and hospital.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-03
... Commission as an open-end management investment company. The investment manager of the Fund is Pacific Investment Management Company LLC (``PIMCO'' or ``Adviser''). State Street Bank & Trust Co. is the custodian... return which exceeds that of its benchmark indexes, consistent with prudent investment management. The...
The Development of the Children's Services Statistical Neighbour Benchmarking Model. Final Report
ERIC Educational Resources Information Center
Benton, Tom; Chamberlain, Tamsin; Wilson, Rebekah; Teeman, David
2007-01-01
In April 2006, the Department for Education and Skills (DfES) commissioned the National Foundation for Educational Research (NFER) to conduct an independent external review in order to develop a single "statistical neighbour" model. This single model aimed to combine the key elements of the different models currently available and be…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-02
... Halibut Fishery; Guideline Harvest Levels for the Guided Sport Fishery for Pacific Halibut in...) for the guided sport fishery in International Pacific Halibut Commission (IPHC) Regulatory Areas 2C... sport fishery for halibut. The GHLs are benchmark harvest levels for participants in the guided sport...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
... Halibut Fishery; Guideline Harvest Levels for the Guided Sport Fishery for Pacific Halibut in...) for the guided sport fishery in International Pacific Halibut Commission (IPHC) Regulatory Areas 2C... the guided sport fishery for halibut. The GHLs are benchmark harvest levels for participants in the...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-16
...-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Designation of Longer Period for Commission Action on Proceedings to Determine Whether to Approve or Disapprove Proposed Rule Change To Establish... proposed rule change to establish various ``Benchmark Orders'' under NASDAQ Rule 4751(f). The proposed rule...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-06
... Lawson, Senior Special Counsel, Division of Trading and Markets (``Division''), Commission, dated May 20... return--of two index components (the target component and the benchmark component). The index is calculated by measuring the total return of the target component relative to the total return of the...
The Stock Market Game: A Simulation of Stock Market Trading. Grades 5-8.
ERIC Educational Resources Information Center
Draze, Dianne
This guide to a unit on a simulation game about the stock market contains an instructional text and two separate simulations. Through directed lessons and reproducible worksheets, the unit teaches students about business ownership, stock exchanges, benchmarks, commissions, why prices change, the logistics of buying and selling stocks, and how to…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-02
...-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Designation of a Longer Period for Commission Action on Proposed Rule Change To Establish ``Benchmark Orders'' Under NASDAQ Rule 4751(f) June 26, 2012. On May 1, 2012, The NASDAQ Stock Market LLC (``NASDAQ'' or ``Exchange'') filed with the...
Noninterceptive transverse emittance measurements using BPM for Chinese ADS R&D project
NASA Astrophysics Data System (ADS)
Wang, Zhi-Jun; Feng, Chi; He, Yuan; Dou, Weiping; Tao, Yue; Chen, Wei-long; Jia, Huan; Liu, Shu-hui; Wang, Wang-sheng; Zhang, Yong; Wu, Jian-qiang; Zhang, Sheng-hu; Zhang, X. L.
2016-04-01
Noninterceptive four-dimensional transverse emittance measurements are essential for commissioning high-power continuous-wave (CW) proton linacs as well as for their operation. Conventional emittance measuring devices such as slits and wire scanners are not well suited to these conditions because the beam would damage them. Therefore, a method using noninterceptive Beam Position Monitors (BPMs) was developed and demonstrated on Injector Scheme II of the Chinese Accelerator Driven Sub-critical System (China-ADS) demonstration facility at the Institute of Modern Physics (IMP) [1]. The measurement results are in good agreement with wire scanners and slits for low duty-factor pulsed (LDFP) beams. In this paper, the detailed experiment design, data analysis, and result benchmarking are presented.
Al-Rubaish, Abdullah M; Abdel Rahim, Sheikh Idris; Hassan, Ammar; Ali, Amein Al; Mokabel, Fatma; Hegazy, Mohammed; Wosornu, Ladé
2010-05-01
The National Commission for Academic Accreditation and Assessment is responsible for the academic accreditation of universities in the Kingdom of Saudi Arabia (KSA). Requirements for this include evaluation of teaching effectiveness, evidence-based conclusions, and external benchmarks. To develop a questionnaire for students' evaluation of the teaching skills of individual instructors and provide a tool for benchmarking. College of Nursing, University of Dammam [UoD], May-June 2009. The original questionnaire was the "Monash Questionnaire Series on Teaching (MonQueST) - Clinical Nursing". The UoD modification retained four areas and seven responses, but reduced the items from 26 to 20. Outcome measures were factor analysis and Cronbach's alpha coefficient. Seven nursing courses were studied, viz.: Fundamentals, Medical, Surgical, Psychiatric and Mental Health, Obstetrics and Gynecology, Pediatrics, and Family and Community Health. The total number of students was 74; missing data ranged from 5% to 27%. The explained variance ranged from 66.9% to 78.7%. The observed Cronbach's α coefficients ranged from 0.78 to 0.93, indicating exceptionally high reliability. The students in the study were found to be fair and frank in their evaluation.
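For reference, Cronbach's alpha, the reliability statistic reported above, is computed as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch with made-up ratings:

```python
import numpy as np

# Cronbach's alpha for a k-item questionnaire, from a respondents x items
# score matrix (rows with missing data dropped beforehand).
def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 6 students rating 4 items on a 1-7 scale (values illustrative only).
demo = [[6, 5, 6, 7], [4, 4, 5, 4], [7, 6, 7, 7],
        [3, 4, 3, 4], [5, 5, 6, 5], [6, 6, 6, 7]]
print(round(cronbach_alpha(demo), 2))
```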
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mihailidis, D; Mallah, J; Zhu, D
2016-06-15
Purpose: The dosimetric leaf gap (DLG) is an important parameter to be measured for dynamic beam delivery on modern linacs such as the Varian TrueBeam (TB). The clinical effects of DLG values on the IMRT and/or VMAT commissioning of two "matched" TB linacs are presented. Methods and Materials: The DLG values on two TB linacs were measured for all energy modalities (filtered and FFF modes) as part of dynamic delivery mode commissioning (IMRT and/or VMAT). After the standard beam data were modeled in the Eclipse treatment planning system (TPS) and validated, IMRT validation was performed based on the TG-119 benchmark [1], the IROC head-and-neck (H&N) phantom, and a sample of clinical cases, all measured on both linacs. Although a single set of data was entered in the TPS, a noticeable difference was observed in the DLG values between the linacs. The TG-119, IROC phantom, and selected patient plans were furnished with the DLG values of TB1 for both linacs, and delivery was performed on both TB linacs for comparison. Results: The DLG values of TB1 were first used for both linacs to perform the testing comparisons. The QA comparison of the TG-119 plans revealed a strong dependence of the results on the DLG values used for the linac for all energy modalities studied, especially when moving from 3%/3mm to 2%/2mm γ-analysis. Conclusion: The DLG values have a definite influence on dynamic dose delivery that increases with plan complexity. We recommend that the measured DLG values be assigned to each of the "matched" linacs, even if a single set of beam data describes multiple linacs. The user should perform a detailed test of the dynamic delivery of each linac based on end-to-end benchmark suites such as TG-119 and the IROC phantoms. [1] Ezzell G, et al., "IMRT commissioning: Multiple institution planning and dosimetry comparisons, a report from AAPM Task Group 119." Med. Phys. 36:5359–5373 (2009). Partly supported by CAMC Cancer Center and Alliance Oncology.
Monitoring the delivery of cancer care: Commission on Cancer and National Cancer Data Base.
Williams, Richelle T; Stewart, Andrew K; Winchester, David P
2012-07-01
The primary objective of the Commission on Cancer (CoC) is to ensure the delivery of comprehensive, high-quality care that improves survival while maintaining quality of life for patients with cancer. This article examines the initiatives of the CoC toward achieving this goal, utilizing data from the National Cancer Data Base (NCDB) to monitor treatment patterns and outcomes, to develop quality measures, and to benchmark hospital performance. The article also highlights how these initiatives align with the Institute of Medicine's recommendations for improving the quality of cancer care and briefly explores future projects of the CoC and NCDB. Copyright © 2012 Elsevier Inc. All rights reserved.
Ferretti, A; Martignano, A; Simonato, F; Paiusco, M
2014-02-01
The aim of the present work was the validation of the VMC++ Monte Carlo (MC) engine implemented in Oncentra Masterplan (OMTPS) and used to calculate the dose distribution produced by the electron beams (energy 5-12 MeV) generated by the linear accelerator (linac) Primus (Siemens), shaped by a digital variable applicator (DEVA). The BEAMnrc/DOSXYZnrc (EGSnrc package) MC model of the linac head was used as a benchmark. Commissioning results for both MC codes were evaluated by means of 1D gamma analysis (2%, 2 mm), calculated with a home-made Matlab (The MathWorks) program, comparing the calculations with the measured profiles. The results of the commissioning of OMTPS were good [average gamma index (γ) > 97%]; some mismatches were found with large beams (size ≥ 15 cm). The optimization of the BEAMnrc model required increasing the beam exit window to match the calculated and measured profiles (final average γ > 98%). Then OMTPS dose distribution maps were compared with DOSXYZnrc with a 2D gamma analysis (3%, 3 mm), in 3 virtual water phantoms: (a) with an air step, (b) with an air insert, and (c) with a bone insert. The OMTPS and EGSnrc dose distributions with the air-water step phantom were in very high agreement (γ ∼ 99%), while for heterogeneous phantoms there were differences of about 9% in the air insert and of about 10-15% in the bone region. This is due to the Masterplan implementation of VMC++, which reports the dose as "dose to water" instead of "dose to medium". Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
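The 1D gamma analysis used above can be sketched as follows. This is a discrete, global-normalization approximation of the gamma index with the study's 2%, 2 mm criterion; the profiles are synthetic and the function is an assumption, not the authors' Matlab program.

```python
import numpy as np

# 1-D gamma analysis (global dose normalization): for each reference point,
# gamma is the minimum over evaluated points of
# sqrt((dose diff / dose tol)^2 + (distance / DTA)^2); a point passes if <= 1.
def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.02, dta_mm=2.0):
    d_ref, d_eval = np.asarray(d_ref, float), np.asarray(d_eval, float)
    d_max = d_ref.max()                       # global normalization
    gammas = np.empty_like(d_ref)
    for i, (x, d) in enumerate(zip(x_ref, d_ref)):
        dd = (d_eval - d) / (dose_tol * d_max)
        dx = (x_eval - x) / dta_mm
        gammas[i] = np.sqrt(dd**2 + dx**2).min()
    return gammas

x = np.linspace(0, 100, 201)                  # position, mm
measured = np.exp(-((x - 50) / 20) ** 2)      # synthetic profile
calculated = np.exp(-((x - 50.5) / 20) ** 2)  # slightly shifted copy
g = gamma_1d(x, measured, x, calculated)
print(f"pass rate: {100 * np.mean(g <= 1):.1f}%")
```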
ERIC Educational Resources Information Center
Singh, Madhu, Ed.; Duvekot, Ruud, Ed.
2013-01-01
This publication is the outcome of the international conference organized by UNESCO Institute for Lifelong Learning (UIL), in collaboration with the Centre for Validation of Prior Learning at Inholland University of Applied Sciences, the Netherlands, and in partnership with the French National Commission for UNESCO that was held in Hamburg in…
A Question of Equity: Women and the Glass Ceiling in the Federal Government
1992-10-01
…the then director of OPM, Constance Newman, who said, "* * * the percentages of…" …programs performed by the U.S. Equal Employment Opportunity Commission (EEOC)… Prior research has indicated that these barriers exist and that they can be complex… adequate benchmark by which to measure representation. The EEOC…
In Search of a Time Efficient Approach to Crack and Delamination Growth Predictions in Composites
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Carvalho, Nelson
2016-01-01
Analysis benchmarking was used to assess the accuracy and time efficiency of algorithms suitable for automated delamination growth analysis. First, the Floating Node Method (FNM) was introduced and its combination with a simple exponential growth law (Paris Law) and the Virtual Crack Closure Technique (VCCT) was discussed. Implementation of the method into a user element (UEL) in Abaqus/Standard® was also presented. For the assessment of growth prediction capabilities, an existing benchmark case based on the Double Cantilever Beam (DCB) specimen was briefly summarized. Additionally, the development of new benchmark cases based on the Mixed-Mode Bending (MMB) specimen to assess the growth prediction capabilities under mixed-mode I/II conditions was discussed in detail. A comparison was presented in which the benchmark cases were used to assess the existing low-cycle fatigue analysis tool in Abaqus/Standard® against the FNM-VCCT fatigue growth analysis implementation. The low-cycle fatigue analysis tool in Abaqus/Standard® was able to yield results that were in good agreement with the DCB benchmark example. Results for the MMB benchmark cases, however, only captured the trend correctly. The user element (FNM-VCCT) always yielded results that were in excellent agreement with all benchmark cases, at a fraction of the analysis time. The ability to assess the implementation of two methods in one finite element code illustrated the value of establishing benchmark solutions.
NASA Astrophysics Data System (ADS)
Murata, Isao; Ohta, Masayuki; Miyamaru, Hiroyuki; Kondo, Keitaro; Yoshida, Shigeo; Iida, Toshiyuki; Ochiai, Kentaro; Konno, Chikara
2011-10-01
Nuclear data are indispensable for the development of fusion reactor candidate materials. However, benchmarking of the nuclear data in the MeV energy region is not yet adequate. In the present study, benchmark performance in the MeV energy region was investigated theoretically for experiments using a 14 MeV neutron source. We carried out a systematic analysis for light to heavy materials. As a result, the benchmark performance for the neutron spectrum was confirmed to be acceptable, while for gamma-rays it was not sufficiently accurate; consequently, a spectrum shifter has to be applied. Beryllium had the best performance as a shifter. Moreover, a preliminary examination was made of whether it is really acceptable that only the spectrum before the last collision is considered in the benchmark performance analysis. It was pointed out that not only the last collision but also earlier collisions should be considered equally in the benchmark performance analysis.
The NEW detector: construction, commissioning and first results
NASA Astrophysics Data System (ADS)
Nebot-Guinot, M.;
2017-09-01
NEXT (Neutrino Experiment with a Xenon TPC) is a neutrinoless double-beta (ββ0ν) decay experiment at the Canfranc Underground Laboratory (LSC). It seeks to detect the ββ0ν decay of Xe-136 using a high pressure xenon gas TPC with electroluminescent (EL) amplification. The NEXT-White (NEW) detector, with an active xenon mass of about 10 kg at 15 bar, is the first NEXT prototype installed at the LSC. It implements the NEXT detector concept tested in smaller prototypes using the same radiopure sensors and materials that will be used in the future NEXT-100, serving as a benchmark for technical solutions as well as for the signal selection and background rejection algorithms. NEW is currently being commissioned at the LSC. In these proceedings we describe the technical solutions adopted for the construction of NEW, the lessons learned from the commissioning phase, and the first results on energy calibration and energy resolution obtained with low-energy radioactive source data.
In-core flux sensor evaluations at the ATR critical facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Troy Unruh; Benjamin Chase; Joy Rempe
2014-09-01
Flux detector evaluations were completed as part of a joint Idaho State University (ISU) / Idaho National Laboratory (INL) / French Atomic Energy Commission (CEA) ATR National Scientific User Facility (ATR NSUF) project to compare the accuracy, response time, and long duration performance of several flux detectors. Special fixturing developed by INL allows real-time flux detectors to be inserted into various ATRC core positions to perform lobe power measurements, axial flux profile measurements, and detector cross-calibrations. Detectors initially evaluated in this program include miniature fission chambers developed by the French Atomic Energy Commission (CEA); specialized self-powered neutron detectors (SPNDs) developed by the Argentinean National Energy Commission (CNEA); and specially developed commercial SPNDs from Argonne National Laboratory. As shown in this article, data obtained from this program provide important insights related to flux detector accuracy and resolution for subsequent ATR and CEA experiments, and flux data required for benchmarking models in the ATR V&V Upgrade Initiative.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Risner, J.M.; Wiarda, D.; Miller, T.M.
2011-07-01
The U.S. Nuclear Regulatory Commission's Regulatory Guide 1.190 states that calculational methods used to estimate reactor pressure vessel (RPV) fluence should use the latest version of the evaluated nuclear data file (ENDF). The VITAMIN-B6 fine-group library and BUGLE-96 broad-group library, which are widely used for RPV fluence calculations, were generated using ENDF/B-VI.3 data, which was the most current data when Regulatory Guide 1.190 was issued. We have developed new fine-group (VITAMIN-B7) and broad-group (BUGLE-B7) libraries based on ENDF/B-VII.0. These new libraries, which were processed using the AMPX code system, maintain the same group structures as the VITAMIN-B6 and BUGLE-96 libraries. Verification and validation of the new libraries were accomplished using diagnostic checks in AMPX, 'unit tests' for each element in VITAMIN-B7, and a diverse set of benchmark experiments including critical evaluations for fast and thermal systems, a set of experimental benchmarks that are used for SCALE regression tests, and three RPV fluence benchmarks. The benchmark evaluation results demonstrate that VITAMIN-B7 and BUGLE-B7 are appropriate for use in RPV fluence calculations and meet the calculational uncertainty criterion in Regulatory Guide 1.190.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Risner, Joel M; Wiarda, Dorothea; Miller, Thomas Martin
2011-01-01
The U.S. Nuclear Regulatory Commission's Regulatory Guide 1.190 states that calculational methods used to estimate reactor pressure vessel (RPV) fluence should use the latest version of the Evaluated Nuclear Data File (ENDF). The VITAMIN-B6 fine-group library and BUGLE-96 broad-group library, which are widely used for RPV fluence calculations, were generated using ENDF/B-VI data, which was the most current data when Regulatory Guide 1.190 was issued. We have developed new fine-group (VITAMIN-B7) and broad-group (BUGLE-B7) libraries based on ENDF/B-VII. These new libraries, which were processed using the AMPX code system, maintain the same group structures as the VITAMIN-B6 and BUGLE-96 libraries. Verification and validation of the new libraries was accomplished using diagnostic checks in AMPX, unit tests for each element in VITAMIN-B7, and a diverse set of benchmark experiments including critical evaluations for fast and thermal systems, a set of experimental benchmarks that are used for SCALE regression tests, and three RPV fluence benchmarks. The benchmark evaluation results demonstrate that VITAMIN-B7 and BUGLE-B7 are appropriate for use in LWR shielding applications, and meet the calculational uncertainty criterion in Regulatory Guide 1.190.
Analysis of a benchmark suite to evaluate mixed numeric and symbolic processing
NASA Technical Reports Server (NTRS)
Ragharan, Bharathi; Galant, David
1992-01-01
The suite of programs that formed the benchmark for a proposed advanced computer is described and analyzed. The features of the processor and its operating system that are tested by the benchmark are discussed. The computer codes and the supporting data for the analysis are given as appendices.
NASA Astrophysics Data System (ADS)
Shokrollahpour, Elsa; Hosseinzadeh Lotfi, Farhad; Zandieh, Mostafa
2016-06-01
Efficiency and quality of services are crucial to today's banking industry. Competition in this sector has become increasingly intense as a result of rapid improvements in technology, so performance analysis of banking sectors attracts more attention these days. Although data envelopment analysis (DEA) is a pioneering approach in the literature for measuring efficiency and finding benchmarks, it is unable to project possible future benchmarks: the benchmarks it provides may still be less efficient than more advanced future ones. To cover this weakness, an artificial neural network is integrated with DEA in this paper to calculate the relative efficiency and more reliable benchmarks of the branches of one Iranian commercial bank. Each branch can then adopt a strategy to improve its efficiency and eliminate the causes of inefficiency based on a 5-year forecast.
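The DEA building block of this DEA-plus-neural-network method can be sketched as a standard input-oriented CCR linear program; the paper's exact formulation and bank data are not given, so the data below are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA efficiency for one decision-making unit (DMU):
# minimize theta subject to a composite peer using at most theta * inputs of
# the evaluated unit while producing at least its outputs.
# X: inputs (m x n), Y: outputs (s x n), one column per DMU.
def dea_efficiency(X, Y, o):
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                   # minimize theta
    A_in = np.hstack([-X[:, [o]], X])             # sum lam*x - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])     # -sum lam*y <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

X = np.array([[20., 30., 40., 20.], [150., 200., 300., 180.]])  # staff, costs
Y = np.array([[100., 120., 160., 90.]])                          # services
print([round(dea_efficiency(X, Y, j), 3) for j in range(4)])
```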
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandor, Debra; Chung, Donald; Keyser, David
This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The development of benchmark examples for quasi-static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for Abaqus/Standard. The example is based on a finite element model of a Double-Cantilever Beam specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.
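To make the cyclic-growth benchmarking idea concrete: with a Paris-type law, the cycles for each growth increment follow by integrating da/dN over the energy-release-rate history. The law constants and the G(a) curve below are illustrative assumptions, not the paper's benchmark values.

```python
import numpy as np

# Cycle counting with a Paris-type delamination growth law
# da/dN = C * (Gmax/GIc)**m, accumulated over discrete growth increments.
def cycles_for_growth(a, G_max, C=1e-3, m=6.0, G_Ic=0.17):
    """a: crack lengths [mm]; G_max: peak energy release rate [kJ/m^2] at each a."""
    a = np.asarray(a, float)
    G = np.asarray(G_max, float)
    da = np.diff(a)
    rate = C * (G[:-1] / G_Ic) ** m         # mm/cycle at the start of each step
    return np.concatenate([[0.0], np.cumsum(da / rate)])

a = np.linspace(30.0, 60.0, 31)             # delamination length, mm
G = 0.12 * (30.0 / a) ** 4                  # G drops as a DCB delamination grows
N = cycles_for_growth(a, G)
print(f"cycles to grow 30 mm: {N[-1]:.3e}")
```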
Development of Benchmark Examples for Static Delamination Propagation and Fatigue Growth Predictions
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2011-01-01
The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.
Development and verification of NRC's single-rod fuel performance codes FRAPCON-3 and FRAPTRAN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beyer, C.E.; Cunningham, M.E.; Lanning, D.D.
1998-03-01
The FRAPCON and FRAP-T code series, developed in the 1970s and early 1980s, are used by the US Nuclear Regulatory Commission (NRC) to predict fuel performance during steady-state and transient power conditions, respectively. Both code series are now being updated by Pacific Northwest National Laboratory to improve their predictive capabilities at high burnup levels. The newest versions of the codes are called FRAPCON-3 and FRAPTRAN. The updates to fuel property and behavior models are focusing on providing best-estimate predictions under steady-state and fast transient power conditions up to extended fuel burnups (> 55 GWd/MTU). Both codes will be assessed against a database independent of the database used for code benchmarking, and an estimate of code predictive uncertainties will be made based on comparisons to the benchmark and independent databases.
CLEAR: Cross-Layer Exploration for Architecting Resilience
2017-03-01
…benchmark analysis, also provides cost-effective solutions (~1% additional energy cost for the same 50× improvement). This paper addresses the… core (OoO-core) [Wang 04], across 18 benchmarks. Such extensive exploration enables us to conclusively answer the above cross-layer resilience… analysis of the effects of soft errors on application benchmarks, provides a highly effective soft error resilience approach. 3. The above
NASA Technical Reports Server (NTRS)
Orifici, Adrian C.; Krueger, Ronald
2010-01-01
With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc™ and MD Nastran™. Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus®. The results demonstrated that the VCCT implementation in Marc™ and MD Nastran™ was capable of accurately replicating the benchmark delamination growth results and that the use of the numerical benchmarks offers advantages over benchmarking using experimental and analytical results.
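The underlying benchmarking approach builds a curve of critical load versus delamination length and checks the automatic propagation analysis against it. As a stand-in for the FE-derived benchmark used in the study, a simple-beam-theory DCB version can be sketched; the material and geometry values are assumed, typical of a graphite/epoxy specimen.

```python
import numpy as np

# Simple beam theory for a DCB: GI = 12 P^2 a^2 / (E b^2 h^3), so the critical
# load at delamination length a is Pc = (b/a) * sqrt(GIc * E * h^3 / 12).
E = 139e9              # axial modulus, Pa (assumed)
GIc = 170.0            # mode I fracture toughness, J/m^2 (assumed)
b, h = 25e-3, 1.5e-3   # width and single-arm thickness, m (assumed)

a = np.linspace(0.03, 0.06, 7)                  # delamination lengths, m
Pc = (b / a) * np.sqrt(GIc * E * h**3 / 12.0)   # critical load, N
for ai, Pi in zip(a, Pc):
    print(f"a = {ai*1e3:4.1f} mm  ->  Pc = {Pi:6.1f} N")
```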
Non-invasive online wavelength measurements at FLASH2 and present benchmark
Braune, Markus; Buck, Jens; Kuhlmann, Marion; Grunewald, Sören; Düsterer, Stefan; Viefhaus, Jens; Tiedtke, Kai
2018-01-01
At FLASH2, the free-electron laser radiation wavelength is routinely measured by an online spectrometer based on photoionization of gas targets (OPIS). Photoelectrons are detected with time-of-flight spectrometers, and the wavelength is determined by means of the well-known binding energies of the target species. The wavelength measurement is non-invasive and transparent with respect to running user experiments due to the low gas pressure applied. Sophisticated controls for setting the OPIS operation parameters have been created and integrated into the distributed object-oriented control system at FLASH2. Raw and processed data can be stored on request in the FLASH data acquisition system for later correlation with data from user experiments or re-analysis. In this paper, the commissioning of the instrument at FLASH2 and the challenges of space charge effects on wavelength determination are reported. Furthermore, strategies for fast data reduction and online data processing are presented. PMID:29271744
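The wavelength determination itself reduces to simple kinematics: the photon energy equals the photoelectron kinetic energy (from time of flight) plus the known binding energy, neglecting the space-charge shifts the paper discusses. A sketch with an invented drift length and flight time:

```python
import scipy.constants as const

# Photon wavelength from photoelectron time of flight: KE = 0.5*m_e*(L/t)^2,
# E_photon = KE + binding energy, lambda = h*c / E_photon.
def photon_wavelength_nm(flight_time_s, drift_length_m, binding_energy_eV):
    v = drift_length_m / flight_time_s                 # electron speed, m/s
    ke_eV = 0.5 * const.m_e * v**2 / const.e           # kinetic energy, eV
    e_photon_eV = ke_eV + binding_energy_eV
    return 1e9 * const.h * const.c / (e_photon_eV * const.e)

# Example: Ne 2p level (binding energy ~21.6 eV), 0.3 m drift, 100 ns flight.
print(f"{photon_wavelength_nm(100e-9, 0.3, 21.6):.1f} nm")
```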
Super Energy Efficient Design (S.E.E.D.) Home Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
German, A.; Dakin, B.; Backman, C.
This report describes the results of evaluation by the Alliance for Residential Building Innovation (ARBI) Building America team of the "Super Energy Efficient Design" (S.E.E.D) home, a 1,935 sq. ft., single-story spec home located in Tucson, AZ. This prototype design was developed with the goal of providing an exceptionally energy efficient yet affordable home and includes numerous aggressive energy features intended to significantly reduce heating and cooling loads, such as structural insulated panel (SIP) walls and roof, high performance windows, an ERV, an air-to-water heat pump with mixed-mode radiant and forced air delivery, solar water heating, and rooftop PV. Source energy savings are estimated at 45% over the Building America B10 Benchmark. System commissioning, short term testing, long term monitoring, and detailed analysis of results were conducted to identify the performance attributes and cost effectiveness of the whole house measure package.
NASA Astrophysics Data System (ADS)
Pulkkinen, A. A.; Bernabeu, E.; Weigel, R. S.; Kelbert, A.; Rigler, E. J.; Bedrosian, P.; Love, J. J.
2017-12-01
Development of realistic storm scenarios that can be played through the exposed systems is one of the key requirements for carrying out quantitative space weather hazards assessments. In the geomagnetically induced currents (GIC) and power grids context, these scenarios have to quantify the spatiotemporal evolution of the geoelectric field that drives the potentially hazardous currents in the system. In response to the Federal Energy Regulatory Commission (FERC) order 779, a team of scientists and engineers working under the auspices of the North American Electric Reliability Corporation (NERC) has developed extreme geomagnetic storm and geoelectric field benchmark(s) that use scaling factors accounting for the geomagnetic latitude and ground structure of the locations of interest. These benchmarks, together with the information generated in the National Space Weather Action Plan, are the foundation for the hazards assessments that the industry will be carrying out in response to the FERC order and under the auspices of the National Science and Technology Council. While the scaling factors developed in the past work were based on the best available information, there is now significant new information available for parts of the U.S. pertaining to the ground response to external geomagnetic field excitation. This new information includes the results of magnetotelluric surveys conducted over the past few years across the contiguous US and results from previous surveys that have been made available in a combined online database. In this paper, we distill this new information in the framework of the NERC benchmark and in terms of updated ground response scaling factors, thereby allowing straightforward utilization in the hazard assessments. We also outline the path forward for improving the overall extreme event benchmark scenario(s), including generalization of the storm waveforms and geoelectric field spatial patterns.
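In code, the NERC-style benchmark amplitude reduces to a reference field scaled by latitude and ground factors. The alpha form below is the commonly cited exponential fit (clamped to [0.1, 1]); the beta value is a placeholder, since betas are tabulated per ground model rather than computed.

```python
import math

# Sketch of a NERC-style benchmark geoelectric field: an 8 V/km reference peak
# field scaled by a geomagnetic-latitude factor alpha and a ground factor beta.
E_REF_V_PER_KM = 8.0

def alpha(geomag_lat_deg):
    # Commonly cited exponential latitude scaling, clamped to [0.1, 1.0].
    return min(1.0, max(0.1, 0.001 * math.exp(0.115 * geomag_lat_deg)))

def benchmark_field(geomag_lat_deg, beta_ground):
    return E_REF_V_PER_KM * alpha(geomag_lat_deg) * beta_ground

# Example: a mid-latitude site (50 deg) on a moderately resistive ground model
# (beta = 0.6 is a made-up placeholder).
print(f"{benchmark_field(50.0, beta_ground=0.6):.2f} V/km")
```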
Sonntag, Philipp
2014-01-01
Reviewers of the Max-Planck-Institut zur Erforschung der Lebensbedingungen der wissenschaftlich-technischen Welt (MPIL) focused on an abundance of vague reports from evaluative commissions, on benchmarking, and on scientific fashions. What the staff had actually researched thus remained rather neglected. One example: the progression and end of the project AKR (Work-Consumption-Assessment) displays all kinds of related emotions at MPIL, as well as the sensitive guidance of Carl Friedrich von Weizsäcker.
Simple Benchmark Specifications for Space Radiation Protection
NASA Technical Reports Server (NTRS)
Singleterry, Robert C. Jr.; Aghara, Sukesh K.
2013-01-01
This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.
International land Model Benchmarking (ILAMB) Package v002.00
Collier, Nathaniel [Oak Ridge National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory; Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory
2016-05-09
As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.
International land Model Benchmarking (ILAMB) Package v001.00
Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory
2016-05-02
As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.
Mitchell, L
1996-01-01
The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.
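A quick arithmetic check of the turnaround-time improvement quoted above (Python used purely as a calculator here):

```python
# Verify the quoted 18% improvement in turnaround time.
initial_min, improved_min = 19.9, 16.3
improvement = (initial_min - improved_min) / initial_min
print(f"{improvement:.0%}")  # -> 18%
```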
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukumar, Sreenivas R.; Hong, Seokyong; Lee, Sangkeun
2016-06-01
GraphBench is a benchmark suite for graph pattern mining and graph analysis systems. The benchmark suite is a significant aid in conducting apples-to-apples comparisons of graph analysis software (databases, in-memory tools, triple stores, etc.).
Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data
NASA Astrophysics Data System (ADS)
Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki
2017-09-01
There are many benchmark experiments carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments were vaguely assumed to validate the nuclear data below 14 MeV. However, no precise studies exist to date. The authors' group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to generally estimate the performance of benchmark experiments. The thought experiments with a point detector show that the sensitivity to a discrepancy appearing in the benchmark analysis is due "equally" not only to the contribution conveyed directly to the detector, but also to the indirect contribution of the neutrons (A) that produce the neutrons conveying that contribution, the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. Based on this concept, a sensitivity analysis performed in advance would make clear how well, and for which energies, nuclear data could be benchmarked with a given benchmark experiment.
Building thermography as a tool in energy audits and building commissioning procedure
NASA Astrophysics Data System (ADS)
Kauppinen, Timo
2007-04-01
A Building Commissioning project (ToVa) was launched in Finland in 2003. A comprehensive commissioning procedure, covering both the building process and the operation stage, was developed in the project. This procedure confirms the precise documentation of the client's goals, the definition of design goals, and the performance of the building. It is rather common that within 1-2 years after a building is taken into use the occupants complain about defects or performance malfunctions. Thermography is one important manual tool for verifying the thermal performance of the building envelope. In this paper the results from one pilot building (a school) are presented. In surveying the condition and energy efficiency of buildings, various auxiliary means are needed. We can compare the consumption data of the target building with other buildings of the same type by benchmarking. An energy audit helps to localize and determine the energy saving potential. The most general, and also most effective, auxiliary means of monitoring the thermal performance of building envelopes is an infrared camera. In this presentation some examples of the use of thermography in energy audits are presented.
EPA and EFSA approaches for Benchmark Dose modeling
Benchmark dose (BMD) modeling has become the preferred approach in the analysis of toxicological dose-response data for the purpose of deriving human health toxicity values. The software packages most often used are Benchmark Dose Software (BMDS, developed by EPA) and PROAST (de...
Evaluation and cross-validation of Environmental Models
NASA Astrophysics Data System (ADS)
Lemaire, Joseph
Before scientific models (statistical or empirical models based on experimental measurements; physical or mathematical models) can be proposed and selected as ISO Environmental Standards, a Commission of professional experts appointed by an established International Union or Association (e.g. IAGA for Geomagnetism and Aeronomy) should have been able to study, document, evaluate and validate the best alternative models available at a given epoch. Examples will be given indicating that different values of the Earth radius have been employed in different data processing laboratories, institutes or agencies to process, analyse or retrieve series of experimental observations. Furthermore, invariant magnetic coordinates like B and L, commonly used in the study of Earth's radiation belt fluxes and for their mapping, differ from one space mission data center to another, from team to team, and from country to country. Worse, users of empirical models generally fail to use the original magnetic model which had been employed to compile B and L, and thus to build these environmental models. These are just some flagrant examples of the inconsistencies and misuses identified so far; there are probably more of them to be uncovered by careful, independent examination and benchmarking. Consider the meter prototype, the standard unit of length adopted on 20 May 1875 during the Diplomatic Conference of the Meter and deposited at the BIPM (Bureau International des Poids et Mesures): by the same token, to coordinate and safeguard progress in the field of Space Weather, similar initiatives need to be undertaken to prevent the wild, uncontrolled dissemination of pseudo Environmental Models and Standards. Indeed, unless validation tests have been performed, there is no guarantee, a priori, that all models on the market place have been built consistently with the same units system, or that they are based on identical definitions for the coordinate systems, etc. Therefore, preliminary analyses should be carried out under the control and authority of an established international professional Organization or Association before any final political decision is made by ISO to select a specific Environmental Model, like for example IGRF or DGRF. Of course, Commissions responsible for checking the consistency of definitions, methods and algorithms for data processing might consider delegating specific tasks (e.g. benchmarking the technical tools, the calibration procedures, the methods of data analysis, and the software algorithms employed in building the different types of models, as well as their usage) to private, intergovernmental or international organizations/agencies (e.g. NASA, ESA, AGU, EGU, COSPAR); eventually, the latter should report their conclusions to the Commission members appointed by IAGA or any established authority like IUGG.
Statistical Analysis of NAS Parallel Benchmarks and LINPACK Results
NASA Technical Reports Server (NTRS)
Meuer, Hans-Werner; Simon, Horst D.; Strohmeier, Erich; Lasinski, T. A. (Technical Monitor)
1994-01-01
In the last three years extensive performance data have been reported for parallel machines, based both on the NAS Parallel Benchmarks and on LINPACK. In this study we have used the reported benchmark results and performed a number of statistical experiments using factor, cluster, and regression analyses. In addition to the performance results of LINPACK and the eight NAS Parallel Benchmarks, we have also included the peak performance of each machine and the LINPACK n and n_1/2 values. Some of the results and observations can be summarized as follows: 1) All benchmarks are strongly correlated with peak performance. 2) LINPACK and EP each have a unique signature. 3) The remaining NPB can be grouped into three groups: (CG and IS), (LU and SP), and (MG, FT, and BT). Hence three (or four with EP) benchmarks are sufficient to characterize the overall NPB performance. Our poster presentation will follow a standard poster format and will present the data of our statistical analysis in detail.
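A minimal sketch of this style of analysis, assuming a per-machine results table (the numbers below are invented): Pearson correlations between benchmarks are computed across machines, and the benchmarks are then clustered by the similarity of their performance signatures, mirroring the three-group finding above.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

benchmarks = ["LINPACK", "EP", "CG", "IS", "LU", "SP", "MG", "FT", "BT"]
rng = np.random.default_rng(0)
# Rows are machines, columns are benchmark results (invented data).
results = rng.lognormal(mean=3.0, sigma=0.5, size=(12, len(benchmarks)))

# Pearson correlation between benchmarks across machines.
corr = np.corrcoef(results.T)

# Cluster benchmarks using 1 - correlation as the distance.
dist = squareform(1.0 - corr, checks=False)
groups = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")
for name, g in zip(benchmarks, groups):
    print(f"{name}: cluster {g}")
```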
Issues in benchmarking human reliability analysis methods : a literature review.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lois, Erasmia; Forester, John Alan; Tran, Tuan Q.
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
Building America Industrialized Housing Partnership (BAIHP)
DOE Office of Scientific and Technical Information (OSTI.GOV)
McIlvaine, Janet; Chandra, Subrato; Barkaszi, Stephen
This final report summarizes the work conducted by the Building America Industrialized Housing Partnership (www.baihp.org) for the period 9/1/99-6/30/06. BAIHP is led by the Florida Solar Energy Center of the University of Central Florida and focuses on factory-built housing. In partnership with over 50 factory and site builders, work was performed in two main areas--research and technical assistance. In the research area--through site visits to over 75 problem homes, we discovered the prime causes of moisture problems in some manufactured homes, and our industry partners adopted our solutions to nearly eliminate this vexing problem. Through testing conducted in over two dozen housing factories of six factory builders, we documented the value of leak-free duct design and construction, which was embraced by our industry partners and implemented in all the thousands of homes they built. Through laboratory test facilities and measurements in real homes we documented the merits of 'cool roof' technologies and developed an innovative night sky radiative cooling concept currently being tested. We patented an energy efficient condenser fan design, documented energy efficient home retrofit strategies after hurricane damage, developed improved specifications for federal procurement for future temporary housing, compared the Building America benchmark to the HERS Index and IECC 2006, developed a toolkit for improving the accuracy and speed of benchmark calculations, monitored the field performance of over a dozen prototype homes, and initiated research on the effectiveness of occupancy feedback in reducing household energy use. In the technical assistance area we provided systems engineering analysis and conducted training, testing, and commissioning that have resulted in over 128,000 factory-built and over 5,000 site-built homes which are saving their owners over $17,000,000 annually in energy bills. These include homes built by Palm Harbor Homes, Fleetwood, Southern Energy Homes, Cavalier, and the manufacturers participating in the Northwest Energy Efficient Manufactured Home program. We worked with over two dozen Habitat for Humanity affiliates and helped them build over 700 Energy Star or near Energy Star homes. We have provided technical assistance to several show homes constructed for the International Builders Show in Orlando, FL, and assisted with other prototype homes in cold climates that save 40% over the benchmark reference. In the Gainesville, FL area we have several builders that are consistently producing 15 to 30 homes per month in several subdivisions that meet the 30% benchmark savings goal. We have contributed to the 2006 DOE Joule goals by providing two community case studies meeting the 30% benchmark goal in marine climates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balkey, K.; Witt, F.J.; Bishop, B.A.
1995-06-01
Significant attention has been focused on the issue of reactor vessel pressurized thermal shock (PTS) for many years. Pressurized thermal shock transient events are characterized by a rapid cooldown at potentially high pressure levels that could lead to a reactor vessel integrity concern for some pressurized water reactors. As a result of regulatory and industry efforts in the early 1980s, a probabilistic risk assessment methodology has been established to address this concern. Probabilistic fracture mechanics analyses are performed as part of this methodology to determine the conditional probability of significant flaw extension for given pressurized thermal shock events. While recent industry efforts are underway to benchmark probabilistic fracture mechanics computer codes that are currently used by the nuclear industry, Part I of this report describes the comparison of two independent computer codes used at the time of the development of the original U.S. Nuclear Regulatory Commission (NRC) pressurized thermal shock rule. The work that was originally performed in 1982 and 1983 to compare the U.S. NRC - VISA and Westinghouse (W) - PFM computer codes has been documented and is provided in Part I of this report. Part II of this report describes the results of more recent industry efforts to benchmark PFM computer codes used by the nuclear industry. This study was conducted as part of the USNRC-EPRI Coordinated Research Program for reviewing the technical basis for pressurized thermal shock (PTS) analyses of the reactor pressure vessel. The work focused on the probabilistic fracture mechanics (PFM) analysis codes and methods used to perform the PTS calculations. An in-depth review of the methodologies was performed to verify the accuracy and adequacy of the various different codes. The review was structured around a series of benchmark sample problems to provide a specific context for discussion and examination of the fracture mechanics methodology.
A quality assessment of the MARS crop yield forecasting system for the European Union
NASA Astrophysics Data System (ADS)
van der Velde, Marijn; Bareuth, Bettina
2015-04-01
Timely information on crop production forecasts becomes increasingly important as commodity markets grow more interconnected. Impacts across large crop production areas due to, e.g., extreme weather and pest outbreaks can create ripple effects that may affect food prices and availability elsewhere. The MARS Unit (Monitoring Agricultural ResourceS), DG Joint Research Centre, European Commission, has been providing forecasts of European crop production levels since 1993. The operational crop production forecasting is carried out with the MARS Crop Yield Forecasting System (M-CYFS). The M-CYFS is used to monitor crop growth development, evaluate short-term effects of anomalous meteorological events, and provide monthly forecasts of crop yield at national and European Union level. The crop production forecasts are published in the so-called MARS bulletins. Forecasting crop yield over large areas in the operational context requires quality benchmarks. Here we present an analysis of the accuracy and skill of past crop yield forecasts of the main crops (e.g. soft wheat, grain maize), throughout the growing season, and specifically for the final forecast before harvest. Two simple benchmarks to assess the skill of the forecasts were defined: comparing the forecasts to 1) a forecast equal to the average yield and 2) a forecast using a linear trend established through the crop yield time series. These reveal a variability in performance as a function of crop and Member State. In terms of production, yield forecasts covering 67% of the EU-28 soft wheat production and 80% of the EU-28 maize production outperformed both benchmarks during the 1993-2013 period. In a changing and increasingly variable climate, crop yield forecasts can become increasingly valuable, provided they are used wisely. We end our presentation by discussing research activities that could contribute to this goal.
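The two skill benchmarks defined above are straightforward to reproduce. The sketch below, using invented yield data, compares a forecast's root-mean-square error against the mean-yield and linear-trend baselines; a forecast is skilful in this sense when it beats both.

```python
import numpy as np

years = np.arange(1993, 2014)
rng = np.random.default_rng(1)
observed = 5.0 + 0.03 * (years - 1993) + rng.normal(0, 0.2, years.size)  # t/ha
forecast = observed + rng.normal(0, 0.15, years.size)  # a hypothetical forecast

# Benchmark 1: the average yield; benchmark 2: a linear trend through the series.
mean_bench = np.full_like(observed, observed.mean())
slope, intercept = np.polyfit(years, observed, 1)
trend_bench = slope * years + intercept

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

for label, series in [("forecast", forecast), ("mean yield", mean_bench),
                      ("linear trend", trend_bench)]:
    print(f"{label:12s} RMSE = {rmse(series, observed):.3f}")
```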
Nuclear power plant digital system PRA pilot study with the dynamic flow-graph methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yau, M.; Motamed, M.; Guarro, S.
2006-07-01
Current Probabilistic Risk Assessment (PRA) methodology is well established in analyzing hardware and some of the key human interactions. However, processes for analyzing the software functions of digital systems within a plant PRA framework, and accounting for the digital system contribution to the overall risk, are not generally available, nor are they well understood and established. A recent study reviewed a number of methodologies that have potential applicability to modeling and analyzing digital systems within a PRA framework. This study identified the Dynamic Flow-graph Methodology (DFM) and the Markov methodology as the most promising tools. As a result of this study, a task was defined under the framework of a collaborative agreement between the U.S. Nuclear Regulatory Commission (NRC) and the Ohio State Univ. (OSU). The objective of this task is to set up benchmark systems representative of digital systems used in nuclear power plants and to evaluate DFM and the Markov methodology with these benchmark systems. The first benchmark system is a typical Pressurized Water Reactor (PWR) Steam Generator (SG) Feedwater System (FWS) level control system based on earlier ASCA work with the U.S. NRC, upgraded with modern control laws. ASCA, Inc. is currently under contract to OSU to apply DFM to this benchmark system. The goal is to investigate the feasibility of using DFM to analyze and quantify digital system risk, and to integrate the DFM analytical results back into the plant event tree/fault tree PRA model. (authors)
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-29
... to provide more efficient, cost-effective, and timely benchmarking and other market information about.... This market analysis (commonly referred to as ``benchmarking'') would allow users of this service to... determine to be most useful. The benchmarking portion of the service would provide information on an...
Implementation and validation of a conceptual benchmarking framework for patient blood management.
Kastner, Peter; Breznik, Nada; Gombotz, Hans; Hofmann, Axel; Schreier, Günter
2015-01-01
Public health authorities and healthcare professionals are obliged to ensure high quality health service. Because of the high variability in the utilisation of blood and blood components, benchmarking is indicated in transfusion medicine. We describe the implementation and validation of a benchmarking framework for Patient Blood Management (PBM) based on the report from the second Austrian benchmark trial. Core modules for automatic report generation have been implemented with KNIME (Konstanz Information Miner) and validated by comparing the output with the results of the second Austrian benchmark trial. Delta analysis shows a deviation of <0.1% for 95% of the values (max. 1.4%). The framework provides a reliable tool for PBM benchmarking. The next step is technical integration with hospital information systems.
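A hedged sketch of the delta analysis reported above, with invented indicator values standing in for the framework output and the trial reference:

```python
import numpy as np

# Illustrative only: reference results from the trial report vs. the
# automatically generated framework output.
reference = np.array([0.42, 1.87, 3.10, 0.95, 2.50])
generated = np.array([0.42, 1.87, 3.11, 0.95, 2.49])

deviation_pct = np.abs(generated - reference) / reference * 100.0
print(f"share of values deviating <0.1%: {np.mean(deviation_pct < 0.1):.0%}")
print(f"maximum deviation: {deviation_pct.max():.2f}%")
```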
Musungu, Sisule F.
2006-01-01
The impact of intellectual property protection in the pharmaceutical sector on developing countries has been a central issue in the fierce debate during the past 10 years in a number of international fora, particularly the World Trade Organization (WTO) and WHO. The debate centres on whether the intellectual property system is: (1) providing sufficient incentives for research and development into medicines for diseases that disproportionately affect developing countries; and (2) restricting access to existing medicines for these countries. The Doha Declaration was adopted at WTO in 2001 and the Commission on Intellectual Property, Innovation and Public Health was established at WHO in 2004, but their respective contributions to tackling intellectual property-related challenges are disputed. Objective parameters are needed to measure whether a particular series of actions, events, decisions or processes contribute to progress in this area. This article proposes six possible benchmarks for intellectual property-related challenges with regard to the development of medicines and ensuring access to medicines in developing countries. PMID:16710545
Thermo-hydro-mechanical-chemical processes in fractured-porous media: Benchmarks and examples
NASA Astrophysics Data System (ADS)
Kolditz, O.; Shao, H.; Görke, U.; Kalbacher, T.; Bauer, S.; McDermott, C. I.; Wang, W.
2012-12-01
The book comprises an assembly of benchmarks and examples for porous media mechanics collected over the last twenty years. Analysis of thermo-hydro-mechanical-chemical (THMC) processes is essential to many applications in environmental engineering, such as geological waste deposition, geothermal energy utilisation, carbon capture and storage, water resources management, hydrology, and even climate change. In order to assess the feasibility as well as the safety of geotechnical applications, process-based modelling is the only tool able to put numbers on future scenarios, i.e. to quantify them. This places a huge responsibility on the reliability of computational tools. Benchmarking is an appropriate methodology to verify the quality of modelling tools based on best practices. Moreover, benchmarking and code comparison foster community efforts. The benchmark book is part of the OpenGeoSys initiative - an open source project to share knowledge and experience in environmental analysis and scientific computation.
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The development of benchmark examples for quasi-static delamination propagation prediction is presented and demonstrated for a commercial code. The examples are based on finite element models of the Mixed-Mode Bending (MMB) specimen. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, quasi-static benchmark examples were created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement relationship from a propagation analysis was compared with the benchmark results; good agreement could be achieved by selecting input parameters that had previously been determined during analyses of mode I Double Cantilever Beam and mode II End Notched Flexure specimens. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.
Benchmarking as a Global Strategy for Improving Instruction in Higher Education.
ERIC Educational Resources Information Center
Clark, Karen L.
This paper explores the concept of benchmarking in institutional research, a comparative analysis methodology designed to help colleges and universities increase their educational quality and delivery systems. The primary purpose of benchmarking is to compare an institution to its competitors in order to improve the product (in this case…
Benchmarking and beyond. Information trends in home care.
Twiss, Amanda; Rooney, Heather; Lang, Christine
2002-11-01
With today's benchmarking concepts and tools, agencies have the unprecedented opportunity to use information as a strategic advantage. Because agencies are demanding more and better information, benchmark functionality has grown increasingly sophisticated. Agencies now require a new type of analysis, focused on high-level executive summaries while reducing the current "data overload."
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The development of benchmark examples for quasi-static delamination propagation prediction is presented. The example is based on a finite element model of the Mixed-Mode Bending (MMB) specimen for 50% mode II. The benchmarking is demonstrated for Abaqus/Standard; however, the example is independent of the analysis software used and allows the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement as well as delamination length versus applied load/displacement relationships from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.
Sensitivity Analysis of OECD Benchmark Tests in BISON
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON fuels performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
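A minimal sketch of the sampling-based correlation measures named above, with a fake three-input model standing in for the 17-parameter BISON problem; variance-based Sobol' indices would additionally require a dedicated sampling scheme (e.g. Saltelli sampling, as provided by the SALib package).

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(42)
x = rng.uniform(0.0, 1.0, size=(300, 3))  # 300 samples of 3 inputs (toy analogue)
y = 2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + rng.normal(0, 0.05, 300)  # one response

for i in range(x.shape[1]):
    r, _ = pearsonr(x[:, i], y)
    rho, _ = spearmanr(x[:, i], y)
    print(f"input {i}: Pearson = {r:+.2f}, Spearman = {rho:+.2f}")
```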
Development of a Benchmark Example for Delamination Fatigue Growth Prediction
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2010-01-01
The development of a benchmark example for cyclic delamination growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of a Double Cantilever Beam (DCB) specimen, which is independent of the analysis software used and allows the assessment of the delamination growth prediction capabilities in commercial finite element codes. First, the benchmark result was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to grow under cyclic loading in a finite element model of a commercial code. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the analysis. In general, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.
Information Literacy and Office Tool Competencies: A Benchmark Study
ERIC Educational Resources Information Center
Heinrichs, John H.; Lim, Jeen-Su
2010-01-01
Present information science literature recognizes the importance of information technology to achieve information literacy. The authors report the results of a benchmarking student survey regarding perceived functional skills and competencies in word-processing and presentation tools. They used analysis of variance and regression analysis to…
Benchmarking in Universities: League Tables Revisited
ERIC Educational Resources Information Center
Turner, David
2005-01-01
This paper examines the practice of benchmarking universities using a "league table" approach. Taking the example of the "Sunday Times University League Table", the author reanalyses the descriptive data on UK universities. Using a linear programming technique, data envelope analysis (DEA), the author uses the re-analysis to…
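For readers unfamiliar with the technique, the sketch below shows the linear-programming core of DEA (the output-weighted CCR form) on invented data for three universities; the single input and two output columns are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

inputs = np.array([[50.0], [60.0], [55.0]])  # e.g. spend per student (invented)
outputs = np.array([[0.80, 0.70],            # e.g. completion rate, satisfaction
                    [0.85, 0.60],
                    [0.90, 0.75]])

n, m_in = inputs.shape
m_out = outputs.shape[1]

for k in range(n):
    # Maximise u . y_k subject to v . x_k = 1 and u . y_j - v . x_j <= 0 for all j.
    c = np.concatenate([-outputs[k], np.zeros(m_in)])
    A_eq = [np.concatenate([np.zeros(m_out), inputs[k]])]
    A_ub = np.hstack([outputs, -inputs])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m_out + m_in))
    print(f"university {k}: efficiency = {-res.fun:.3f}")
```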
Robertson, Sue; Canary, Cheryl Westlake; Orr, Marsha; Herberg, Paula; Rutledge, Dana N
2010-03-01
Measurement and analysis of progression and graduation rates is a well-established activity in schools of nursing. Such rates are indices of program effectiveness and student success. The Commission on Collegiate Nursing Education (2008), in its recently revised Standards for Accreditation of Baccalaureate and Graduate Degree Nursing Programs, specifically dictated that graduation rates (including discussion of entry points, timeframes) be calculated for each degree program. This context affects what is considered timely progression to graduation. If progression and graduation rates are critical outcomes, then schools must fully understand their measurement as well as interpretation of results. Because no national benchmarks for nursing student progression/graduation rates exist, schools try to set expectations that are realistic yet academically sound. RN-to-bachelor of science in nursing (BSN) students are a unique cohort of baccalaureate learners who need to be understood within their own learning context. The purposes of this study were to explore issues and processes of measuring progression and graduation rates in an RN-to-BSN population and to identify factors that facilitate/hinder their successful progression to work toward establishing benchmarks for success. Using data collected from 14 California schools of nursing with RN-to-BSN programs, RN-to-BSN students were identified as generally older, married, and going to school part-time while working and juggling family responsibilities. The study found much program variation in definition of terms and measures used to report progression and graduation rates. A literature review supported the use of terms such as attrition, retention, persistence, graduation, completion, and success rates, in an overlapping and sometimes synonymous fashion. Conceptual clarity and standardization of measurements are needed to allow comparisons and setting of realistic benchmarks. One of the most important factors identified in this study is the potentially prolonged RN-to-BSN timeline to graduation. This underlines the need to look beyond standardized educational norms for graduation rates and consider the realities of "persistence" by which these students are successful in completing their studies. It also raises the question of whether student success and program success/effectiveness are two separate measures or two separate events on one progression timeline. While clarifying our thinking about success in this population of students, the study raised many questions that warrant further research and debate.
BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...
The purpose of this document is to provide guidance for the Agency on the application of the benchmark dose approach in determining the point of departure (POD) for health effects data, whether a linear or nonlinear low dose extrapolation is used. The guidance includes discussion on computation of benchmark doses and benchmark concentrations (BMDs and BMCs) and their lower confidence limits, data requirements, dose-response analysis, and reporting requirements. This guidance is based on today's knowledge and understanding, and on experience gained in using this approach.
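As a rough illustration of the BMD concept, the sketch below fits a single logistic model to invented quantal data and solves for the dose at which extra risk over background reaches a 10% benchmark response; real BMDS/PROAST analyses fit suites of models and also report the lower confidence limit (BMDL).

```python
import numpy as np
from scipy.optimize import brentq, curve_fit

doses = np.array([0.0, 10.0, 50.0, 150.0, 400.0])         # invented study doses
frac_affected = np.array([0.02, 0.05, 0.15, 0.40, 0.80])  # invented responses

def logistic(d, a, b):
    return 1.0 / (1.0 + np.exp(-(a + b * d)))

(a, b), _ = curve_fit(logistic, doses, frac_affected, p0=[-3.0, 0.01])

p0 = logistic(0.0, a, b)  # fitted background response
def extra_risk_minus_bmr(d, bmr=0.10):
    # Extra risk = (P(d) - P(0)) / (1 - P(0)); zero at the benchmark dose.
    return (logistic(d, a, b) - p0) / (1.0 - p0) - bmr

bmd = brentq(extra_risk_minus_bmr, 1e-6, doses.max())
print(f"BMD at 10% extra risk: {bmd:.1f} (same units as dose)")
```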
Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A
2011-01-01
The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provides unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of the IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.
Benchmark dose risk assessment software (BMDS) was designed by EPA to generate dose-response curves and facilitate the analysis, interpretation and synthesis of toxicological data. Partial results of QA/QC testing of the EPA benchmark dose software (BMDS) are presented. BMDS pr...
SU-E-T-148: Benchmarks and Pre-Treatment Reviews: A Study of Quality Assurance Effectiveness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowenstein, J; Nguyen, H; Roll, J
Purpose: To determine the impact benchmarks and pre-treatment reviews have on improving the quality of submitted clinical trial data. Methods: Benchmarks are used to evaluate a site's ability to develop a treatment that meets a specific protocol's treatment guidelines prior to placing their first patient on the protocol. A pre-treatment review is an actual patient placed on the protocol in which the dosimetry and contour volumes are evaluated to be per protocol guidelines prior to allowing the beginning of the treatment. A key component of these QA mechanisms is that sites are provided timely feedback to educate them on how to plan per the protocol and prevent protocol deviations on patients accrued to a protocol. For both benchmarks and pre-treatment reviews a dose volume analysis (DVA) was performed using MIM software. For pre-treatment reviews a volume contour evaluation was also performed. Results: IROC Houston performed a QA effectiveness analysis of a protocol which required both benchmarks and pre-treatment reviews. In 70 percent of the patient cases submitted, the benchmark played an effective role in assuring that the pre-treatment review of the cases met protocol requirements. The 35 percent of sites failing the benchmark subsequently modified their planning technique to pass the benchmark before being allowed to submit a patient for pre-treatment review. However, in 30 percent of the submitted cases the pre-treatment review failed, where the majority (71 percent) failed the DVA. Twenty percent of sites submitting patients failed to correct their dose volume discrepancies indicated by the benchmark case. Conclusion: Benchmark cases and pre-treatment reviews can be an effective QA tool to educate sites on protocol guidelines and to minimize deviations. Without the benchmark cases it is possible that 65 percent of the cases undergoing a pre-treatment review would have failed to meet the protocol's requirements. Support: U24-CA-180803.
Evaluation of Graph Pattern Matching Workloads in Graph Analysis Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Seokyong; Lee, Sangkeun; Lim, Seung-Hwan
2016-01-01
Graph analysis has emerged as a powerful method for data scientists to represent, integrate, query, and explore heterogeneous data sources. As a result, graph data management and mining became a popular area of research, and led to the development of a plethora of systems in recent years. Unfortunately, the number of emerging graph analysis systems and the wide range of applications, coupled with a lack of apples-to-apples comparisons, make it difficult to understand the trade-offs between different systems and the graph operations for which they are designed. A fair comparison of these systems is a challenging task for the following reasons: multiple data models, non-standardized serialization formats, various query interfaces to users, and the diverse environments they operate in. To address these key challenges, in this paper we present a new benchmark suite by extending the Lehigh University Benchmark (LUBM) to cover the most common capabilities of various graph analysis systems. We provide the design process of the benchmark, which generalizes the workflow for data scientists to conduct the desired graph analysis on different graph analysis systems. Equipped with this extended benchmark suite, we present performance comparisons for nine subgraph pattern retrieval operations over six graph analysis systems, namely NetworkX, Neo4j, Jena, Titan, GraphX, and uRiKA. Through the proposed benchmark suite, this study reveals both quantitative and qualitative findings in (1) implications in loading data into each system; (2) challenges in describing graph patterns for each query interface; and (3) different sensitivity of each system to query selectivity. We envision that this study will pave the road for: (i) data scientists to select the suitable graph analysis systems, and (ii) data management system designers to advance graph analysis systems.
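As a flavour of these workloads, the sketch below times one subgraph-pattern retrieval on NetworkX, one of the six systems compared; the random graph and triangle pattern are toy stand-ins for the LUBM-derived data and queries.

```python
import time

import networkx as nx
from networkx.algorithms import isomorphism

g = nx.gnm_random_graph(2000, 8000, seed=7)  # toy data graph
pattern = nx.cycle_graph(3)                  # a triangle pattern

start = time.perf_counter()
matcher = isomorphism.GraphMatcher(g, pattern)
n_matches = sum(1 for _ in matcher.subgraph_isomorphisms_iter())
elapsed = time.perf_counter() - start
print(f"{n_matches} pattern matches found in {elapsed:.2f} s")
```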
Developing a benchmark for emotional analysis of music
Yang, Yi-Hsuan; Soleymani, Mohammad
2017-01-01
The music emotion recognition (MER) field has rapidly expanded in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of these new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the ‘Emotion in Music’ task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network based approaches combined with large feature sets work best for dynamic MER. PMID:28282400
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Peiyuan; Brown, Timothy; Fullmer, William D.
Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse, and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approx. 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations, and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
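Weak-scaling efficiency of the kind reported above is computed directly from wall-clock timings at a fixed per-core problem size; the timings below are invented for illustration.

```python
# Weak scaling: problem size per core held fixed, so ideal time is constant
# and efficiency at N cores is T(1 core) / T(N cores). Timings are invented.
cores = [1, 8, 64, 512, 1024]
wall_time_s = [100.0, 104.0, 112.0, 131.0, 168.0]

for n, t in zip(cores, wall_time_s):
    print(f"{n:5d} cores: weak-scaling efficiency = {wall_time_s[0] / t:.2f}")
```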
Developing a benchmark for emotional analysis of music.
Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad
2017-01-01
The music emotion recognition (MER) field has rapidly expanded in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of these new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network based approaches combined with large feature sets work best for dynamic MER.
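Evaluation in dynamic MER benchmarks such as DEAM typically compares predicted and annotated valence/arousal traces per song; a minimal sketch with synthetic 2 Hz traces:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
annotated = np.cumsum(rng.normal(0, 0.05, 90))  # 45 s of annotations at 2 Hz
predicted = annotated + rng.normal(0, 0.1, 90)  # a hypothetical prediction

rmse = float(np.sqrt(np.mean((predicted - annotated) ** 2)))
r, _ = pearsonr(predicted, annotated)
print(f"RMSE = {rmse:.3f}, Pearson r = {r:.2f}")
```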
Commercial Building Energy Saver, Web App
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Piette, Mary; Lee, Sang Hoon
The CBES App is a web-based toolkit for use by small businesses and building owners and operators of small and medium size commercial buildings to perform energy benchmarking and retrofit analysis. The CBES App analyzes the energy performance of the user's building, pre- and post-retrofit, in conjunction with the user's input data, to identify recommended retrofit measures, energy savings, and economic analysis for the selected measures. The CBES App provides energy benchmarking, including obtaining an EnergyStar score using the EnergyStar API and benchmarking against California peer buildings using the EnergyIQ API. The retrofit analysis includes a preliminary analysis, which looks up retrofit measures from the pre-simulated database DEEP, and a detailed analysis, which creates and runs EnergyPlus models to calculate the energy savings of retrofit measures. The CBES App builds upon the LBNL CBES API.
Policy Analysis of the English Graduation Benchmark in Taiwan
ERIC Educational Resources Information Center
Shih, Chih-Min
2012-01-01
To nudge students to study English and to improve their English proficiency, many universities in Taiwan have imposed an English graduation benchmark on their students. This article reviews this policy, using the theoretic framework for education policy analysis proposed by Haddad and Demsky (1995). The author presents relevant research findings,…
NASA Astrophysics Data System (ADS)
Leonardi, Marcelo
The primary purpose of this study was to examine the impact of a scheduling change from a trimester 4x4 block schedule to a modified hybrid schedule on student achievement in ninth grade biology courses. This study examined the impact of the scheduling change on student achievement through teacher-created benchmark assessments in Genetics, DNA, and Evolution, and on the California Standardized Test in Biology. The secondary purpose of this study was to examine ninth grade biology teacher perceptions of ninth grade biology student achievement. Using a mixed methods research approach, data were collected both quantitatively and qualitatively as aligned to the research questions. Quantitative methods included gathering data from departmental benchmark exams and the California Standardized Test in Biology and conducting multivariate analysis of covariance and analysis of covariance to determine significant differences. Qualitative methods included journal entry questions and focus group interviews. The results revealed a statistically significant increase in scores on both the DNA and Evolution benchmark exams. DNA and Evolution benchmark exams showed significant improvements from the change in scheduling format. The scheduling change was responsible for 1.5% of the increase in DNA benchmark scores and 2% of the increase in Evolution benchmark scores. The results revealed a statistically significant decrease in scores on the Genetics benchmark exam as a result of the scheduling change. The scheduling change was responsible for 1% of the decrease in Genetics benchmark scores. The results also revealed a statistically significant increase in scores on the CST Biology exam. The scheduling change was responsible for 0.7% of the increase in CST Biology scores. Results of the focus group discussions indicated that all teachers preferred the modified hybrid schedule over the trimester schedule and that it improved student achievement.
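A hedged sketch of the ANCOVA component of such a design, with hypothetical column names and simulated data (benchmark score as outcome, schedule type as factor, prior achievement as covariate):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(5)
n = 120
df = pd.DataFrame({
    "schedule": rng.choice(["trimester", "hybrid"], size=n),  # hypothetical factor
    "prior_gpa": rng.normal(3.0, 0.4, size=n),                # hypothetical covariate
})
df["benchmark"] = (70 + 5 * (df["schedule"] == "hybrid")
                   + 8 * (df["prior_gpa"] - 3.0) + rng.normal(0, 6, n))

model = smf.ols("benchmark ~ prior_gpa + C(schedule)", data=df).fit()
print(anova_lm(model, typ=2))  # tests the schedule effect adjusted for prior GPA
```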
Whole-House Design and Commissioning in the Project Home Again Hot-Humid New Construction Community
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerrigan, Philip
2012-09-01
Building Science Corporation has been working with Project Home Again since 2008 and has consulted on the design of around 100 affordable, energy efficient new construction homes for victims of hurricanes Katrina and Rita. This report details the effort on the final two phases of the project: Phases V and VI, which resulted in a total of 25 homes constructed in 2011. The goal of this project was to develop and implement an energy efficiency package that will achieve at least 20% whole house source energy savings improvement over the B10 Benchmark.
Whole-House Design and Commissioning in the Project Home Again Hot-Humid New Construction Community
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerrigan, P.
2012-09-01
BSC has been working with Project Home Again since 2008 and has consulted on the design of around 100 affordable, energy efficient new construction homes for victims of hurricanes Katrina and Rita. This report details the effort on the final two phases of the project: Phases V and VI which resulted in a total of 25 homes constructed in 2011. The goal of this project was to develop and implement an energy efficiency package that will achieve at least 20% whole house source energy savings improvement over the B10 Benchmark.
Ashenafi, Michael S.; McDonald, Daniel G.; Vanek, Kenneth N.
2015-01-01
Beam scanning data collected on the tomotherapy linear accelerator using the TomoScanner water scanning system is primarily used to verify the golden beam profiles included in all Helical TomoTherapy treatment planning systems (TOMO TPSs). The user is not allowed to modify the beam profiles/parameters for beam modeling within the TOMO TPSs. The authors report the first feasibility study using the Blue Phantom Helix (BPH) as an alternative to the TomoScanner (TS) system. This work establishes a benchmark dataset using BPH for target commissioning and quality assurance (QA), and quantifies systematic uncertainties between TS and BPH. Reproducibility of scanning with BPH was tested by three experienced physicists taking five sets of measurements over a six-month period. BPH provides several enhancements over TS, including a 3D scanning arm, which is able to acquire the necessary beam data with one tank setup, a universal chamber mount, and the OmniPro software, which allows online data collection and analysis. Discrepancies between BPH and TS were estimated by acquiring datasets with each tank. In addition, data measured with BPH and TS were compared to the golden TOMO TPS beam data. The total systematic uncertainty, defined as the combination of scanning system and beam modeling uncertainties, was determined through numerical analysis and tabulated. OmniPro was used for all analysis to eliminate uncertainty due to different data processing algorithms. The setup reproducibility of BPH remained within 0.5 mm/0.5%. Comparing BPH, TS, and the golden TPS data for PDDs beyond the depth of maximum dose, the total systematic uncertainties were within 1.4 mm/2.1%. Between BPH and the TPS golden data, maximum differences in the field width and penumbra of in-plane profiles were within 0.8 and 1.1 mm, respectively. Furthermore, in cross-plane profiles, the field width differences increased at depths greater than 10 cm, up to 2.5 mm, and maximum penumbra uncertainties were 5.6 mm and 4.6 mm from the TS scanning system and TPS modeling, respectively. Use of BPH reduced measurement time by 1–2 hrs per session. The BPH has been assessed as an efficient, reproducible, and accurate scanning system capable of providing reliable benchmark beam data. With this data, a physicist can utilize the BPH in a clinical setting with an understanding of the scan discrepancy that may be encountered while validating the TPS or during routine machine QA. Without the flexibility of modifying the TPS, and without a golden beam dataset from the vendor or a TPS model generated from data collected with the BPH, this represents the best solution for current clinical use of the BPH. PACS number: 87.56.Fc
Interactive visual optimization and analysis for RFID benchmarking.
Wu, Yingcai; Chung, Ka-Kei; Qu, Huamin; Yuan, Xiaoru; Cheung, S C
2009-01-01
Radio frequency identification (RFID) is a powerful automatic remote identification technique that has wide applications. To facilitate RFID deployment, an RFID benchmarking instrument called aGate has been invented to identify the strengths and weaknesses of different RFID technologies in various environments. However, the data acquired by aGate are usually complex time varying multidimensional 3D volumetric data, which are extremely challenging for engineers to analyze. In this paper, we introduce a set of visualization techniques, namely, parallel coordinate plots, orientation plots, a visual history mechanism, and a 3D spatial viewer, to help RFID engineers analyze benchmark data visually and intuitively. With the techniques, we further introduce two workflow procedures (a visual optimization procedure for finding the optimum reader antenna configuration and a visual analysis procedure for comparing the performance and identifying the flaws of RFID devices) for the RFID benchmarking, with focus on the performance analysis of the aGate system. The usefulness and usability of the system are demonstrated in the user evaluation.
Petzold, Thomas; Hertzschuch, Diana; Elchlep, Frank; Eberlein-Gonska, Maria
2014-01-01
Process management (PM) is a valuable method for the systematic analysis and structural optimisation of the quality and safety of clinical treatment. PM requires high motivation and willingness to implement change from both employees and management. Definition of quality indicators is required to systematically measure the quality of the specified processes. One way to represent comparable quality results is to use the quality indicators of the external quality assurance in accordance with Sect. 137 SGB V, a method which the Federal Joint Committee (GBA) and the institutions commissioned by the GBA have employed and consistently enhanced for more than ten years. Information on the quality of inpatient treatment is available for 30 defined subjects throughout Germany. Combining specified processes with quality indicators is beneficial for informing employees. A process-based indicator dashboard provides essential information about the treatment process and can be used for process analysis. Continuous review of the indicator results allows deviations to be detected and errors to be remedied quickly. If due consideration is given to these indicators, they can be used for benchmarking to identify potential process improvements. Copyright © 2014. Published by Elsevier GmbH.
SP2Bench: A SPARQL Performance Benchmark
NASA Astrophysics Data System (ADS)
Schmidt, Michael; Hornung, Thomas; Meier, Michael; Pinkel, Christoph; Lausen, Georg
A meaningful analysis and comparison of both existing storage schemes for RDF data and evaluation approaches for SPARQL queries necessitates a comprehensive and universal benchmark platform. We present SP2Bench, a publicly available, language-specific performance benchmark for the SPARQL query language. SP2Bench is set in the DBLP scenario and comprises a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror vital key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. In this chapter, we discuss requirements and desiderata for SPARQL benchmarks and present the SP2Bench framework, including its data generator, benchmark queries and performance metrics.
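To make the flavor of such a benchmark concrete, the sketch below builds a tiny DBLP-like RDF graph and runs one SPARQL query over it with rdflib. The vocabulary, namespace, and query are illustrative assumptions only; they are not the actual SP2Bench schema or query mix.

```python
# Illustrative only: a toy DBLP-like graph and one SPARQL query via rdflib.
# The bench: namespace and the query are hypothetical, not SP2Bench's own.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

DC = Namespace("http://purl.org/dc/elements/1.1/")
BENCH = Namespace("http://example.org/bench/")  # hypothetical namespace

g = Graph()
article = URIRef("http://example.org/article/1")
g.add((article, RDF.type, BENCH.Article))
g.add((article, DC.creator, Literal("Paul Erdoes")))
g.add((article, DC.title, Literal("On random graphs")))

# A simple SP2Bench-style request: titles of all articles by one author.
query = """
PREFIX dc: <http://purl.org/dc/elements/1.1/>
PREFIX bench: <http://example.org/bench/>
SELECT ?title WHERE {
  ?a a bench:Article ;
     dc:creator "Paul Erdoes" ;
     dc:title ?title .
}
"""
for row in g.query(query):
    print(row.title)
```

The real benchmark scales such documents to arbitrary size and stresses many operator constellations; a single triple pattern is enough here to show the query style being exercised.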
Big data analytics in the building industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berger, Michael A.; Mathew, Paul A.; Walter, Travis
Catalyzed by recent market, technology, and policy trends, energy data collection in the building industry is becoming more widespread. This wealth of information allows more data-driven decision-making by designers, commissioning agents, facilities staff, and energy service providers during the course of building design, operation and retrofit. The U.S. Department of Energy’s Building Performance Database (BPD) has taken advantage of this wealth of building asset- and energy-related data by collecting, cleansing, and standardizing data from across the U.S. on over 870,000 buildings, and is designed to support building benchmarking, energy efficiency project design, and buildings-related policy development with real-world data. Here, this article explores the promises and perils energy professionals are faced with when leveraging such tools, presenting example analyses for commercial and residential buildings, highlighting potential issues, and discussing solutions and best practices that will enable designers, operators and commissioning agents to make the most of ‘big data’ resources such as the BPD.
Big data analytics in the building industry
Berger, Michael A.; Mathew, Paul A.; Walter, Travis
2016-07-01
Catalyzed by recent market, technology, and policy trends, energy data collection in the building industry is becoming more widespread. This wealth of information allows more data-driven decision-making by designers, commissioning agents, facilities staff, and energy service providers during the course of building design, operation and retrofit. The U.S. Department of Energy’s Building Performance Database (BPD) has taken advantage of this wealth of building asset- and energy-related data by collecting, cleansing, and standardizing data from across the U.S. on over 870,000 buildings, and is designed to support building benchmarking, energy efficiency project design, and buildings-related policy development with real-world data. Here, this article explores the promises and perils energy professionals are faced with when leveraging such tools, presenting example analyses for commercial and residential buildings, highlighting potential issues, and discussing solutions and best practices that will enable designers, operators and commissioning agents to make the most of ‘big data’ resources such as the BPD.
Notes on numerical reliability of several statistical analysis programs
Landwehr, J.M.; Tasker, Gary D.
1999-01-01
This report presents a benchmark analysis of several statistical analysis programs currently in use in the USGS. The benchmark consists of a comparison between the values provided by a statistical analysis program for variables in the reference data set ANASTY and their known or calculated theoretical values. The ANASTY data set is an amendment of the Wilkinson NASTY data set that has been used in the statistical literature to assess the reliability (computational correctness) of calculated analytical results.
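The kind of reliability check described above can be illustrated with a NASTY-style variable: adding a large constant offset to small values leaves the theoretical standard deviation unchanged but defeats numerically naive one-pass variance formulas. The snippet below is a generic sketch of the idea, not the actual ANASTY reference set.

```python
# Illustrative NASTY-style reliability check: a large constant offset added
# to small integers defeats the naive textbook variance formula through
# catastrophic cancellation. Generic sketch, not the ANASTY data set itself.
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=np.float64)
big = x + 1.0e8  # NASTY-style "BIG" variable

# Theoretical value: shifting by a constant leaves the variance unchanged.
expected_sd = x.std(ddof=1)

# Numerically unstable formula: (sum(x^2) - sum(x)^2 / n) / (n - 1).
n = big.size
naive_var = (np.sum(big**2) - np.sum(big)**2 / n) / (n - 1)

print("expected sd:", expected_sd)
print("numpy sd   :", big.std(ddof=1))                # stable two-pass algorithm
print("naive sd   :", np.sqrt(max(naive_var, 0.0)))   # may be badly wrong
```

A program that returns the naive value for this variable would fail the benchmark comparison against the known theoretical result.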
Benchmarking to improve the quality of cystic fibrosis care.
Schechter, Michael S
2012-11-01
Benchmarking involves the ascertainment of healthcare programs with the most favorable outcomes as a means to identify and spread effective strategies for delivery of care. The recent interest in the development of patient registries for patients with cystic fibrosis (CF) has been fueled in part by an interest in using them to facilitate benchmarking. This review summarizes reports of how benchmarking has been operationalized in attempts to improve CF care. Although certain goals of benchmarking can be accomplished with an exclusive focus on registry data analysis, benchmarking programs in Germany and the United States have supplemented these data analyses with exploratory interactions and discussions to better understand successful approaches to care and encourage their spread throughout the care network. Benchmarking allows the discovery and facilitates the spread of effective approaches to care. It provides a pragmatic alternative to traditional research methods such as randomized controlled trials, providing insights into methods that optimize delivery of care and allowing judgments about the relative effectiveness of different therapeutic approaches.
ERIC Educational Resources Information Center
Quinlan, Kathleen M.
2016-01-01
What aspects of student character are expected to be developed through disciplinary curricula? This paper examines the UK written curriculum through an analysis of the Quality Assurance Agency's subject benchmark statements for the most popular subjects studied in the UK. It explores the language, principles and intended outcomes that suggest…
Five radionuclide vadose zone models with different degrees of complexity (CHAIN, MULTIMED_DP, FECTUZ, HYDRUS, and CHAIN 2D) were selected for use in soil screening level (SSL) calculations. A benchmarking analysis between the models was conducted for a radionuclide (99Tc) rele...
Benchmarking expert system tools
NASA Technical Reports Server (NTRS)
Riley, Gary
1988-01-01
As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were the Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI Section. Also included in the test was a Zetalisp version of the benchmark, along with four versions of the benchmark written in Knowledge Engineering Environment, an object-oriented, frame-based expert system tool. The benchmarks used for testing are studied.
Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems
NASA Astrophysics Data System (ADS)
Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald
A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no possibility for a fair measurement of the quality of such optimizations. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today’s benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction, access patterns shift constantly. We present a benchmark that simulates a web information system with interaction of large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark considers the temporal dependency of user interaction. The main focus is to measure the adaptability of a database management system under shifting workloads. We give details on our design approach, which uses sophisticated pattern analysis and data mining techniques.
A new numerical benchmark of a freshwater lens
NASA Astrophysics Data System (ADS)
Stoeckl, L.; Walther, M.; Graf, T.
2016-04-01
A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation over time of a freshwater lens as found beneath real-world islands. An error analysis gave appropriate spatial and temporal discretizations of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and includes realistic features of coastal aquifers or freshwater lenses had been lacking; the new benchmark fills this gap and is demonstrated to be suitable for testing variable-density groundwater models applied to saltwater intrusion investigations.
How to benchmark methods for structure-based virtual screening of large compound libraries.
Christofferson, Andrew J; Huang, Niu
2012-01-01
Structure-based virtual screening is a useful computational technique for ligand discovery. To systematically evaluate different docking approaches, it is important to have a consistent benchmarking protocol that is both relevant and unbiased. Here, we describe the design of a benchmarking data set for docking screen assessment, a standard docking screening process, and the analysis and presentation of the enrichment of annotated ligands against a background decoy database.
Zuckerman, Stephen; Skopec, Laura; Guterman, Stuart
2017-12-01
Medicare Advantage (MA), the program that allows people to receive their Medicare benefits through private health plans, uses a benchmark-and-bidding system to induce plans to provide benefits at lower costs. However, prior research suggests medical costs, profits, and other plan costs are not as low under this system as they might otherwise be. We examined how well the current system encourages MA plans to bid their lowest cost by analyzing the relationship between costs, bonuses (rebates), and the benchmarks Medicare uses in determining plan payments, using regression analysis of 2015 data for HMO and local PPO plans. Costs and rebates are higher for MA plans in areas with higher benchmarks, and plan costs vary less than benchmarks do. A one-dollar increase in benchmarks is associated with 32-cent-higher plan costs and a 52-cent-higher rebate, even when controlling for market and plan factors that can affect costs. This suggests the current benchmark-and-bidding system allows plans to bid higher than local input prices and other market conditions would seem to warrant. To incentivize MA plans to maximize efficiency and minimize costs, Medicare could change the way benchmarks are set or used.
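A minimal sketch of the kind of regression described above, run on synthetic data: the data-generating process, sample size, and control variables are assumptions for illustration, with the cost response set near the reported 32 cents per benchmark dollar.

```python
# Sketch of a benchmark-to-cost regression on synthetic data. Variable names
# and the data-generating process are invented for illustration; the study
# used 2015 HMO/local-PPO plan data with market and plan controls.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
benchmark = rng.uniform(700, 1100, n)      # $/member-month benchmarks
controls = rng.normal(size=(n, 2))         # stand-ins for market/plan factors
# Assume costs rise ~32 cents per benchmark dollar, as the study reports.
cost = (400 + 0.32 * benchmark
        + controls @ np.array([15.0, -10.0])
        + rng.normal(scale=25, size=n))

X = sm.add_constant(np.column_stack([benchmark, controls]))
fit = sm.OLS(cost, X).fit()
print(fit.params[1])  # estimated cost response to a one-dollar benchmark increase
```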
Translational benchmark risk analysis
Piegorsch, Walter W.
2010-01-01
Translational development – in the sense of translating a mature methodology from one area of application to another, evolving area – is discussed for the use of benchmark doses in quantitative risk assessment. Illustrations are presented with traditional applications of the benchmark paradigm in biology and toxicology, and also with risk endpoints that differ from traditional toxicological archetypes. It is seen that the benchmark approach can apply to a diverse spectrum of risk management settings. This suggests a promising future for this important risk-analytic tool. Extensions of the method to a wider variety of applications represent a significant opportunity for enhancing environmental, biomedical, industrial, and socio-economic risk assessments. PMID:20953283
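As a concrete illustration of the benchmark paradigm, the sketch below fits a two-parameter logistic dose-response model by maximum likelihood and inverts it at a 10% benchmark response (BMR) to obtain a benchmark dose (BMD). The data and the model choice are illustrative assumptions, not a particular endpoint from the article.

```python
# Illustrative benchmark-dose sketch: fit a logistic dose-response by maximum
# likelihood, then solve for the dose giving 10% extra risk (BMD10).
# The dose-response data below are invented for the example.
import numpy as np
from scipy.optimize import brentq, minimize
from scipy.stats import binom

dose = np.array([0.0, 10.0, 50.0, 150.0, 400.0])
n = np.array([100, 100, 100, 100, 100])
events = np.array([2, 6, 14, 30, 62])

def p(d, a, b):
    """Two-parameter logistic dose-response."""
    return 1.0 / (1.0 + np.exp(-(a + b * d)))

def nll(theta):
    """Binomial negative log-likelihood."""
    a, b = theta
    return -np.sum(binom.logpmf(events, n, p(dose, a, b)))

a_hat, b_hat = minimize(nll, x0=[-3.0, 0.01], method="Nelder-Mead").x

p0 = p(0.0, a_hat, b_hat)
bmr = 0.10  # benchmark response: 10% extra risk over background
extra_risk = lambda d: (p(d, a_hat, b_hat) - p0) / (1.0 - p0) - bmr
bmd = brentq(extra_risk, 1e-6, dose.max())
print("BMD10 estimate:", round(bmd, 1))
```

The translational point of the abstract is that the same inversion step applies whenever a fitted risk function and a benchmark response level are available, whatever the endpoint.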
Design and realization of the optical and electron beam alignment system for the HUST-FEL oscillator
NASA Astrophysics Data System (ADS)
Fu, Q.; Tan, P.; Liu, K. F.; Qin, B.; Liu, X.
2018-06-01
A Free Electron Laser (FEL) oscillator with radiation wavelength of 30-100 μm is under commissioning at Huazhong University of Science and Technology (HUST). This work presents the schematic design and realization procedures for the optical and beam alignment system of the HUST FEL facility. The optical cavity misalignment effects are analyzed with the code OPC + Genesis 1.3, and a tolerance for misalignment is proposed based on the simulation results. Using the undulator's mechanical benchmarks, a laser indicating system has been built as the reference datum. The alignment of both the optical axis and the beam trajectory was achieved with this alignment system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mkhabela, P.; Han, J.; Tyobeka, B.
2006-07-01
The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of their official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events through a set of multi-dimensional computational test problems. The benchmark includes three steady state exercises and six transient exercises. This paper describes the first two steady state exercises, their objectives, and the international participation in terms of organization, country, and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help to focus development efforts on the most critical areas. The first two exercises also allow for the removal of user-related modeling errors and prepare the core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark. (authors)
Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja
2015-01-01
The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.
Hospital benchmarking: are U.S. eye hospitals ready?
de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S
2012-01-01
Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added "public" value of benchmarking in health care is questionable.
77 FR 47572 - Retrospective Analysis of Existing Rules
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-09
... INTERNATIONAL TRADE COMMISSION 19 CFR Chapter II Retrospective Analysis of Existing Rules AGENCY... Order 13579 of July 11, 2011, the Commission recently adopted its Plan for Retrospective Analysis of... submitted in connection with the Commission's Preliminary Plan for Retrospective Analysis of Existing Rules...
NASA Astrophysics Data System (ADS)
Wang, Kuo-Lung; Lin, Jun-Tin; Lee, Yi-Hsuan; Lin, Meei-Ling; Chen, Chao-Wei; Liao, Ray-Tang; Chi, Chung-Chi; Lin, Hsi-Hung
2016-04-01
Landslides are not hazards until human development encroaches on highly susceptible areas. This study attempts to map deep-seated landslides before their initiation. A study area in central Taiwan was selected; its geological setting is distinctive, consisting of slate. The major bedding direction in this area is northeast, with dips ranging from 30 to 75 degrees to the southeast. Several deep-seated landslides triggered by rainfall events were discovered on slopes dipping in the same direction as the bedding. Benchmark data from 2002-2009 are used in this study; the benchmarks were measured along Highway No. 14B, which was constructed along the mountain crest. Taiwan is located between oceanic plates and a continental plate, and the elevation of its mountains is rising according to most GPS stations and benchmarks on the island. The same trend is observed from benchmarks in this area, but some benchmarks located within landslide areas show elevation changes below average, or even negative. Aerial photos from 1979 to 2007 were used for orthophoto generation; the changes in land use over the 30 years are obvious, and enlargement of the river channel is also observed in this area. Both the benchmarks and the aerial photos reveal landslide potential in this area, but the extent of the landslides is not easy to define from these data alone. SAR data are therefore adopted in this study: DInSAR and SBAS analyses using ALOS/PALSAR data from 2006 to 2010. The DInSAR analysis shows that landslides can be mapped, but the errors are not easy to reduce; they arise from several conditions such as vegetation, clouds, and atmospheric vapor. To overcome this problem, a time series analysis, SBAS, is adopted. The SBAS results show that large deep-seated landslides in this area are readily mapped and that the accuracy of the vertical displacement is reasonable.
Benchmarking routine psychological services: a discussion of challenges and methods.
Delgadillo, Jaime; McMillan, Dean; Leach, Chris; Lucock, Mike; Gilbody, Simon; Wood, Nick
2014-01-01
Policy developments in recent years have led to important changes in the level of access to evidence-based psychological treatments. Several methods have been used to investigate the effectiveness of these treatments in routine care, with different approaches to outcome definition and data analysis. This report presents a review of challenges and methods for the evaluation of evidence-based treatments delivered in routine mental healthcare, followed by a case example of a benchmarking method applied in primary care. High, average, and poor performance benchmarks were calculated through a meta-analysis of published data from services working under the Improving Access to Psychological Therapies (IAPT) Programme in England. Pre-post treatment effect sizes (ES) and confidence intervals were estimated to illustrate a benchmarking method enabling services to evaluate routine clinical outcomes. High, average, and poor performance ES for routine IAPT services were estimated to be 0.91, 0.73, and 0.46 for depression (using the PHQ-9) and 1.02, 0.78, and 0.52 for anxiety (using the GAD-7). Data from one specific IAPT service exemplify how to evaluate and contextualize routine clinical performance against these benchmarks. The main contribution of this report is to summarize key recommendations for the selection of an adequate set of psychometric measures, the operational definition of outcomes, and the statistical evaluation of clinical performance. A benchmarking method is also presented, which may enable a robust evaluation of clinical performance against national benchmarks. Limitations include significant heterogeneity among data sources and wide variations in ES and data completeness.
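The comparison step can be sketched as follows: compute a service's uncontrolled pre-post effect size and place it against the published benchmarks. Patient scores here are simulated, and using the baseline SD as the denominator is one common convention, not necessarily the report's exact formula; the benchmark values are those quoted above for depression (PHQ-9).

```python
# Illustrative pre-post effect-size benchmarking. Scores are simulated;
# the ES denominator (baseline SD) is one common convention.
import numpy as np

rng = np.random.default_rng(1)
pre = rng.normal(16, 5, size=400)           # simulated baseline PHQ-9 scores
post = pre - rng.normal(5.5, 4, size=400)   # simulated post-treatment scores

es = (pre.mean() - post.mean()) / pre.std(ddof=1)  # uncontrolled pre-post ES

benchmarks = {"high": 0.91, "average": 0.73, "poor": 0.46}  # from the abstract
if es >= benchmarks["high"]:
    band = "at or above the high-performance benchmark"
elif es >= benchmarks["average"]:
    band = "at or above the average benchmark"
elif es >= benchmarks["poor"]:
    band = "between the poor and average benchmarks"
else:
    band = "below the poor-performance benchmark"
print(f"service ES = {es:.2f}: {band}")
```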
Electric load shape benchmarking for small- and medium-sized commercial buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Xuan; Hong, Tianzhen; Chen, Yixing
Small- and medium-sized commercial building owners and utility managers often look for opportunities for energy cost savings through energy efficiency and energy waste minimization. However, they currently lack easy access to low-cost tools that help interpret the massive amount of data needed to improve understanding of their energy use behaviors. Benchmarking is one of the techniques used in energy audits to identify which buildings are priorities for an energy analysis. Traditional energy performance indicators, such as the energy use intensity (annual energy per unit of floor area), consider only the total annual energy consumption, lacking consideration of the fluctuation of energy use behavior over time, which reveals time-of-use information and represents distinct energy use behaviors during different time spans. To fill the gap, this study developed a general statistical method using 24-hour electric load shape benchmarking to compare a building or business/tenant space against peers. Specifically, the study developed new forms of benchmarking metrics and data analysis methods to infer the energy performance of a building based on its load shape. We first performed a data experiment with collected smart meter data using over 2,000 small- and medium-sized businesses in California. We then conducted a cluster analysis of the source data, and determined and interpreted the load shape features and parameters with peer group analysis. Finally, we implemented the load shape benchmarking feature in an open-access web-based toolkit (the Commercial Building Energy Saver) to provide straightforward and practical recommendations to users. The analysis techniques were generic and flexible for future datasets of other building types and in other utility territories.
Electric load shape benchmarking for small- and medium-sized commercial buildings
Luo, Xuan; Hong, Tianzhen; Chen, Yixing; ...
2017-07-28
Small- and medium-sized commercial building owners and utility managers often look for opportunities for energy cost savings through energy efficiency and energy waste minimization. However, they currently lack easy access to low-cost tools that help interpret the massive amount of data needed to improve understanding of their energy use behaviors. Benchmarking is one of the techniques used in energy audits to identify which buildings are priorities for an energy analysis. Traditional energy performance indicators, such as the energy use intensity (annual energy per unit of floor area), consider only the total annual energy consumption, lacking consideration of the fluctuation of energy use behavior over time, which reveals time-of-use information and represents distinct energy use behaviors during different time spans. To fill the gap, this study developed a general statistical method using 24-hour electric load shape benchmarking to compare a building or business/tenant space against peers. Specifically, the study developed new forms of benchmarking metrics and data analysis methods to infer the energy performance of a building based on its load shape. We first performed a data experiment with collected smart meter data using over 2,000 small- and medium-sized businesses in California. We then conducted a cluster analysis of the source data, and determined and interpreted the load shape features and parameters with peer group analysis. Finally, we implemented the load shape benchmarking feature in an open-access web-based toolkit (the Commercial Building Energy Saver) to provide straightforward and practical recommendations to users. The analysis techniques were generic and flexible for future datasets of other building types and in other utility territories.
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The application of benchmark examples for the assessment of quasi-static delamination propagation capabilities is demonstrated for ANSYS. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation in commercial finite element codes based on the virtual crack closure technique (VCCT). The examples selected are based on two-dimensional finite element models of Double Cantilever Beam (DCB), End-Notched Flexure (ENF), Mixed-Mode Bending (MMB) and Single Leg Bending (SLB) specimens. First, the quasi-static benchmark examples were recreated for each specimen using the current implementation of VCCT in ANSYS. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in the finite element software. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall the results are encouraging, but further assessment for three-dimensional solid models is required.
Benchmarking and the laboratory
Galloway, M; Nadin, L
2001-01-01
This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112
Performance Analysis of the ARL Linux Networx Cluster
2004-06-01
Benchmark codes included OVERFLOW, GAMESS, COBALT, LSDYNA, and FLUENT; each code used processors selected by the SGE scheduler. All benchmarks on the Origin 3800 were executed using IRIX cpusets. The OVERFLOW case used for these benchmarks defines a missile with grid fins, with a grid consisting of seventeen million cells [3].
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-22
...-AA73 International Services Surveys: BE-180, Benchmark Survey of Financial Services Transactions Between U.S. Financial Services Providers and Foreign Persons AGENCY: Bureau of Economic Analysis... Survey of Financial Services Transactions between U.S. Financial Services Providers and Foreign Persons...
Integral Full Core Multi-Physics PWR Benchmark with Measured Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forget, Benoit; Smith, Kord; Kumar, Shikhar
In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio with concrete examples in nuclear engineering with the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. Like all analysis tools, verification and validation is essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g., critical experiments, flow loops, etc.) and there is a lack of relevant multi-physics benchmark measurements that are necessary to validate the high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.
Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu
2016-01-01
A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limit, that is, show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need for a more predictable process prompted the control of variation through an action plan. The action plan was successful, as noted by the shift in the 2014 data compared to the historical average; in addition, the variation was reduced. The model is subject to limitations: for example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
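Steps 2 and 3 of the model, checking for special-cause variation before benchmarking, might look like the following sketch. The monthly infection rates are simulated, and the three-sigma Shewhart rule is one common stability test, not necessarily the article's exact procedure.

```python
# Illustrative stability check (Steps 2-3): a three-sigma Shewhart-style
# rule on simulated monthly HAI rates before any benchmark comparison.
import numpy as np

rng = np.random.default_rng(2)
rates = rng.normal(3.2, 0.4, size=24)  # simulated monthly HAI rates

center = rates.mean()
sigma = rates.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma  # control limits

if np.any((rates > ucl) | (rates < lcl)):
    print("special-cause variation detected: control the process first")
else:
    print(f"stable process (mean {center:.2f}): ready to benchmark")
```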
Global-local methodologies and their application to nonlinear analysis
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1989-01-01
An assessment is made of the potential of different global-local analysis strategies for predicting the nonlinear and postbuckling responses of structures. Two postbuckling problems of composite panels are used as benchmarks and the application of different global-local methodologies to these benchmarks is outlined. The key elements of each of the global-local strategies are discussed and future research areas needed to realize the full potential of global-local methodologies are identified.
Alternative industrial carbon emissions benchmark based on input-output analysis
NASA Astrophysics Data System (ADS)
Han, Mengyao; Ji, Xi
2016-12-01
Some problems exist in current carbon emissions benchmark setting systems. The primary considerations for industrial carbon emissions standards relate mainly to direct carbon emissions (power-related emissions), and only a portion of indirect emissions are considered in current carbon emissions accounting processes. This practice is insufficient and may cause double counting to some extent due to mixed emission sources. To better integrate and quantify direct and indirect carbon emissions, an embodied industrial carbon emissions benchmark setting method is proposed to guide the establishment of carbon emissions benchmarks based on input-output analysis. This method attempts to link direct carbon emissions with inter-industrial economic exchanges and systematically quantifies the carbon emissions embodied in total product delivery chains. The purpose of this study is to design a practical new set of embodied intensity-based benchmarks for both direct and indirect carbon emissions. Beijing, at the first level of carbon emissions trading pilot schemes in China, plays a significant role in the establishment of these schemes and is chosen as an example in this study. The newly proposed method relates emissions directly to each responsibility in a practical way through the measurement of complex production and supply chains, reducing carbon emissions at their original sources. The method is expected to be developed further under uncertain internal and external contexts and to be generalized to guide the establishment of industrial benchmarks for carbon emissions trading schemes in China and other countries.
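The core calculation behind such embodied benchmarks is standard input-output algebra: total (direct plus indirect) emission intensities are e = f(I - A)^(-1), where f holds direct emission intensities and A is the technical coefficient matrix. Below is a minimal three-sector sketch with illustrative numbers, not Beijing data.

```python
# Minimal embodied-intensity sketch via the Leontief inverse.
# The three-sector coefficients and intensities are illustrative only.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],    # technical coefficient matrix
              [0.15, 0.10, 0.10],    # (input per unit of sectoral output)
              [0.05, 0.05, 0.15]])
f = np.array([2.0, 0.5, 0.1])        # direct CO2 per unit output by sector

L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse (I - A)^(-1)
e = f @ L                            # embodied (total) emission intensities

print("direct  :", f)
print("embodied:", e.round(3))       # candidate benchmark intensities
```

The gap between the direct and embodied vectors is exactly the indirect component that, per the abstract, current power-related benchmarks leave only partially counted.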
Energy benchmarking of commercial buildings: a low-cost pathway toward urban sustainability
NASA Astrophysics Data System (ADS)
Cox, Matt; Brown, Marilyn A.; Sun, Xiaojing
2013-09-01
US cities are beginning to experiment with a regulatory approach to address information failures in the real estate market by mandating the energy benchmarking of commercial buildings. Understanding how a commercial building uses energy has many benefits; for example, it helps building owners and tenants identify poor-performing buildings and subsystems and it enables high-performing buildings to achieve greater occupancy rates, rents, and property values. This paper estimates the possible impacts of a national energy benchmarking mandate through analysis chiefly utilizing the Georgia Tech version of the National Energy Modeling System (GT-NEMS). Correcting input discount rates results in a 4.0% reduction in projected energy consumption for seven major classes of equipment relative to the reference case forecast in 2020, rising to 8.7% in 2035. Thus, the official US energy forecasts appear to overestimate future energy consumption by underestimating investments in energy-efficient equipment. Further discount rate reductions spurred by benchmarking policies yield another 1.3-1.4% in energy savings in 2020, increasing to 2.2-2.4% in 2035. Benchmarking would increase the purchase of energy-efficient equipment, reducing energy bills, CO2 emissions, and conventional air pollution. Achieving comparable CO2 savings would require more than tripling existing US solar capacity. Our analysis suggests that nearly 90% of the energy saved by a national benchmarking policy would benefit metropolitan areas, and the policy’s benefits would outweigh its costs, both to the private sector and society broadly.
Clark, Neil R.; Szymkiewicz, Maciej; Wang, Zichen; Monteiro, Caroline D.; Jones, Matthew R.; Ma’ayan, Avi
2016-01-01
Gene set analysis of differential expression, which identifies collectively differentially expressed gene sets, has become an important tool for biology. The power of this approach lies in its reduction of the dimensionality of the statistical problem and its incorporation of biological interpretation by construction. Many approaches to gene set analysis have been proposed, but benchmarking their performance in the setting of real biological data is difficult due to the lack of a gold standard. In a previously published work we proposed a geometrical approach to differential expression which performed highly in benchmarking tests and compared well to the most popular methods of differential gene expression. As reported, this approach has a natural extension to gene set analysis which we call Principal Angle Enrichment Analysis (PAEA). PAEA employs dimensionality reduction and a multivariate approach for gene set enrichment analysis. However, the performance of this extension had not been assessed, nor had it been implemented as a web-based tool. Here we describe new benchmarking protocols for gene set analysis methods and find that PAEA performs highly. The PAEA method is implemented as a user-friendly web-based tool, which contains 70 gene set libraries and is freely available to the community. PMID:26848405
Clark, Neil R; Szymkiewicz, Maciej; Wang, Zichen; Monteiro, Caroline D; Jones, Matthew R; Ma'ayan, Avi
2015-11-01
Gene set analysis of differential expression, which identifies collectively differentially expressed gene sets, has become an important tool for biology. The power of this approach lies in its reduction of the dimensionality of the statistical problem and its incorporation of biological interpretation by construction. Many approaches to gene set analysis have been proposed, but benchmarking their performance in the setting of real biological data is difficult due to the lack of a gold standard. In a previously published work we proposed a geometrical approach to differential expression which performed highly in benchmarking tests and compared well to the most popular methods of differential gene expression. As reported, this approach has a natural extension to gene set analysis which we call Principal Angle Enrichment Analysis (PAEA). PAEA employs dimensionality reduction and a multivariate approach for gene set enrichment analysis. However, the performance of this extension had not been assessed, nor had it been implemented as a web-based tool. Here we describe new benchmarking protocols for gene set analysis methods and find that PAEA performs highly. The PAEA method is implemented as a user-friendly web-based tool, which contains 70 gene set libraries and is freely available to the community.
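The geometric idea behind PAEA can be sketched schematically: score a gene set by the principal angles between a low-dimensional subspace of the expression changes and the subspace spanned by the gene set's indicator vector. This is a schematic reading under stated assumptions, not the authors' implementation.

```python
# Schematic PAEA-like scoring: principal angles between an expression-change
# subspace and a gene set's indicator subspace. Not the authors' code.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(3)
n_genes = 1000
delta = rng.normal(size=(n_genes, 5))                    # expression changes, 5 contrasts
gene_set = rng.choice(n_genes, size=30, replace=False)   # hypothetical gene set

# Low-dimensional subspace of the expression changes (top left-singular vectors).
U, _, _ = np.linalg.svd(delta, full_matrices=False)
expr_basis = U[:, :3]

# Subspace spanned by the gene set's indicator vector.
indicator = np.zeros((n_genes, 1))
indicator[gene_set, 0] = 1.0

angles = subspace_angles(expr_basis, indicator)
print("smallest principal angle (radians):", float(angles.min()))
```

A smaller angle means the gene set lies closer to the directions of strongest expression change, which is the multivariate enrichment signal the method exploits.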
De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric
2010-01-11
Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method that uses biologically-relevant data to evaluate the performance of statistical methods. Our method ranks the probesets from a dataset composed of publicly-available biological microarray data and extracts subset matrices with precise information/noise ratios. It can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from previously published benchmarks. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when only two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.
Austin Community College Benchmarking Update.
ERIC Educational Resources Information Center
Austin Community Coll., TX. Office of Institutional Effectiveness.
Austin Community College contracted with MGT of America, Inc. in spring 1999 to develop a peer and benchmark (best) practices analysis on key indicators. These indicators were updated in spring 2002 using data from eight Texas community colleges and four non-Texas institutions that represent large, comprehensive, urban community colleges, similar…
ERIC Educational Resources Information Center
Lewin, Heather S.; Passonneau, Sarah M.
2012-01-01
This research provides the first review of publicly available assessment information found on Association of Research Libraries (ARL) members' websites. After providing an overarching review of benchmarking assessment data, and of professionally recommended assessment models, this paper examines if libraries contextualized their assessment…
Benchmarking 2011: Trends in Education Philanthropy
ERIC Educational Resources Information Center
Grantmakers for Education, 2011
2011-01-01
The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…
Benchmarking Universities' Efficiency Indicators in the Presence of Internal Heterogeneity
ERIC Educational Resources Information Center
Agasisti, Tommaso; Bonomi, Francesca
2014-01-01
When benchmarking its performance, a university is usually considered as a single strategic unit. According to the evidence, however, lower levels within an organisation (such as faculties, departments and schools) play a significant role in institutional governance, affecting the overall performance. In this article, an empirical analysis was…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-13
..., describes the fisheries, evaluates the status of the stock, estimates biological benchmarks, projects future.... Participants will evaluate and recommend datasets appropriate for assessment analysis, employ assessment models to evaluate stock status, estimate population benchmarks and management criteria, and project future...
The Role of Institutional Research in Conducting Comparative Analysis of Peers
ERIC Educational Resources Information Center
Trainer, James F.
2008-01-01
In this age of accountability, transparency, and accreditation, colleges and universities increasingly conduct comparative analyses and engage in benchmarking activities. Meant to inform institutional planning and decision making, comparative analyses and benchmarking are employed to let stakeholders know how an institution stacks up against its…
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeHart, Mark D.; Mausolff, Zander; Weems, Zach
2016-08-01
One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data has shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validation of the transient solution of Rattlesnake using other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considered both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.
Benchmark Evaluation of the HTR-PROTEUS Absorber Rod Worths (Core 4)
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess; Leland M. Montierth
2014-06-01
PROTEUS was a zero-power research reactor at the Paul Scherrer Institute (PSI) in Switzerland. The critical assembly was constructed from a large graphite annulus surrounding a central cylindrical cavity. Various experimental programs were investigated in PROTEUS; during the years 1992 through 1996, it was configured as a pebble-bed reactor and designated HTR-PROTEUS. Various critical configurations were assembled, each accompanied by an assortment of reactor physics experiments including differential and integral absorber rod measurements, kinetics, reaction rate distributions, water ingress effects, and small sample reactivity effects [1]. Four benchmark reports were previously prepared and included in the March 2013 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [2], evaluating eleven critical configurations. A summary of that effort was previously provided [3], and an analysis of absorber rod worth measurements for Cores 9 and 10 was performed prior to this analysis and included in PROTEUS-GCR-EXP-004 [4]. In the current benchmark effort, absorber rod worths measured for Core Configuration 4, which was the only core with a randomly-packed pebble loading, have been evaluated for inclusion as a revision to the HTR-PROTEUS benchmark report PROTEUS-GCR-EXP-002.
A benchmark for vehicle detection on wide area motion imagery
NASA Astrophysics Data System (ADS)
Catrambone, Joseph; Amzovski, Ismail; Liang, Pengpeng; Blasch, Erik; Sheaff, Carolyn; Wang, Zhonghai; Chen, Genshe; Ling, Haibin
2015-05-01
Wide area motion imagery (WAMI) has been attracting an increased amount of research attention due to its large spatial and temporal coverage. An important application is moving target analysis, where vehicle detection is often one of the first steps before advanced activity analysis. While there exist many vehicle detection algorithms, a thorough evaluation of them on WAMI data still remains a challenge, mainly due to the lack of an appropriate benchmark data set. In this paper, we address this research need by presenting a new benchmark for vehicle detection in wide area motion imagery. The WAMI benchmark is based on the recently available Wright-Patterson Air Force Base (WPAFB09) dataset and the associated Temple Resolved Uncertainty Target History (TRUTH) target annotation. Trajectory annotations were provided in the original release of the WPAFB09 dataset, but detailed vehicle annotations were not, and annotations of static vehicles, e.g., in parking lots, are also not identified in the original release. Addressing these issues, we re-annotated the whole dataset with detailed information for each vehicle, including not only a target's location, but also its pose and size. The annotated WAMI data set should be useful to the community as a common benchmark for comparing WAMI detection, tracking, and identification methods.
Delgadillo, Jaime; Asaria, Miqdad; Ali, Shehzad; Gilbody, Simon
2016-11-01
Since 2008, the Improving Access to Psychological Therapies (IAPT) programme has disseminated evidence-based interventions for depression and anxiety problems. In order to maintain quality standards, government policy in England sets the expectation that 50% of treated patients should meet recovery criteria according to validated patient-reported outcome measures. Using national IAPT data, we found evidence suggesting that the prevalence of mental health problems is greater in poorer areas and that these areas had lower average recovery rates. After adjusting benchmarks for local index of multiple deprivation, we found significant differences between unadjusted (72.5%) and adjusted (43.1%) proportions of underperforming clinical commissioning group areas. © The Royal College of Psychiatrists 2016.
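A sketch of such case-mix adjustment: regress area recovery rates on deprivation, then judge each area against its expected rate rather than the flat 50% target. The data, model form, and two-residual-SD cutoff below are illustrative assumptions, not the study's exact adjustment.

```python
# Illustrative deprivation adjustment: regress recovery rates on IMD and
# compare each area to its expected rate. Data and cutoff are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
imd = rng.uniform(5, 60, 200)                        # index of multiple deprivation
recovery = 58 - 0.25 * imd + rng.normal(0, 4, 200)   # simulated % recovery by area

fit = sm.OLS(recovery, sm.add_constant(imd)).fit()
expected = fit.predict(sm.add_constant(imd))
resid_sd = np.sqrt(fit.scale)                        # residual standard deviation

under_flat = np.mean(recovery < 50) * 100            # against the flat 50% target
under_adj = np.mean(recovery < expected - 2 * resid_sd) * 100
print(f"below flat target: {under_flat:.0f}%; below adjusted benchmark: {under_adj:.0f}%")
```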
Simulation-based comprehensive benchmarking of RNA-seq aligners
Baruzzo, Giacomo; Hayer, Katharina E; Kim, Eun Ji; Di Camillo, Barbara; FitzGerald, Garret A; Grant, Gregory R
2018-01-01
Alignment is the first step in most RNA-seq analysis pipelines, and the accuracy of downstream analyses depends heavily on it. Unlike most steps in the pipeline, alignment is particularly amenable to benchmarking with simulated data. We performed a comprehensive benchmarking of 14 common splice-aware aligners for base, read, and exon junction-level accuracy and compared default with optimized parameters. We found that performance varied by genome complexity, and accuracy and popularity were poorly correlated. The most widely cited tool underperforms for most metrics, particularly when using default settings. PMID:27941783
Watson, Nathanial E; Parsons, Brendon A; Synovec, Robert E
2016-08-12
Performance of tile-based Fisher ratio (F-ratio) data analysis, recently developed for discovery-based studies using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS), is evaluated with a metabolomics dataset that had been previously analyzed in great detail using a brute-force approach. The previously analyzed data (referred to herein as the benchmark dataset) were intracellular extracts from Saccharomyces cerevisiae (yeast), either metabolizing glucose (repressed) or ethanol (derepressed), which define the two classes in the discovery-based analysis to find metabolites that are statistically different in concentration between the two classes. Beneficially, this previously analyzed dataset provides a concrete means to validate the tile-based F-ratio software. Herein, we demonstrate and validate the significant benefits of applying tile-based F-ratio analysis. The yeast metabolomics data are analyzed more rapidly, in about one week versus one year for the prior studies with this dataset. Furthermore, a null distribution analysis is implemented to statistically determine an adequate F-ratio threshold, whereby variables with F-ratio values below the threshold can be ignored as not class distinguishing, which gives the analyst confidence when interpreting the hit table. Forty-six of the fifty-four benchmarked changing metabolites were discovered by the new methodology, while consistently excluding all but one of the nineteen benchmarked false positive metabolites previously identified. Copyright © 2016 Elsevier B.V. All rights reserved.
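The null-distribution thresholding can be sketched generically: compute a between-class F-ratio per variable (standing in for a tile), then set the significance threshold from the maximum F-ratios obtained after permuting class labels. The simulated data and the family-wise 95% cutoff are illustrative assumptions, not the authors' exact procedure.

```python
# Generic sketch of F-ratio screening with a permutation-based threshold.
# Simulated two-class data stand in for GCxGC-TOFMS tile signals.
import numpy as np

rng = np.random.default_rng(5)
n_per, n_vars = 6, 2000
a = rng.normal(0, 1, (n_per, n_vars))
b = rng.normal(0, 1, (n_per, n_vars))
b[:, :25] += 2.0  # 25 truly class-distinguishing variables

def f_ratio(x, y):
    """Per-variable between-class to within-class variance ratio."""
    grand = np.vstack([x, y]).mean(0)
    between = (len(x) * (x.mean(0) - grand) ** 2 +
               len(y) * (y.mean(0) - grand) ** 2)
    within = (x.var(0, ddof=1) * (len(x) - 1) +
              y.var(0, ddof=1) * (len(y) - 1)) / (len(x) + len(y) - 2)
    return between / within

obs = f_ratio(a, b)

# Null distribution: recompute max F-ratios with class labels permuted.
pooled = np.vstack([a, b])
null_max = []
for _ in range(200):
    idx = rng.permutation(len(pooled))
    null_max.append(f_ratio(pooled[idx[:n_per]], pooled[idx[n_per:]]).max())
threshold = np.quantile(null_max, 0.95)  # family-wise 5% threshold

print("hits:", int((obs > threshold).sum()), "of", n_vars)
```

Variables below the threshold are discarded as not class distinguishing, which is what keeps the hit table short and trustworthy.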
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1986-01-01
An assessment is made of the potential of different global-local analysis strategies for predicting the nonlinear and postbuckling responses of structures. Two postbuckling problems of composite panels are used as benchmarks and the application of different global-local methodologies to these benchmarks is outlined. The key elements of each of the global-local strategies are discussed and future research areas needed to realize the full potential of global-local methodologies are identified.
Evaluating the Quantitative Capabilities of Metagenomic Analysis Software.
Kerepesi, Csaba; Grolmusz, Vince
2016-05-01
DNA sequencing technologies are applied widely and frequently today to describe metagenomes, i.e., microbial communities in environmental or clinical samples, without the need for culturing them. These technologies usually return short (100-300 base-pairs long) DNA reads, and these reads are processed by metagenomic analysis software that assigns phylogenetic composition information to the dataset. Here we evaluate three metagenomic analysis software packages (AmphoraNet, a web server implementation of AMPHORA2; MG-RAST; and MEGAN5) for their capability of assigning quantitative phylogenetic information to the data, describing the frequency of appearance of microorganisms of the same taxa in the sample. The difficulty of the task arises from the fact that longer genomes produce more reads from the same organism than shorter genomes, so some software assigns higher frequencies to species with longer genomes than to those with shorter ones. This phenomenon is called the "genome length bias." Dozens of complex artificial metagenome benchmarks can be found in the literature; because of their complexity, it is usually difficult to judge the resistance of a metagenomic software package to this bias. Therefore, we have made a simple benchmark for evaluating the "taxon-counting" of a metagenomic sample: we took the same number of copies of three full bacterial genomes of different lengths, broke them up randomly into short reads of average length 150 bp, and mixed the reads, creating our simple benchmark. Because of its simplicity, the benchmark is not supposed to serve as a mock metagenome, but if a software package fails on this simple task, it will surely fail on most real metagenomes. We applied the three packages to the benchmark. The ideal quantitative solution would assign the same proportion to the three bacterial taxa. We found that AMPHORA2/AmphoraNet gave the most accurate results and the other two packages underperformed: they counted each short read quite reliably to its respective taxon, producing the typical genome length bias. The benchmark dataset is available at http://pitgroup.org/static/3RandomGenome-100kavg150bps.fna.
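The construction of the simple benchmark is easy to reproduce in outline: equal copy numbers of three genomes of different lengths, fragmented into ~150 bp reads, yield read counts proportional to genome length, which is exactly the bias a quantitative tool must correct. Genome lengths and copy numbers below are illustrative, not those of the published dataset.

```python
# Sketch of the genome length bias: equal genome copies, random fragmentation
# into ~150 bp reads, read counts scaling with genome length. Illustrative
# lengths and copy numbers, not the published benchmark's genomes.
import numpy as np

rng = np.random.default_rng(6)
genome_lengths = {"taxonA": 2_000_000, "taxonB": 4_000_000, "taxonC": 8_000_000}
copies, read_len = 100, 150

reads = {}
for taxon, length in genome_lengths.items():
    # Expected read count ~ copies * length / read_len under random shearing.
    reads[taxon] = rng.poisson(copies * length / read_len)

total = sum(reads.values())
for taxon, n_reads in reads.items():
    naive = n_reads / total                                  # biased by length
    corrected = (n_reads / genome_lengths[taxon]) / sum(
        r / genome_lengths[t] for t, r in reads.items())     # length-normalized
    print(f"{taxon}: naive {naive:.2f}, length-corrected {corrected:.2f}")
```

The length-corrected proportions recover the intended one-third split, which is the behavior the abstract credits to the best-performing tool.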
Rand, Hugh; Shumway, Martin; Trees, Eija K.; Simmons, Mustafa; Agarwala, Richa; Davis, Steven; Tillman, Glenn E.; Defibaugh-Chavez, Stephanie; Carleton, Heather A.; Klimke, William A.; Katz, Lee S.
2017-01-01
Background: As next-generation sequencing technology has advanced, there have been parallel advances in genome-scale analysis programs for determining evolutionary relationships as proxies for epidemiological relationship in public health. Most new programs skip traditional steps of ortholog determination and multi-gene alignment, instead identifying variants across a set of genomes, then summarizing results in a matrix of single-nucleotide polymorphisms or alleles for standard phylogenetic analysis. However, public health authorities need to document the performance of these methods with appropriate and comprehensive datasets so they can be validated for specific purposes, e.g., outbreak surveillance. Here we propose a set of benchmark datasets to be used for comparison and validation of phylogenomic pipelines. Methods: We identified four well-documented foodborne pathogen events in which the epidemiology was concordant with routine phylogenomic analyses (reference-based SNP and wgMLST approaches). These are ideal benchmark datasets, as the trees, WGS data, and epidemiological data for each are all in agreement. We have placed these sequence data, sample metadata, and "known" phylogenetic trees in publicly accessible databases and developed a standard descriptive spreadsheet format describing each dataset. To facilitate easy downloading of these benchmarks, we developed an automated script that uses the standard descriptive spreadsheet format. Results: Our "outbreak" benchmark datasets represent the four major foodborne bacterial pathogens (Listeria monocytogenes, Salmonella enterica, Escherichia coli, and Campylobacter jejuni) and one simulated dataset where the "known tree" can be accurately called the "true tree". The downloading script and associated table files are available on GitHub: https://github.com/WGS-standards-and-analysis/datasets. Discussion: These five benchmark datasets will help standardize comparison of current and future phylogenomic pipelines, and facilitate important cross-institutional collaborations. Our work is part of a global effort to provide collaborative infrastructure for sequence data and analytic tools; we welcome additional benchmark datasets in our recommended format, and, if relevant, we will add these to our GitHub site. Together, these datasets, dataset format, and the underlying GitHub infrastructure present a recommended path for worldwide standardization of phylogenomic pipelines. PMID:29372115
Timme, Ruth E; Rand, Hugh; Shumway, Martin; Trees, Eija K; Simmons, Mustafa; Agarwala, Richa; Davis, Steven; Tillman, Glenn E; Defibaugh-Chavez, Stephanie; Carleton, Heather A; Klimke, William A; Katz, Lee S
2017-01-01
As next-generation sequencing technology has advanced, there have been parallel advances in genome-scale analysis programs for determining evolutionary relationships as proxies for epidemiological relationship in public health. Most new programs skip traditional steps of ortholog determination and multi-gene alignment, instead identifying variants across a set of genomes, then summarizing results in a matrix of single-nucleotide polymorphisms or alleles for standard phylogenetic analysis. However, public health authorities need to document the performance of these methods with appropriate and comprehensive datasets so they can be validated for specific purposes, e.g., outbreak surveillance. Here we propose a set of benchmark datasets to be used for comparison and validation of phylogenomic pipelines. We identified four well-documented foodborne pathogen events in which the epidemiology was concordant with routine phylogenomic analyses (reference-based SNP and wgMLST approaches). These are ideal benchmark datasets, as the trees, WGS data, and epidemiological data for each are all in agreement. We have placed these sequence data, sample metadata, and "known" phylogenetic trees in publicly accessible databases and developed a standard descriptive spreadsheet format describing each dataset. To facilitate easy downloading of these benchmarks, we developed an automated script that uses the standard descriptive spreadsheet format. Our "outbreak" benchmark datasets represent the four major foodborne bacterial pathogens (Listeria monocytogenes, Salmonella enterica, Escherichia coli, and Campylobacter jejuni) and one simulated dataset where the "known tree" can be accurately called the "true tree". The downloading script and associated table files are available on GitHub: https://github.com/WGS-standards-and-analysis/datasets. These five benchmark datasets will help standardize comparison of current and future phylogenomic pipelines, and facilitate important cross-institutional collaborations. Our work is part of a global effort to provide collaborative infrastructure for sequence data and analytic tools; we welcome additional benchmark datasets in our recommended format, and, if relevant, we will add these to our GitHub site. Together, these datasets, dataset format, and the underlying GitHub infrastructure present a recommended path for worldwide standardization of phylogenomic pipelines.
McIlrath, Carole; Keeney, Sinead; McKenna, Hugh; McLaughlin, Derek
2010-02-01
This paper is a report of a study conducted to identify and gain consensus on appropriate benchmarks for effective primary care-based nursing services for adults with depression. Worldwide evidence suggests that between 5% and 16% of the population have a diagnosis of depression. Most of their care and treatment takes place in primary care. In recent years, primary care nurses, including community mental health nurses, have become more involved in the identification and management of patients with depression; however, there are no appropriate benchmarks to guide, develop and support their practice. In 2006, a three-round electronic Delphi survey was completed by a United Kingdom multi-professional expert panel (n = 67). Round 1 generated 1216 statements relating to structures (such as training and protocols), processes (such as access and screening) and outcomes (such as patient satisfaction and treatments). Content analysis was used to collapse statements into 140 benchmarks. Seventy-three benchmarks achieved consensus during subsequent rounds. Of these, 45 (61%) were related to structures, 18 (25%) to processes and 10 (14%) to outcomes. Multi-professional primary care staff have similar views about the appropriate benchmarks for care of adults with depression. These benchmarks could serve as a foundation for depression improvement initiatives in primary care and ongoing research into depression management by nurses.
Research on computer systems benchmarking
NASA Technical Reports Server (NTRS)
Smith, Alan Jay (Principal Investigator)
1996-01-01
This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction: a new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance, examining the performance impact of optimization within this abstract-machine-based methodology for CPU performance characterization. Benchmark programs are analyzed in another paper: a machine-independent model of program execution was developed to characterize both machine performance and program execution, and by merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments, as well as smaller efforts supported by this grant, are summarized in more detail in this report.
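The merging of machine and program characterizations described above reduces, at its core, to a weighted sum: predicted runtime is the dot product of per-operation counts with per-operation times. A minimal sketch follows, with invented operation names, counts, and timings standing in for the report's Fortran abstract-machine parameters.

    # Sketch of the merge step: predicted runtime = dot(operation counts,
    # per-operation times). Operation names and timings are illustrative.
    import numpy as np

    ops = ["fp_add", "fp_mul", "mem_load", "mem_store", "branch"]
    machine_ns_per_op = np.array([2.0, 3.0, 5.0, 4.0, 1.0])   # machine characterization
    program_op_counts = np.array([4e8, 3e8, 6e8, 2e8, 1e8])   # program characterization

    predicted_seconds = program_op_counts @ machine_ns_per_op * 1e-9
    print(f"estimated execution time: {predicted_seconds:.2f} s")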
Benchmarking Equity in Transfer Policies for Career and Technical Associate's Degrees
ERIC Educational Resources Information Center
Chase, Megan M.
2011-01-01
Using critical policy analysis, this study considers state policies that impede technical credit transfer from public 2-year colleges to 4-year institutions of higher education. The states of Ohio, Texas, Washington, and Wisconsin are considered, and seven policy benchmarks for facilitating the transfer of technical credits are proposed. (Contains…
Benchmarking and Threshold Standards in Higher Education. Staff and Educational Development Series.
ERIC Educational Resources Information Center
Smith, Helen, Ed.; Armstrong, Michael, Ed.; Brown, Sally, Ed.
This book explores the issues involved in developing standards in higher education, examining the practical issues involved in benchmarking and offering a critical analysis of the problems associated with this developmental tool. The book focuses primarily on experience in the United Kingdom (UK), but looks also at international activity in this…
Building Bridges Between Geoscience and Data Science through Benchmark Data Sets
NASA Astrophysics Data System (ADS)
Thompson, D. R.; Ebert-Uphoff, I.; Demir, I.; Gel, Y.; Hill, M. C.; Karpatne, A.; Güereque, M.; Kumar, V.; Cabral, E.; Smyth, P.
2017-12-01
The changing nature of observational field data demands richer and more meaningful collaboration between data scientists and geoscientists. Thus, among other efforts, the Working Group on Case Studies of the NSF-funded RCN on Intelligent Systems Research To Support Geosciences (IS-GEO) is developing a framework to strengthen such collaborations through the creation of benchmark datasets. Benchmark datasets provide an interface between disciplines without requiring extensive background knowledge. The goals are to create (1) a means for two-way communication between geoscience and data science researchers; (2) new collaborations, which may lead to new approaches for data analysis in the geosciences; and (3) a public, permanent repository of complex data sets, representative of geoscience problems, useful to coordinate efforts in research and education. The group identified 10 key elements and characteristics of ideal benchmarks:
- High impact: a problem with high potential impact.
- Active research area: a group of geoscientists should be eager to continue working on the topic.
- Challenge: the problem should be challenging for data scientists.
- Data science generality and versatility: it should stimulate the development of new general and versatile data science methods.
- Rich information content: ideally, the data set provides stimulus for analysis at many different levels.
- Hierarchical problem statement: a hierarchy of suggested analysis tasks, from relatively straightforward to open-ended.
- Means for evaluating success: data scientists and geoscientists need means to evaluate whether the algorithms are successful and achieve their intended purpose.
- Quick start guide: an introduction for data scientists on how to easily read the data, enabling rapid initial exploration.
- Geoscience context: a summary for data scientists of the specific data collection process, the instruments used, any pre-processing, and the science questions to be answered.
- Citability: a suitable identifier to facilitate tracking later use of the benchmark, e.g., allowing search engines to find all research papers using it.
A first sample benchmark, developed in collaboration with the Jet Propulsion Laboratory (JPL), deals with the automatic analysis of imaging spectrometer data to detect significant methane sources in the atmosphere.
Validation of Shielding Analysis Capability of SuperMC with SINBAD
NASA Astrophysics Data System (ADS)
Chen, Chaobin; Yang, Qi; Wu, Bin; Han, Yuncheng; Song, Jing
2017-09-01
The shielding analysis capability of SuperMC was validated with the Shielding Integral Benchmark Archive Database (SINBAD). SINBAD, compiled by RSICC and NEA, includes numerous benchmark experiments performed with the D-T fusion neutron source facilities of OKTAVIAN, FNS, IPPE, etc. The results from SuperMC simulation were compared with experimental data and MCNP results. Very good agreement, with deviations lower than 1%, was achieved, suggesting that SuperMC is reliable for shielding calculations.
Multi-Core Processor Memory Contention Benchmark Analysis Case Study
NASA Technical Reports Server (NTRS)
Simon, Tyler; McGalliard, James
2009-01-01
Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
Schaffter, Thomas; Marbach, Daniel; Floreano, Dario
2011-08-15
Over the last decade, numerous methods have been developed for inference of regulatory networks from gene expression data. However, accurate and systematic evaluation of these methods is hampered by the difficulty of constructing adequate benchmarks and the lack of tools for a differentiated analysis of network predictions on such benchmarks. Here, we describe a novel and comprehensive method for in silico benchmark generation and performance profiling of network inference methods, available to the community as open-source software called GeneNetWeaver (GNW). In addition to the generation of detailed dynamical models of gene regulatory networks to be used as benchmarks, GNW provides a network motif analysis that reveals systematic prediction errors, thereby indicating potential ways of improving inference methods. The accuracy of network inference methods is evaluated using standard metrics such as precision-recall and receiver operating characteristic curves. We show how GNW can be used to assess the performance and identify the strengths and weaknesses of six inference methods. Furthermore, we used GNW to provide the international Dialogue for Reverse Engineering Assessments and Methods (DREAM) competition with three network inference challenges (DREAM3, DREAM4 and DREAM5). GNW is available at http://gnw.sourceforge.net along with its Java source code, user manual and supporting data. Supplementary data are available at Bioinformatics online. dario.floreano@epfl.ch.
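A minimal sketch of the kind of evaluation GNW automates, assuming a flat list of candidate edges scored against a gold-standard network; the score distributions are synthetic, and scikit-learn supplies the precision-recall and ROC summary metrics named above.

    # Sketch of scoring predicted regulatory edges against a gold standard
    # with precision-recall and ROC metrics (synthetic scores below).
    import numpy as np
    from sklearn.metrics import average_precision_score, roc_auc_score

    rng = np.random.default_rng(1)
    true_edges = rng.integers(0, 2, size=1000)        # 1 = edge in gold standard
    confidence = np.where(true_edges == 1,
                          rng.beta(4, 2, size=1000),  # true edges tend to score higher...
                          rng.beta(2, 4, size=1000))  # ...than absent ones, imperfectly

    print("AUPR :", average_precision_score(true_edges, confidence))
    print("AUROC:", roc_auc_score(true_edges, confidence))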
Simplified Numerical Analysis of ECT Probe - Eddy Current Benchmark Problem 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sikora, R.; Chady, T.; Gratkowski, S.
2005-04-09
In this paper, a third eddy current benchmark problem is considered. The objective of the benchmark is to determine the optimal operating frequency and size of a pancake coil designated for testing tubes made of Inconel. This is achieved by maximizing the change in the coil's impedance due to a flaw. Approximation functions of the probe (coil) characteristic were developed and used in order to reduce the number of required calculations, resulting in a significant speed-up of the optimization process. An optimal testing frequency and probe size were obtained as the final result of the calculation.
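A minimal sketch of the optimization step described above, with an invented one-variable surrogate standing in for the paper's approximation functions of the coil characteristic (the real objective also varies coil size, and the functional form below is purely illustrative).

    # Sketch of the optimization step: maximize the flaw-induced impedance
    # change |dZ(f)| over frequency using a cheap surrogate in place of
    # repeated field simulations. The surrogate below is illustrative only.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def delta_z_surrogate(f_khz):
        """Toy approximation of |dZ| vs frequency: skin-depth trade-off."""
        return f_khz * np.exp(-f_khz / 250.0)

    res = minimize_scalar(lambda f: -delta_z_surrogate(f),
                          bounds=(1.0, 1000.0), method="bounded")
    print(f"optimal test frequency ~ {res.x:.0f} kHz")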
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobrov, A. A.; Boyarinov, V. F.; Glushkov, A. E.
2012-07-01
Results of critical experiments performed at five ASTRA facility configurations modeling high-temperature helium-cooled graphite-moderated reactors are presented. Results of experiments on the spatial distribution of the ²³⁵U fission reaction rate, performed at four of these five configurations, are presented in more detail. Analysis of the available information showed that all criticality experiments at these five configurations are acceptable for use as critical benchmark experiments, and all experiments on the spatial distribution of the ²³⁵U fission reaction rate are acceptable for use as physical benchmark experiments. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanova, T.; Laville, C.; Dyrda, J.
2012-07-01
The sensitivities of the k_eff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)
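For reference, the sensitivity coefficient at issue is conventionally defined as the fractional change in k_eff per fractional change in a cross section σ; this standard definition is added here for the reader and is not quoted from the abstract:

    S_{k,\sigma} = \frac{\delta k_{\mathrm{eff}} / k_{\mathrm{eff}}}{\delta \sigma / \sigma}
                 = \frac{\sigma}{k_{\mathrm{eff}}} \, \frac{\partial k_{\mathrm{eff}}}{\partial \sigma}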
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman, B.; Ardani, K.; Feldman, D.
2013-10-01
This report presents results from the second U.S. Department of Energy (DOE) sponsored, bottom-up data-collection and analysis of non-hardware balance-of-system costs -- often referred to as 'business process' or 'soft' costs -- for U.S. residential and commercial photovoltaic (PV) systems. In service to DOE's SunShot Initiative, annual expenditure and labor-hour-productivity data are analyzed to benchmark 2012 soft costs related to (1) customer acquisition and system design and (2) permitting, inspection, and interconnection (PII). We also include an in-depth analysis of costs related to financing, overhead, and profit. Soft costs are both a major challenge and a major opportunity for reducing PV system prices and stimulating SunShot-level PV deployment in the United States. The data and analysis in this series of benchmarking reports are a step toward the more detailed understanding of PV soft costs required to track and accelerate these price reductions.
ERIC Educational Resources Information Center
Stern, Luli; Ahlgren, Andrew
2002-01-01
Project 2061 of the American Association for the Advancement of Science (AAAS) developed and field-tested a procedure for analyzing curriculum materials, including assessments, in terms of contribution to the attainment of benchmarks and standards. Using this procedure, Project 2061 produced a database of reports on nine science middle school…
Benchmark Analysis of Career and Technical Education in Lenawee County. Final Report.
ERIC Educational Resources Information Center
Hollenbeck, Kevin
The career and technical education (CTE) provided in grades K-12 in the county's vocational-technical center and 12 local public school districts of Lenawee County, Michigan, was benchmarked with respect to its attention to career development. Data were collected from the following sources: structured interviews with a number of key respondents…
Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.
Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J
2016-07-01
Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets and use these to improve their performance. Different authors have performed benchmark analyses on a monthly or yearly basis, but their approaches suffer from a time lag between an event, its detection, interpretation and potential actions. The availability of on-line measurement data at many WWTPs should theoretically enable a shorter management response time through daily benchmarking. Unfortunately, this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performances. In EOS, the energy KPIs take the pollutant load into consideration in order to enable comparison between different plants. For example, EOS does not analyse the absolute energy consumption but the energy consumption per unit of pollutant load. This approach enables the comparison of performances for plants with different loads or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, to overcome this limitation, the authors develop a methodology to estimate the required parameters and to manage the uncertainty in the estimation. By coupling the parameter estimation with an interval-based benchmark approach, the authors propose an effective, fast and reproducible way to manage infrequent inlet measurements. Its use enables benchmarking on a daily basis and prepares the ground for further investigation. Copyright © 2016 Elsevier Inc. All rights reserved.
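A minimal sketch of the estimation-plus-interval idea described above, with invented numbers: daily energy readings, a pollutant load interpolated between roughly 14-day lab samples with an assumed relative error, and the energy-per-load KPI reported as an interval so that only days whose whole interval exceeds a target are flagged. The interpolation scheme, error level, and target are all assumptions, not the paper's method.

    # Sketch of interval-based daily KPIs: interpolate infrequent pollutant
    # load measurements, attach an uncertainty band, and report the daily
    # energy-per-load KPI as an interval. All numbers are illustrative.
    import numpy as np

    days = np.arange(28)
    energy_kwh = 1000 + 50 * np.sin(days / 4)          # daily on-line meter readings
    lab_days = np.array([0, 14, 27])                   # infrequent lab sampling
    lab_load_kg = np.array([520.0, 480.0, 510.0])      # pollutant load (e.g., COD)

    load_est = np.interp(days, lab_days, lab_load_kg)  # daily load estimate
    rel_err = 0.15                                     # assumed estimation uncertainty
    kpi_low = energy_kwh / (load_est * (1 + rel_err))  # kWh per kg, lower bound
    kpi_high = energy_kwh / (load_est * (1 - rel_err)) # kWh per kg, upper bound
    flagged = days[kpi_low > 1.7]                      # days surely above a target KPI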
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-03
FEDERAL TRADE COMMISSION [File No. 111 0097] Healthcare Technology Holdings, Inc.; Analysis of... Public Comment I. Introduction The Federal Trade Commission ("Commission") has accepted from Healthcare Technology Holdings, Inc. ("Healthcare Technology"), subject to final approval, an Agreement Containing...
76 FR 71564 - ScanScout, Inc.; Analysis of Proposed Consent Order To Aid Public Comment
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-18
... deceptive acts or practices or unfair methods of competition. The attached Analysis to Aid Public Comment... FEDERAL TRADE COMMISSION [File No. 102 3185] ScanScout, Inc.; Analysis of Proposed Consent Order... Trade Commission Act, 38 Stat. 721, 15 U.S.C. 46(f), and Sec. 2.34 the Commission Rules of Practice, 16...
Raison, Nicholas; Ahmed, Kamran; Fossati, Nicola; Buffi, Nicolò; Mottrie, Alexandre; Dasgupta, Prokar; Van Der Poel, Henk
2017-05-01
To develop benchmark scores of competency for use within a competency-based virtual reality (VR) robotic training curriculum. This longitudinal, observational study analysed results from nine European Association of Urology hands-on-training courses in VR simulation. In all, 223 participants ranging from novice to expert robotic surgeons completed 1565 exercises. Competency was set at 75% of the mean expert score. Benchmark scores for all general performance metrics generated by the simulator were calculated. Assessment exercises were selected by expert consensus and through learning-curve analysis. Three basic-skill and two advanced-skill exercises were identified. Benchmark scores based on expert performance offered viable targets for novice and intermediate trainees in robotic surgery. Novice participants met the competency standards for most basic-skill exercises; however, the advanced exercises were significantly more challenging. Intermediate participants performed better across the seven metrics but still did not achieve the benchmark standard in the more difficult exercises. Benchmark scores derived from expert performances offer relevant and challenging targets for trainees to achieve during VR simulation training. Objective feedback allows both participants and trainers to monitor educational progress and ensures that training remains effective. Furthermore, the well-defined goals set through benchmarking offer clear targets for trainees and enable training to move to a more efficient competency-based curriculum. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.
Xia, Yuan; Deshpande, Sameer; Bonates, Tiberius
2016-11-01
Social marketing managers promote desired behaviors to an audience by making them tangible in the form of environmental opportunities to enhance benefits and reduce barriers. This study proposed "benchmarks," modified from those found in the past literature, that would match important concepts of the social marketing framework and the inclusion of which would ensure behavior change effectiveness. In addition, we analyzed behavior change interventions on a "social marketing continuum" to assess whether the number of benchmarks and the role of specific benchmarks influence the effectiveness of physical activity promotion efforts. A systematic review of social marketing interventions available in academic studies published between 1997 and 2013 revealed 173 conditions in 92 interventions. Findings based on χ², Mallows' Cp, and Logical Analysis of Data tests revealed that the presence of more benchmarks in interventions increased the likelihood of success in promoting physical activity. The presence of more than 3 benchmarks improved the success of the interventions; specifically, all interventions were successful when more than 7.5 benchmarks were present. Further, primary formative research, core product, actual product, augmented product, promotion, and behavioral competition all had a significant influence on the effectiveness of interventions. Social marketing is an effective approach in promoting physical activity among adults when a substantial number of benchmarks are used and when managers understand the audience, make the desired behavior tangible, and promote the desired behavior persuasively.
Benchmarking of venous thromboembolism prophylaxis practice with ENT.UK guidelines.
Al-Qahtani, Ali S
2017-05-01
The aim of this study was to benchmark our guidelines for the prevention of venous thromboembolism (VTE) in the ENT surgical population against the ENT.UK guidelines, and to encourage healthcare providers to utilize benchmarking as an effective method of improving performance. The study design is a prospective descriptive analysis. The setting of this study is a tertiary referral centre (Assir Central Hospital, Abha, Saudi Arabia). In this study, we benchmark our practice guidelines for the prevention of VTE in the ENT surgical population against the ENT.UK guidelines in order to identify and mitigate any gaps. The ENT.UK guidelines (2010) were downloaded from the ENT.UK website, and our guidelines were compared against them to determine whether our performance meets or falls short of the ENT.UK standard. Immediate corrective actions will take place if there is a quality chasm between the two guidelines. The ENT.UK guidelines are evidence-based and up to date, and may serve as a role model for adoption and benchmarking. Our guidelines were accordingly amended to contain all factors required to provide a quality service to ENT surgical patients. While not given appropriate attention, benchmarking is a useful tool for improving the quality of health care. It allows learning from others' practices and experiences, and works towards closing any quality gaps. In addition, benchmarking clinical outcomes is critical for quality improvement and for informing decisions concerning service provision. It is recommended for inclusion among the quality improvement methods of healthcare services.
Nonparametric estimation of benchmark doses in environmental risk assessment
Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen
2013-01-01
An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits' small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
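A minimal sketch of the nonparametric idea (not the authors' estimator or their confidence-limit construction): fit a monotone dose-response curve with scikit-learn's isotonic regression and invert it at a 10% extra-risk benchmark response. The dose-response data below are invented, and a bootstrap over dose groups would be needed for a lower confidence limit (BMDL).

    # Minimal sketch: isotonic dose-response fit, inverted at the benchmark
    # response (BMR) to give a nonparametric BMD estimate.
    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
    n = np.array([50, 50, 50, 50, 50])            # animals per dose group
    cases = np.array([2, 4, 7, 14, 30])           # quantal responses

    iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
    p_hat = iso.fit(doses, cases / n).predict(doses)

    bmr = 0.10                                    # 10% extra risk
    target = p_hat[0] + bmr * (1 - p_hat[0])      # extra-risk scale
    grid = np.linspace(doses[0], doses[-1], 2001)
    risk = iso.predict(grid)
    bmd = grid[np.argmax(risk >= target)]         # smallest dose reaching the BMR
    # Binomial resampling within dose groups and refitting would yield a
    # bootstrap distribution of BMD estimates, hence a BMDL.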
Gadolinia depletion analysis by CASMO-4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobayashi, Y.; Saji, E.; Toba, A.
1993-01-01
CASMO-4 is the most recent version of the lattice physics code CASMO introduced by Studsvik. The principal aspects of the CASMO-4 model that differ from the models in previous CASMO versions are as follows: (1) a heterogeneous model for two-dimensional transport theory calculations; and (2) a microregion depletion model for burnable absorbers, such as gadolinia. Of these, the first has previously been benchmarked against measured data from critical experiments and Monte Carlo calculations, verifying its high degree of accuracy. To proceed with CASMO-4 benchmarking, it is desirable to benchmark the microregion depletion model, which enables CASMO-4 to calculate gadolinium depletion directly without the need for precalculated MICBURN cross-section data. This paper presents the benchmarking results for the microregion depletion model in CASMO-4 using the measured data of depleted gadolinium rods.
von Websky, Martin W; Raptis, Dimitri A; Vitz, Martina; Rosenthal, Rachel; Clavien, P A; Hahnloser, Dieter
2013-11-01
Virtual reality (VR) simulators are widely used to familiarize surgical novices with laparoscopy, but VR training methods differ in efficacy. In the present trial, self-controlled basic VR training (SC-training) was tested against training based on peer-group-derived benchmarks (PGD-training). Novice laparoscopic residents were randomized into an SC group (n = 34) and a group using PGD benchmarks (n = 34) for basic laparoscopic training. After completing basic training, both groups performed 60 VR laparoscopic cholecystectomies for performance analysis. Primary endpoints were simulator metrics; secondary endpoints were program adherence, trainee motivation, and training efficacy. Altogether, 66 residents completed basic training, and 3,837 of 3,960 (96.8%) cholecystectomies were available for analysis. Course adherence was good, with only two dropouts, both in the SC group. The PGD group spent more time and repetitions in basic training until the benchmarks were reached and subsequently showed better performance in the readout cholecystectomies: median time to gallbladder extraction differed significantly, at 520 s (IQR 354-738 s) with SC-training versus 390 s (IQR 278-536 s) in the PGD group (p < 0.001), compared with 215 s (IQR 175-276 s) for experts. Path length of the right instrument also showed significant differences, again with the PGD group being more efficient. Basic VR laparoscopic training based on PGD benchmarks with external assessment is superior to SC training, resulting in higher trainee motivation and better performance in simulated laparoscopic cholecystectomies. We recommend such a basic course based on PGD benchmarks before advancing to more elaborate VR training.
The ATLAS Tier-3 in Geneva and the Trigger Development Facility
NASA Astrophysics Data System (ADS)
Gadomski, S.; Meunier, Y.; Pasche, P.; Baud, J.-P.; ATLAS Collaboration
2011-12-01
The ATLAS Tier-3 farm at the University of Geneva provides storage and processing power for analysis of ATLAS data. In addition the facility is used for development, validation and commissioning of the High Level Trigger of ATLAS [1]. The latter purpose leads to additional requirements on the availability of latest software and data, which will be presented. The farm is also a part of the WLCG [2], and is available to all members of the ATLAS Virtual Organization. The farm currently provides 268 CPU cores and 177 TB of storage space. A grid Storage Element, implemented with the Disk Pool Manager software [3], is available and integrated with the ATLAS Distributed Data Management system [4]. The batch system can be used directly by local users, or with a grid interface provided by NorduGrid ARC middleware [5]. In this article we will present the use cases that we support, as well as the experience with the software and the hardware we are using. Results of I/O benchmarking tests, which were done for our DPM Storage Element and for the NFS servers we are using, will also be presented.
Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W
2017-08-28
The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is a widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that the implants used are safe and effective. However, it is not currently clear how, or with how many implants, an implant should be statistically compared with a benchmark to assess whether it is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking, using a simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population, and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess whether a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining whether an implant meets the required performance defined by an external benchmark. Contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. It is clear that, when benchmarking implant performance, net failure estimated using 1-KM is preferable to crude failure estimated by competing-risks models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
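A minimal sketch of the 1-Kaplan-Meier benchmarking logic described above, on invented data rather than any register: a product-limit failure estimate with a Greenwood standard error at the follow-up horizon, declared non-inferior only if the upper confidence bound stays below the external benchmark. The failure distribution, horizon, and 5% benchmark are assumptions for illustration.

    # Sketch of a one-sample benchmark test with 1 - Kaplan-Meier: estimate
    # net implant failure at a horizon and call the implant non-inferior if
    # the upper confidence bound sits below the external benchmark.
    import numpy as np

    def one_minus_km(times, events, horizon):
        """Product-limit failure estimate and Greenwood SE at `horizon`."""
        surv, var_term = 1.0, 0.0
        for t in np.unique(times[events == 1]):
            if t > horizon:
                break
            at_risk = np.sum(times >= t)
            d = np.sum((times == t) & (events == 1))
            surv *= 1 - d / at_risk
            var_term += d / (at_risk * (at_risk - d))
        return 1 - surv, surv * np.sqrt(var_term)

    rng = np.random.default_rng(3)
    time_yrs = rng.exponential(250, size=3500).clip(max=10.0)  # censored at 10 years
    event = (time_yrs < 10.0).astype(int)                      # 1 = revision, 0 = censored

    fail, se = one_minus_km(time_yrs, event, horizon=10.0)
    benchmark = 0.05                                           # e.g., 5% failure at 10 years
    non_inferior = fail + 1.96 * se < benchmark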
van Lent, Wineke A M; de Beer, Relinde D; van Harten, Wim H
2010-08-31
Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Three independent international benchmarking studies on operations management in cancer centres were conducted: the first study included three comprehensive cancer centres (CCCs), three chemotherapy day units (CDUs) were involved in the second study, and four radiotherapy departments were included in the final study. For each multiple-case study, a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to study the research objectives. We adapted and evaluated existing benchmarking processes by formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. The three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, success factors were found, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting. The improved benchmarking process and the success factors can produce relevant input for improving the operations management of specialty hospitals.
2010-01-01
Background: Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Methods: Three independent international benchmarking studies on operations management in cancer centres were conducted: the first study included three comprehensive cancer centres (CCCs), three chemotherapy day units (CDUs) were involved in the second study, and four radiotherapy departments were included in the final study. For each multiple-case study, a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to study the research objectives. Results: We adapted and evaluated existing benchmarking processes by formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. The three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, success factors were found, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting. Conclusions: The improved benchmarking process and the success factors can produce relevant input for improving the operations management of specialty hospitals. PMID:20807408
LUMA: A many-core, Fluid-Structure Interaction solver based on the Lattice-Boltzmann Method
NASA Astrophysics Data System (ADS)
Harwood, Adrian R. G.; O'Connor, Joseph; Sanchez Muñoz, Jonathan; Camps Santasmasas, Marta; Revell, Alistair J.
2018-01-01
The Lattice-Boltzmann Method at the University of Manchester (LUMA) project was commissioned to build a collaborative research environment in which researchers of all abilities can study fluid-structure interaction (FSI) problems in engineering applications from aerodynamics to medicine. It is built on the principles of accessibility, simplicity and flexibility. The LUMA software at the core of the project is a capable FSI solver with turbulence modelling and many-core scalability as well as a wealth of input/output and pre- and post-processing facilities. The software has been validated and several major releases benchmarked on supercomputing facilities internationally. The software architecture is modular and arranged logically using a minimal amount of object-orientation to maintain a simple and accessible software.
Benchmarks of Global Clean Energy Manufacturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandor, Debra; Chung, Donald; Keyser, David
The Clean Energy Manufacturing Analysis Center (CEMAC), sponsored by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), provides objective analysis and up-to-date data on global supply chains and manufacturing of clean energy technologies. Benchmarks of Global Clean Energy Manufacturing sheds light on several fundamental questions about the global clean technology manufacturing enterprise: How does clean energy technology manufacturing impact national economies? What are the economic opportunities across the manufacturing supply chain? What are the global dynamics of clean energy technology manufacturing?
IPRT polarized radiative transfer model intercomparison project - Phase A
NASA Astrophysics Data System (ADS)
Emde, Claudia; Barlakas, Vasileios; Cornet, Céline; Evans, Frank; Korkin, Sergey; Ota, Yoshifumi; Labonnote, Laurent C.; Lyapustin, Alexei; Macke, Andreas; Mayer, Bernhard; Wendisch, Manfred
2015-10-01
The polarization state of electromagnetic radiation scattered by atmospheric particles such as aerosols, cloud droplets, or ice crystals contains much more information about the optical and microphysical properties than the total intensity alone. For this reason an increasing number of polarimetric observations are performed from space, from the ground and from aircraft. Polarized radiative transfer models are required to interpret and analyse these measurements and to develop retrieval algorithms exploiting polarimetric observations. In recent years, a large number of new codes have been developed, mostly for specific applications. Benchmark results are available for specific cases, but not for more sophisticated scenarios including polarized surface reflection and multi-layer atmospheres. The International Polarized Radiative Transfer (IPRT) working group of the International Radiation Commission (IRC) has initiated a model intercomparison project in order to fill this gap. This paper presents the results of the first phase A of the IPRT project which includes ten test cases, from simple setups with only one layer and Rayleigh scattering to rather sophisticated setups with a cloud embedded in a standard atmosphere above an ocean surface. All scenarios in the first phase A of the intercomparison project are for a one-dimensional plane-parallel model geometry. The commonly established benchmark results are available at the IPRT website.
A broad-group cross-section library based on ENDF/B-VII.0 for fast neutron dosimetry Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alpan, F.A.
2011-07-01
A new ENDF/B-VII.0-based coupled 44-neutron, 20-gamma-ray-group cross-section library was developed to investigate the latest evaluated nuclear data file (ENDF), in comparison to ENDF/B-VI.3 used in BUGLE-96, as well as to generate an objective-specific library. The objectives selected for this work consisted of dosimetry calculations for in-vessel and ex-vessel reactor locations, iron atom displacement calculations for reactor internals and the pressure vessel, and the ⁵⁸Ni(n,γ) calculation that is important for gas generation in the baffle plate. The new library was generated based on the contribution and point-wise cross-section-driven (CPXSD) methodology and was applied to one of the most widely used benchmarks, the Oak Ridge National Laboratory Pool Critical Assembly benchmark problem. In addition to the new library, an ENDF/B-VII.0-based coupled 47-neutron, 20-gamma-ray-group cross-section library was generated, and both it and BUGLE-96 were used with SNLRML and IRDF dosimetry cross sections to compute reaction rates. All reaction rates computed by the multigroup libraries are within ±20% of measurement data and meet the U.S. Nuclear Regulatory Commission acceptance criterion for reactor vessel neutron exposure evaluations specified in Regulatory Guide 1.190. (authors)
Key Performance Indicators in the Evaluation of the Quality of Radiation Safety Programs.
Schultz, Cheryl Culver; Shaffer, Sheila; Fink-Bennett, Darlene; Winokur, Kay
2016-08-01
Beaumont is a multi-hospital health care system with a centralized radiation safety department. The health system operates under a broad-scope Nuclear Regulatory Commission license but also maintains several other limited-use NRC licenses in off-site facilities and clinics. The hospital-based program is expansive, including diagnostic radiology and nuclear medicine (molecular imaging), interventional radiology, a comprehensive cardiovascular program, multiple forms of radiation therapy (low-dose-rate brachytherapy, high-dose-rate brachytherapy, external beam radiotherapy, and gamma knife), and the Research Institute (including basic bench-top, human and animal research). Each year, in the annual report, data are analyzed and then tracked and trended. While any summary report will, by nature, include items such as the number of pieces of equipment, inspections performed, staff monitored and educated, and other similar parameters, not all include an objective review of the quality and effectiveness of the program. Beaumont adopted seven key performance indicators based on objective numerical data. The assertion made is that key performance indicators can be used to establish benchmarks for evaluation and comparison of the effectiveness and quality of radiation safety programs. Based on over a decade of data collection and the adoption of key performance indicators, this paper demonstrates one way to establish objective benchmarking for radiation safety programs in the health care environment.
Main functions, recent updates, and applications of Synchrotron Radiation Workshop code
NASA Astrophysics Data System (ADS)
Chubar, Oleg; Rakitin, Maksim; Chen-Wiegart, Yu-Chen Karen; Chu, Yong S.; Fluerasu, Andrei; Hidas, Dean; Wiegart, Lutz
2017-08-01
The paper presents an overview of the main functions and new application examples of the "Synchrotron Radiation Workshop" (SRW) code. SRW supports high-accuracy calculations of different types of synchrotron radiation, and simulations of the propagation of fully-coherent radiation wavefronts, partially-coherent radiation from a finite-emittance electron beam of a storage ring source, and time-/frequency-dependent radiation pulses of a free-electron laser, through the X-ray optical elements of a beamline. An extended library of physical-optics "propagators" for different types of reflective, refractive and diffractive X-ray optics, with their typical imperfections, implemented in SRW, enables simulation of practically any X-ray beamline in a modern light source facility. The high accuracy of the calculation methods used in SRW allows for multiple applications of this code, not only in the development of instruments and beamlines for new light source facilities, but also in areas such as electron beam diagnostics, commissioning and performance benchmarking of insertion devices and individual X-ray optical elements of beamlines. Applications of SRW in these areas, facilitating development and advanced commissioning of beamlines at the National Synchrotron Light Source II (NSLS-II), are described.
Benchmark On Sensitivity Calculation (Phase III)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanova, Tatiana; Laville, Cedric; Dyrda, James
2012-01-01
The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
User's Manual for RESRAD-OFFSITE Version 2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, C.; Gnanapragasam, E.; Biwer, B. M.
2007-09-05
The RESRAD-OFFSITE code is an extension of the RESRAD (onsite) code, which has been widely used for calculating doses and risks from exposure to radioactively contaminated soils. The development of RESRAD-OFFSITE started more than 10 years ago, but new models and methodologies have been developed, tested, and incorporated since then. Some of the new models have been benchmarked against other independently developed (international) models. The databases used have also expanded to include all the radionuclides (more than 830) contained in the International Commission on Radiological Protection (ICRP) 38 database. This manual provides detailed information on the design and application of the RESRAD-OFFSITE code. It describes in detail the new models used in the code, such as the three-dimensional dispersion groundwater flow and radionuclide transport model, the Gaussian plume model for atmospheric dispersion, and the deposition model used to estimate the accumulation of radionuclides in offsite locations and in foods. Potential exposure pathways and exposure scenarios that can be modeled by the RESRAD-OFFSITE code are also discussed. A user's guide is included in Appendix A of this manual. The default parameter values and parameter distributions are presented in Appendix B, along with a discussion of the statistical distributions for probabilistic analysis. A detailed discussion of how to reduce run time, especially when conducting probabilistic (uncertainty) analysis, is presented in Appendix C of this manual.
BACT Simulation User Guide (Version 7.0)
NASA Technical Reports Server (NTRS)
Waszak, Martin R.
1997-01-01
This report documents the structure and operation of a simulation model of the Benchmark Active Control Technology (BACT) Wind-Tunnel Model. The BACT system was designed, built, and tested at NASA Langley Research Center as part of the Benchmark Models Program and was developed to perform wind-tunnel experiments to obtain benchmark quality data to validate computational fluid dynamics and computational aeroelasticity codes, to verify the accuracy of current aeroservoelasticity design and analysis tools, and to provide an active controls testbed for evaluating new and innovative control algorithms for flutter suppression and gust load alleviation. The BACT system has been especially valuable as a control system testbed.
Benchmark Testing of a New 56Fe Evaluation for Criticality Safety Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leal, Luiz C; Ivanov, E.
2015-01-01
The SAMMY code was used to evaluate resonance parameters of the 56Fe cross section in the resolved resonance energy range of 0–2 MeV using transmission data together with capture, elastic, inelastic, and double-differential elastic cross sections. SAMMY fits R-matrix resonance parameters using the generalized least-squares technique (Bayes' theory). The evaluation yielded a set of resonance parameters that reproduced the experimental data very well, along with a resonance parameter covariance matrix for data uncertainty calculations. Benchmark tests were conducted to assess the performance of the evaluation in benchmark calculations.
Pedron, Sara; Winter, Vera; Oppel, Eva-Maria; Bialas, Enno
2017-08-23
Operating room (OR) efficiency continues to be a high priority for hospitals. In this context, the concept of benchmarking has gained increasing importance as a means to improve OR performance. The aim of this study was to investigate whether and how participation in a benchmarking and reporting program for surgical process data was associated with a change in OR efficiency, measured through raw utilization, turnover times, and first-case tardiness. The main analysis is based on panel data from 202 surgical departments in German hospitals, derived from the largest database for surgical process data in Germany. Panel regression modelling was applied. The results revealed no clear, unequivocal effect of participation in a benchmarking and reporting program for surgical process data. The largest trend was observed for first-case tardiness. Contrary to expectations, turnover times showed a generally increasing trend during participation. For raw utilization, no clear and statistically significant trend could be evidenced. Subgroup analyses revealed differences in effects across hospital types and department specialties. Participation in a benchmarking and reporting program, and thus the availability of reliable, timely and detailed analysis tools to support OR management, seemed to be correlated especially with an increase in the timeliness of staff members regarding first-case starts. The increasing trend in turnover times revealed the absence of effective strategies to improve this aspect of OR efficiency in German hospitals and could have meaningful consequences for medium- and long-run capacity planning in the OR.
Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed
NASA Technical Reports Server (NTRS)
Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie
2009-01-01
Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.
Lira, Renan Bezerra; de Carvalho, André Ywata; de Carvalho, Genival Barbosa; Lewis, Carol M; Weber, Randal S; Kowalski, Luiz Paulo
2016-07-01
Quality assessment is a major tool for the evaluation of health care delivery. In head and neck surgery, the University of Texas MD Anderson Cancer Center (MD Anderson) has defined quality standards by publishing benchmarks. We conducted an analysis of 360 head and neck surgeries performed at the AC Camargo Cancer Center (AC Camargo). The procedures were stratified into low-acuity procedures (LAPs) or high-acuity procedures (HAPs), and outcome indicators were compared to the MD Anderson benchmarks. In the 360 cases, there were 332 LAPs (92.2%) and 28 HAPs (7.8%). Patients with any comorbid condition had a higher incidence of negative outcome indicators (p = .005). In the LAPs, we achieved the MD Anderson benchmarks in all outcome indicators. In the HAPs, the rate of surgical site infection and the length of hospital stay were higher than established by the benchmarks. Quality assessment of head and neck surgery is possible and should be disseminated, improving effectiveness in health care delivery. © 2015 Wiley Periodicals, Inc. Head Neck 38: 1002-1007, 2016.
An automated protocol for performance benchmarking a widefield fluorescence microscope.
Halter, Michael; Bier, Elianna; DeRose, Paul C; Cooksey, Gregory A; Choquette, Steven J; Plant, Anne L; Elliott, John T
2014-11-01
Widefield fluorescence microscopy is a highly used tool for visually assessing biological samples and for quantifying cell responses. Despite its widespread use in high content analysis and other imaging applications, few published methods exist for evaluating and benchmarking the analytical performance of a microscope. Easy-to-use benchmarking methods would facilitate the use of fluorescence imaging as a quantitative analytical tool in research applications, and would aid the determination of instrumental method validation for commercial product development applications. We describe and evaluate an automated method to characterize a fluorescence imaging system's performance by benchmarking the detection threshold, saturation, and linear dynamic range to a reference material. The benchmarking procedure is demonstrated using two different materials as the reference material, uranyl-ion-doped glass and Schott 475 GG filter glass. Both are suitable candidate reference materials that are homogeneously fluorescent and highly photostable, and the Schott 475 GG filter glass is currently commercially available. In addition to benchmarking the analytical performance, we also demonstrate that the reference materials provide for accurate day-to-day intensity calibration. Published 2014 Wiley Periodicals Inc. This article is a US government work and, as such, is in the public domain in the United States of America.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-11
... market analysis that the Commission intends to undertake in the coming months to assist in evaluating competition in the market for special access services; possible changes to the Commission's pricing flexibility rules after the Commission conducts its market analysis; and the reasonableness of terms and...
Results of the GABLS3 diurnal-cycle benchmark for wind energy applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodrigo, J. Sanz; Allaerts, D.; Avila, M.
We present results of the GABLS3 model intercomparison benchmark revisited for wind energy applications. The case consists of a diurnal cycle, measured at the 200-m tall Cabauw tower in the Netherlands, including a nocturnal low-level jet. The benchmark includes a sensitivity analysis of WRF simulations using two input meteorological databases and five planetary boundary-layer schemes. A reference set of mesoscale tendencies is used to drive microscale simulations using RANS k-ϵ and LES turbulence models. The validation is based on rotor-based quantities of interest. Cycle-integrated mean absolute errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Altogether, all the microscale simulations produce a consistent coupling with mesoscale forcings.
Results of the GABLS3 diurnal-cycle benchmark for wind energy applications
Rodrigo, J. Sanz; Allaerts, D.; Avila, M.; ...
2017-06-13
We present results of the GABLS3 model intercomparison benchmark revisited for wind energy applications. The case consists of a diurnal cycle, measured at the 200-m tall Cabauw tower in the Netherlands, including a nocturnal low-level jet. The benchmark includes a sensitivity analysis of WRF simulations using two input meteorological databases and five planetary boundary-layer schemes. A reference set of mesoscale tendencies is used to drive microscale simulations using RANS k-ϵ and LES turbulence models. The validation is based on rotor-based quantities of interest. Cycle-integrated mean absolute errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Altogether, all the microscale simulations produce a consistent coupling with mesoscale forcings.
Integrated Sensing Processor, Phase 2
2005-12-01
performance analysis for several baseline classifiers including neural nets, linear classifiers, and kNN classifiers. Use of CCDR as a preprocessing step... below the level of the benchmark non-linear classifier for this problem (kNN). Furthermore, the CCDR-preconditioned kNN achieved a 10% improvement over... the benchmark kNN without CCDR. Finally, we found an important connection between intrinsic dimension estimation via entropic graphs and the optimal…
NASA Astrophysics Data System (ADS)
Dimitriadis, Panayiotis; Tegos, Aristoteles; Oikonomou, Athanasios; Pagana, Vassiliki; Koukouvinos, Antonios; Mamassis, Nikos; Koutsoyiannis, Demetris; Efstratiadis, Andreas
2016-03-01
One-dimensional and quasi-two-dimensional hydraulic freeware models (HEC-RAS, LISFLOOD-FP and FLO-2d) are widely used for flood inundation mapping. These models are tested on a benchmark test with a mixed rectangular-triangular channel cross section. Using a Monte-Carlo approach, we employ extended sensitivity analysis by simultaneously varying the input discharge, longitudinal and lateral gradients and roughness coefficients, as well as the grid cell size. Based on statistical analysis of three output variables of interest, i.e. water depths at the inflow and outflow locations and total flood volume, we investigate the uncertainty enclosed in different model configurations and flow conditions, without the influence of errors and other assumptions on topography, channel geometry and boundary conditions. Moreover, we estimate the uncertainty associated with each input variable and compare it to the overall one. The outcomes of the benchmark analysis are further highlighted by applying the three models to real-world flood propagation problems, in the context of two challenging case studies in Greece.
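To make the Monte-Carlo sensitivity idea concrete, here is a minimal sketch that substitutes Manning's equation for normal depth in a wide rectangular channel in place of a full hydraulic model and attributes output variance to one input at a time; all distributions and values are illustrative.

```python
# Minimal Monte-Carlo sensitivity sketch: Manning's equation for normal depth
# in a wide rectangular channel stands in for a full hydraulic model; all
# input distributions are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

def depth(q, n, S):
    """Normal depth h for unit-width discharge q (Manning, wide channel)."""
    return (n * q / np.sqrt(S)) ** 0.6

q = rng.uniform(5.0, 15.0, N)         # unit-width discharge (m^2/s)
n = rng.uniform(0.02, 0.05, N)        # Manning roughness coefficient
S = rng.uniform(0.0005, 0.002, N)     # longitudinal slope

total = depth(q, n, S).var()
# One-at-a-time attribution: vary one input, hold the others at mid-values.
for name, h in [("discharge", depth(q, 0.035, 0.00125)),
                ("roughness", depth(10.0, n, 0.00125)),
                ("slope",     depth(10.0, 0.035, S))]:
    print(f"{name}: {h.var() / total:.0%} of total output variance")
```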
ERIC Educational Resources Information Center
Paulson, Julia; Bellino, Michelle J.
2017-01-01
Transitional justice and education both occupy increasingly prominent space on the international peacebuilding agenda, though less is known about the ways they might reinforce one another to contribute towards peace. This paper presents a cross-national analysis of truth commission (TC) reports spanning 1980-2015, exploring the range of…
SU-G-BRC-17: Using Generalized Mean for Equivalent Square Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, S; Fan, Q; Lei, Y
Purpose: Equivalent Square (ES) is a widely used concept in radiotherapy. It enables us to determine many important quantities for a rectangular treatment field, without measurement, based on the corresponding values from its ES field. In this study, we propose a Generalized Mean (GM) type ES formula and compare it with other established formulae using benchmark datasets. Methods: Our GM approach is expressed as ES=(w•fx^α+(1−w)•fy^α)^(1/α), where fx and fy are the rectangular field dimensions, α is a power index, and w is a weighting factor. When α=−1 it reduces to the well-known Sterling-type ES formulae. In our study, α and w are determined through least-squares fitting. The Akaike Information Criterion (AIC) was used to benchmark the performance of each formula. The BJR (Supplement 17) ES field table for X-ray PDDs and the open-field output factor tables in the Varian TrueBeam representative dataset were used for validation. Results: Switching from α=−1 to α=−1.25, a 20% reduction in the standard deviation of the residual error in ES estimation was achieved for the BJR dataset. The maximum relative residual error was reduced from ∼3% (Sterling formula) or ∼2% (Vadash/Bjarngard formula) down to ∼1% with the GM formula for open fields of all energies and rectangular field sizes from 3 cm to 40 cm in the Varian dataset. The improvement of the GM over the Sterling-type ES formulae is particularly noticeable for very elongated rectangular fields with short width. AIC analysis confirmed the superior performance of the GM formula after taking into account the expanded parameter space. Conclusion: The GM formula significantly outperforms Sterling-type formulae at slightly increased computational cost. The GM calculation may remove the need for data measurement for many rectangular fields and hence shorten the Linac commissioning process. Improved dose calculation accuracy is also expected by adopting the GM formula into treatment planning and secondary MU check systems.
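As a concrete illustration of the GM formula, the sketch below recovers α and w by least-squares fitting against a small table of reference equivalent squares; the (fx, fy, ES) triples are invented for illustration and are not the BJR or Varian data.

```python
# Minimal sketch: fit the GM formula ES = (w*fx**a + (1-w)*fy**a)**(1/a)
# to a small reference table by least squares. The (fx, fy, ES) triples are
# invented for illustration; they are not the BJR or Varian data.
import numpy as np
from scipy.optimize import curve_fit

def gm_es(fields, a, w):
    fx, fy = fields
    return (w * fx**a + (1.0 - w) * fy**a) ** (1.0 / a)

fx = np.array([4.0, 10.0, 4.0, 30.0])   # field widths (cm), illustrative
fy = np.array([4.0, 10.0, 20.0, 5.0])   # field lengths (cm), illustrative
es = np.array([4.0, 10.0, 6.4, 8.2])    # reference equivalent squares (cm)

# a = -1, w = 0.5 recovers the Sterling-type harmonic mean 2*fx*fy/(fx+fy).
(a_fit, w_fit), _ = curve_fit(gm_es, (fx, fy), es, p0=(-1.0, 0.5),
                              bounds=([-5.0, 0.0], [-0.1, 1.0]))
print(a_fit, w_fit)
```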
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bezler, P.; Hartzman, M.; Reich, M.
1980-08-01
A set of benchmark problems and solutions has been developed for verifying the adequacy of computer programs used for dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations, which are assumed to exhibit linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components, and internal force and moment components. Solutions to associated anchor-point-motion static problems are not included.
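As an illustration of the kind of quantity such benchmark solutions verify, the sketch below combines peak modal responses by the square-root-of-the-sum-of-squares (SRSS) rule over modes and then over the three excitation directions; all values are invented.

```python
# Minimal SRSS modal-combination sketch: peak modal responses are combined
# by square-root-of-sum-of-squares over modes, then over the three
# excitation directions. Values are invented, not benchmark solutions.
import numpy as np

# Peak modal displacement at one node: rows = excitation direction (X, Y, Z),
# columns = the first three modes.
modal_peaks = np.array([[1.20, 0.40, 0.10],
                        [0.80, 0.25, 0.05],
                        [0.50, 0.30, 0.08]])

per_direction = np.sqrt((modal_peaks ** 2).sum(axis=1))   # SRSS over modes
combined = np.sqrt((per_direction ** 2).sum())            # SRSS over directions
print(per_direction, combined)
```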
REPORT FOR COMMERCIAL GRADE NICKEL CHARACTERIZATION AND BENCHMARKING
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-12-20
Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, has completed the collection, sample analysis, and review of analytical results to benchmark the concentrations of gross alpha-emitting radionuclides, gross beta-emitting radionuclides, and technetium-99 in commercial grade nickel. This report presents methods, change management, observations, and statistical analysis of materials procured from sellers representing nine countries on four continents. The data suggest there is a low probability of detecting alpha- and beta-emitting radionuclides in commercial nickel. Technetium-99 was not detected in any samples, thus suggesting it is not present in commercial nickel.
Particle shape analysis of volcanic clast samples with the Matlab tool MORPHEO
NASA Astrophysics Data System (ADS)
Charpentier, Isabelle; Sarocchi, Damiano; Rodriguez Sedano, Luis Angel
2013-02-01
This paper presents a modular Matlab tool, namely MORPHEO, devoted to the study of particle morphology by Fourier analysis. A benchmark made of four sample images with different features (digitized coins, a pebble chart, gears, digitized volcanic clasts) is then proposed to assess the abilities of the software. Attention is brought to the Weibull distribution introduced to enhance fine variations of particle morphology. Finally, as an example, samples pertaining to a lahar deposit located in La Lumbre ravine (Colima Volcano, Mexico) are analysed. MORPHEO and the benchmark are freely available for research purposes.
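For readers who want to experiment before downloading MORPHEO, the following is a minimal sketch of Fourier shape analysis of a closed particle outline; it is not MORPHEO's own code, and the perturbed-circle outline is illustrative.

```python
# Minimal sketch of Fourier shape analysis of a closed particle outline,
# in the spirit of MORPHEO (this is not MORPHEO's own code).
import numpy as np

def fourier_descriptors(x, y, n_harmonics=10):
    """Scale-invariant harmonic amplitudes of a closed (x, y) contour."""
    z = x + 1j * y                    # boundary as a complex signal
    coeffs = np.fft.fft(z) / len(z)   # complex Fourier coefficients
    amps = np.abs(coeffs)
    # Normalize by the first harmonic to remove scale; skip the DC term.
    return amps[2:n_harmonics + 2] / amps[1]

# Illustrative outline: a slightly perturbed circle sampled at 256 points.
t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
r = 1.0 + 0.05 * np.cos(3.0 * t)      # third-harmonic surface roughness
print(fourier_descriptors(r * np.cos(t), r * np.sin(t)))
```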
Length of stay benchmarking in the Australian private hospital sector.
Hanning, Brian W T
2007-02-01
Length of stay (LOS) benchmarking is a means of comparing hospital efficiency. Analysis of private cases in private facilities using Australian Institute of Health and Welfare (AIHW) data shows interstate variation in same-day (SD) cases and overnight average LOS (ONALOS) on an Australian Refined Diagnosis Related Groups version 4 (ARDRGv4) standardised basis. ARDRGv4 standardised analysis from 1998-99 to 2003-04 shows a steady increase in private sector SD cases (approximately 1.4% per annum) and a decrease in ONALOS (approximately 4.3% per annum). Overall, the data show significant variation in LOS parameters between private hospitals.
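A minimal sketch of the casemix-standardised comparison idea is shown below; the file and column names (episodes.csv, hospital, drg, los_days) are hypothetical, and this is not the AIHW analysis code.

```python
# Minimal casemix-standardised LOS sketch; file and column names hypothetical.
import pandas as pd

df = pd.read_csv("episodes.csv")   # hypothetical: hospital, drg, los_days

# Expected LOS per episode = national mean LOS for its DRG.
df["expected"] = df.groupby("drg")["los_days"].transform("mean")

# Observed/expected ratio per hospital: > 1 means longer stays than the
# casemix-adjusted national benchmark.
agg = df.groupby("hospital")[["los_days", "expected"]].mean()
print((agg["los_days"] / agg["expected"]).sort_values())
```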
Building Commissioning: A Golden Opportunity for Reducing Energy Costs and Greenhouse-gas Emissions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, Evan
2009-07-16
The aim of commissioning new buildings is to ensure that they deliver, if not exceed, the performance and energy savings promised by their design. When applied to existing buildings, commissioning identifies the almost inevitable 'drift' from where things should be and puts the building back on course. In both contexts, commissioning is a systematic, forensic approach to quality assurance, rather than a technology per se. Although commissioning has earned increased recognition in recent years - even a toehold in Wikipedia - it remains an enigmatic practice whose visibility severely lags its potential. Over the past decade, Lawrence Berkeley National Laboratory has built the world's largest compilation and meta-analysis of commissioning experience in commercial buildings. Since our last report (Mills et al. 2004), the database has grown from 224 to 643 buildings (all located in the United States, and spanning 26 states), from 30 to 100 million square feet of floorspace, and from $17 million to $43 million in commissioning expenditures. The recorded cases of new-construction commissioning took place in buildings representing $2.2 billion in total construction costs (up from $1.5 billion). The work of many more commissioning providers (up from 18 to 37) is represented in this study, as is more evidence of energy and peak-power savings as well as cost-effectiveness. We now translate these impacts into avoided greenhouse gases and provide new indicators of cost-effectiveness. We also draw attention to the specific challenges and opportunities for high-tech facilities such as labs, cleanrooms, data centers, and healthcare facilities. The results are compelling. We developed an array of benchmarks for characterizing project performance and cost-effectiveness. The median normalized cost to deliver commissioning was $0.30/ft2 for existing buildings and $1.16/ft2 for new construction (or 0.4% of the overall construction cost). The commissioning projects for which data are available revealed over 10,000 energy-related problems, resulting in 16% median whole-building energy savings in existing buildings and 13% in new construction, with payback times of 1.1 years and 4.2 years, respectively. In terms of other cost-benefit indicators, median benefit-cost ratios of 4.5 and 1.1, and cash-on-cash returns of 91% and 23%, were attained for existing and new buildings, respectively. High-tech buildings were particularly cost-effective, and saved higher amounts of energy due to their energy-intensiveness. Projects with a comprehensive approach to commissioning attained nearly twice the overall median level of savings and five times the savings of the least-thorough projects. It is noteworthy that virtually all existing building projects were cost-effective by each metric (0.4 years for the upper quartile and 2.4 years for the lower quartile), as were the majority of new-construction projects (1.5 years and 10.8 years, respectively). We also found high cost-effectiveness for each specific measure for which we have data. Contrary to a common perception, cost-effectiveness is often achieved even in smaller buildings. Thanks to energy savings valued at more than the cost of the commissioning process, the associated reductions in greenhouse gas emissions come at 'negative' cost. In fact, the median cost of conserved carbon is negative (-$110 per tonne for existing buildings and -$25/tonne for new construction), as compared with market prices for carbon trading and offsets in the +$10 to +$30/tonne range.
Further enhancing the value of commissioning, its non-energy benefits surpass those of most other energy-management practices. Significant first-cost savings (e.g., through right-sizing of heating and cooling equipment) routinely offset at least a portion of commissioning costs - fully in some cases. When accounting for these benefits, the net median commissioning project cost was reduced by 49% on average, while in many cases they exceeded the direct value of the energy savings. Commissioning also improves worker comfort, mitigates indoor air quality problems, increases the competence of in-house staff, plus a host of other non-energy benefits. These findings demonstrate that commissioning is arguably the single-most cost-effective strategy for reducing energy, costs, and greenhouse gas emissions in buildings today. Energy savings tend to persist well over at least a 3- to 5-year timeframe, but data over longer time horizons are not available. It is thus important to 'Trust but Verify,' and indeed the field is moving towards a monitoring-based paradigm in which instrumentation is used not only to confirm savings, but to identify opportunities that would otherwise go undetected. On balance, we view the findings here as conservative, in the sense that they underestimate the actual performance of projects when all costs and benefits are considered. They underestimate the technical potential for a scenario in which best practices are applied.
Kang, Guangliang; Du, Li; Zhang, Hong
2016-06-22
The growing complexity of biological experiment design based on high-throughput RNA sequencing (RNA-seq) is calling for more accommodative statistical tools. We focus on differential expression (DE) analysis using RNA-seq data in the presence of multiple treatment conditions. We propose a novel method, multiDE, for facilitating DE analysis using RNA-seq read count data with multiple treatment conditions. The read count is assumed to follow a log-linear model incorporating two factors (i.e., condition and gene), where an interaction term is used to quantify the association between gene and condition. The number of degrees of freedom is reduced to one through a first-order decomposition of the interaction, leading to a dramatic improvement in power for testing DE genes when the number of conditions is greater than two. In our simulations, multiDE outperformed the benchmark methods (i.e., edgeR and DESeq2) even when the underlying model was severely misspecified, and the power gain increased with the number of conditions. In the application to two real datasets, multiDE identified more biologically meaningful DE genes than the benchmark methods. An R package implementing multiDE is available publicly at http://homepage.fudan.edu.cn/zhangh/softwares/multiDE . When the number of conditions is two, multiDE performs comparably with the benchmark methods; when it is greater than two, multiDE outperforms them.
Benchmarking Discount Rate in Natural Resource Damage Assessment with Risk Aversion.
Wu, Desheng; Chen, Shuzhen
2017-08-01
Benchmarking a credible discount rate is of crucial importance in natural resource damage assessment (NRDA) and restoration evaluation. This article integrates a holistic framework of NRDA with prevailing low discount rate theory, and proposes a discount rate benchmarking decision support system based on service-specific risk aversion. The proposed approach has the flexibility of choosing appropriate discount rates for gauging long-term services, as opposed to decisions based simply on duration. It improves injury identification in NRDA since potential damages and side-effects to ecosystem services are revealed within the service-specific framework. A real embankment case study demonstrates valid implementation of the method. © 2017 Society for Risk Analysis.
Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen
2017-02-21
To expedite the pace of genome/proteome analysis, we have developed a Python package called Pse-Analysis. The package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross validation, and (5) evaluation of prediction quality. All a user needs to do is input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis automatically constructs an ideal predictor and then yields the predicted results for the submitted query samples. Moreover, multiprocessing was adopted to enhance computational speed by about sixfold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be directly run on Windows, Linux, and Unix.
Practice Benchmarking in the Age of Targeted Auditing
Langdale, Ryan P.; Holland, Ben F.
2012-01-01
The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists. PMID:23598847
Practice benchmarking in the age of targeted auditing.
Langdale, Ryan P; Holland, Ben F
2012-11-01
The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists.
RETRAN03 benchmarks for Beaver Valley plant transients and FSAR analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beaumont, E.T.; Feltus, M.A.
1993-01-01
Any best-estimate code (e.g., RETRAN03) results must be validated against plant data and final safety analysis report (FSAR) predictions. Two independent means of benchmarking are necessary to ensure that the results are not biased toward a particular data set and that they achieve a certain degree of accuracy. The code results need to be compared with previous results and show improvements over previous code results. Ideally, the two best means of benchmarking a thermal-hydraulics code are comparing results from previous versions of the same code along with actual plant data. This paper describes RETRAN03 benchmarks against RETRAN02 results, actual plant data, and FSAR predictions. RETRAN03, the Electric Power Research Institute's latest version of the RETRAN thermal-hydraulic analysis codes, offers several upgrades over its predecessor, RETRAN02 Mod5. RETRAN03 can use either implicit or semi-implicit numerics, whereas RETRAN02 Mod5 uses only semi-implicit numerics. Another major upgrade deals with slip model options. RETRAN03 added several new models, including a five-equation model for more accurate modeling of two-phase flow. RETRAN02 Mod5 should give similar but slightly more conservative results than RETRAN03 when executed with RETRAN02 Mod5 options.
Clarke, Aileen; Taylor-Phillips, Sian; Swan, Jacky; Gkeredakis, Emmanouil; Mills, Penny; Powell, John; Nicolini, Davide; Roginski, Claudia; Scarbrough, Harry; Grove, Amy
2013-05-28
To investigate types of evidence used by healthcare commissioners when making decisions and whether decisions were influenced by commissioners' experience, personal characteristics or role at work. Cross-sectional survey of 345 National Health Service (NHS) staff members. The study was conducted across 11 English Primary Care Trusts between 2010 and 2011. A total of 440 staff involved in commissioning decisions and employed at NHS band 7 or above were invited to participate in the study. Of those, 345 (78%) completed all or a part of the survey. Participants were asked to rate how important different sources of evidence (empirical or practical) were in a recent decision that had been made. Backwards stepwise logistic regression analyses were undertaken to assess the contributions of age, gender and professional background, as well as the years of experience in NHS commissioning, pay grade and work role. The extent to which empirical evidence was used for commissioning decisions in the NHS varied according to the professional background. Only 50% of respondents stated that clinical guidelines and cost-effectiveness evidence were important for healthcare decisions. Respondents were more likely to report use of empirical evidence if they worked in Public Health in comparison to other departments (p<0.0005, commissioning and contracts OR 0.32, 95%CI 0.18 to 0.57, finance OR 0.19, 95%CI 0.05 to 0.78, other departments OR 0.35, 95%CI 0.17 to 0.71) or if they were female (OR 1.8 95% CI 1.01 to 3.1) rather than male. Respondents were more likely to report use of practical evidence if they were more senior within the organisation (pay grade 8b or higher OR 2.7, 95%CI 1.4 to 5.3, p=0.004 in comparison to lower pay grades). Those trained in Public Health appeared more likely to use external empirical evidence while those at higher pay scales were more likely to use practical evidence when making commissioning decisions. Clearly, National Institute for Clinical Excellence (NICE) guidance and government publications (eg, National Service Frameworks) are important for decision-making, but practical sources of evidence such as local intelligence, benchmarking data and expert advice are also influential. New Clinical Commissioning Groups will need a variety of different evidence sources and expert involvement to ensure that effective decisions are made for their populations.
Clarke, Aileen; Taylor-Phillips, Sian; Swan, Jacky; Gkeredakis, Emmanouil; Mills, Penny; Powell, John; Nicolini, Davide; Roginski, Claudia; Scarbrough, Harry; Grove, Amy
2013-01-01
Objectives To investigate types of evidence used by healthcare commissioners when making decisions and whether decisions were influenced by commissioners’ experience, personal characteristics or role at work. Design Cross-sectional survey of 345 National Health Service (NHS) staff members. Setting The study was conducted across 11 English Primary Care Trusts between 2010 and 2011. Participants A total of 440 staff involved in commissioning decisions and employed at NHS band 7 or above were invited to participate in the study. Of those, 345 (78%) completed all or a part of the survey. Main outcome measures Participants were asked to rate how important different sources of evidence (empirical or practical) were in a recent decision that had been made. Backwards stepwise logistic regression analyses were undertaken to assess the contributions of age, gender and professional background, as well as the years of experience in NHS commissioning, pay grade and work role. Results The extent to which empirical evidence was used for commissioning decisions in the NHS varied according to the professional background. Only 50% of respondents stated that clinical guidelines and cost-effectiveness evidence were important for healthcare decisions. Respondents were more likely to report use of empirical evidence if they worked in Public Health in comparison to other departments (p<0.0005, commissioning and contracts OR 0.32, 95%CI 0.18 to 0.57, finance OR 0.19, 95%CI 0.05 to 0.78, other departments OR 0.35, 95%CI 0.17 to 0.71) or if they were female (OR 1.8 95% CI 1.01 to 3.1) rather than male. Respondents were more likely to report use of practical evidence if they were more senior within the organisation (pay grade 8b or higher OR 2.7, 95%CI 1.4 to 5.3, p=0.004 in comparison to lower pay grades). Conclusions Those trained in Public Health appeared more likely to use external empirical evidence while those at higher pay scales were more likely to use practical evidence when making commissioning decisions. Clearly, National Institute for Clinical Excellence (NICE) guidance and government publications (eg, National Service Frameworks) are important for decision-making, but practical sources of evidence such as local intelligence, benchmarking data and expert advice are also influential. New Clinical Commissioning Groups will need a variety of different evidence sources and expert involvement to ensure that effective decisions are made for their populations. PMID:23793669
NASA Technical Reports Server (NTRS)
Pedretti, Kevin T.; Fineberg, Samuel A.; Kutler, Paul (Technical Monitor)
1997-01-01
A variety of different network technologies and topologies are currently being evaluated as part of the Whitney Project. This paper reports on the implementation and performance of a Fast Ethernet network configured in a 4x4 2D torus topology in a testbed cluster of 'commodity' Pentium Pro PCs. Several benchmarks were used for performance evaluation: an MPI point-to-point message passing benchmark, an MPI collective communication benchmark, and the NAS Parallel Benchmarks version 2.2 (NPB2). Our results show that for point-to-point communication on an unloaded network, the hub and 1-hop routes on the torus have about the same bandwidth and latency. However, the bandwidth decreases and the latency increases on the torus for each additional route hop. Collective communication benchmarks show that the torus provides roughly four times more aggregate bandwidth and eight times faster MPI barrier synchronizations than a hub-based network for 16-processor systems. Finally, the SOAPBOX benchmarks, which simulate real-world CFD applications, generally demonstrated substantially better performance on the torus than on the hub. In the few cases where the hub was faster, the difference was negligible. Overall, our experimental results lead to the conclusion that for Fast Ethernet networks, the torus topology has better performance and scales better than a hub-based network.
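A minimal ping-pong version of the point-to-point measurement is sketched below using mpi4py as a stand-in; it is a generic illustration, not the benchmark code used in the study.

```python
# Minimal MPI ping-pong sketch (generic illustration, not the study's code).
# Run with, e.g.: mpiexec -n 2 python pingpong.py
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.zeros(1 << 20, dtype="b")    # 1 MiB message buffer
reps = 100

comm.Barrier()
t0 = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
t1 = time.perf_counter()

if rank == 0:
    rtt = (t1 - t0) / reps            # average round-trip time
    print(f"round trip: {rtt * 1e6:.1f} us, "
          f"bandwidth: {2 * buf.nbytes / rtt / 1e6:.1f} MB/s")
```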
Ontology for Semantic Data Integration in the Domain of IT Benchmarking.
Pfaff, Matthias; Neubig, Stefan; Krcmar, Helmut
2018-01-01
A domain-specific ontology for IT benchmarking has been developed to bridge the gap between a systematic characterization of IT services and their data-based valuation. Since information is generally collected during a benchmark exercise using questionnaires on a broad range of topics, such as employee costs, software licensing costs, and quantities of hardware, it is commonly stored as natural language text; thus, this information is stored in an intrinsically unstructured form. Although these data form the basis for identifying potentials for IT cost reductions, neither a uniform description of any measured parameters nor the relationship between such parameters exists. Hence, this work proposes an ontology for the domain of IT benchmarking, available at https://w3id.org/bmontology. The design of this ontology is based on requirements mainly elicited from a domain analysis, which considers analyzing documents and interviews with representatives from Small- and Medium-Sized Enterprises and Information and Communications Technology companies over the last eight years. The development of the ontology and its main concepts is described in detail (i.e., the conceptualization of benchmarking events, questionnaires, IT services, indicators and their values) together with its alignment with the DOLCE-UltraLite foundational ontology.
Encoding color information for visual tracking: Algorithms and benchmark.
Liang, Pengpeng; Blasch, Erik; Ling, Haibin
2015-12-01
While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.
Dexter, Franklin; O'Neill, Liam; Xin, Lei; Ledolter, Johannes
2008-12-01
We use resampling of data to explore the basic statistical properties of super-efficient data envelopment analysis (DEA) when used as a benchmarking tool by the manager of a single decision-making unit. Our focus is the gaps in the outputs (i.e., slacks adjusted for upward bias), as they reveal which outputs can be increased. The numerical experiments show that the estimates of the gaps fail to exhibit asymptotic consistency, a property expected for standard statistical inference. Specifically, increased sample sizes were not always associated with more accurate forecasts of the output gaps. The baseline DEA's gaps equaled the mode of the jackknife and the mode of resampling with/without replacement from any subset of the population; usually, the baseline DEA's gaps also equaled the median. The quartile deviations of gaps were close to zero when few decision-making units were excluded from the sample and the study unit happened to have few other units contributing to its benchmark. The results for the quartile deviations can be explained in terms of the effective combinations of decision-making units that contribute to the DEA solution. The jackknife can provide all the combinations contributing to the quartile deviation and only needs to be performed for those units that are part of the benchmark set. These results show that there is a strong rationale for examining DEA results with a sensitivity analysis that excludes one benchmark hospital at a time. This analysis enhances the quality of decision support using DEA estimates for the potential of a decision-making unit to grow one or more of its outputs.
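The leave-one-unit-out sensitivity analysis recommended above can be sketched as follows: a minimal output-oriented DEA solved as a linear program, with jackknife resampling over the peer set on toy data. This is not the authors' implementation.

```python
# Minimal output-oriented DEA (CRS envelopment form) with jackknife
# resampling over the peer set, on toy data; not the authors' implementation.
import numpy as np
from scipy.optimize import linprog

def dea_output_phi(x0, y0, X, Y):
    """Max phi s.t. some nonnegative combination of peer units (X, Y) uses
    no more input than x0 while producing at least phi * y0."""
    n = X.shape[0]
    c = np.r_[-1.0, np.zeros(n)]                   # maximize phi
    A_in = np.c_[np.zeros((X.shape[1], 1)), X.T]   # sum_j lam_j * x_j <= x0
    A_out = np.c_[y0[:, None], -Y.T]               # phi * y0 <= sum_j lam_j * y_j
    res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=np.r_[x0, np.zeros(len(y0))])
    return -res.fun

rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, (20, 2))   # toy inputs: 20 units, 2 inputs
Y = rng.uniform(1.0, 10.0, (20, 2))   # toy outputs: 20 units, 2 outputs

x0, y0 = X[0], Y[0]                   # study unit
peers = np.arange(1, 20)              # super-efficiency: unit 0 excluded

# Jackknife: drop one peer at a time and re-solve for the study unit.
phis = [dea_output_phi(x0, y0, X[np.delete(peers, k)], Y[np.delete(peers, k)])
        for k in range(len(peers))]
print(np.median(phis), np.percentile(phis, [25, 75]))
```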
NASA Astrophysics Data System (ADS)
Kokkoris, M.; Dede, S.; Kantre, K.; Lagoyannis, A.; Ntemou, E.; Paneta, V.; Preketes-Sigalas, K.; Provatas, G.; Vlastou, R.; Bogdanović-Radović, I.; Siketić, Z.; Obajdin, N.
2017-08-01
The evaluated proton differential cross sections suitable for the Elastic Backscattering Spectroscopy (EBS) analysis of natSi and 16O, as obtained from SigmaCalc 2.0, have been benchmarked over a wide energy and angular range at two different accelerator laboratories, namely at N.C.S.R. 'Demokritos', Athens, Greece and at Ruđer Bošković Institute (RBI), Zagreb, Croatia, using a variety of high-purity thick targets of known stoichiometry. The results are presented in graphical and tabular forms, while the observed discrepancies, as well as the limits in accuracy of the benchmarking procedure, along with target-related effects, are thoroughly discussed and analysed. In the case of oxygen the agreement between simulated and experimental spectra was generally good, while for silicon serious discrepancies were observed above Ep,lab = 2.5 MeV, suggesting that further tuning of the appropriate nuclear model parameters in the evaluated differential cross-section datasets is required.
Krasowska, Małgorzata; Schneider, Wolfgang B; Mehring, Michael; Auer, Alexander A
2018-05-02
This work reports high-level ab initio calculations and a detailed analysis of the nature of intermolecular interactions between heavy main-group element compounds and π systems. For this purpose we have chosen a set of benchmark molecules of the form MR3, in which M = As, Sb, or Bi, and R = CH3, OCH3, or Cl. Several methods for the description of weak intermolecular interactions are benchmarked, including DFT-D, DFT-SAPT, MP2, and high-level coupled cluster methods in the DLPNO-CCSD(T) approximation. Using local energy decomposition (LED) and an analysis of the electron density, details of the nature of this interaction are unraveled. The results yield insight into the nature of dispersion and donor-acceptor interactions in this type of system, including systematic trends in the periodic table, and also provide a benchmark for dispersion interactions in heavy main-group element compounds. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
'Calling executives and clinicians to account': user involvement in commissioning cancer services.
Evans, David H; Bacon, Roger J; Greer, Elizabeth; Stagg, Angela M; Turton, Pat
2015-08-01
English NHS guidance emphasizes the importance of involving users in commissioning cancer services. There has been considerable previous research on involving users in service improvement, but not on involvement in commissioning cancer services. To identify how users were involved as local cancer service commissioning projects sought to implement good practice and what has been learned. Participatory evaluation with four qualitative case studies based on semi-structured interviews with project stakeholders, observation and documentary analysis. Users were involved in every stage from design to analysis and reporting. Four English cancer network user involvement in commissioning projects, with 22 stakeholders interviewed. Thematic analysis identified nine themes: initial involvement, preparation for the role, ability to exercise voice, consistency and continuity, where decisions are made, closing the feedback loop, assessing impact, value of experience and diversity. Our findings on the impact of user involvement in commissioning cancer services are consistent with other findings on user involvement in service improvement, but highlight the specific issues for involvement in commissioning. Key points include the different perspectives users and professionals may have on the impact of user involvement in commissioning, the time necessary for meaningful involvement, the importance of involving users from the beginning and the value of senior management and PPI facilitator support and training. Users can play an important role in commissioning cancer services, but their ability to do so is contingent on resources being available to support them. © 2013 John Wiley & Sons Ltd.
New features and improved uncertainty analysis in the NEA nuclear data sensitivity tool (NDaST)
NASA Astrophysics Data System (ADS)
Dyrda, J.; Soppera, N.; Hill, I.; Bossant, M.; Gulliford, J.
2017-09-01
Following the release and initial testing period of the NEA's Nuclear Data Sensitivity Tool [1], new features have been designed and implemented in order to expand its uncertainty analysis capabilities. The aim is to provide a free online tool for integral benchmark testing that is both efficient and comprehensive, meeting the needs of the nuclear data and benchmark testing communities. New features include access to P1 sensitivities for the neutron scattering angular distribution [2] and constrained Chi sensitivities for the prompt fission neutron energy sampling. Both of these are compatible with covariance data accessed via the JANIS nuclear data software, enabling propagation of the resultant uncertainties in keff to a large series of integral experiment benchmarks. These capabilities are available using a number of different covariance libraries, e.g., ENDF/B, JEFF, JENDL and TENDL, allowing comparison of the broad range of results it is possible to obtain. The IRPhE database of reactor physics measurements is now also accessible within the tool, in addition to the criticality benchmarks from ICSBEP. Other improvements include the ability to determine and visualise the energy dependence of a given calculated result in order to better identify specific regions of importance or high uncertainty contribution. Sorting and statistical analysis of the selected benchmark suite are now also provided. Examples of the plots generated by the software are included to illustrate such capabilities. Finally, a number of analytical expressions, for example Maxwellian and Watt fission spectra, will be included. This will allow the analyst to determine the impact of varying such distributions within the data evaluation, either through adjustment of parameters within the expressions, or by comparison to a more general probability distribution fitted to measured data. The impact of such changes is verified through calculations which are compared to a 'direct' measurement found by adjustment of the original ENDF format file.
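At its core, the uncertainty propagation that NDaST automates is the sandwich rule, var(k) = S^T M S, applied to sensitivity vectors and covariance matrices; the minimal numpy sketch below uses invented values.

```python
# Minimal sandwich-rule sketch: propagate a nuclear data covariance matrix
# through a sensitivity vector to an uncertainty on k-eff. Values invented.
import numpy as np

S = np.array([0.12, -0.05, 0.30])   # relative sensitivities (dk/k per dsigma/sigma)
M = np.array([[4.0, 1.0, 0.0],      # relative covariance matrix (%^2)
              [1.0, 9.0, 0.5],
              [0.0, 0.5, 1.0]])

var_k = S @ M @ S                   # sandwich rule: var(k) = S^T M S
print(f"relative uncertainty on k-eff: {np.sqrt(var_k):.2f} %")
```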
ERIC Educational Resources Information Center
Gouran, Dennis S.
Although the Attorney General's Commission on Pornography, also known as the Meese Commission, has been criticized excessively at times for threatening freedom of speech and press and individual rights to privacy, an analysis of its "Final Report" reveals numerous deficiencies in the Commission's decision-making process. These…
Teamwork in the operating room: frontline perspectives among hospitals and operating room personnel.
Sexton, J Bryan; Makary, Martin A; Tersigni, Anthony R; Pryor, David; Hendrich, Ann; Thomas, Eric J; Holzmueller, Christine G; Knight, Andrew P; Wu, Yun; Pronovost, Peter J
2006-11-01
The Joint Commission on Accreditation of Healthcare Organizations is proposing that hospitals measure culture beginning in 2007. However, a reliable and widely used measurement tool for the operating room (OR) setting does not currently exist. OR personnel in 60 US hospitals were surveyed using the Safety Attitudes Questionnaire. The teamwork climate domain of the survey uses six items about difficulty speaking up, conflict resolution, physician-nurse collaboration, feeling supported by others, asking questions, and heeding nurse input. To justify grouping individual-level responses to a single score at each hospital OR level, the authors used a multilevel confirmatory factor analysis, intraclass correlations, within-group interrater reliability, and Cronbach's alpha. To detect differences at the hospital OR level and by caregiver type, the authors used multivariate analysis of variance (items) and analysis of variance (scale). The response rate was 77.1%. There was robust evidence for grouping individual-level respondents to the hospital OR level using the diverse set of statistical tests, e.g., Comparative Fit Index = 0.99, root mean squared error of approximation = 0.05, and acceptable intraclass correlations, within-group interrater reliability values, and Cronbach's alpha = 0.79. Teamwork climate differed significantly by hospital (F(59, 1911) = 4.06, P < 0.001) and OR caregiver type (F(4, 1911) = 9.96, P < 0.001). Rigorous assessment of teamwork climate is possible using this psychometrically sound teamwork climate scale. This tool and initial benchmarks allow others to compare their teamwork climate to national means, in an effort to focus more on what excellent surgical teams do well.
RNA-seq mixology: designing realistic control experiments to compare protocols and analysis methods
Holik, Aliaksei Z.; Law, Charity W.; Liu, Ruijie; Wang, Zeya; Wang, Wenyi; Ahn, Jaeil; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.
2017-01-01
Carefully designed control experiments provide a gold standard for benchmarking different genomics research tools. A shortcoming of many gene expression control studies is that replication involves profiling the same reference RNA sample multiple times. This leads to low, pure technical noise that is atypical of regular studies. To achieve a more realistic noise structure, we generated a RNA-sequencing mixture experiment using two cell lines of the same cancer type. Variability was added by extracting RNA from independent cell cultures and degrading particular samples. The systematic gene expression changes induced by this design allowed benchmarking of different library preparation kits (standard poly-A versus total RNA with Ribozero depletion) and analysis pipelines. Data generated using the total RNA kit had more signal for introns and various RNA classes (ncRNA, snRNA, snoRNA) and less variability after degradation. For differential expression analysis, voom with quality weights marginally outperformed other popular methods, while for differential splicing, DEXSeq was simultaneously the most sensitive and the most inconsistent method. For sample deconvolution analysis, DeMix outperformed IsoPure convincingly. Our RNA-sequencing data set provides a valuable resource for benchmarking different protocols and data pre-processing workflows. The extra noise mimics routine lab experiments more closely, ensuring any conclusions are widely applicable. PMID:27899618
Precise Ages for the Benchmark Brown Dwarfs HD 19467 B and HD 4747 B
NASA Astrophysics Data System (ADS)
Wood, Charlotte; Boyajian, Tabetha; Crepp, Justin; von Braun, Kaspar; Brewer, John; Schaefer, Gail; Adams, Arthur; White, Tim
2018-01-01
Large uncertainty in the age of brown dwarfs, stemming from a mass-age degeneracy, makes it difficult to constrain substellar evolutionary models. To break the degeneracy, we need "benchmark" brown dwarfs (found in binary systems) whose ages can be determined independent of their masses. HD 19467 B and HD 4747 B are two benchmark brown dwarfs detected through the TRENDS (TaRgeting bENchmark objects with Doppler Spectroscopy) high-contrast imaging program for which we have dynamical mass measurements. To constrain their ages independently through isochronal analysis, we measured the radii of the host stars with interferometry using the Center for High Angular Resolution Astronomy (CHARA) Array. Assuming the brown dwarfs have the same ages as their host stars, we use these results to distinguish between several substellar evolutionary models. In this poster, we present new age estimates for HD 19467 and HD 4747 that are more accurate and precise and show our preliminary comparisons to cooling models.
NASA Astrophysics Data System (ADS)
Rimov, A. A.; Chukanova, T. I.; Trofimov, Yu. V.
2016-12-01
Approaches to comparative quality analysis (benchmarking) of power installations used in the power industry are systematized. It is shown that the most effective implementation of the benchmarking technique is the analysis of statistical distributions of indicators within a homogeneous group of comparable power installations. Building on this approach, a benchmarking technique is developed for revealing available reserves for improving the reliability and heat-efficiency indicators of thermal power plant installations. The technique enables a reliable quality comparison of power installations within a homogeneous group of limited size and supports well-founded decisions on improving particular technical characteristics of a given installation. It structures the list of comparison indicators and the internal factors affecting them in accordance with sectoral standards and the price-formation characteristics of the Russian power industry; this structuring makes the causes of deviations of the influencing factors from their specified values traceable. The starting point for detailed analysis of a given installation's lag behind best practice, expressed in specific monetary terms, is the positioning of the installation on the distribution of a key indicator formed as a convolution of the comparison indicators. The key-indicator distribution is simulated by the Monte Carlo method from the actual distributions of the comparison indicators: specific lost profit due to undersupply of electric energy and power, specific cost of losses due to non-optimal repair expenditures, and specific cost of excess fuel-equivalent consumption. Quality-loss indicators are introduced to facilitate analysis of the benchmarking results; they represent the quality loss of a given installation as the difference between the actual value of the key indicator (or a comparison indicator) and the best quartile of the observed distribution. The uncertainty of the resulting quality-loss values was evaluated by propagating the standard uncertainties of the inputs into expanded uncertainties of the outputs at a 95% confidence level. The efficiency of the technique is demonstrated by benchmarking the main thermal and mechanical equipment of T-250 extraction power-generating units and thermal power plant installations with a main steam pressure of 130 atm.
Open Rotor - Analysis of Diagnostic Data
NASA Technical Reports Server (NTRS)
Envia, Edmane
2011-01-01
NASA is researching open rotor propulsion as part of its technology research and development plan for addressing the subsonic transport aircraft noise, emission and fuel burn goals. The low-speed wind tunnel test for investigating the aerodynamic and acoustic performance of a benchmark blade set at the approach and takeoff conditions has recently concluded. A high-speed wind tunnel diagnostic test campaign has begun to investigate the performance of this benchmark open rotor blade set at the cruise condition. Databases from both speed regimes will comprise a comprehensive collection of benchmark open rotor data for use in assessing/validating aerodynamic and noise prediction tools (component & system level) as well as providing insights into the physics of open rotors to help guide the development of quieter open rotors.
A benchmark for comparison of dental radiography analysis algorithms.
Wang, Ching-Wei; Huang, Cheng-Ta; Lee, Jia-Hong; Li, Chung-Hsing; Chang, Sheng-Wei; Siao, Ming-Jhih; Lai, Tat-Ming; Ibragimov, Bulat; Vrtovec, Tomaž; Ronneberger, Olaf; Fischer, Philipp; Cootes, Tim F; Lindner, Claudia
2016-07-01
Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical usages. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray image and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/). Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly
NASA Astrophysics Data System (ADS)
Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.
2014-04-01
We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. Material composition was defined for each assembly ring separately allowing us to decompose the sensitivities not only for isotopes and reactions but also for spatial regions. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. Similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified main contributors to the calculation bias.
Data processing has major impact on the outcome of quantitative label-free LC-MS analysis.
Chawade, Aakash; Sandin, Marianne; Teleman, Johan; Malmström, Johan; Levander, Fredrik
2015-02-06
High-throughput multiplexed protein quantification using mass spectrometry is steadily increasing in popularity, with the two major techniques being data-dependent acquisition (DDA) and targeted acquisition using selected reaction monitoring (SRM). However, both techniques involve extensive data processing, which can be performed by a multitude of different software solutions. Analysis of quantitative LC-MS/MS data is mainly performed in three major steps: processing of raw data, normalization, and statistical analysis. To evaluate the impact of data processing steps, we developed two new benchmark data sets, one each for DDA and SRM, with samples consisting of a long-range dilution series of synthetic peptides spiked in a total cell protein digest. The generated data were processed by eight different software workflows and three postprocessing steps. The results show that the choice of the raw data processing software and the postprocessing steps play an important role in the final outcome. Also, the linear dynamic range of the DDA data could be extended by an order of magnitude through feature alignment and a charge state merging algorithm proposed here. Furthermore, the benchmark data sets are made publicly available for further benchmarking and software developments.
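One simple way to score a processing workflow against such a spiked-in dilution series is to regress measured log-intensity on the known log-concentration and inspect the slope and R²; the sketch below uses toy values and is not the benchmark's own scoring code.

```python
# Minimal sketch: score a workflow by regressing measured log2 intensity on
# the known log2 dilution level; slope near 1 and high R^2 indicate good
# quantitative recovery. Values are toy numbers, not the benchmark data.
import numpy as np

expected = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)   # spike-in levels
measured = np.array([1.1, 2.1, 3.8, 8.4, 15.2, 30.0, 70.1])  # toy intensities

lx, ly = np.log2(expected), np.log2(measured)
slope, intercept = np.polyfit(lx, ly, 1)
r2 = np.corrcoef(lx, ly)[0, 1] ** 2
print(f"slope = {slope:.2f} (ideal 1.0), R^2 = {r2:.3f}")
```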
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-18
... the Commission orders otherwise. Nothing in this Order shall prevent the Commission from exercising... previous Orders issued by the Commission concerning Part 4. III. Cost-Benefit Analysis Section 15(a) of the Act requires the Commission to consider the costs and benefits of its actions before promulgating...
A comprehensive analysis of sodium levels in the Canadian packaged food supply
Arcand, JoAnne; Au, Jennifer T.C.; Schermel, Alyssa; L’Abbe, Mary R.
2016-01-01
Background Population-wide sodium reduction strategies aim to reduce the cardiovascular burden of excess dietary sodium. Lowering sodium in packaged foods, which contribute the most sodium to the diet, is an important intervention to lower population intakes. Purpose To determine sodium levels in Canadian packaged foods and evaluate the proportion of foods meeting sodium benchmark targets set by Health Canada. Methods A cross-sectional analysis of 7234 packaged foods available in Canada in 2010–11. Sodium values were obtained from the Nutrition Facts table. Results Overall, 51.4% of foods met one of the sodium benchmark levels: 11.5% met Phase 1, 11.1% met Phase 2, and 28.7% met the 2016 goal (Phase 3) benchmarks. Food groups with the greatest proportion meeting goal benchmarks were dairy (52.0%) and breakfast cereals (42.2%). Overall, 48.6% of foods did not meet any benchmark level and 25% of all products exceeded maximum levels. Meats (61.2%) and canned vegetables/legumes (29.6%) had the most products exceeding maximum levels. There was large variability in the range of sodium within and between food categories. Food categories highest in sodium (mg/serving) were dry, condensed and ready-to-serve soups (834 ± 256, 754 ± 163, and 636 ± 173, respectively), oriental noodles (783 ± 433), broth (642 ± 239), and frozen appetizers/sides (642 ± 292). Conclusion These data provide a critical baseline assessment for monitoring sodium levels in Canadian foods. While some segments of the market are making progress towards sodium reduction, all sectors need encouragement to continue to reduce the amount of sodium added during food processing. PMID:24842740
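The benchmark-compliance tabulation behind these percentages is straightforward to reproduce. A small Python/pandas sketch, where the column names and threshold values are illustrative stand-ins rather than Health Canada's actual category benchmarks:

    import pandas as pd

    # Hypothetical input: one row per packaged food, with the sodium value
    # from the Nutrition Facts table and that category's benchmark levels.
    foods = pd.DataFrame({
        "category":  ["soup", "soup", "cereal", "dairy"],
        "sodium_mg": [834, 636, 210, 95],
        "phase3":    [650, 650, 220, 100],   # 2016 goal benchmarks
        "maximum":   [900, 900, 350, 200],
    })

    foods["meets_goal"] = foods["sodium_mg"] <= foods["phase3"]
    foods["exceeds_max"] = foods["sodium_mg"] > foods["maximum"]

    # Proportion meeting the goal benchmark and exceeding the maximum,
    # tabulated by food category.
    print(foods.groupby("category")[["meets_goal", "exceeds_max"]].mean())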
Baseline Evaluations to Support Control Room Modernization at Nuclear Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald L.; Joe, Jeffrey C.
2015-02-01
For any major control room modernization activity at a commercial nuclear power plant (NPP) in the U.S., a utility should carefully follow the four phases prescribed by the U.S. Nuclear Regulatory Commission in NUREG-0711, Human Factors Engineering Program Review Model. These four phases include Planning and Analysis, Design, Verification and Validation, and Implementation and Operation. While NUREG-0711 is a useful guideline, it is written primarily from the perspective of regulatory review, and it therefore does not provide a nuanced account of many of the steps the utility might undertake as part of control room modernization. The guideline is largely summative, intended to catalog final products, rather than formative, intended to guide the overall modernization process. In this paper, we highlight two crucial formative sub-elements of the Planning and Analysis phase specific to control room modernization that are not covered in NUREG-0711. These two sub-elements are the usability and ergonomics baseline evaluations. A baseline evaluation entails evaluating the system as-built and currently in use. The usability baseline evaluation provides key insights into operator performance using the control system currently in place. The ergonomics baseline evaluation identifies possible deficiencies in the physical configuration of the control system. Both baseline evaluations feed into the design of the replacement system and subsequent summative benchmarking activities that help ensure that control room modernization represents a successful evolution of the control system.
Schneider, T; Sommerfeld, W; Möller, J
2003-06-01
The Medizinischer Dienst der Krankenversicherung (MDK) is a non-profit medical consulting organisation serving the German healthcare insurance system. Despite its uniform commission throughout Germany, organisation and structure differ considerably between provincial states, which is reflected in differing results. A common nationwide system of key figures and indicators aims at analysing results and learning from one another. The objective was the development of an audit concept for analysing key figures and indicators within the MDK, aimed at quality improvement. The methods comprised the development of a system of key figures and indicators covering five spheres (products, staff, costs, data analysis, structure) and analysis by means of audits carried out across provincial state borders in five steps (audit manual, training of auditors, visitation, audit report, repetition audit). The system of key figures and indicators assures relevant and comparable data. The audit manual, training of auditors, visitation, and audit report meet the needs of all people and institutions involved. Preparation of auditors as well as openness and flow of information within audited organisations offer areas for improvement. There is as yet no assessment of the cost-benefit ratio of audits. The concept presented in this article consists of two parts: a system of key figures and indicators, and a concept for audits. The concept is suitable for a) generating and analysing relevant key figures and indicators for each MDK, and b) providing information for benchmarking between different MDKs. Further development of the concept into a comprehensive management concept is necessary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackillop, William J., E-mail: william.mackillop@krcc.on.ca; Department of Public Health Sciences, Queen's University, Kingston, Ontario; Department of Oncology, Queen's University, Kingston, Ontario
Purpose: Palliative radiation therapy (PRT) benefits many patients with incurable cancer, but the overall need for PRT is unknown. Our primary objective was to estimate the appropriate rate of use of PRT in Ontario. Methods and Materials: The Ontario Cancer Registry identified patients who died of cancer in Ontario between 2006 and 2010. Comprehensive RT records were linked to the registry. Multivariate analysis identified social and health system-related factors affecting the use of PRT, enabling us to define a benchmark population of patients with unimpeded access to PRT. The proportion of cases treated at any time (PRT_lifetime), the proportion of cases treated in the last 2 years of life (PRT_2y), and the number of courses of PRT per thousand cancer deaths were measured in the benchmark population. These benchmarks were standardized to the characteristics of the overall population, and province-wide PRT rates were then compared to benchmarks. Results: Cases diagnosed at hospitals with no RT on-site, residents of poorer communities, and those who lived farther from an RT center were significantly less likely than others to receive PRT. However, availability of RT at the diagnosing hospital was the dominant factor. Neither socioeconomic status nor distance from home to the nearest RT center had a significant effect on the use of PRT in patients diagnosed at a hospital with RT facilities. The benchmark population therefore consisted of patients diagnosed at a hospital with RT facilities. The standardized benchmark for PRT_lifetime was 33.9%, and the corresponding province-wide rate was 28.5%. The standardized benchmark for PRT_2y was 32.4%, and the corresponding province-wide rate was 27.0%. The standardized benchmark for the number of courses of PRT per thousand cancer deaths was 652, and the corresponding province-wide rate was 542. Conclusions: Approximately one-third of patients who die of cancer in Ontario need PRT, but many of them are never treated.
Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess; J. Blair Briggs; David W. Nigg
2009-11-01
One of the challenges that today's new workforce of nuclear criticality safety engineers face is the opportunity to provide assessment of nuclear systems and establish safety guidelines without having received significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals the opportunity to gain experience and enhance critical engineering skills.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-03
... throughout the agency. The FY 2010 analysis provides additional information about the Federal Communications... FY 2011 Service Contract Inventory and FY 2010 Service Contract Inventory Analysis AGENCY: Federal Communications Commission. ACTION: Notice of public availability of service contract inventory and analysis...
39 CFR 3002.12 - Office of Accountability and Compliance.
Code of Federal Regulations, 2014 CFR
2014-07-01
... analysis and the formulation of policy recommendations for the Commission in both domestic and... Commission.” The functional areas of expertise within this office are: (1) The economic analysis of the... service; (2) The analysis of the operational characteristics of the postal system and its interface with...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-05
... FEDERAL TRADE COMMISSION [Docket No. 9350] Graco, Inc.; Analysis of Proposed Agreement Containing... or deceptive acts or practices or unfair methods of competition. The attached Analysis to Aid Public... Commission, has been placed on the public record for a period of thirty (30) days. The following Analysis to...
Risk-based criteria to support validation of detection methods for drinking water and air.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacDonell, M.; Bhattacharyya, M.; Finster, M.
2009-02-18
This report was prepared to support the validation of analytical methods for threat contaminants under the U.S. Environmental Protection Agency (EPA) National Homeland Security Research Center (NHSRC) program. It is designed to serve as a resource for certain applications of benchmark and fate information for homeland security threat contaminants. The report identifies risk-based criteria from existing health benchmarks for drinking water and air for potential use as validation targets. The focus is on benchmarks for chronic public exposures. The priority sources are standard EPA concentration limits for drinking water and air, along with oral and inhalation toxicity values. Many contaminants identified as homeland security threats to drinking water or air would convert to other chemicals within minutes to hours of being released. For this reason, a fate analysis has been performed to identify potential transformation products and removal half-lives in air and water so appropriate forms can be targeted for detection over time. The risk-based criteria presented in this report to frame method validation are expected to be lower than actual operational targets based on realistic exposures following a release. Note that many target criteria provided in this report are taken from available benchmarks without assessing the underlying toxicological details. That is, although the relevance of the chemical form and analogues are evaluated, the toxicological interpretations and extrapolations conducted by the authoring organizations are not. It is also important to emphasize that such targets in the current analysis are not health-based advisory levels to guide homeland security responses. This integrated evaluation of chronic public benchmarks and contaminant fate has identified more than 200 risk-based criteria as method validation targets across numerous contaminants and fate products in drinking water and air combined. The gap in directly applicable values is considerable across the full set of threat contaminants, so preliminary indicators were developed from other well-documented benchmarks to serve as a starting point for validation efforts. By this approach, at least preliminary context is available for water or air, and sometimes both, for all chemicals on the NHSRC list that was provided for this evaluation. This means that a number of concentrations presented in this report represent indirect measures derived from related benchmarks or surrogate chemicals, as described within the many results tables provided in this report.
TRIPOLI-4® - MCNP5 ITER A-lite neutronic model benchmarking
NASA Astrophysics Data System (ADS)
Jaboulay, J.-C.; Cayla, P.-Y.; Fausser, C.; Lee, Y.-K.; Trama, J.-C.; Li-Puma, A.
2014-06-01
The aim of this paper is to present the capability of TRIPOLI-4®, the CEA Monte Carlo code, to model a large-scale fusion reactor with a complex neutron source and geometry. In the past, numerous benchmarks were conducted for the assessment of TRIPOLI-4® on fusion applications. Analyses of experiments (KANT, OKTAVIAN, FNG) and numerical benchmarks (between TRIPOLI-4® and MCNP5) on the HCLL DEMO2007 and ITER models were carried out successively. In the previous ITER benchmark, nevertheless, only the neutron wall loading was analyzed; its main purpose was to present the extension of MCAM (the FDS Team CAD import tool) for TRIPOLI-4®. Starting from this work, a more extended benchmark has been performed on the estimation of the neutron flux, the nuclear heating in the shielding blankets, and the tritium production rate in the European TBMs (HCLL and HCPB); it is presented in this paper. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model (version 4.1). Simplified TBMs (from KIT) have been integrated in the equatorial port. Comparisons of neutron wall loading, flux, nuclear heating and tritium production rate show a good agreement between the two codes. Discrepancies mainly fall within the statistical errors of the Monte Carlo codes.
Development of Benchmark Examples for Delamination Onset and Fatigue Growth Prediction
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2011-01-01
An approach for assessing the delamination propagation and growth capabilities in commercial finite element codes was developed and demonstrated for the Virtual Crack Closure Technique (VCCT) implementations in ABAQUS. The Double Cantilever Beam (DCB) specimen was chosen as an example. First, benchmark results to assess delamination propagation capabilities under static loading were created using models simulating specimens with different delamination lengths. For each delamination length modeled, the load and displacement at the load point were monitored. The mixed-mode strain energy release rate components were calculated along the delamination front across the width of the specimen. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. The calculated critical loads and critical displacements for delamination onset for each delamination length modeled were used as a benchmark. The load/displacement relationship computed during automatic propagation should closely match the benchmark case. Second, starting from an initially straight front, the delamination was allowed to propagate based on the algorithms implemented in the commercial finite element software. The load-displacement relationship obtained from the propagation analysis results and the benchmark results were compared. Good agreement could be achieved by selecting the appropriate input parameters, which were determined in an iterative procedure.
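The failure-index step is only summarized above; one common way to realize it, sketched here under the assumption of a Benzeggagh-Kenane (B-K) mixed-mode criterion (the abstract does not state which criterion was used, and the inputs below are illustrative, not the paper's data):

    def bk_failure_index(g_i, g_ii, g_ic, g_iic, eta):
        """Failure index G_T / G_c with the B-K mixed-mode criterion:
        G_c = G_Ic + (G_IIc - G_Ic) * (G_II / G_T)**eta.
        Values >= 1.0 indicate predicted delamination onset."""
        g_t = g_i + g_ii                           # total energy release rate
        mode_mix = g_ii / g_t if g_t > 0 else 0.0  # mode-II fraction
        g_c = g_ic + (g_iic - g_ic) * mode_mix**eta
        return g_t / g_c

    # Illustrative graphite/epoxy-like inputs (kJ/m^2):
    print(bk_failure_index(g_i=0.15, g_ii=0.05, g_ic=0.21, g_iic=0.77, eta=2.1))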
Cereda, Carlo W; Christensen, Søren; Campbell, Bruce Cv; Mishra, Nishant K; Mlynash, Michael; Levi, Christopher; Straka, Matus; Wintermark, Max; Bammer, Roland; Albers, Gregory W; Parsons, Mark W; Lansberg, Maarten G
2016-10-01
Differences in research methodology have hampered the optimization of Computed Tomography Perfusion (CTP) for identification of the ischemic core. We aim to optimize CTP core identification using a novel benchmarking tool. The benchmarking tool consists of an imaging library and a statistical analysis algorithm to evaluate the performance of CTP. The tool was used to optimize and evaluate an in-house developed CTP software algorithm. Imaging data of 103 acute stroke patients were included in the benchmarking tool. Median time from stroke onset to CT was 185 min (IQR 180-238), and the median time between completion of CT and start of MRI was 36 min (IQR 25-79). Volumetric accuracy of the CTP ROIs was optimal at an rCBF threshold of <38%; at this threshold, the mean difference was 0.3 ml (SD 19.8 ml), the mean absolute difference was 14.3 (SD 13.7) ml, and CTP was 67% sensitive and 87% specific for identification of DWI-positive tissue voxels. The benchmarking tool can play an important role in optimizing CTP software as it provides investigators with a novel method to directly compare the performance of alternative CTP software packages. © The Author(s) 2015.
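A compact sketch of the voxel-wise evaluation implied by these figures, assuming paired arrays for the CTP rCBF map and the DWI lesion mask; the array contents, threshold handling and voxel volume are synthetic stand-ins, not the study's software:

    import numpy as np

    def evaluate_ctp(rcbf, dwi_positive, threshold=0.38, voxel_ml=0.001):
        """Sensitivity/specificity of thresholded rCBF against DWI, plus the
        signed volume difference between the CTP core and the DWI lesion."""
        ctp_core = rcbf < threshold
        tp = np.sum(ctp_core & dwi_positive)
        tn = np.sum(~ctp_core & ~dwi_positive)
        sensitivity = tp / np.sum(dwi_positive)
        specificity = tn / np.sum(~dwi_positive)
        vol_diff_ml = (ctp_core.sum() - dwi_positive.sum()) * voxel_ml
        return sensitivity, specificity, vol_diff_ml

    rng = np.random.default_rng(0)                  # synthetic demo data
    rcbf = rng.uniform(0.0, 1.0, size=10000)
    dwi = (rcbf + rng.normal(0, 0.15, size=10000)) < 0.40
    print(evaluate_ctp(rcbf, dwi))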
Faught, Austin M; Davidson, Scott E; Popple, Richard; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S
2017-09-01
The Imaging and Radiation Oncology Core-Houston (IROC-H) Quality Assurance Center (formerly the Radiological Physics Center) has reported varying levels of compliance from their anthropomorphic phantom auditing program. IROC-H studies have suggested that one source of disagreement between institution-submitted calculated doses and measurement is the accuracy of the institution's treatment planning system dose calculations and the heterogeneity corrections used. In order to audit this step of the radiation therapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Varian flattening filter free (FFF) 6 MV and FFF 10 MV therapeutic x-ray beams were commissioned based on central axis depth dose data from a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open-field measurements in a water tank for field sizes ranging from 3 × 3 cm² to 40 × 40 cm². The models were then benchmarked against IROC-H's anthropomorphic head and neck phantom and lung phantom measurements. Validation results, assessed with a ±2%/2 mm gamma criterion, showed average agreement of 99.9% and 99.0% for central axis depth dose data for the FFF 6 MV and FFF 10 MV models, respectively. Dose profile agreement using the same evaluation technique averaged 97.8% and 97.9% for the respective models. Phantom benchmarking comparisons were evaluated with a ±3%/2 mm gamma criterion, and agreement averaged 90.1% and 90.8% for the respective models. Multiple source models for Varian FFF 6 MV and FFF 10 MV beams have been developed, validated, and benchmarked for inclusion in an independent dose calculation quality assurance tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
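The gamma criterion quoted above combines a dose-difference tolerance with a distance-to-agreement tolerance. A minimal 1-D sketch of a global gamma evaluation (real audits work on 2-D/3-D data with interpolation and low-dose thresholds; this only illustrates the formula):

    import numpy as np

    def gamma_1d(ref, meas, dx_mm, dose_tol=0.03, dist_tol_mm=2.0):
        """Global gamma index per reference point for 1-D dose profiles
        sampled on the same grid with spacing dx_mm."""
        norm = ref.max()                     # global normalization dose
        x = np.arange(len(ref)) * dx_mm
        gammas = np.empty(len(ref))
        for i, (xi, di) in enumerate(zip(x, ref)):
            dose_term = (meas - di) / (dose_tol * norm)
            dist_term = (x - xi) / dist_tol_mm
            gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
        return gammas

    ref = np.exp(-0.5 * ((np.arange(100) - 50) / 12.0) ** 2)  # toy profile
    meas = 1.02 * ref                                         # 2% hot copy
    print("pass rate:", np.mean(gamma_1d(ref, meas, dx_mm=1.0) <= 1.0))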
RF transient analysis and stabilization of the phase and energy of the proposed PIP-II LINAC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelen, J. P.; Chase, B. E.
This paper describes a recent effort to develop and benchmark a simulation tool for the analysis of RF transients and their compensation in an H- linear accelerator. Existing tools in this area either focus on electron LINACs or lack fundamental details about the LLRF system that are necessary to provide realistic performance estimates. In our paper we begin with a discussion of our computational models, followed by benchmarking with existing beam-dynamics codes and measured data. We then analyze the effect of RF transients and their compensation in the PIP-II LINAC, followed by an analysis of calibration errors and how a Newton's Method-based feedback scheme can be used to regulate the beam energy to within the specified limits.
Analysis of b quark pair production signal from neutral 2HDM Higgs bosons at future linear colliders
NASA Astrophysics Data System (ADS)
Hashemi, Majid; MahdaviKhorrami, Mostafa
2018-06-01
In this paper, b quark pair production events are analyzed as a source of neutral Higgs bosons of the two Higgs doublet model type I at linear colliders. The production mechanism is e+e− → Z(*) → HA → bb̄bb̄, assuming a fully hadronic final state. The aim of the analysis is to identify both the CP-even and the CP-odd Higgs boson in different benchmark points accommodating moderate boson masses. Due to the pair production of Higgs bosons, the analysis is most suitable for a linear collider operating at √s = 1 TeV. Results show that in the selected benchmark points, signal peaks are observable in the b-jet pair invariant mass distributions at an integrated luminosity of 500 fb⁻¹.
Benthic algae of benchmark streams in agricultural areas of eastern Wisconsin
Scudder, Barbara C.; Stewart, Jana S.
2001-01-01
Multivariate analyses indicated that environmental factors at multiple scales affect algae. Although two-way indicator species analysis (TWINSPAN), detrended correspondence analysis (DCA), and canonical correspondence analysis (CCA) generally separated sites according to RHU, only DCA ordination indicated a separation of sites according to ecoregion. Environmental variables correlated with DCA axes 1 and 2, and therefore indicated as important explanatory factors for algal distribution and abundance, were factors related to stream size, basin land use/cover, geomorphology, hydrogeology, and riparian disturbance. CCA analyses with a more limited set of environmental variables indicated that pH, average width of natural riparian vegetation (segment scale), basin land use/cover, and Q/Q2 were the most important variables affecting the distribution and relative abundance of benthic algae at the 20 benchmark streams.
NASA Technical Reports Server (NTRS)
Ganapol, Barry D.; Townsend, Lawrence W.; Wilson, John W.
1989-01-01
Nontrivial benchmark solutions are developed for the galactic ion transport (GIT) equations in the straight-ahead approximation. These equations are used to predict potential radiation hazards in the upper atmosphere and in space. Two levels of difficulty are considered: (1) energy independent, and (2) spatially independent. The analysis emphasizes analytical methods never before applied to the GIT equations. Most of the representations derived have been numerically implemented and compared to more approximate calculations. Accurate ion fluxes are obtained (3 to 5 digits) for nontrivial sources. For monoenergetic beams, both accurate doses and fluxes are found. The benchmarks presented are useful in assessing the accuracy of transport algorithms designed to accommodate more complex radiation protection problems. In addition, these solutions can provide fast and accurate assessments of relatively simple shield configurations.
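For reference, the straight-ahead GIT equations discussed above are commonly written in the following form, in which ion species j slows down continuously through the stopping power S_j(E), is attenuated with macroscopic cross section σ_j, and is fed by fragmentation of heavier species k (a standard statement of the model, not quoted from this paper):

    \left[\frac{\partial}{\partial x}
          - \frac{\partial}{\partial E}\,S_j(E)
          + \sigma_j\right]\phi_j(x,E)
      = \sum_{k>j} \sigma_{jk}\,\phi_k(x,E)

The two benchmark levels mentioned above then correspond, roughly, to dropping the energy-derivative term (energy independent) or the spatial-derivative term (spatially independent).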
Analysis of 100Mb/s Ethernet for the Whitney Commodity Computing Testbed
NASA Technical Reports Server (NTRS)
Fineberg, Samuel A.; Pedretti, Kevin T.; Kutler, Paul (Technical Monitor)
1997-01-01
We evaluate the performance of a Fast Ethernet network configured with a single large switch, a single hub, and a 4x4 2D torus topology in a testbed cluster of "commodity" Pentium Pro PCs. We also evaluated a mixed network composed of Ethernet hubs and switches. An MPI collective communication benchmark and the NAS Parallel Benchmarks version 2.2 (NPB2) show that the torus network performs best for all sizes that we were able to test (up to 16 nodes). For larger networks the Ethernet switch outperforms the hub, though its performance is far below peak. The hub/switch combination tests indicate that the NAS Parallel Benchmarks are relatively insensitive to hub densities of less than 7 nodes per hub.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-23
... Alimentarius Commission: Meeting of the Codex Committee on Methods of Analysis and Sampling AGENCY: Office of... discussed at the 33rd Session of the Codex Committee on Methods of Analysis and Sampling (CCMAS) of the... the criteria appropriate to Codex Methods of Analysis and Sampling; serving as a coordinating body for...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-20
... Alimentarius Commission: Meeting of the Codex Committee on Methods of Analysis and Sampling AGENCY: Office of... discussed at the 32nd session of the Codex Committee on Methods of Analysis and Sampling (CCMAS) of the... appropriate to Codex Methods of Analysis and Sampling; serving as a coordinating body for Codex with other...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-18
... FEDERAL TRADE COMMISSION [File No. 061 0172] Roaring Fork Valley Physicians I.P.A.; Analysis of... attached Analysis to Aid Public Comment describes both the allegations in the draft complaint and the terms... Commission, has been placed on the public record for a period of thirty (30) days. The following Analysis to...
Simulation Studies for Inspection of the Benchmark Test with PATRASH
NASA Astrophysics Data System (ADS)
Shimosaki, Y.; Igarashi, S.; Machida, S.; Shirakata, M.; Takayama, K.; Noda, F.; Shigaki, K.
2002-12-01
In order to delineate the halo-formation mechanisms in a typical FODO lattice, a 2-D simulation code, PATRASH (PArticle TRAcking in a Synchrotron for Halo analysis), has been developed. The electric field originating from the space charge is calculated by the Hybrid Tree code method. Benchmark tests utilizing the three simulation codes ACCSIM, PATRASH and SIMPSONS were carried out; the results were confirmed to be in fair agreement with each other. The details of the PATRASH simulation are discussed with some examples.
Benchmarking Customer Service Practices of Air Cargo Carriers: A Case Study Approach
1994-09-01
customer toll-free hotlines, comment and complaint analysis, and consumer advisory panels (Zemke and Schaaf, 1989:31-34). The correct use of any or all of... customer service criteria. The research also provides a host of customer service criteria that the researchers find important to most consumers. Bhote...
Benchmarking reference services: step by step.
Buchanan, H S; Marshall, J G
1996-01-01
This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.
Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy
Kauppi, Tomi; Kämäräinen, Joni-Kristian; Kalesnykiene, Valentina; Sorri, Iiris; Uusitalo, Hannu; Kälviäinen, Heikki
2013-01-01
We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of profound method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation helping to collect class label, spatial span, and expert's confidence on lesions and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions. PMID:23956787
Stanford, Robert E
2004-05-01
This paper uses a non-parametric frontier model and adaptations of the concepts of cross-efficiency and peer-appraisal to develop a formal methodology for benchmarking provider performance in the treatment of Acute Myocardial Infarction (AMI). Parameters used in the benchmarking process are the rates of proper recognition of indications of six standard treatment processes for AMI; the decision making units (DMUs) to be compared are the Medicare eligible hospitals of a particular state; the analysis produces an ordinal ranking of individual hospital performance scores. The cross-efficiency/peer-appraisal calculation process is constructed to accommodate DMUs that experience no patients in some of the treatment categories. While continuing to rate highly the performances of DMUs which are efficient in the Pareto-optimal sense, our model produces individual DMU performance scores that correlate significantly with good overall performance, as determined by a comparison of the sums of the individual DMU recognition rates for the six standard treatment processes. The methodology is applied to data collected from 107 state Medicare hospitals.
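A sketch of the cross-efficiency/peer-appraisal idea in Python, using a standard input-oriented CCR multiplier model with a single unit input; the recognition-rate data are toys, and the paper's actual formulation (which accommodates DMUs with empty treatment categories) is more elaborate:

    import numpy as np
    from scipy.optimize import linprog

    # Toy data: rows = hospitals (DMUs), columns = recognition rates for
    # three treatment processes; every DMU consumes a single unit input.
    Y = np.array([[0.9, 0.7, 0.8],
                  [0.6, 0.9, 0.5],
                  [0.8, 0.8, 0.9]])
    X = np.ones((3, 1))
    n, R = Y.shape

    def ccr_weights(k):
        """Optimal multipliers (u, v) for DMU k:
        max u.y_k  s.t.  v.x_k = 1,  u.y_j - v.x_j <= 0 for all j,  u, v >= 0."""
        c = np.concatenate([-Y[k], np.zeros(X.shape[1])])   # minimize -u.y_k
        A_ub = np.hstack([Y, -X])                           # u.y_j - v.x_j <= 0
        A_eq = np.concatenate([np.zeros(R), X[k]])[None, :] # v.x_k = 1
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                      A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
        return res.x[:R], res.x[R:]

    weights = [ccr_weights(k) for k in range(n)]
    # Cross-efficiency: DMU d scored with DMU k's optimal weights.
    cross = np.array([[u @ Y[d] / (v @ X[d]) for d in range(n)]
                      for (u, v) in weights])
    print(cross.mean(axis=0))   # peer-appraisal score of each DMU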
Bria, Emilio; Massari, Francesco; Maines, Francesca; Pilotto, Sara; Bonomi, Maria; Porta, Camillo; Bracarda, Sergio; Heng, Daniel; Santini, Daniele; Sperduti, Isabella; Giannarelli, Diana; Cognetti, Francesco; Tortora, Giampaolo; Milella, Michele
2015-01-01
A correlation, power and benchmarking analysis between progression-free and overall survival (PFS, OS) in randomized trials of targeted agents or immunotherapy for advanced renal cell carcinoma (RCC) was performed to provide a practical tool for clinical trial design. For the 1st line of treatment, a significant correlation was observed between 6-month PFS and 12-month OS, between 3-month PFS and 9-month OS, and between the distributions of the cumulative PFS and OS estimates. According to the regression equation derived for 1st-line targeted agents, 7859, 2873, 712, and 190 patients would be required to detect a 3%, 5%, 10% and 20% PFS advantage at 6 months, corresponding to an absolute increase in 12-month OS rates of 2%, 3%, 6% and 11%, respectively. These data support PFS as a reliable endpoint for patients with advanced RCC receiving up-front therapies. Benchmarking and power analyses, on the basis of the updated survival expectations, may represent practical tools for future trial design. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
The Filament Sensor for Near Real-Time Detection of Cytoskeletal Fiber Structures
Eltzner, Benjamin; Wollnik, Carina; Gottschlich, Carsten; Huckemann, Stephan; Rehfeldt, Florian
2015-01-01
A reliable extraction of filament data from microscopic images is of high interest in the analysis of acto-myosin structures as early morphological markers in mechanically guided differentiation of human mesenchymal stem cells and the understanding of the underlying fiber arrangement processes. In this paper, we propose the filament sensor (FS), a fast and robust processing sequence which detects and records location, orientation, length, and width for each single filament of an image, and thus allows for the above described analysis. The extraction of these features has previously not been possible with existing methods. We evaluate the performance of the proposed FS in terms of accuracy and speed in comparison to three existing methods with respect to their limited output. Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source. PMID:25996921
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: genetic algorithms and differential evolution to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
Investigating the Transonic Flutter Boundary of the Benchmark Supercritical Wing
NASA Technical Reports Server (NTRS)
Heeg, Jennifer; Chwalowski, Pawel
2017-01-01
This paper builds on the computational aeroelastic results published previously and generated in support of the second Aeroelastic Prediction Workshop for the NASA Benchmark Supercritical Wing configuration. The computational results are obtained using FUN3D, an unstructured grid Reynolds-Averaged Navier-Stokes solver developed at the NASA Langley Research Center. The analysis results focus on understanding the dip in the transonic flutter boundary at a single Mach number (0.74), exploring an angle-of-attack range of -1° to 8° and dynamic pressures from wind-off to beyond flutter onset. The rigid analysis results are examined for insights into the behavior of the aeroelastic system. Both static and dynamic aeroelastic simulation results are also examined.
76 FR 33375 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-08
... Securities and Exchange Commission (``Commission'') is soliciting comments on the collection of information summarized below. The Commission plans to submit this existing collection of information to the Office of... Gathering, Analysis and Retrieval (``EDGAR'') system. Regulation S-T is assigned one burden hour for...
78 FR 75391 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-11
... one burden hour to ease future renewals of rule 17a-6's collection of information analysis should... Exchange Commission (the ``Commission'') is soliciting comments on the collections of information summarized below. The Commission plans to submit these existing collections of information to the Office of...
75 FR 81681 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-28
... one burden hour to ease future renewals of rule 17a-6's collection of information analysis should... Exchange Commission (the ``Commission'') is soliciting comments on the collections of information summarized below. The Commission plans to submit these existing collections of information to the Office of...
A comprehensive assessment of somatic mutation detection in cancer using whole-genome sequencing
Alioto, Tyler S.; Buchhalter, Ivo; Derdak, Sophia; Hutter, Barbara; Eldridge, Matthew D.; Hovig, Eivind; Heisler, Lawrence E.; Beck, Timothy A.; Simpson, Jared T.; Tonon, Laurie; Sertier, Anne-Sophie; Patch, Ann-Marie; Jäger, Natalie; Ginsbach, Philip; Drews, Ruben; Paramasivam, Nagarajan; Kabbe, Rolf; Chotewutmontri, Sasithorn; Diessl, Nicolle; Previti, Christopher; Schmidt, Sabine; Brors, Benedikt; Feuerbach, Lars; Heinold, Michael; Gröbner, Susanne; Korshunov, Andrey; Tarpey, Patrick S.; Butler, Adam P.; Hinton, Jonathan; Jones, David; Menzies, Andrew; Raine, Keiran; Shepherd, Rebecca; Stebbings, Lucy; Teague, Jon W.; Ribeca, Paolo; Giner, Francesc Castro; Beltran, Sergi; Raineri, Emanuele; Dabad, Marc; Heath, Simon C.; Gut, Marta; Denroche, Robert E.; Harding, Nicholas J.; Yamaguchi, Takafumi N.; Fujimoto, Akihiro; Nakagawa, Hidewaki; Quesada, Víctor; Valdés-Mas, Rafael; Nakken, Sigve; Vodák, Daniel; Bower, Lawrence; Lynch, Andrew G.; Anderson, Charlotte L.; Waddell, Nicola; Pearson, John V.; Grimmond, Sean M.; Peto, Myron; Spellman, Paul; He, Minghui; Kandoth, Cyriac; Lee, Semin; Zhang, John; Létourneau, Louis; Ma, Singer; Seth, Sahil; Torrents, David; Xi, Liu; Wheeler, David A.; López-Otín, Carlos; Campo, Elías; Campbell, Peter J.; Boutros, Paul C.; Puente, Xose S.; Gerhard, Daniela S.; Pfister, Stefan M.; McPherson, John D.; Hudson, Thomas J.; Schlesner, Matthias; Lichter, Peter; Eils, Roland; Jones, David T. W.; Gut, Ivo G.
2015-01-01
As whole-genome sequencing for cancer genome analysis becomes a clinical tool, a full understanding of the variables affecting sequencing analysis output is required. Here, using tumour-normal sample pairs from two different types of cancer, chronic lymphocytic leukaemia and medulloblastoma, we conduct a benchmarking exercise within the context of the International Cancer Genome Consortium. We compare sequencing methods, analysis pipelines and validation methods. We show that using PCR-free methods and increasing sequencing depth to ~100× shows benefits, as long as the tumour:control coverage ratio remains balanced. We observe widely varying mutation call rates and low concordance among analysis pipelines, reflecting the artefact-prone nature of the raw data and lack of standards for dealing with the artefacts. However, we show that, using the benchmark mutation set we have created, many issues are in fact easy to remedy and have an immediate positive impact on mutation detection accuracy. PMID:26647970
D'Onza, Giuseppe; Greco, Giulio; Allegrini, Marco
2016-02-01
Recycling implies additional costs for separated municipal solid waste (MSW) collection. The aim of the present study is to propose and implement a management tool - the full cost accounting (FCA) method - to calculate the full collection costs of different types of waste. Our analysis aims for a better understanding of the difficulties of putting FCA into practice in the MSW sector. We propose a FCA methodology that uses standard cost and actual quantities to calculate the collection costs of separate and undifferentiated waste. Our methodology allows cost efficiency analysis and benchmarking, overcoming problems related to firm-specific accounting choices, earnings management policies and purchase policies. Our methodology allows benchmarking and variance analysis that can be used to identify the causes of off-standards performance and guide managers to deploy resources more efficiently. Our methodology can be implemented by companies lacking a sophisticated management accounting system. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D.; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z.; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J.; Chia, Burton K. H.; Denis, Bertrand; Froula, Jeff L.; Wang, Zhong; Egan, Robert; Kang, Dongwan Don; Cook, Jeffrey J.; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W.; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z.; Cuevas, Daniel A.; Edwards, Robert A.; Saha, Surya; Piro, Vitor C.; Renard, Bernhard Y.; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C.; Woyke, Tanja; Vorholt, Julia A.; Schulze-Lefert, Paul; Rubin, Edward M.; Darling, Aaron E.; Rattei, Thomas; McHardy, Alice C.
2018-01-01
In metagenome analysis, computational methods for assembly, taxonomic profiling and binning are key components facilitating downstream biological data interpretation. However, a lack of consensus about benchmarking datasets and evaluation metrics complicates proper performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on datasets of unprecedented complexity and realism. Benchmark metagenomes were generated from ~700 newly sequenced microorganisms and ~600 novel viruses and plasmids, including genomes with varying degrees of relatedness to each other and to publicly available ones and representing common experimental setups. Across all datasets, assembly and genome binning programs performed well for species represented by individual genomes, while performance was substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below the family level. Parameter settings substantially impacted performances, underscoring the importance of program reproducibility. While highlighting current challenges in computational metagenomics, the CAMI results provide a roadmap for software selection to answer specific research questions. PMID:28967888
Optimization of a solid-state electron spin qubit using Gate Set Tomography
Dehollain, Juan P.; Muhonen, Juha T.; Blume-Kohout, Robin J.; ...
2016-10-13
Here, state-of-the-art qubit systems are reaching the gate fidelities required for scalable quantum computation architectures. Further improvement in the fidelity of quantum gates demands characterization and benchmarking protocols that are efficient, reliable and extremely accurate. Ideally, a benchmarking protocol should also provide information on how to rectify residual errors. Gate Set Tomography (GST) is one such protocol, designed to give a detailed characterization of as-built qubits. We implemented GST on a high-fidelity electron-spin qubit confined by a single 31P atom in 28Si. The results reveal systematic errors that a randomized benchmarking analysis could measure but not identify, whereas GST indicated the need for improved calibration of the length of the control pulses. After introducing this modification, we measured a new benchmark average gate fidelity of 99.942(8)%, an improvement on the previous value of 99.90(2)%. Furthermore, GST revealed high levels of non-Markovian noise in the system, which will need to be understood and addressed when the qubit is used within a fault-tolerant quantum computation scheme.
Correlation of Noncancer Benchmark Doses in Short- and Long-Term Rodent Bioassays.
Kratchman, Jessica; Wang, Bing; Fox, John; Gray, George
2018-05-01
This study investigated whether, in the absence of chronic noncancer toxicity data, short-term noncancer toxicity data can be used to predict chronic toxicity effect levels by focusing on the dose-response relationship instead of a critical effect. Data from National Toxicology Program (NTP) technical reports have been extracted and modeled using the Environmental Protection Agency's Benchmark Dose Software. Best-fit, minimum benchmark dose (BMD), and benchmark dose lower limits (BMDLs) have been modeled for all NTP pathologist identified significant nonneoplastic lesions, final mean body weight, and mean organ weight of 41 chemicals tested by NTP between 2000 and 2012. Models were then developed at the chemical level using orthogonal regression techniques to predict chronic (two years) noncancer health effect levels using the results of the short-term (three months) toxicity data. The findings indicate that short-term animal studies may reasonably provide a quantitative estimate of a chronic BMD or BMDL. This can allow for faster development of human health toxicity values for risk assessment for chemicals that lack chronic toxicity data. © 2017 Society for Risk Analysis.
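The orthogonal-regression step described above can be sketched briefly; the following Python snippet uses scipy.odr (total least squares, so errors in both variables are respected) on log-scale BMDLs, with toy per-chemical values that are not NTP data:

    import numpy as np
    from scipy import odr

    # Hypothetical per-chemical BMDLs (mg/kg-day): 3-month vs. 2-year studies.
    bmdl_3mo = np.array([12.0, 45.0, 3.1, 160.0, 8.8])
    bmdl_2yr = np.array([4.2, 18.0, 1.0, 60.0, 2.9])

    # Orthogonal (total least squares) linear fit on the log10 scale.
    model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
    data = odr.RealData(np.log10(bmdl_3mo), np.log10(bmdl_2yr))
    fit = odr.ODR(data, model, beta0=[1.0, 0.0]).run()

    slope, intercept = fit.beta
    def predict_chronic(subchronic_bmdl):
        """Predicted chronic BMDL from a 3-month BMDL (same units)."""
        return 10 ** (slope * np.log10(subchronic_bmdl) + intercept)

    print(predict_chronic(30.0))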
Maximal Unbiased Benchmarking Data Sets for Human Chemokine Receptors and Comparative Analysis.
Xia, Jie; Reid, Terry-Elinor; Wu, Song; Zhang, Liangren; Wang, Xiang Simon
2018-05-29
Chemokine receptors (CRs) have long been druggable targets for the treatment of inflammatory diseases and HIV-1 infection. As a powerful technique, virtual screening (VS) has been widely applied to identifying small-molecule leads for modern drug targets including CRs. For rational selection among a wide variety of VS approaches, ligand enrichment assessment based on a benchmarking data set has become an indispensable practice. However, the lack of versatile benchmarking sets for the whole CR family that are able to unbiasedly evaluate every single approach, including both structure- and ligand-based VS, somewhat hinders modern drug discovery efforts. To address this issue, we constructed Maximal Unbiased Benchmarking Data sets for human Chemokine Receptors (MUBD-hCRs) using our recently developed tool, MUBD-DecoyMaker. The MUBD-hCRs encompasses 13 of the 20 chemokine receptor subtypes, composed of 404 ligands and 15,756 decoys so far, and is readily expandable in the future. It has been thoroughly validated that the MUBD-hCRs ligands are chemically diverse while the decoys are maximally unbiased in terms of "artificial enrichment" and "analogue bias". In addition, we studied the performance of MUBD-hCRs, in particular the CXCR4 and CCR5 data sets, in ligand enrichment assessments of both structure- and ligand-based VS approaches in comparison with other benchmarking data sets available in the public domain, and demonstrated that MUBD-hCRs is very capable of designating the optimal VS approach. MUBD-hCRs is a unique and maximally unbiased benchmarking set that covers major CR subtypes so far.
47 CFR 0.31 - Functions of the Office.
Code of Federal Regulations, 2010 CFR
2010-10-01
... communications, and special projects to obtain theoretical and experimental data on new or improved techniques... work of the Commission. (f) To advise and represent the Commission on frequency allocation and spectrum... Planning and Policy Analysis, advice to the Commission, participate in and coordinate staff work with...
Tremblay, Marlène; Hess, Justin P; Christenson, Brock M; McIntyre, Kolby K; Smink, Ben; van der Kamp, Arjen J; de Jong, Lisanne G; Döpfer, Dörte
2016-07-01
Automatic milking systems (AMS) are implemented in a variety of situations and environments. Consequently, there is a need to characterize individual farming practices and regional challenges to streamline management advice and objectives for producers. Benchmarking is often used in the dairy industry to compare farms by computing percentile ranks of the production values of groups of farms. Grouping for conventional benchmarking is commonly limited to the use of a few factors such as farms' geographic region or breed of cattle. We hypothesized that herds' production data and management information could be clustered in a meaningful way using cluster analysis and that this clustering approach would yield better peer groups of farms than benchmarking methods based on criteria such as country, region, breed, or breed and region. By applying mixed latent-class model-based cluster analysis to 529 North American AMS dairy farms with respect to 18 significant risk factors, 6 clusters were identified. Each cluster (i.e., peer group) represented unique management styles, challenges, and production patterns. When compared with peer groups based on criteria similar to the conventional benchmarking standards, the 6 clusters better predicted milk produced (kilograms) per robot per day. Each cluster represented a unique management and production pattern that requires specialized advice. For example, cluster 1 farms were those that recently installed AMS robots, whereas cluster 3 farms (the most northern farms) fed high amounts of concentrates through the robot to compensate for low-energy feed in the bunk. In addition to general recommendations for farms within a cluster, individual farms can generate their own specific goals by comparing themselves to farms within their cluster. This is very comparable to benchmarking but adds the specific characteristics of the peer group, resulting in better farm management advice. The improvement that cluster analysis allows for is characterized by the multivariable approach and the fact that comparisons between production units can be accomplished within a cluster and between clusters as a choice. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
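As a rough stand-in for the mixed latent-class model used in the study (which also handles categorical risk factors), the clustering step can be sketched with a Gaussian mixture and BIC-based selection of the number of peer groups; the feature matrix here is synthetic:

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    farms = rng.normal(size=(529, 18))   # 529 farms x 18 risk factors (toy)

    X = StandardScaler().fit_transform(farms)

    # Choose the number of peer groups by BIC over candidate cluster counts.
    models = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
              for k in range(2, 9)}
    best_k = min(models, key=lambda k: models[k].bic(X))
    labels = models[best_k].predict(X)   # peer-group assignment per farm

    print(best_k, np.bincount(labels))   # selected cluster count and sizes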
Understanding and benchmarking health service achievement of policy goals for chronic disease
2012-01-01
Background Key challenges in benchmarking health service achievement of policy goals in areas such as chronic disease are: 1) developing indicators and understanding how policy goals might work as indicators of service performance; 2) developing methods for economically collecting and reporting stakeholder perceptions; 3) combining and sharing data about the performance of organizations; 4) interpreting outcome measures; 5) obtaining actionable benchmarking information. This study aimed to explore how a new Boolean-based small-N method from the social sciences—Qualitative Comparative Analysis or QCA—could contribute to meeting these internationally shared challenges. Methods A ‘multi-value QCA’ (MVQCA) analysis was conducted of data from 24 senior staff at 17 randomly selected services for chronic disease, who provided perceptions of 1) whether government health services were improving their achievement of a set of statewide policy goals for chronic disease and 2) the efficacy of state health office actions in influencing this improvement. The analysis produced summaries of configurations of perceived service improvements. Results Most respondents observed improvements in most areas but uniformly good improvements across services were not perceived as happening (regardless of whether respondents identified a state health office contribution to that improvement). The sentinel policy goal of using evidence to develop service practice was not achieved at all in four services and appears to be reliant on other kinds of service improvements happening. Conclusions The QCA method suggested theoretically plausible findings and an approach that with further development could help meet the five benchmarking challenges. In particular, it suggests that achievement of one policy goal may be reliant on achievement of another goal in complex ways that the literature has not yet fully accommodated but which could help prioritize policy goals. The weaknesses of QCA can be found wherever traditional big-N statistical methods are needed and possible, and in its more complex and therefore difficult to empirically validate findings. It should be considered a potentially valuable adjunct method for benchmarking complex health policy goals such as those for chronic disease. PMID:23020943
Root cause analysis of laboratory turnaround times for patients in the emergency department.
Fernandes, Christopher M B; Worster, Andrew; Hill, Stephen; McCallum, Catherine; Eva, Kevin
2004-03-01
Laboratory investigations are essential to patient care and are conducted routinely in emergency departments (EDs). This study reports the turnaround times at an academic, tertiary care ED, using root cause analysis to identify potential areas of improvement. Our objectives were to compare the laboratory turnaround times with established benchmarks and identify root causes for delays. Turnaround and process event times for a consecutive sample of hemoglobin and potassium measurements were recorded during an 8-day study period using synchronized time stamps. A log transformation (ln [minutes + 1]) was performed to normalize the time data, which were then compared with established benchmarks using one-sample t tests. The turnaround time for hemoglobin was significantly less than the established benchmark (n = 140, t = -5.69, p < 0.001) and that of potassium was significantly greater (n = 121, t = 12.65, p < 0.001). The hemolysis rate was 5.8%, with 0.017% of samples needing recollection. Causes of delays included order-processing time, a high proportion (43%) of tests performed on patients who had been admitted but were still in the ED waiting for a bed, and excessive laboratory process times for potassium. The turnaround time for hemoglobin (18 min) met the established benchmark, but that for potassium (49 min) did not. Root causes for delay were order-processing time, excessive queue and instrument times for potassium and volume of tests for admitted patients. Further study of these identified causes of delays is required to see whether laboratory TATs can be reduced.
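The benchmark comparison described above reduces to a one-sample t test on transformed times. A short sketch, with hypothetical turnaround times and an illustrative benchmark value; the ln(minutes + 1) transformation follows the abstract:

    import numpy as np
    from scipy import stats

    tat_min = np.array([35, 52, 41, 66, 48, 39, 55, 60])  # hypothetical TATs
    benchmark_min = 40.0                                   # illustrative benchmark

    # Normalize with the study's transformation, then test against the
    # equally transformed benchmark.
    log_tat = np.log(tat_min + 1)
    t, p = stats.ttest_1samp(log_tat, np.log(benchmark_min + 1))
    print(f"t = {t:.2f}, p = {p:.4f}")  # t > 0 suggests TATs exceed the benchmark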
2015/2016 Quality Risk Management Benchmarking Survey.
Waldron, Kelly; Ramnarine, Emma; Hartman, Jeffrey
2017-01-01
This paper investigates the concept of quality risk management (QRM) maturity as it applies to the pharmaceutical and biopharmaceutical industries, using the results and analysis from a QRM benchmarking survey conducted in 2015 and 2016. QRM maturity can be defined as the effectiveness and efficiency of a quality risk management program, moving beyond "check-the-box" compliance with guidelines such as ICH Q9 Quality Risk Management , to explore the value QRM brings to business and quality operations. While significant progress has been made towards full adoption of QRM principles and practices across industry, the full benefits of QRM have not yet been fully realized. The results of the QRM Benchmarking Survey indicate that the pharmaceutical and biopharmaceutical industries are approximately halfway along the journey towards full QRM maturity. LAY ABSTRACT: The management of risks associated with medicinal product quality and patient safety are an important focus for the pharmaceutical and biopharmaceutical industries. These risks are identified, analyzed, and controlled through a defined process called quality risk management (QRM), which seeks to protect the patient from potential quality-related risks. This paper summarizes the outcomes of a comprehensive survey of industry practitioners performed in 2015 and 2016 that aimed to benchmark the level of maturity with regard to the application of QRM. The survey results and subsequent analysis revealed that the pharmaceutical and biopharmaceutical industries have made significant progress in the management of quality risks over the last ten years, and they are roughly halfway towards reaching full maturity of QRM. © PDA, Inc. 2017.
Pasler, Marlies; Kaas, Jochem; Perik, Thijs; Geuze, Job; Dreindl, Ralf; Künzler, Thomas; Wittkamper, Frits; Georg, Dietmar
2015-12-01
To systematically evaluate machine-specific quality assurance (QA) for volumetric modulated arc therapy (VMAT) based on log files by applying a dynamic benchmark plan. A VMAT benchmark plan was created and tested on 18 Elekta linacs (13 MLCi or MLCi2, 5 Agility) at 4 different institutions. Linac log files were analyzed and a delivery robustness index was introduced. For dosimetric measurements an ionization chamber array was used. Relative dose deviations were assessed by the mean gamma for each control point and compared to the log file evaluation. Fourteen linacs delivered the VMAT benchmark plan, while 4 linacs failed by consistently terminating the delivery. The mean leaf error (±1 SD) was 0.3±0.2 mm for all linacs. Large maximum MLC errors of up to 6.5 mm were observed at reversal positions. The delivery robustness index accounting for MLC position correction (0.8-1.0) correlated with delivery time (80-128 s) and depended on dose rate performance. Dosimetric evaluation indicated in general accurate plan reproducibility, with γ(mean) (±1 SD)=0.4±0.2 for 1 mm/1%. However, single control point analysis revealed larger deviations that corresponded well to the log file analysis. The designed benchmark plan helped identify linac-related malfunctions in dynamic mode for VMAT. Log files serve as an important additional QA measure to understand and visualize dynamic linac parameters. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, S.M.
1995-01-01
The requirements of ANSI/ANS 8.1 specify that calculational methods for away-from-reactor criticality safety analyses be validated against experimental measurements. If credit for the negative reactivity of the depleted (or spent) fuel isotopics is desired, it is necessary to benchmark computational methods against spent fuel critical configurations. This report summarizes a portion of the ongoing effort to benchmark away-from-reactor criticality analysis methods using critical configurations from commercial pressurized-water reactors. The analysis methodology selected for all the calculations reported herein is based on the codes and data provided in the SCALE-4 code system. The isotopic densities for the spent fuel assemblies in the critical configurations were calculated using the SAS2H analytical sequence of the SCALE-4 system. The sources of data and the procedures for deriving SAS2H input parameters are described in detail. The SNIKR code module was used to extract the necessary isotopic densities from the SAS2H results and provide the data in the format required by the SCALE criticality analysis modules. The CSASN analytical sequence in SCALE-4 was used to perform resonance processing of the cross sections. The KENO V.a module of SCALE-4 was used to calculate the effective multiplication factor (k-eff) of each case. The SCALE-4 27-group burnup library containing ENDF/B-IV (actinides) and ENDF/B-V (fission products) data was used for all the calculations. This volume of the report documents the SCALE system analysis of three reactor critical configurations for Sequoyah Unit 2 Cycle 3. This unit and cycle were chosen because of their relevance to spent fuel benchmark applications: (1) the unit had a significantly long downtime of 2.7 years during the middle of cycle (MOC) 3, and (2) the core consisted entirely of burned fuel at the MOC restart. The first benchmark critical calculation was the MOC restart at hot, full-power (HFP) critical conditions. The other two benchmark critical calculations were the beginning-of-cycle (BOC) startup at both hot, zero-power (HZP) and HFP critical conditions. These latter calculations were used to check for consistency in the calculated results for different burnups and downtimes. The k-eff results were in the range of 1.00014 to 1.00259 with a standard deviation of less than 0.001.
Limitations of Community College Benchmarking and Benchmarks
ERIC Educational Resources Information Center
Bers, Trudy H.
2006-01-01
This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischer, G.A.
2011-07-01
Document available in abstract form only, full text of document follows: The dosimetry from the H. B. Robinson Unit 2 Pressure Vessel Benchmark is analyzed with a suite of Westinghouse-developed codes and data libraries. The radiation transport from the reactor core to the surveillance capsule and ex-vessel locations is performed by RAPTOR-M3G, a parallel deterministic radiation transport code that calculates high-resolution neutron flux information in three dimensions. The cross-section library used in this analysis is the ALPAN library, an Evaluated Nuclear Data File (ENDF)/B-VII.0-based library designed for reactor dosimetry and fluence analysis applications. Dosimetry is evaluated with the industry-standard SNLRML reactor dosimetry cross-section data library. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, Evan; Mathew, Paul; Stoufer, Martin
2016-10-06
EnergyIQ, the first "action-oriented" benchmarking tool for non-residential buildings, provides a standardized opportunity assessment based on benchmarking results, along with decision-support information to help refine action plans. EnergyIQ offers a wide array of benchmark metrics, with visual as well as tabular display. These include energy, costs, greenhouse-gas emissions, and a large array of characteristics (e.g., building components or operational strategies). The tool supports cross-sectional benchmarking for comparing the user's building to its peers at one point in time, as well as longitudinal benchmarking for tracking the performance of an individual building or enterprise portfolio over time. Based on user inputs, the tool generates a list of opportunities and recommended actions. Users can then explore the "Decision Support" module for helpful information on how to refine action plans, create design-intent documentation, and implement improvements. This includes information on best practices, links to other energy analysis tools, and more. A variety of databases are available within EnergyIQ from which users can specify peer groups for comparison. Using the tool, these data can be visually browsed and used as a backdrop against which to view a variety of energy benchmarking metrics for the user's own building. Users can save their project information and return at a later date to continue their exploration. The initial database is the CA Commercial End-Use Survey (CEUS), which provides details on energy use and characteristics for about 2,800 buildings (and 62 building types). CEUS is likely the most thorough survey of its kind ever conducted. The tool is built as a web service. The EnergyIQ web application is written in JSP with pervasive use of JavaScript and CSS2. EnergyIQ also supports a SOAP-based web service to allow the flow of queries and data to occur with non-browser implementations. Data are stored in an Oracle 10g database. References: Mills, Mathew, Brook and Piette. 2008. "Action-Oriented Benchmarking: Concepts and Tools." Energy Engineering, Vol. 105, No. 4, pp. 21-40. LBNL-358E; Mathew, Mills, Bourassa, Brook. 2008. "Action-Oriented Benchmarking: Using the CEUS Database to Benchmark Commercial Buildings in California." Energy Engineering, Vol. 105, No. 5, pp. 6-18. LBNL-502E.
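The abstract does not disclose EnergyIQ's internal computations; as a hedged illustration only, the core of cross-sectional benchmarking can be reduced to a percentile rank of a building's metric within a peer group drawn from a database such as CEUS. The function name and values are hypothetical.

    import numpy as np

    def percentile_rank(building_value, peer_values):
        """Share of peer buildings with a lower value of the metric
        (e.g., annual energy use intensity in kWh/m2); for a
        'lower is better' metric, 0 means best in the peer group."""
        peers = np.asarray(peer_values, dtype=float)
        return 100.0 * (peers < building_value).mean()

    # Hypothetical example: a building using 210 kWh/m2 against six peers
    print(percentile_rank(210.0, [150, 180, 200, 230, 260, 300]))  # -> 50.0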
Cumming, Oliver; Elliott, Mark; Overbo, Alycia; Bartram, Jamie
2014-01-01
Safe drinking water and sanitation are important determinants of human health and wellbeing and have recently been declared human rights by the international community. Increased access to both was included in the Millennium Development Goals under a single dedicated target for 2015. This target was reached in 2010 for water but sanitation will fall short; however, there is an important difference in the benchmarks used for assessing global access. For drinking water the benchmark is community-level access whilst for sanitation it is household-level access, so a pit latrine shared between households does not count toward the Millennium Development Goal (MDG) target. We estimated global progress for water and sanitation under two scenarios: with equivalent household- and community-level benchmarks. Our results demonstrate that the “sanitation deficit” is apparent only when household-level sanitation access is contrasted with community-level water access. When equivalent benchmarks are used for water and sanitation, the global deficit is as great for water as it is for sanitation, and sanitation progress in the MDG-period (1990–2015) outstrips that in water. As both drinking water and sanitation access yield greater benefits at the household-level than at the community-level, we conclude that any post–2015 goals should consider a household-level benchmark for both. PMID:25502659
Jimenez-Del-Toro, Oscar; Muller, Henning; Krenn, Markus; Gruenberg, Katharina; Taha, Abdel Aziz; Winterstein, Marianne; Eggel, Ivan; Foncubierta-Rodriguez, Antonio; Goksel, Orcun; Jakab, Andras; Kontokotsios, Georgios; Langs, Georg; Menze, Bjoern H; Salas Fernandez, Tomas; Schaer, Roger; Walleyo, Anna; Weber, Marc-Andre; Dicente Cid, Yashin; Gass, Tobias; Heinrich, Mattias; Jia, Fucang; Kahl, Fredrik; Kechichian, Razmig; Mai, Dominic; Spanier, Assaf B; Vincent, Graham; Wang, Chunliang; Wyeth, Daniel; Hanbury, Allan
2016-11-01
Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease, and detecting and quantifying them is largely a manual task; automatic tools can help with parts of this process. A cloud-based evaluation framework is presented in this paper, including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the cloud, where participants can access only the training data; the benchmark administrators then run the algorithms privately on an unseen common test set to compare their performance objectively. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and the Silver Corpus, generated by fusing the participant algorithms' outputs on a larger set of non-manually-annotated medical images, are available to the research community.
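The specific scores used in the VISCERAL benchmarks are not listed in this abstract; as an illustration, overlap measures such as the Dice coefficient are a standard way to compare an automatic segmentation against a manually annotated gold standard.

    import numpy as np

    def dice_coefficient(seg_auto, seg_gold):
        """Dice overlap between two binary segmentation masks of equal
        shape: 1.0 = perfect agreement, 0.0 = no overlap."""
        a = np.asarray(seg_auto, dtype=bool)
        b = np.asarray(seg_gold, dtype=bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())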
Medin, E.; Gazi, R.; Koehlmoos, T.P.; Rehnberg, C.; Saifi, R.; Bhuiya, A.; Khan, J.
2010-01-01
Calculation of costs of different medical and surgical services has numerous uses, which include monitoring the performance of service delivery, setting the efficiency target, benchmarking of services across all sectors, considering investment decisions, commissioning to meet health needs, and negotiating revised levels of funding. The role of private-sector healthcare facilities has been increasing rapidly over the last decade. Despite the overall improvement in the public and private healthcare sectors in Bangladesh, lack of price benchmarking leads to patients facing unexplained price discrimination when receiving healthcare services. The aim of the study was to calculate the hospital-care cost of disease-specific cases, specifically pregnancy- and puerperium-related cases, and to identify the practical challenges of conducting costing studies in the hospital setting in Bangladesh. A combination of micro-costing and step-down cost allocation was used for collecting information on the cost items and, ultimately, for calculating the unit cost for each diagnostic case. Data were collected from the hospital records of 162 patients having 11 different clinical diagnoses. Caesarean section due to maternal and foetal complications was the most expensive type of case whereas the length of stay due to complications was the major driver of cost. Some constraints in keeping hospital medical records and accounting practices were observed. Despite these constraints, the findings of the study indicate that it is feasible to carry out a large-scale study to further explore the costs of different hospital-care services. PMID:20635637
The COA360: a tool for assessing the cultural competency of healthcare organizations.
LaVeist, Thomas A; Relosa, Rachel; Sawaya, Nadia
2008-01-01
The U.S. Census Bureau projects that by 2050, non-Hispanic whites will be in the numerical minority. This rapid diversification requires healthcare organizations to pay closer attention to cross-cultural issues if they are to meet the healthcare needs of the nation and continue to maintain a high standard of care. Although scorecards and benchmarking are widely used to gauge healthcare organizations' performance in various areas, these tools have been underused in relation to cultural preparedness or initiatives. The likely reason for this is the lack of a validated tool specifically designed to examine cultural competency. Existing validated cultural competency instruments evaluate individuals, not organizations. In this article, we discuss a study to validate the Cultural Competency Organizational Assessment--360, or the COA360, an instrument designed to appraise a healthcare organization's cultural competence. The Office of Minority Health and the Joint Commission have each developed standards for measuring the cultural competency of organizations. The COA360 is designed to assess adherence to both of these sets of standards. For this validation study, we enlisted a panel of national experts. The panel rated each dimension of the COA360, and the combination of items for each of the scale's 14 dimensions was rated above 4.13 (on a 5-point scale). Our conclusion points to the validity of the COA360. As such, it is a valuable tool not only for assessing a healthcare organization's cultural readiness but also for benchmarking its progress in addressing cultural and diversity issues.
Maximum unbiased validation (MUV) data sets for virtual screening based on PubChem bioactivity data.
Rohrer, Sebastian G; Baumann, Knut
2009-02-01
Refined nearest neighbor analysis was recently introduced for the analysis of virtual screening benchmark data sets. It constitutes a technique from the field of spatial statistics and provides a mathematical framework for the nonparametric analysis of mapped point patterns. Here, refined nearest neighbor analysis is used to design benchmark data sets for virtual screening based on PubChem bioactivity data. A workflow is devised that removes unselective hits from data sets of compounds active against pharmaceutically relevant targets. Topological optimization using experimental design strategies, monitored by refined nearest neighbor analysis functions, is applied to generate corresponding data sets of actives and decoys that are unbiased with regard to analogue bias and artificial enrichment. These data sets provide a tool for Maximum Unbiased Validation (MUV) of virtual screening methods. The data sets and a software package implementing the MUV design workflow are freely available at http://www.pharmchem.tu-bs.de/lehre/baumann/MUV.html.
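The exact MUV design functions are defined in the paper and its software package; the sketch below only illustrates the underlying spatial-statistics idea, comparing nearest-neighbor distances among actives (a G-type function) with distances from decoys to actives (an F-type function) in some descriptor space. The function name and the use of a k-d tree are illustrative choices.

    import numpy as np
    from scipy.spatial import cKDTree

    def nn_distance_samples(actives, decoys):
        """actives, decoys: (n, d) arrays of descriptor vectors.
        Returns active-to-nearest-active and decoy-to-nearest-active
        distances; similar distributions suggest little analogue bias
        or artificial enrichment."""
        tree = cKDTree(actives)
        g = tree.query(actives, k=2)[0][:, 1]   # k=2 skips the self-match
        f = tree.query(decoys, k=1)[0]
        return g, f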
Indicators of AEI applied to the Delaware Estuary.
Barnthouse, Lawrence W; Heimbuch, Douglas G; Anthony, Vaughn C; Hilborn, Ray W; Myers, Ransom A
2002-05-18
We evaluated the impacts of entrainment and impingement at the Salem Generating Station on fish populations and communities in the Delaware Estuary. In the absence of an agreed-upon regulatory definition of "adverse environmental impact" (AEI), we developed three independent benchmarks of AEI based on observed or predicted changes that could threaten the sustainability of a population or the integrity of a community. Our benchmarks of AEI included: (1) disruption of the balanced indigenous community of fish in the vicinity of Salem (the "BIC" analysis); (2) a continued downward trend in the abundance of one or more susceptible fish species (the "Trends" analysis); and (3) occurrence of entrainment/impingement mortality sufficient, in combination with fishing mortality, to jeopardize the future sustainability of one or more populations (the "Stock Jeopardy" analysis). The BIC analysis utilized nearly 30 years of species presence/absence data collected in the immediate vicinity of Salem. The Trends analysis examined three independent data sets that document trends in the abundance of juvenile fish throughout the estuary over the past 20 years. The Stock Jeopardy analysis used two different assessment models to quantify potential long-term impacts of entrainment and impingement on susceptible fish populations. For one of these models, the compensatory capacities of the modeled species were quantified through meta-analysis of spawner-recruit data available for several hundred fish stocks. All three analyses indicated that the fish populations and communities of the Delaware Estuary are healthy and show no evidence of an adverse impact due to Salem. Although the specific models and analyses used at Salem are not applicable to every facility, we believe that a weight of evidence approach that evaluates multiple benchmarks of AEI using both retrospective and predictive methods is the best approach for assessing entrainment and impingement impacts at existing facilities.
77 FR 35389 - EPN, Inc.; Analysis of Proposed Consent Order to Aid Public Comment
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-13
... practices or unfair methods of competition. The attached Analysis to Aid Public Comment describes both the... FEDERAL TRADE COMMISSION [File No. 112 3143] EPN, Inc.; Analysis of Proposed Consent Order to Aid.... 721, 15 U.S.C. 46(f), and Sec. 2.34 the Commission Rules of Practice, 16 CFR 2.34, notice is hereby...
ANALYSES OF NEUROBEHAVIORAL SCREENING DATA: BENCHMARK DOSE ESTIMATION.
Analysis of neurotoxicological screening data such as those of the functional observational battery (FOB) traditionally relies on analysis of variance (ANOVA) with repeated measurements, followed by determination of a no-adverse-effect level (NOAEL). The US EPA has proposed the ...
Translating an AI application from Lisp to Ada: A case study
NASA Technical Reports Server (NTRS)
Davis, Gloria J.
1991-01-01
A set of benchmarks was developed to test the performance of a newly designed computer executing both Lisp and Ada. Among these was AutoClassII -- a large Artificial Intelligence (AI) application written in Common Lisp. The extraction of a representative subset of this complex application was aided by a Lisp Code Analyzer (LCA). The LCA enabled rapid analysis of the code, putting it in a concise and functionally readable form. An equivalent benchmark was created in Ada through manual translation of the Lisp version. A comparison of the execution results of both programs across a variety of compiler-machine combinations indicates that line-by-line translation coupled with analysis of the initial code can produce relatively efficient and reusable target code.
NASA Technical Reports Server (NTRS)
Hall, Laverne
1995-01-01
Modeling of the Multi-mission Image Processing System (MIPS) will be described as an example of the use of a modeling tool to design a distributed system that supports multiple application scenarios. This paper examines: (a) modeling tool selection, capabilities, and operation (namely NETWORK 2.5 by CACI), (b) pointers for building or constructing a model and how the MIPS model was developed, (c) the importance of benchmarking or testing the performance of equipment/subsystems being considered for incorporation into the design/architecture, (d) the essential step of model validation and/or calibration using the benchmark results, (e) sample simulation results from the MIPS model, and (f) how modeling and simulation analysis affected the MIPS design process by having a supportive and informative impact.
Benchmarking specialty hospitals, a scoping review on theory and practice.
Wind, A; van Harten, W H
2017-04-04
Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-31
... FEDERAL TRADE COMMISSION [File No. 131 0152] Actavis, Inc. a corporation, and Warner Chilcott PLC; Analysis of Agreement Containing Consent Orders To Aid Public Comment AGENCY: Federal Trade Commission... (``Consent Agreement'') from Actavis, Inc. (``Actavis'') and Warner Chilcott plc (``Warner Chilcott'') that...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-04
... FEDERAL TRADE COMMISSION [File No. 111 0051] Hikma Pharmaceuticals PLC; Analysis of Agreement Containing Consent Orders To Aid Public Comment AGENCY: Federal Trade Commission. ACTION: Proposed consent... Pharmaceuticals PLC (``Hikma'') that is designed to remedy the anticompetitive effects of Hikma's acquisition of...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-22
... FEDERAL TRADE COMMISSION [File No. 091 0094] Magnesium Elektron; Analysis of Agreement Containing Consent Orders To Aid Public Comment AGENCY: Federal Trade Commission. ACTION: Proposed Consent Agreement. SUMMARY: The consent agreement in this matter settles alleged violations of federal law prohibiting unfair...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-15
... NUCLEAR REGULATORY COMMISSION [NRC-2011-0254] Common-Cause Failure Analysis in Event and Condition Assessment: Guidance and Research, Draft Report for Comment; Correction AGENCY: Nuclear Regulatory Commission. ACTION: Draft NUREG; request for comment; correction. SUMMARY: This document corrects a notice appearing...
76 FR 44865 - Domestic Licensing of Source Material-Amendments/Integrated Safety Analysis
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-27
... NUCLEAR REGULATORY COMMISSION 10 CFR Part 40 RIN 3150-AI50 [NRC-2009-0079 and NRC-2011-0080] Domestic Licensing of Source Material--Amendments/Integrated Safety Analysis AGENCY: Nuclear Regulatory Commission. ACTION: Extension of public comment period and public meeting. SUMMARY: The U.S. Nuclear...
NASA Technical Reports Server (NTRS)
Padovan, J.; Adams, M.; Fertis, J.; Zeid, I.; Lam, P.
1982-01-01
Finite element codes are used in modelling the rotor-bearing-stator structure common to the turbine industry. Strategies are developed that enable engine dynamic simulation with available finite element codes. The elements developed are benchmarked by incorporation into a general purpose code (ADINA); the numerical characteristics of finite element type rotor-bearing-stator simulations are evaluated through the use of various types of explicit/implicit numerical integration operators; and the overall numerical efficiency of the procedure is improved.
NASA Astrophysics Data System (ADS)
Davidson, S.; Cui, J.; Followill, D.; Ibbott, G.; Deasy, J.
2008-02-01
The Dose Planning Method (DPM) is one of several 'fast' Monte Carlo (MC) computer codes designed to produce an accurate dose calculation for advanced clinical applications. We have developed a flexible machine modeling process and validation tests for open-field and IMRT calculations. To complement the DPM code, a practical and versatile source model has been developed, whose parameters are derived from a standard set of planning system commissioning measurements. The primary photon spectrum and the spectrum resulting from the flattening filter are modeled by a Fatigue function, cut off by a multiplying Fermi function, which effectively regularizes the difficult energy spectrum determination process. Commonly used functions are applied to represent the off-axis softening, the increasing primary fluence with increasing angle ('the horn effect'), and electron contamination. The patient-dependent aspect of the MC dose calculation utilizes the multi-leaf collimator (MLC) leaf sequence file exported from the treatment planning system DICOM output, coupled with the source model, to derive the particle transport. This model has been commissioned for Varian 2100C 6 MV and 18 MV photon beams using percent depth dose, dose profiles, and output factors. A 3-D conformal plan and an IMRT plan delivered to an anthropomorphic thorax phantom were used to benchmark the model. The calculated results were compared to Pinnacle v7.6c results and measurements made using radiochromic film and thermoluminescent detectors (TLD).
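The following sketch shows one plausible reading of the spectrum model described above, assuming that the 'Fatigue function' corresponds to the fatigue-life (Birnbaum-Saunders) distribution available in SciPy; that identification, the function name, and all parameter values are assumptions for illustration, not the commissioned model.

    import numpy as np
    from scipy.stats import fatiguelife

    def primary_spectrum(E, c=0.5, scale=2.0, e_cut=6.0, width=0.2):
        """Photon energy spectrum (arbitrary units) for energies E in MeV:
        a fatigue-life shape multiplied (cut off) by a Fermi function
        near the nominal beam energy. All parameters are illustrative."""
        fermi = 1.0 / (1.0 + np.exp((E - e_cut) / width))
        return fatiguelife.pdf(E, c, scale=scale) * fermi

    E = np.linspace(0.05, 7.0, 200)
    spectrum = primary_spectrum(E)   # e.g., shaped like a 6 MV beam spectrum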
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-21
... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-67663] Public Availability of the Securities and Exchange Commission's FY 2011 Service Contract Inventory AGENCY: U.S. Securities and Exchange... Inventory Analysis for FY2010 provides information based on the FY 2010 Inventory. The SEC has posted its...
46 CFR 504.9 - Information required by the Commission.
Code of Federal Regulations, 2010 CFR
2010-10-01
... responsible for assuring its accuracy if used by it in the preparation of an environmental assessment or EIS... ENVIRONMENTAL POLICY ANALYSIS § 504.9 Information required by the Commission. (a) Upon request, a person filing... requested Commission action on the quality of the human environment, if such requested action will: (1...
16 CFR § 1101.12 - Commission must disclose information to the public.
Code of Federal Regulations, 2013 CFR
2013-01-01
... PRODUCT SAFETY ACT REGULATIONS INFORMATION DISCLOSURE UNDER SECTION 6(b) OF THE CONSUMER PRODUCT SAFETY ACT Information Subject to Notice and Analysis Provisions of Section 6(b)(1) § 1101.12 Commission must... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Commission must disclose information to the...
16 CFR 1101.12 - Commission must disclose information to the public.
Code of Federal Regulations, 2012 CFR
2012-01-01
... SAFETY ACT REGULATIONS INFORMATION DISCLOSURE UNDER SECTION 6(b) OF THE CONSUMER PRODUCT SAFETY ACT Information Subject to Notice and Analysis Provisions of Section 6(b)(1) § 1101.12 Commission must disclose... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Commission must disclose information to the...
16 CFR 1101.12 - Commission must disclose information to the public.
Code of Federal Regulations, 2014 CFR
2014-01-01
... SAFETY ACT REGULATIONS INFORMATION DISCLOSURE UNDER SECTION 6(b) OF THE CONSUMER PRODUCT SAFETY ACT Information Subject to Notice and Analysis Provisions of Section 6(b)(1) § 1101.12 Commission must disclose... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Commission must disclose information to the...
16 CFR 1101.12 - Commission must disclose information to the public.
Code of Federal Regulations, 2010 CFR
2010-01-01
... SAFETY ACT REGULATIONS INFORMATION DISCLOSURE UNDER SECTION 6(b) OF THE CONSUMER PRODUCT SAFETY ACT Information Subject to Notice and Analysis Provisions of Section 6(b)(1) § 1101.12 Commission must disclose... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Commission must disclose information to the...
16 CFR 1101.12 - Commission must disclose information to the public.
Code of Federal Regulations, 2011 CFR
2011-01-01
... SAFETY ACT REGULATIONS INFORMATION DISCLOSURE UNDER SECTION 6(b) OF THE CONSUMER PRODUCT SAFETY ACT Information Subject to Notice and Analysis Provisions of Section 6(b)(1) § 1101.12 Commission must disclose... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Commission must disclose information to the...
Towards accurate modeling of noncovalent interactions for protein rigidity analysis.
Fox, Naomi; Streinu, Ileana
2013-01-01
Protein rigidity analysis is an efficient computational method for extracting flexibility information from static, X-ray crystallography protein data. Atoms and bonds are modeled as a mechanical structure and analyzed with a fast graph-based algorithm, producing a decomposition of the flexible molecule into interconnected rigid clusters. The result depends critically on noncovalent atomic interactions, primarily on how hydrogen bonds and hydrophobic interactions are computed and modeled. Ongoing research points to the stringent need for benchmarking rigidity analysis software systems, towards the goal of increasing their accuracy and validating their results, both against each other and against biologically relevant (functional) parameters. We propose two new methods for modeling hydrogen bonds and hydrophobic interactions that more accurately reflect a mechanical model, without being computationally more intensive. We evaluate them using a novel scoring method, based on the B-cubed score from the information retrieval literature, which measures how well two cluster decompositions match. To evaluate the modeling accuracy of KINARI, our pebble-game rigidity analysis system, we use a benchmark data set of 20 proteins, each with multiple distinct conformations deposited in the Protein Data Bank. Cluster decompositions for them were previously determined with the RigidFinder method from Gerstein's lab and validated against experimental data. When KINARI's default tuning parameters are used, an improvement of the B-cubed score over a crude baseline is observed in 30% of this data. With our new modeling options, improvements were observed in over 70% of the proteins in this data set. We investigate the sensitivity of the cluster decomposition score with case studies on pyruvate phosphate dikinase and calmodulin. To substantially improve the accuracy of protein rigidity analysis systems, thorough benchmarking must be performed on all current systems and future extensions. We have measured the gain in performance by comparing different modeling methods for noncovalent interactions. We showed that new criteria for modeling hydrogen bonds and hydrophobic interactions can significantly improve the results. The two new methods proposed here have been implemented and made publicly available in the current version of KINARI (v1.3), together with the benchmarking tools, which can be downloaded from our software's website, http://kinari.cs.umass.edu.
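The B-cubed comparison of two cluster decompositions can be sketched directly from its information-retrieval definition; the function below is a generic implementation, not KINARI's code, and assumes both decompositions cover the same set of atoms or residues.

    from collections import defaultdict

    def b_cubed(pred, gold):
        """pred, gold: {item: cluster_id} mappings over the same items.
        Returns (precision, recall, F1), averaged per item."""
        def members(assign):
            m = defaultdict(set)
            for item, cid in assign.items():
                m[cid].add(item)
            return m
        pm, gm = members(pred), members(gold)
        p = r = 0.0
        for item in pred:
            overlap = len(pm[pred[item]] & gm[gold[item]])
            p += overlap / len(pm[pred[item]])   # per-item precision
            r += overlap / len(gm[gold[item]])   # per-item recall
        n = len(pred)
        p, r = p / n, r / n
        return p, r, 2 * p * r / (p + r)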
Ellis, Judith
2006-07-01
The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement, and to promote its acceptance as an integral and effective part of benchmarking activity in health services. Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being used effectively by some frontline staff. However, use is inconsistent: the value of the tool kit, and the support that clinical practice benchmarking requires to be effective, are not always recognized or provided by National Health Service managers, who are absorbed in quantitative benchmarking approaches and the measurability of comparative performance data. This review of the published benchmarking literature used an ever-narrowing search strategy, commencing with benchmarking within the quality improvement literature and moving through to benchmarking activity in health services, and drew not only on published examples of benchmarking approaches and models but also on web-based benchmarking data. This supported identification of how benchmarking approaches have developed and been used while remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative, and specifically performance, benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and then applied to the health service (Bullivant 1998). The literature is also, in the main, descriptive in its support of the effectiveness of benchmarking activity; although this does not seem to have restricted the popularity of quantitative activity, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks mean that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach which needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, C.; Faillace, E.; Chen, S.Y.
RESRAD was one of the multimedia models selected by the US Nuclear Regulatory Commission (NRC) to include in its workshop on radiation dose modeling and demonstration of compliance with the radiological criteria for license termination. This paper is a summary of the presentation made at the workshop and focuses on the 10 questions the NRC distributed to all participants prior to the workshop. The code selection criteria, which were solicited by the NRC, for demonstrating compliance with the license termination rule are also included. Among the RESRAD family of codes, RESRAD and RESRAD-BUILD are designed for evaluating radiological contamination in soils and in buildings. Many documents have been published to support the use of these codes. This paper focuses on these two codes. The pathways considered, the databases and parameters used, quality control and quality assurance, benchmarking, verification and validation of these codes, and capabilities as well as limitations of these codes are discussed in detail.
Janicke, David M.; McQuaid, Elizabeth L.; Mullins, Larry L.; Robins, Paul M.; Wu, Yelena P.
2014-01-01
Objective As a field, pediatric psychology has focused considerable efforts on the education and training of students and practitioners. Alongside a broader movement toward competency attainment in professional psychology and within the health professions, the Society of Pediatric Psychology commissioned a Task Force to establish core competencies in pediatric psychology and address the need for contemporary training recommendations. Methods The Task Force adapted the framework proposed by the Competency Benchmarks Work Group on preparing psychologists for health service practice and defined competencies applicable across training levels ranging from initial practicum training to entry into the professional workforce in pediatric psychology. Results Competencies within 6 cluster areas, including science, professionalism, interpersonal, application, education, and systems, and 1 crosscutting cluster, crosscutting knowledge competencies in pediatric psychology, are presented in this report. Conclusions Recommendations for the use of, and the further refinement of, these suggested competencies are discussed. PMID:24719239
Simulation of a cascaded longitudinal space charge amplifier for coherent radiation generation
Halavanau, A.; Piot, P.
2016-03-03
Longitudinal space charge (LSC) effects are generally considered harmful in free-electron lasers, as they can seed unfavorable energy modulations that can result in density modulations with associated emittance dilution. It was pointed out, however, that such "microbunching instabilities" could be potentially useful to support the generation of broadband coherent radiation. There has therefore been increasing interest in devising accelerator beam lines capable of controlling LSC-induced density modulations. In the present paper we augment these previous investigations by combining a grid-less space charge algorithm with the popular particle-tracking program elegant. This high-fidelity model of the space charge is used to benchmark conventional LSC models. We then employ the developed model to optimize the performance of a cascaded longitudinal space charge amplifier using beam parameters comparable to the ones achievable at the Fermilab Accelerator Science & Technology (FAST) facility currently under commissioning at Fermilab.
A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data
NASA Astrophysics Data System (ADS)
Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei
2013-08-01
We presented a multiresolution hierarchical classification (MHC) algorithm for separating ground from non-ground points in airborne LiDAR point clouds based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with the cell resolution and residual threshold increasing simultaneously from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using a thin plate spline (TPS) until no more ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark datasets, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were used to compare the performance of MHC with those of the 17 other publicized filtering methods. Results indicated that MHC, with an average total error of 4.11% and an average Cohen's kappa coefficient of 86.27%, performs better than all the other filtering methods.
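A heavily simplified sketch of the iterative-surface idea follows; the published MHC additionally changes the raster cell resolution at each level and seeds the surface from per-cell minima, both of which are reduced here to a crude quantile seed. The thresholds, smoothing value, and function name are illustrative, not the paper's settings.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def mhc_like_filter(xyz, thresholds=(0.5, 0.3, 0.2), smoothing=0.1):
        """xyz: (n, 3) LiDAR points. Iteratively fit a thin plate spline
        to the current ground set and admit points whose residual from
        the surface falls below the level's threshold (coarse to fine)."""
        z = xyz[:, 2]
        ground = xyz[z <= np.quantile(z, 0.05)]     # crude seed: lowest points
        for thr in thresholds:
            while True:
                tps = RBFInterpolator(ground[:, :2], ground[:, 2],
                                      kernel='thin_plate_spline',
                                      smoothing=smoothing)
                residual = z - tps(xyz[:, :2])
                mask = residual < thr               # far-above points: non-ground
                if mask.sum() <= len(ground):       # no new ground points
                    break
                ground = xyz[mask]
        return ground

Note that a dense thin plate spline fit scales poorly with point count; the published algorithm interpolates a raster surface at each level, which keeps this step tractable.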
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neary, Vincent Sinclair; Yang, Zhaoqing; Wang, Taiping
A wave model test bed is established to benchmark, test and evaluate spectral wave models and modeling methodologies (i.e., best practices) for predicting the wave energy resource parameters recommended by the International Electrotechnical Commission, IEC TS 62600-101 Ed. 1.0 (2015). Among other benefits, the model test bed can be used to investigate the suitability of different models, specifically what source terms should be included in spectral wave models under different wave climate conditions and for different classes of resource assessment. The overarching goal is to use these investigations to provide industry guidance for model selection and modeling best practices depending on the wave site conditions and the desired class of resource assessment. Modeling best practices are reviewed, and limitations and knowledge gaps in predicting wave energy resource parameters are identified.
Brucker, Sara Y; Schumacher, Claudia; Sohn, Christoph; Rezai, Mahdi; Bamberg, Michael; Wallwiener, Diethelm
2008-01-01
Background The main study objectives were: to establish a nationwide voluntary collaborative network of breast centres with independent data analysis; to define suitable quality indicators (QIs) for benchmarking the quality of breast cancer (BC) care; to demonstrate existing differences in BC care quality; and to show that BC care quality improved with benchmarking from 2003 to 2007. Methods BC centres participated voluntarily in a scientific benchmarking procedure. A generic XML-based data set was developed and used for data collection. Nine guideline-based quality targets serving as rate-based QIs were initially defined, reviewed annually and modified or expanded accordingly. QI changes over time were analysed descriptively. Results From 2003 to 2007, the number of participating breast centres increased from 59 to 220, and the number of postoperatively confirmed BCs from 5,994 to 31,656 (>60% of new BCs per year in Germany). Starting from 9 process QIs, 12 QIs were developed by 2007 as surrogates for long-term outcome. Rates for most QIs increased. From 2003 to 2007, the most notable increases seen were for preoperative histological confirmation of diagnosis (58% (in 2003) to 88% (in 2007)), appropriate endocrine therapy in hormone receptor-positive patients (27 to 93%), appropriate radiotherapy after breast-conserving therapy (20 to 79%) and appropriate radiotherapy after mastectomy (8 to 65%). Conclusion Nationwide external benchmarking of BC care is feasible and successful. The benchmarking system described allows both comparisons among participating institutions and the tracking of changes in average quality of care over time for the network as a whole. Marked QI increases indicate improved quality of BC care. PMID:19055735
Energy benchmarking in wastewater treatment plants: the importance of site operation and layout.
Belloir, C; Stanford, C; Soares, A
2015-01-01
Energy benchmarking is a powerful tool in the optimization of wastewater treatment plants (WWTPs), helping to reduce costs and greenhouse gas emissions. Traditionally, energy benchmarking methods focused solely on reporting electricity consumption; however, recent developments in this area have led to the inclusion of other types of energy, including electrical, manual, chemical and mechanical consumptions, which can all be expressed in kWh/m3. In this study, two full-scale WWTPs were benchmarked; both incorporated preliminary, secondary (oxidation ditch) and tertiary treatment processes, and Site 1 also had an additional primary treatment step. The results indicated that Site 1 required 2.32 kWh/m3 against 0.98 kWh/m3 for Site 2. Aeration presented the highest energy consumption for both sites, with 2.08 kWh/m3 required for Site 1 and 0.91 kWh/m3 for Site 2. Mechanical energy represented the second biggest consumption for Site 1 (9%, 0.212 kWh/m3), and chemical input was significant for Site 2 (4.1%, 0.026 kWh/m3). The analysis of the results indicated that Site 2 could be optimized by constructing a primary settling tank that would reduce the biochemical oxygen demand, total suspended solids and NH4 loads to the oxidation ditch by 55%, 75% and 12%, respectively, and at the same time reduce the aeration requirements by 49%. This study demonstrated the effectiveness of the energy benchmarking exercise in identifying the highest energy-consuming assets; nevertheless, it points out the need to develop a holistic overview of the WWTP and to include parameters such as effluent quality, site operation and plant layout to allow adequate benchmarking.
NASA Technical Reports Server (NTRS)
Davis, G. J.
1994-01-01
One area of research of the Information Sciences Division at NASA Ames Research Center is devoted to the analysis and enhancement of processors and advanced computer architectures, specifically in support of automation and robotic systems. To compare systems' abilities to efficiently process Lisp and Ada, scientists at Ames Research Center have developed a suite of non-parallel benchmarks called ELAPSE. The benchmark suite was designed to test a single computer's efficiency as well as to compare alternate machines on the Lisp and/or Ada languages. ELAPSE tests the efficiency with which a machine can execute the various routines in each environment. The sample routines are based on numeric and symbolic manipulations and include two-dimensional fast Fourier transformations, Cholesky decomposition and substitution, Gaussian elimination, high-level data processing, and symbol-list references. Also included is a routine based on a Bayesian classification program sorting data into optimized groups. The ELAPSE benchmarks are available for any computer with a validated Ada compiler and/or Common Lisp system. Of the 18 routines that comprise ELAPSE, 14 were developed or translated at Ames and are provided within this package; the others are readily available in the literature. The benchmark that requires the most memory is CHOLESKY.ADA. Under VAX/VMS, CHOLESKY.ADA requires 760K of main memory. ELAPSE is available on either two 5.25 inch 360K MS-DOS format diskettes (standard distribution) or a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The ELAPSE benchmarks were written in 1990. VAX and VMS are trademarks of Digital Equipment Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
77 FR 8082 - Regulatory Flexibility Agenda
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-13
... which the Commission's staff completed compilation of the data. To the extent possible, rulemaking... indicated that preparation of a Regulatory Flexibility Act analysis is required. The Commission's complete...
78 FR 1708 - Regulatory Flexibility Agenda
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-08
... which the Commission's staff completed compilation of the data. To the extent possible, rulemaking... indicated that preparation of a Regulatory Flexibility Act analysis is required. The Commission's complete...
78 FR 44407 - Regulatory Flexibility Agenda
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-23
... the Commission's staff completed compilation of the data. To the extent possible, rulemaking actions... indicated that preparation of a Regulatory Flexibility Act analysis is required. The Commission's complete...
TRACE/PARCS analysis of the OECD/NEA Oskarshamn-2 BWR stability benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozlowski, T.; Downar, T.; Xu, Y.
2012-07-01
On February 25, 1999, the Oskarshamn-2 NPP experienced a stability event which culminated in diverging power oscillations with a decay ratio of about 1.4. The event was successfully modeled by the TRACE/PARCS coupled code system, and further analysis of the event is described in this paper. The results show very good agreement with the plant data, capturing the entire behavior of the transient including the onset of instability, growth of the oscillations (decay ratio) and oscillation frequency. This provides confidence in the prediction of other parameters which are not available from the plant records. The event provides coupled code validation for a challenging BWR stability event, which involves the accurate simulation of neutron kinetics (NK), thermal-hydraulics (TH), and TH/NK coupling. The success of this work has demonstrated the ability of the 3-D coupled systems code TRACE/PARCS to capture the complex behavior of BWR stability events. The problem was released as an international OECD/NEA benchmark, and it is the first benchmark based on measured plant data for a stability event with a decay ratio greater than one. Interested participants are invited to contact the authors for more information. (authors)
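For readers unfamiliar with the stability metric, a decay ratio can be estimated from a measured power signal as the ratio of successive peaks of its autocorrelation function; the sketch below uses this common reduced-order definition and is not the TRACE/PARCS methodology.

    import numpy as np

    def decay_ratio(signal, dt):
        """Estimate (decay ratio, oscillation frequency in Hz) from a
        uniformly sampled signal with time step dt in seconds."""
        x = np.asarray(signal, float) - np.mean(signal)
        acf = np.correlate(x, x, mode='full')[len(x) - 1:]
        acf /= acf[0]
        peaks = [i for i in range(1, len(acf) - 1)
                 if acf[i - 1] < acf[i] > acf[i + 1]]
        if len(peaks) < 2:
            raise ValueError("signal too short or not oscillatory")
        dr = acf[peaks[1]] / acf[peaks[0]]   # ratio of successive ACF maxima
        freq = 1.0 / (peaks[0] * dt)         # first ACF peak ~ one period
        return dr, freq

A decay ratio above 1.0, as in the Oskarshamn-2 event, indicates growing (diverging) oscillations.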
District Heating Systems Performance Analyses. Heat Energy Tariff
NASA Astrophysics Data System (ADS)
Ziemele, Jelena; Vigants, Girts; Vitolins, Valdis; Blumberga, Dagnija; Veidenbergs, Ivars
2014-12-01
The paper addresses an important element of the European energy sector: the evaluation of district heating (DH) system operations from the standpoint of increasing energy efficiency and increasing the use of renewable energy resources. This has been done by developing a new methodology for the evaluation of the heat tariff. The paper presents an algorithm of this methodology, which includes not only a database and calculation equation systems, but also an integrated multi-criteria analysis module using MADM/MCDM (Multi-Attribute Decision Making / Multi-Criteria Decision Making) based on TOPSIS (Technique for Order Preference by Similarity to Ideal Solution). The results of the multi-criteria analysis are used to set the tariff benchmarks. The evaluation methodology has been tested on Latvian heat tariffs, and the obtained results show that only half of the heating companies reach the benchmark value of 0.5 for the closeness-to-ideal-solution efficiency indicator. This means that the proposed evaluation methodology would allow companies not only to determine how they perform against the proposed benchmark, but also to identify their need to restructure so that they may reach the level of a low-carbon business.
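The TOPSIS step of such a methodology can be illustrated generically: alternatives (here, DH companies) are scored by their relative closeness to an ideal solution across weighted criteria. The sketch below is a textbook TOPSIS implementation, not the paper's module; weights and criterion directions are inputs chosen by the analyst.

    import numpy as np

    def topsis_scores(matrix, weights, benefit):
        """matrix: (alternatives x criteria) decision matrix;
        weights: criterion weights; benefit[j]: True if higher is
        better for criterion j. Returns closeness to the ideal
        solution in [0, 1], where 1 is ideal."""
        m = np.asarray(matrix, float)
        w = np.asarray(weights, float) / np.sum(weights)
        v = m / np.linalg.norm(m, axis=0) * w        # weighted normalized matrix
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_pos = np.linalg.norm(v - ideal, axis=1)
        d_neg = np.linalg.norm(v - anti, axis=1)
        return d_neg / (d_pos + d_neg)

Under this convention, the paper's benchmark corresponds to a closeness score of at least 0.5.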
A web-based system architecture for ontology-based data integration in the domain of IT benchmarking
NASA Astrophysics Data System (ADS)
Pfaff, Matthias; Krcmar, Helmut
2018-03-01
In the domain of IT benchmarking (ITBM), a variety of data and information are collected. Although these data serve as the basis for business analyses, no unified semantic representation of such data yet exists. Consequently, data analysis across different distributed data sets and different benchmarks is almost impossible. This paper presents a system architecture and prototypical implementation for an integrated data management of distributed databases based on a domain-specific ontology. To preserve the semantic meaning of the data, the ITBM ontology is linked to data sources and functions as the central concept for database access. Thus, additional databases can be integrated by linking them to this domain-specific ontology and are directly available for further business analyses. Moreover, the web-based system supports the process of mapping ontology concepts to external databases by introducing a semi-automatic mapping recommender and by visualizing possible mapping candidates. The system also provides a natural language interface to easily query linked databases. The expected result of this ontology-based approach of knowledge representation and data access is an increase in knowledge and data sharing in this domain, which will enhance existing business analysis methods.
A novel hybrid meta-heuristic technique applied to the well-known benchmark optimization problems
NASA Astrophysics Data System (ADS)
Abtahi, Amir-Reza; Bijari, Afsane
2017-03-01
In this paper, a hybrid meta-heuristic algorithm, based on imperialistic competition algorithm (ICA), harmony search (HS), and simulated annealing (SA) is presented. The body of the proposed hybrid algorithm is based on ICA. The proposed hybrid algorithm inherits the advantages of the process of harmony creation in HS algorithm to improve the exploitation phase of the ICA algorithm. In addition, the proposed hybrid algorithm uses SA to make a balance between exploration and exploitation phases. The proposed hybrid algorithm is compared with several meta-heuristic methods, including genetic algorithm (GA), HS, and ICA on several well-known benchmark instances. The comprehensive experiments and statistical analysis on standard benchmark functions certify the superiority of the proposed method over the other algorithms. The efficacy of the proposed hybrid algorithm is promising and can be used in several real-life engineering and management problems.
Molinos-Senante, María; Donoso, Guillermo; Sala-Garrido, Ramon; Villegas, Andrés
2018-03-01
Benchmarking the efficiency of water companies is essential to set water tariffs and to promote their sustainability. Most previous studies have applied conventional data envelopment analysis (DEA) models. However, conventional DEA is deterministic and does not allow the identification of environmental factors that influence efficiency scores. To overcome this limitation, this paper evaluates the efficiency of a sample of Chilean water and sewerage companies by applying a double-bootstrap DEA model. Results show that the ranking of water and sewerage companies changes notably depending on whether efficiency scores are computed with conventional or double-bootstrap DEA models. Moreover, the percentage of non-revenue water and customer density were found to influence the efficiency of Chilean water and sewerage companies. This paper illustrates the importance of using a robust and reliable method to increase the relevance of benchmarking tools.
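For orientation, the conventional input-oriented CCR DEA score that the paper contrasts against can be computed with a small linear program per company; a sketch with toy data follows (the Simar-Wilson double bootstrap adds resampling layers around this core and is not shown):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency for each DMU (e.g. water company).

    X : (n, m) inputs, Y : (n, s) outputs; rows are DMUs.
    Returns the efficiency score theta in (0, 1] for every DMU.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for k in range(n):
        # Decision variables: [theta, lambda_1 .. lambda_n]
        c = np.zeros(1 + n)
        c[0] = 1.0                               # minimise theta
        A_ub, b_ub = [], []
        for i in range(m):                       # sum_j lam_j * x_ij <= theta * x_ik
            A_ub.append(np.r_[-X[k, i], X[:, i]])
            b_ub.append(0.0)
        for r in range(s):                       # sum_j lam_j * y_rj >= y_rk
            A_ub.append(np.r_[0.0, -Y[:, r]])
            b_ub.append(-Y[k, r])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(None, None)] + [(0, None)] * n)
        scores[k] = res.fun
    return scores

# Toy example: 4 companies, inputs = (staff, network km), output = water delivered.
X = np.array([[20, 150], [35, 300], [15, 120], [40, 280]], dtype=float)
Y = np.array([[100], [180], [90], [160]], dtype=float)
print(dea_ccr_input(X, Y))
```

A score of 1 marks a company on the efficient frontier; lower scores indicate the proportional input reduction that an efficient peer combination would achieve.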
Insights Gained from Forensic Analysis with MELCOR of the Fukushima-Daiichi Accidents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Nathan C.; Gauntt, Randall O.
Since the accidents at Fukushima-Daiichi, Sandia National Laboratories has been modeling these accident scenarios using the severe accident analysis code MELCOR. MELCOR is a widely used computer code developed at Sandia National Laboratories since ~1982 for the U.S. Nuclear Regulatory Commission. Insights from the modeling of these accidents are being used to better inform future code development and potentially improve accident management. To date, the need to better capture in-vessel thermal-hydraulics and ex-vessel melt coolability and concrete interactions has led to the implementation of new models. The most recent analyses, presented in this paper, have been in support of the Organization for Economic Cooperation and Development Nuclear Energy Agency's (OECD/NEA) Benchmark Study of the Accident at the Fukushima Daiichi Nuclear Power Station (BSAF) Project. The goal of this project is to accurately capture the source term from all three releases and then model the atmospheric dispersion. In order to do this, a forensic approach is being used in which available plant data and release timings inform the modeled MELCOR accident scenario. For example, containment failures, core slumping events, and lower head failure timings are all enforced parameters in these analyses. This approach is fundamentally different from the blind code assessment analysis often used in standard problem exercises. The timings of these events are informed by representative spikes or decreases in plant data. The combination of improvements to the MELCOR source code resulting from previous accident analyses and this forensic approach has allowed Sandia to generate representative and plausible source terms for all three accidents at Fukushima Daiichi out to three weeks after the accident, capturing both early and late releases. In particular, using the source terms developed by MELCOR as input to the MACCS software code, which models atmospheric dispersion and deposition, we are able to reasonably capture the deposition of radionuclides to the northwest of the reactor site.
Academic Productivity in Psychiatry: Benchmarks for the H-Index.
MacMaster, Frank P; Swansburg, Rose; Rittenbach, Katherine
2017-08-01
Bibliometrics play an increasingly critical role in the assessment of faculty for promotion and merit increases. Bibliometrics is the statistical analysis of publications, aimed at evaluating their impact. The objective of this study is to describe h-index and citation benchmarks in academic psychiatry. Faculty lists were acquired from online resources for all academic departments of psychiatry listed as having residency training programs in Canada (as of June 2016). Potential authors were then searched on Web of Science (Thomson Reuters) for their corresponding h-index and total number of citations. The sample included 1683 faculty members in academic psychiatry departments. Restricting the sample to those with a rank of assistant, associate, or full professor left 1601 faculty members (assistant = 911, associate = 387, full = 303). The h-index and total citations differed significantly by academic rank. Both were highest in the full professor rank, followed by associate, then assistant. The range in each, however, was large. This study provides the initial benchmarks for the h-index and total citations in academic psychiatry. Regardless of any controversies or criticisms of bibliometrics, they are increasingly influencing promotion, merit increases, and grant support. As such, benchmarking by specialty is needed to provide context.
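The h-index itself is simple to compute from a citation list; a minimal sketch:

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers
    with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([45, 30, 12, 8, 5, 5, 1]))  # -> 5
```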
[Benchmarking in patient identification: An opportunity to learn].
Salazar-de-la-Guerra, R M; Santotomás-Pajarrón, A; González-Prieto, V; Menéndez-Fraga, M D; Rocha Hurtado, C
To perform benchmarking of the safe identification of hospital patients in the hospitals involved in the "Club de las tres C" (Calidez, Calidad y Cuidados) in order to prepare a common procedure for this process. A descriptive study was conducted on the patient identification process in palliative care and stroke units in 5 medium-stay hospitals. The following steps were carried out: data collection from each hospital; organisation and data analysis; and preparation of a common procedure for this process. The data obtained for the safe identification of all stroke patients were: hospital 1 (93%), hospital 2 (93.1%), hospital 3 (100%), and hospital 5 (93.4%); and for the palliative care process: hospital 1 (93%), hospital 2 (92.3%), hospital 3 (92%), hospital 4 (98.3%), and hospital 5 (85.2%). The aim of the study was accomplished successfully: benchmarking activities were developed and knowledge on the patient identification process was shared. All hospitals had good results, with hospital 3 performing best in the stroke identification process. Benchmarking identification is difficult, but a useful common procedure that collects the best practices of the 5 hospitals has been prepared. Copyright © 2017 SECA. Published by Elsevier España, S.L.U. All rights reserved.
Quantitative phenotyping via deep barcode sequencing.
Smith, Andrew M; Heisler, Lawrence E; Mellor, Joseph; Kaper, Fiona; Thompson, Michael J; Chee, Mark; Roth, Frederick P; Giaever, Guri; Nislow, Corey
2009-10-01
Next-generation DNA sequencing technologies have revolutionized diverse genomics applications, including de novo genome sequencing, SNP detection, chromatin immunoprecipitation, and transcriptome analysis. Here we apply deep sequencing to genome-scale fitness profiling to evaluate yeast strain collections in parallel. This method, Barcode analysis by Sequencing, or "Bar-seq," outperforms the current benchmark barcode microarray assay in terms of both dynamic range and throughput. When applied to a complex chemogenomic assay, Bar-seq quantitatively identifies drug targets, with performance superior to the benchmark microarray assay. We also show that Bar-seq is well-suited for a multiplex format. We completely re-sequenced and re-annotated the yeast deletion collection using deep sequencing, found that approximately 20% of the barcodes and common priming sequences varied from expectation, and used this revised list of barcode sequences to improve data quality. Together, this new assay and analysis routine provide a deep-sequencing-based toolkit for identifying gene-environment interactions on a genome-wide scale.
NASA Technical Reports Server (NTRS)
Tuey, Richard C.; Moore, Fred W.; Ryan, Christine A.
1995-01-01
The report is presented in four sections. The Introduction describes the duplicating configuration under evaluation, and the Background contains a chronological description of the evaluation segmented by phases 1 and 2; this section includes the evaluation schedule, printing and duplicating requirements, storage and communication requirements, electronic publishing system configuration, existing and proposed processes, billing rates, costs and productivity analysis, and the return on investment based upon the data gathered to date. The third section contains the phase 1 comparative cost and productivity analysis, which demonstrated that LaRC should proceed with a 90-day evaluation of the DocuTech and follow with a phase 2 cycle to demonstrate that the proposed system would meet LaRC's printing and duplicating requirements. Benchmark results, cost comparisons, benchmark observations, and recommendations are documented in the final section.
NASA Technical Reports Server (NTRS)
Bell, Michael A.
1999-01-01
Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts, a foundation and infrastructure have been built which allow short-duration benchmarking studies yielding results, gleaned from world-class partners, that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.
A framework to analyze the stochastic harmonics and resonance of wind energy grid interconnection
Cho, Youngho; Lee, Choongman; Hur, Kyeon; ...
2016-08-31
This study addresses a modeling and analysis methodology for investigating the stochastic harmonics and resonance concerns of wind power plants (WPPs). Wideband harmonics from modern wind turbines are observed to be stochastic, associated with real power production, and they may adversely interact with the grid impedance and cause unexpected harmonic resonance if not comprehensively addressed in the planning and commissioning of the WPPs. These issues should become more critical as wind penetration levels increase. We thus propose a planning study framework comprising the following functional steps: First, the best-fitted probability density functions (PDFs) of the harmonic components of interest in the frequency domain are determined. In operations planning, maximum likelihood estimations followed by a chi-square test are used once field measurements or manufacturers' data are available. Second, harmonic currents from the WPP are represented by randomly-generating harmonic components based on their PDFs (frequency spectrum) and then synthesized for time-domain simulations via inverse Fourier transform. Finally, we conduct a comprehensive assessment by including the impacts of feeder configurations, harmonic filters, and the variability of parameters. We demonstrate the efficacy of the proposed study approach for a 100-MW offshore WPP consisting of 20 units of 5-MW full-converter turbines, a realistic benchmark system adapted from a WPP under development in Korea, and discuss lessons learned through this research.
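The first two functional steps, drawing harmonic magnitudes from fitted PDFs and synthesizing a time-domain waveform via an inverse Fourier transform, can be sketched as follows; the fundamental frequency, per-harmonic lognormal parameters and uniform-phase assumption are illustrative, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)

f0 = 50.0                  # fundamental grid frequency [Hz] (assumed)
fs = 10_000.0              # sampling rate for the time-domain waveform
n = int(fs / f0) * 50      # 50 fundamental cycles

# Step 1 (assumed fits): per-harmonic magnitude PDFs, here lognormal
# (mu, sigma) parameters in per-unit of the fundamental current.
harmonic_pdfs = {5: (-3.2, 0.4), 7: (-3.6, 0.5), 11: (-4.1, 0.6), 13: (-4.3, 0.6)}

# Step 2: randomly generate one harmonic "draw" and synthesise the waveform.
spectrum = np.zeros(n // 2 + 1, dtype=complex)
bin_per_hz = n / fs
spectrum[int(round(f0 * bin_per_hz))] = 1.0            # fundamental, 1 pu
for h, (mu, sigma) in harmonic_pdfs.items():
    mag = rng.lognormal(mu, sigma)
    phase = rng.uniform(0, 2 * np.pi)                  # phase assumed uniform
    spectrum[int(round(h * f0 * bin_per_hz))] = mag * np.exp(1j * phase)

# Scale the half-spectrum so each bin value equals the cosine amplitude,
# then invert; irfft returns a real waveform for time-domain simulation.
i_t = np.fft.irfft(spectrum * n / 2, n=n)
print(i_t[:5])
```

Repeating the draw many times yields an ensemble of injected waveforms whose interaction with the grid impedance can then be screened for resonance.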
NASA Astrophysics Data System (ADS)
Ruşitoru, Mihaela-Viorica
2015-10-01
Educational policies in the European Union: from intergovernmentalism to integration? A survey conducted among European officials - Officially, education remains a national competence of the Member States of the European Union. However, in the context of Europeanisation, policy changes are taking place in education. In this article, the author argues that, at the dawn of the third millennium, educational policies in the European Union are shifting from intergovernmentalism to integration. The European Qualifications Framework, the key competencies for lifelong education and training, and the benchmark criteria set out in two European strategies - Lisbon and Europe 2020 - attest to a real change in the field of educational policies. The author conducted interviews with officials from various European institutions, including the Commission, the Parliament and the Council, in order to compare their testimonies to the official discourse on education policies. The qualitative analysis of the interviews reveals that the principles of subsidiarity and neutrality have been called into question since the introduction of the open method of coordination. In contradiction with the legal framework and the official discourse, it would appear that, due to the growing influence of the European Union in education policy, the objective of reaching a common education policy in the Member States could become a reality in the coming decades.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-13
... FEDERAL TRADE COMMISSION [File No. 102 3094] Franklin Budget Car Sales, Inc.; Analysis of Proposed Consent Order To Aid Public Comment AGENCY: Federal Trade Commission. ACTION: Proposed Consent Agreement... from Franklin's Budget Car Sales, Inc., also doing business as Franklin Toyota/Scion (``Franklin Toyota...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-16
... FEDERAL TRADE COMMISSION [File No. 112 3195] Filiquarian Publishing, LLC; Choice Level, LLC; and Joshua Linsk; Analysis of Proposed Consent Order To Aid Public Comment AGENCY: Federal Trade Commission... approval, an agreement containing a consent order from Filiquarian Publishing, LLC; Choice Level, LLC; and...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-15
... referrals to the Office of General Counsel from the Commission's Reports Analysis Division or Audit Division... hearing before the Commission prior to the Commission's adoption of a Final Audit Report,\\11\\ and (3) a.../2009/notice_2009-11.pdf . \\11\\ See Procedural Rules for Audit Hearings, 74 FR 33140 (July 10, 2009...
The Effects of Discrete-Trial Training Commission Errors on Learner Outcomes: An Extension
ERIC Educational Resources Information Center
Jenkins, Sarah R.; Hirst, Jason M.; DiGennaro Reed, Florence D.
2015-01-01
We conducted a parametric analysis of treatment integrity errors during discrete-trial training and investigated the effects of three integrity conditions (0%, 50%, or 100% errors of commission) on performance in the presence and absence of programmed errors. The presence of commission errors impaired acquisition for three of four participants.…
Chemical Facility Preparedness: A Comprehensive Approach
2006-09-01
Continuous benchmarking against best practices will be a necessity. In order to identify strategic issues for DHS, a Strength, Weakness, Opportunity and Threat (SWOT) analysis is necessary. Conducting this kind of…
Fuel Cell Technology Status Analysis Project: Partnership Opportunities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fact sheet describing the National Renewable Energy Laboratory's (NREL's) Fuel Cell Technology Status Analysis Project. NREL is seeking fuel cell industry partners from the United States and abroad to participate in an objective and credible analysis of commercially available fuel cell products to benchmark the current state of the technology and support industry growth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W. II; Tsao, C.L.
1996-06-01
This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. This report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. The report also updates benchmark values where appropriate, adds new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
Benchmarking in emergency health systems.
Kennedy, Marcus P; Allen, Jacqueline; Allen, Greg
2002-12-01
This paper discusses the role of benchmarking as a component of quality management. It describes the historical background of benchmarking, its competitive origin and the requirement in today's health environment for a more collaborative approach. The classical 'functional and generic' types of benchmarking are discussed, with a suggestion to adopt a different terminology that describes the purpose and practicalities of benchmarking. Benchmarking is not without risks: the consequences of inappropriate focus and the need for a balanced overview of process are explored. The competition that is intrinsic to benchmarking is questioned, and the negative impact it may have on improvement strategies in poorly performing organizations is recognized. The difficulty in achieving cross-organizational validity in benchmarking is emphasized, as is the need to scrutinize benchmarking measures. The cost effectiveness of benchmarking projects is questioned, and the concept of 'best value, best practice' in an environment of fixed resources is examined.
Quality Assessment of College Admissions Processes.
ERIC Educational Resources Information Center
Fisher, Caroline; Weymann, Elizabeth; Todd, Amy
2000-01-01
This study evaluated the admissions process for a Master's in Business Administration Program using such quality improvement techniques as customer surveys, benchmarking, and gap analysis. Analysis revealed that student dissatisfaction with the admissions process may be a factor influencing declining enrollment. Cycle time and number of student…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maslenikov, O.R.; Mraz, M.J.; Johnson, J.J.
1986-03-01
This report documents the seismic analyses performed by SMA for the MFTF-B Axicell vacuum vessel. In the course of this study we performed response spectrum analyses, CLASSI fixed-base analyses, and SSI analyses that included interaction effects between the vessel and vault. The response spectrum analysis served to benchmark certain modeling differences between the LLNL and SMA versions of the vessel model. The fixed-base analysis benchmarked the differences between analysis techniques. The SSI analyses provided our best estimate of vessel response to the postulated seismic excitation for the MFTF-B facility, and included consideration of uncertainties in soil properties by calculating response for a range of soil shear moduli. Our results are presented in this report as tables of comparisons of specific member forces from our analyses and the analyses performed by LLNL. Also presented are tables of maximum accelerations and relative displacements, and plots of response spectra at various selected locations.
NASA Technical Reports Server (NTRS)
Waszak, Martin R.; Fung, Jimmy
1998-01-01
This report describes the development of transfer function models for the trailing-edge and upper and lower spoiler actuators of the Benchmark Active Control Technology (BACT) wind tunnel model for application to control system analysis and design. A simple nonlinear least-squares parameter estimation approach is applied to determine transfer function parameters from frequency response data; unconstrained quasi-Newton minimization of weighted frequency response error was employed to estimate the parameters. An analysis of actuator behavior over time, using the transfer function models to assess the effects of wear and aerodynamic load, is also presented. The frequency responses indicate consistent actuator behavior throughout the wind tunnel test and only slight degradation in effectiveness due to aerodynamic hinge loading. The resulting actuator models have been used in the design, analysis, and simulation of controllers for the BACT to successfully suppress flutter over a wide range of conditions.
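A sketch of estimating second-order transfer function parameters from frequency response data by least squares; the report used quasi-Newton minimization of weighted frequency response error, whereas this stand-in uses SciPy's trust-region least-squares routine on synthetic data, with assumed parameter values:

```python
import numpy as np
from scipy.optimize import least_squares

def h2(params, w):
    """Second-order actuator transfer function evaluated at frequencies w [rad/s]."""
    wn, zeta, gain = params
    s = 1j * w
    return gain * wn**2 / (s**2 + 2 * zeta * wn * s + wn**2)

def residuals(params, w, h_meas, weight):
    # Stack real and imaginary parts so the complex fit error is real-valued.
    err = h2(params, w) - h_meas
    return weight * np.r_[err.real, err.imag]

# Synthetic "measured" frequency response standing in for tunnel data.
w = 2 * np.pi * np.linspace(1, 40, 60)             # rad/s
true = (2 * np.pi * 21.0, 0.6, 1.0)                # wn, zeta, gain (assumed)
h_meas = h2(true, w) * (1 + 0.02 * np.random.default_rng(2).standard_normal(len(w)))

fit = least_squares(residuals, x0=[100.0, 0.5, 1.0], args=(w, h_meas, 1.0))
wn, zeta, gain = fit.x
print(f"wn = {wn / 2 / np.pi:.1f} Hz, zeta = {zeta:.2f}, gain = {gain:.2f}")
```

Refitting the same model to responses measured at different points in the test is what allows actuator wear and load effects to be tracked over time.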
Space station operating system study
NASA Technical Reports Server (NTRS)
Horn, Albert E.; Harwell, Morris C.
1988-01-01
The current phase of the Space Station Operating System study is based on the analysis, evaluation, and comparison of the operating systems implemented on the computer systems and workstations in the software development laboratory. Primary emphasis has been placed on the DEC MicroVMS operating system as implemented on the MicroVax II computer, with comparative analysis of the SUN UNIX system on the SUN 3/260 workstation computer, and to a limited extent, the IBM PC/AT microcomputer running PC-DOS. Some benchmark development and testing was also done for the Motorola MC68010 (VM03 system) before the system was taken from the laboratory. These systems were studied with the objective of determining their capability to support Space Station software development requirements, specifically for multi-tasking and real-time applications. The methodology utilized consisted of development, execution, and analysis of benchmark programs and test software, and the experimentation and analysis of specific features of the system or compilers in the study.
NASA Technical Reports Server (NTRS)
Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)
1993-01-01
A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
On the sensitivity of TG-119 and IROC credentialing to TPS commissioning errors.
McVicker, Drew; Yin, Fang-Fang; Adamson, Justus D
2016-01-08
We investigate the sensitivity of IMRT commissioning using the TG-119 C-shape phantom, and of credentialing with the IROC head and neck phantom, to treatment planning system commissioning errors. We introduced errors into the various aspects of the commissioning process for a 6X photon energy modeled using the analytical anisotropic algorithm within a commercial treatment planning system. Errors were implemented in the various components of the dose calculation algorithm, including primary photons, secondary photons, electron contamination, and MLC parameters. For each error we evaluated the probability that it could be committed unknowingly during the dose algorithm commissioning stage, and the probability of it being identified during the verification stage. The clinical impact of each commissioning error was evaluated using representative IMRT plans including low- and intermediate-risk prostate, head and neck, mesothelioma, and scalp; the sensitivity of the TG-119 and IROC phantoms was evaluated by comparing dosimetric changes to the dose planes where film measurements occur and changes in point doses where dosimeter measurements occur. No commissioning errors were found to have both a low probability of detection and high clinical severity. When errors do occur, the IROC credentialing and TG-119 commissioning criteria are generally effective at detecting them; however, for the IROC phantom, OAR point-dose measurements are the most sensitive despite being currently excluded from IROC analysis. Point-dose measurements with an absolute dose constraint were the most effective at detecting errors, while film analysis using a gamma comparison and the IROC film distance-to-agreement criteria were less effective at detecting the specific commissioning errors implemented here.
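The film comparisons mentioned rely on the gamma index, which combines a dose-difference and a distance-to-agreement criterion. A minimal 1-D global gamma sketch on synthetic profiles (clinical implementations work in 2-D or 3-D with interpolation):

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, x, dta_mm=3.0, dd_frac=0.03):
    """1-D global gamma index: for each reference point, the minimum over
    the evaluated profile of sqrt((dx/DTA)^2 + (dD/DD)^2). gamma <= 1 passes."""
    dd_abs = dd_frac * dose_ref.max()            # global dose-difference norm
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dx = (x - xi) / dta_mm
        dD = (dose_eval - di) / dd_abs
        gam[i] = np.sqrt(dx**2 + dD**2).min()
    return gam

x = np.linspace(-50, 50, 201)                    # mm
ref = np.exp(-(x / 30.0) ** 4)                   # synthetic reference profile
ev = np.exp(-((x - 1.0) / 30.0) ** 4) * 1.01     # 1 mm shift, +1% dose
g = gamma_1d(ref, ev, x)
print(f"pass rate: {100 * (g <= 1).mean():.1f}% (3%/3 mm)")
```

The study's observation that gamma-based film analysis is less sensitive than absolute point doses follows from this construction: small systematic dose shifts can stay within the combined tolerance ellipse everywhere.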
Benchmarking and Performance Measurement.
ERIC Educational Resources Information Center
Town, J. Stephen
This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…
HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.
2015-05-01
This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.
Navier-Stokes analysis of a liquid rocket engine disk cavity
NASA Technical Reports Server (NTRS)
Benjamin, Theodore G.; Mcconnaughey, Paul K.
1991-01-01
This paper presents a Navier-Stokes analysis of hydrodynamic phenomena occurring in the aft disk cavity of a liquid rocket engine turbine. The cavity analyzed is in the Space Shuttle Main Engine Alternate Turbopump currently being developed by NASA and Pratt and Whitney. Comparisons of results obtained from the Navier-Stokes code for two rotating-disk datasets available in the literature are presented as benchmark validations. The benchmark results obtained using the code show good agreement with experimental data, and the turbine disk cavity was analyzed with comparable grid resolution, dissipation levels, and turbulence models. Predicted temperatures show that little mixing of hot and cold fluid occurs in the cavity and that the flow is dominated by swirl and by pumping up the rotating disk.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W., II
1993-01-01
One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
The KMAT: Benchmarking Knowledge Management.
ERIC Educational Resources Information Center
de Jager, Martha
Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kydonieos, M; Folgueras, A; Florescu, L
2016-06-15
Purpose: Elekta recently developed a solution for in-vivo EPID dosimetry (iViewDose, Elekta AB, Stockholm, Sweden) in conjunction with the Netherlands Cancer Institute (NKI). This uses a simplified commissioning approach via Template Commissioning Models (TCMs), consisting of a subset of linac-independent pre-defined parameters. This work compares the performance of iViewDose using a TCM commissioning approach with that corresponding to full commissioning. Additionally, the dose reconstruction based on the simplified commissioning approach is validated via independent dose measurements. Methods: Measurements were performed at the NKI on a VersaHD™ (Elekta AB, Stockholm, Sweden). Treatment plans were generated with Pinnacle 9.8 (Philips Medical Systems, Eindhoven, The Netherlands). A Farmer chamber dose measurement and two EPID images were used to create a linac-specific commissioning model based on a TCM. A complete set of commissioning measurements was collected and a full commissioning model was created. The performance of iViewDose based on the two commissioning approaches was compared via a series of set-to-work tests in a slab phantom. In these tests, iViewDose reconstructs and compares EPID to TPS dose for square fields, IMRT and VMAT plans via global gamma analysis and isocentre dose difference. A clinical VMAT plan was delivered to a homogeneous Octavius 4D phantom (PTW, Freiburg, Germany). Dose was measured with the Octavius 1500 array and VeriSoft software was used for 3D dose reconstruction. EPID images were acquired. TCM-based iViewDose and 3D Octavius dose distributions were compared against the TPS. Results: For both the TCM-based and the full commissioning approaches, the pass rate, mean γ and dose difference were >97%, <0.5 and <2.5%, respectively. Equivalent gamma analysis results were obtained for iViewDose (TCM approach) and Octavius for a VMAT plan. Conclusion: iViewDose produces similar results with the simplified and full commissioning approaches. Good agreement is obtained between iViewDose (simplified approach) and the independent measurement tool. This research is funded by Elekta Limited.
NASA Technical Reports Server (NTRS)
Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.
1991-01-01
A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
CatReg Software for Categorical Regression Analysis (May 2016)
CatReg 3.0 is a Microsoft Windows enhanced version of the Agency’s categorical regression analysis (CatReg) program. CatReg complements EPA’s existing Benchmark Dose Software (BMDS) by greatly enhancing a risk assessor’s ability to determine whether data from separate toxicologic...
User-Centric Approach for Benchmark RDF Data Generator in Big Data Performance Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purohit, Sumit; Paulson, Patrick R.; Rodriguez, Luke R.
This research focuses on a user-centric approach to building such tools and proposes a flexible, extensible, and easy-to-use framework to support performance analysis of Big Data systems. Finally, case studies from two different domains are presented to validate the framework.
A health risk benchmark for the neurologic effects of styrene: comparison with NOAEL/LOAEL approach.
Rabovsky, J; Fowles, J; Hill, M D; Lewis, D C
2001-02-01
Benchmark dose (BMD) analysis was used to estimate an inhalation benchmark concentration for styrene neurotoxicity. Quantal data on neuropsychologic test results from styrene-exposed workers [Mutti et al. (1984). American Journal of Industrial Medicine, 5, 275-286] were used to quantify neurotoxicity, defined as the percent of tested workers who responded abnormally to > or = 1, > or = 2, or > or = 3 out of a battery of eight tests. Exposure was based on previously published results on mean urinary mandelic- and phenylglyoxylic acid levels in the workers, converted to air styrene levels (15, 44, 74, or 115 ppm). Nonstyrene-exposed workers from the same region served as a control group. Maximum-likelihood estimates (MLEs) and BMDs at 5 and 10% response levels of the exposed population were obtained from log-normal analysis of the quantal data. The highest MLE was 9 ppm (BMD = 4 ppm) styrene and represents abnormal responses to > or = 3 tests by 10% of the exposed population. The most health-protective MLE was 2 ppm styrene (BMD = 0.3 ppm) and represents abnormal responses to > or = 1 test by 5% of the exposed population. A no observed adverse effect level/lowest observed adverse effect level (NOAEL/LOAEL) analysis of the same quantal data showed workers in all styrene exposure groups responded abnormally to > or = 1, > or = 2, or > or = 3 tests, compared to controls, and the LOAEL was 15 ppm. A comparison of the BMD and NOAEL/LOAEL analyses suggests that at air styrene levels below the LOAEL, a segment of the worker population may be adversely affected. The benchmark approach will be useful for styrene noncancer risk assessment purposes by providing a more accurate estimate of potential risk that should, in turn, help to reduce the uncertainty that is a common problem in setting exposure levels.
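A sketch of the log-normal quantal fit underlying such a BMD analysis; the dose-group data here are hypothetical stand-ins (not Mutti et al.'s values), background response is ignored, and a BMDL would further require a profile-likelihood or bootstrap lower bound:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical quantal data in the style of the study: exposure groups
# (ppm styrene), workers tested, workers responding abnormally.
dose = np.array([15.0, 44.0, 74.0, 115.0])
n = np.array([12, 12, 12, 14])
k = np.array([3, 5, 8, 12])

def nll(params):
    """Binomial negative log-likelihood for a log-normal (log-probit)
    dose-response: P(d) = Phi((ln d - mu) / sigma)."""
    mu, log_sigma = params
    p = norm.cdf((np.log(dose) - mu) / np.exp(log_sigma))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

fit = minimize(nll, x0=[np.log(60.0), 0.0], method="Nelder-Mead")
mu, sigma = fit.x[0], np.exp(fit.x[1])

# MLE of the dose at a 10% response level (the BMD point estimate).
bmd10 = np.exp(mu + sigma * norm.ppf(0.10))
print(f"MLE dose at 10% response: {bmd10:.1f} ppm")
```

Unlike a NOAEL/LOAEL reading, which is tied to the tested dose groups, this fit interpolates below the lowest tested exposure, which is how the study obtains estimates well under the 15 ppm LOAEL.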
Benchmark Shock Tube Experiments for Radiative Heating Relevant to Earth Re-Entry
NASA Technical Reports Server (NTRS)
Brandis, A. M.; Cruden, B. A.
2017-01-01
Detailed spectrally and spatially resolved radiance has been measured in the Electric Arc Shock Tube (EAST) facility for conditions relevant to high speed entry into a variety of atmospheres, including Earth, Venus, Titan, Mars and the Outer Planets. The tests that measured radiation relevant for Earth re-entry are the focus of this work and are taken from campaigns 47, 50, 52 and 57. These tests covered conditions from 8 km/s to 15.5 km/s at initial pressures ranging from 0.05 Torr to 1 Torr, of which shots at 0.1 and 0.2 Torr are analyzed in this paper. These conditions cover a range of points of interest for potential flight missions, including return from Low Earth Orbit, the Moon and Mars. The large volume of testing available from EAST is useful for statistical analysis of radiation data, but is problematic for identifying representative experiments for performing detailed analysis. Therefore, the intent of this paper is to select a subset of benchmark test data that can be considered for further detailed study. These benchmark shots are intended to provide more accessible data sets for future code validation studies and facility-to-facility comparisons. The shots that have been selected as benchmark data are the ones in closest agreement with a line of best fit through all of the EAST results, whilst also showing the best experimental characteristics, such as test time and convergence to equilibrium. The EAST data are presented in different formats for analysis. These data include the spectral radiance at equilibrium, the spatial dependence of radiance over defined wavelength ranges and the mean non-equilibrium spectral radiance (the so-called 'spectral non-equilibrium metric'). All the information needed to simulate each experimental trace, including free-stream conditions, shock time of arrival (i.e. x-t) relation, and the spectral and spatial resolution functions, are provided.
42 CFR 440.335 - Benchmark-equivalent health benefits coverage.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...
42 CFR 440.335 - Benchmark-equivalent health benefits coverage.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...
Impacts of Modeled Recommendations of the National Commission on Energy Policy
2005-01-01
This report provides the Energy Information Administration's analysis of those National Commission on Energy Policy (NCEP) energy policy recommendations that could be simulated using the National Energy Modeling System (NEMS).
42 CFR 440.330 - Benchmark health benefits coverage.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...
Yiadom, Maame Yaa A B; Scheulen, James; McWade, Conor M; Augustine, James J
2016-07-01
The objective was to obtain a commitment to adopt a common set of definitions for emergency department (ED) demographic, clinical process, and performance metrics among the ED Benchmarking Alliance (EDBA), ED Operations Study Group (EDOSG), and Academy of Academic Administrators of Emergency Medicine (AAAEM) by 2017. A retrospective cross-sectional analysis of available data from three ED operations benchmarking organizations supported a negotiation to use a set of common metrics with identical definitions. During a 1.5-day meeting, structured according to social change theories of information exchange, self-interest, and interdependence, common definitions were identified and negotiated using the EDBA's published definitions as a starting point for discussion. Methods of process analysis theory were used in the 8 weeks following the meeting to achieve official consensus on definitions. These two lists were submitted to the organizations' leadership for implementation approval. A total of 374 unique measures were identified, of which 57 (15%) were shared by at least two organizations. Fourteen (4%) were common to all three organizations. In addition to agreement on definitions for the 14 measures used by all three organizations, agreement was reached on universal definitions for 17 of the 57 measures shared by at least two organizations. The negotiation outcome was a list of 31 measures with universal definitions to be adopted by each organization by 2017. The use of negotiation, social change, and process analysis theories achieved the adoption of universal definitions among the EDBA, EDOSG, and AAAEM. This will impact performance benchmarking for nearly half of US EDs. It initiates a formal commitment to utilize standardized metrics, and it moves consistency in reporting ED operations metrics from consensus to implementation. This work advances our ability to more accurately characterize variation in ED care delivery models, resource utilization, and performance. In addition, it permits future aggregation of these three data sets, thus facilitating the creation of more robust ED operations research data sets unified by a universal language. Negotiation, social change, and process analysis principles can be used to advance the adoption of additional definitions. © 2016 by the Society for Academic Emergency Medicine.
SPACE PROPULSION SYSTEM PHASED-MISSION PROBABILITY ANALYSIS USING CONVENTIONAL PRA METHODS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis Smith; James Knudsen
As part of a series of papers on the topic of advanced probabilistic methods, a benchmark phased-mission problem has been suggested. This problem consists of modeling a space mission using an ion propulsion system, where the mission consists of seven mission phases. The mission requires that the propulsion operate for several phases, where the configuration changes as a function of phase. The ion propulsion system itself consists of five thruster assemblies and a single propellant supply, where each thruster assembly has one propulsion power unit and two ion engines. In this paper, we evaluate the probability of mission failure using the conventional methodology of event tree/fault tree analysis. The event tree and fault trees are developed and analyzed using Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE). While the benchmark problem is nominally a "dynamic" problem, in our analysis the mission phases are modeled in a single event tree to show the progression from one phase to the next. The propulsion system is modeled in fault trees to account for the operation; or in this case, the failure of the system. Specifically, the propulsion system is decomposed into each of the five thruster assemblies and fed into the appropriate N-out-of-M gate to evaluate mission failure. A separate fault tree for the propulsion system is developed to account for the different success criteria of each mission phase. Common-cause failure modeling is treated using traditional (i.e., parametric) methods. As part of this paper, we discuss the overall results in addition to the positive and negative aspects of modeling dynamic situations with non-dynamic modeling techniques. One insight from the use of this conventional method for analyzing the benchmark problem is that it requires significant manual manipulation of the fault trees and how they are linked into the event tree. The conventional method also requires editing the resultant cut sets to obtain the correct results. While conventional methods may be used to evaluate a dynamic system like that in the benchmark, the level of effort required may preclude its use on real-world problems.
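The N-out-of-M evaluation at the heart of this model reduces, under an independence assumption, to simple binomial arithmetic. A sketch with hypothetical reliabilities and success criteria; the benchmark's actual values, phase structure and common-cause treatment are not reproduced here:

```python
from math import comb

def at_least_n_of_m(p_work, n, m):
    """Probability that at least n of m identical, independent assemblies work."""
    return sum(comb(m, j) * p_work**j * (1 - p_work)**(m - j)
               for j in range(n, m + 1))

def assembly_reliability(p_ppu, p_engine):
    # Assumed structure: an assembly works if its power unit works
    # and at least one of its two ion engines works.
    return p_ppu * (1 - (1 - p_engine) ** 2)

# Hypothetical per-phase component reliabilities and success criteria
# (N assemblies required out of 5); the real benchmark values differ.
phases = [  # (p_ppu, p_engine, n_required)
    (0.999, 0.995, 3),
    (0.995, 0.990, 4),
    (0.990, 0.985, 3),
]

p_mission = 1.0
for p_ppu, p_eng, n_req in phases:
    p_asm = assembly_reliability(p_ppu, p_eng)
    p_mission *= at_least_n_of_m(p_asm, n_req, 5)

print(f"P(mission failure) ~ {1 - p_mission:.2e}  (independence assumed, no CCF)")
```

The paper's difficulty with cut-set editing arises precisely because the fault tree/event tree machinery must reproduce this phase-by-phase logic, including inter-phase dependencies that the simple product above ignores.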
Krige, Jake E; Jonas, Eduard; Thomson, Sandie R; Kotze, Urda K; Setshedi, Mashiko; Navsaria, Pradeep H; Nicol, Andrew J
2017-01-01
AIM To benchmark severity of complications using the Accordion Severity Grading System (ASGS) in patients undergoing operation for severe pancreatic injuries. METHODS A prospective institutional database of 461 patients with pancreatic injuries treated from 1990 to 2015 was reviewed. One hundred and thirty patients with AAST grade 3, 4 or 5 pancreatic injuries underwent resection (pancreatoduodenectomy, n = 20, distal pancreatectomy, n = 110), including 30 who had an initial damage control laparotomy (DCL) and later definitive surgery. AAST injury grades, type of pancreatic resection, need for DCL and incidence and ASGS severity of complications were assessed. Uni- and multivariate logistic regression analysis was applied. RESULTS Overall 238 complications occurred in 95 (73%) patients of which 73% were ASGS grades 3-6. Nineteen patients (14.6%) died. Patients more likely to have complications after pancreatic resection were older, had a revised trauma score (RTS) < 7.8, were shocked on admission, had grade 5 injuries of the head and neck of the pancreas with associated vascular and duodenal injuries, required a DCL, received a larger blood transfusion, had a pancreatoduodenectomy (PD) and repeat laparotomies. Applying univariate logistic regression analysis, mechanism of injury, RTS < 7.8, shock on admission, DCL, increasing AAST grade and type of pancreatic resection were significant variables for complications. Multivariate logistic regression analysis however showed that only age and type of pancreatic resection (PD) were significant. CONCLUSION This ASGS-based study benchmarked postoperative morbidity after pancreatic resection for trauma. The detailed outcome analysis provided may serve as a reference for future institutional comparisons. PMID:28396721
Identifying Peer Institutions Using Cluster Analysis
ERIC Educational Resources Information Center
Boronico, Jess; Choksi, Shail S.
2012-01-01
The New York Institute of Technology's (NYIT) School of Management (SOM) wishes to develop a list of peer institutions for the purpose of benchmarking and monitoring/improving performance against other business schools. The procedure utilizes relevant criteria for the purpose of establishing this peer group by way of a cluster analysis. The…
Treating technology as a luxury? 10 necessary tools.
Berger, Steven H
2007-02-01
Technology and techniques that every hospital should acquire and use for effective financial management include: Daily dashboards. Balanced scorecards. Benchmarking. Flexible budgeting and monitoring. Labor management systems. Nonlabor management analysis. Service-line, physician, and patient-level reporting and analysis. Cost accounting technology. Contract management technology. Denials management software.
Trend Analysis on Mathematics Achievements: A Comparative Study Using TIMSS Data
ERIC Educational Resources Information Center
Ker, H. W.
2013-01-01
Research has addressed the importance of mathematics education in preparing students to enter the scientific and technological workforce. This paper utilized Trends in International Mathematics and Science Study (TIMSS) 2011 data to conduct a global comparative analysis of mathematics performance at varied International Benchmark levels. The…
Cascades/Aleutian Play Fairway Analysis: Data and Map Files
Lisa Shevenell
2015-11-15
Contains Excel data files used to quantitatively rank the geothermal potential of each of the young volcanic centers of the Cascade and Aleutian Arcs, using world power-production volcanic centers as benchmarks. Also contains shapefiles used in the play fairway analysis, with power plant, volcano, geochemistry and structural data.
ERIC Educational Resources Information Center
Jones, Thomas B.; Larson, Olaf F.
Utilizing unpublished data derived from the Roosevelt Commission on Country Life (1908), national summary reports of a 12-item questionnaire were examined and compared with the commission's published report. For purposes of analysis, the 12 questions were grouped as: (1) economic concerns, (2) farm labor, (3) services, (4) quality of life, and (5)…
Simulations of the propagation of multiple-FM smoothing by spectral dispersion on OMEGA EP
Kelly, J. H.; Shvydky, A.; Marozas, J. A.; ...
2013-02-18
A one-dimensional (1-D) smoothing by spectral dispersion (SSD) system for smoothing focal-spot nonuniformities using multiple modulation frequencies has been commissioned on one long-pulse beamline of OMEGA EP, the first use of such a system in a high-energy laser. Frequency modulation (FM) to amplitude modulation (AM) conversion in the infrared (IR) output, frequency conversion, and final optics affected the accumulation of B-integral in that beamline. Modeling of this FM-to-AM conversion with the code Miró was used to set the beamline performance limits for picket (short) pulses with multi-FM SSD applied. This article first describes that modeling. The 1-D SSD analytical model of Chuang is first extended to the case of multiple modulators and then used to benchmark Miró simulations. Comparison is also made to an alternative analytic model developed by Hocquet et al. With the confidence engendered by this benchmarking, Miró results for multi-FM SSD applied on OMEGA EP are then presented. The relevant output section(s) of the OMEGA EP Laser System are described. The additional B-integral in OMEGA EP IR components upstream of the frequency converters due to AM is modeled. The importance of locating the image of the SSD dispersion grating at the frequency converters is demonstrated. In conclusion, since frequency conversion is not performed in OMEGA EP's target chamber, the additional AM due to propagation to the target chamber's vacuum window is modeled.
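The FM-to-AM mechanism can be illustrated numerically: a purely phase-modulated field has constant intensity, but a quadratic spectral phase (dispersion) converts the FM into intensity ripple. A sketch with illustrative modulator frequencies, depths and dispersion, not the OMEGA EP or Miró parameters:

```python
import numpy as np

fs = 2.0e12                          # sample rate [Hz]
t = np.arange(2**18) / fs

# Multi-FM phase modulation: several sinusoidal modulators (illustrative
# frequencies and depths, not the actual OMEGA EP modulator parameters).
mods = [(3.3e9, 1.5), (7.1e9, 1.0), (11.0e9, 0.7)]  # (freq [Hz], depth [rad])
phi = sum(m * np.sin(2 * np.pi * f * t) for f, m in mods)
field = np.exp(1j * phi)             # pure FM: |field| = 1, no AM yet

# Apply a quadratic spectral phase (group-velocity dispersion), which
# converts FM to AM, then measure the resulting intensity modulation.
F = np.fft.fft(field)
f_axis = np.fft.fftfreq(len(t), 1 / fs)
gdd = 2.0e-24                        # group-delay dispersion [s^2] (assumed)
field_out = np.fft.ifft(F * np.exp(-1j * 2 * np.pi**2 * gdd * f_axis**2))

intensity = np.abs(field_out) ** 2
am_depth = (intensity.max() - intensity.min()) / intensity.mean()
print(f"peak-to-mean AM after dispersion: {am_depth:.2%}")
```

This is why the article stresses imaging the dispersion grating at the frequency converters: minimizing the accumulated spectral phase at that plane minimizes the AM, and hence the extra B-integral.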
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W. II; Mabrey, J.B.
1994-07-01
This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility.
Raising Quality and Achievement. A College Guide to Benchmarking.
ERIC Educational Resources Information Center
Owen, Jane
This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…
Benchmarking in Education: Tech Prep, a Case in Point. IEE Brief Number 8.
ERIC Educational Resources Information Center
Inger, Morton
Benchmarking is a process by which organizations compare their practices, processes, and outcomes to standards of excellence in a systematic way. The benchmarking process entails the following essential steps: determining what to benchmark and establishing internal baseline data; identifying the benchmark; determining how that standard has been…
Benchmarks: The Development of a New Approach to Student Evaluation.
ERIC Educational Resources Information Center
Larter, Sylvia
The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…
NASA Astrophysics Data System (ADS)
Velioglu Sogut, Deniz; Yalciner, Ahmet Cevdet
2018-06-01
Field observations provide valuable data regarding nearshore tsunami impact, yet only in inundation areas where tsunami waves have already flooded. Therefore, tsunami modeling is essential to understand tsunami behavior and prepare for tsunami inundation. It is necessary that all numerical models used in tsunami emergency planning be subject to benchmark tests for validation and verification. This study focuses on two numerical codes, NAMI DANCE and FLOW-3D®, for validation and performance comparison. NAMI DANCE is an in-house tsunami numerical model developed by the Ocean Engineering Research Center of Middle East Technical University, Turkey, and the Laboratory of Special Research Bureau for Automation of Marine Research, Russia. FLOW-3D® is a general purpose computational fluid dynamics software, which was developed by scientists who pioneered in the design of the Volume-of-Fluid technique. The codes are validated and their performances are compared via analytical, experimental and field benchmark problems, which are documented in the "Proceedings and Results of the 2011 National Tsunami Hazard Mitigation Program (NTHMP) Model Benchmarking Workshop" and the "Proceedings and Results of the NTHMP 2015 Tsunami Current Modeling Workshop". The variations between the numerical solutions of these two models are evaluated through statistical error analysis.
Evaluation of the Pool Critical Assembly Benchmark with Explicitly-Modeled Geometry using MCNP6
Kulesza, Joel A.; Martz, Roger Lee
2017-03-01
Despite being one of the most widely used benchmarks for qualifying light water reactor (LWR) radiation transport methods and data, no benchmark calculation of the Oak Ridge National Laboratory (ORNL) Pool Critical Assembly (PCA) pressure vessel wall benchmark facility (PVWBF) using MCNP6 with explicitly modeled core geometry exists. As such, this paper provides results for such an analysis. First, a criticality calculation is used to construct the fixed source term. Next, ADVANTG-generated variance reduction parameters are used within the final MCNP6 fixed source calculations. These calculations provide unadjusted dosimetry results using three sets of dosimetry reaction cross sections of varying ages (those packaged with MCNP6, from the IRDF-2002 multi-group library, and from the ACE-formatted IRDFF v1.05 library). These results are then compared to two different sets of measured reaction rates. The comparison agrees in an overall sense within 2%, and on a specific reaction- and dosimetry-location basis within 5%. Except for the neptunium dosimetry, the individual foil raw calculation-to-experiment comparisons usually agree within 10% but are typically greater than unity. Finally, in the course of developing these calculations, geometry that has previously not been completely specified is provided herein for the convenience of future analysts.
NASA Astrophysics Data System (ADS)
Prates, G.; Berrocoso, M.; Fernández-Ros, A.; García, A.; Ortiz, R.
2012-04-01
El Hierro Island (Canary Islands, Spain) has undergone a submarine eruption a few kilometers to its southeast, detected October 10, on the rift alignment that cuts across the island. However, the seismicity level had suddenly increased around July 17, and ground deformation was detected by the only continuously observed GNSS-GPS (Global Navigation Satellite Systems - Global Positioning System) benchmark, FRON, in the El Golfo area. Based on that information, several other GNSS-GPS benchmarks were installed, some of them also continuously observed. A normal vector analysis was applied to the collected data. The variation in the normal vector magnitude identified local extension-compression regimes, while the normal vector inclination showed the relative height variation between the three benchmarks that define the plane for which the normal vector is computed. For this analysis the data were first processed to obtain positioning solutions every 30 minutes using the Bernese GPS Software 5.0, further enhanced by a discrete Kalman filter, giving an overall millimeter-level precision. These solutions were obtained using the IGS (International GNSS Service) ultra-rapid orbits and the double-differenced ionosphere-free combination. With this strategy the positioning solutions were attained in near real-time. Later, the data were reprocessed with the IGS rapid orbits to provide added confidence in the solutions. Two triangles were then considered: a smaller one located in the El Golfo area within the historically collapsed caldera, and a larger one defined by benchmarks placed in Valverde, El Golfo and La Restinga, the town closest to the eruption's location, covering almost the entire island's surface above sea level. With these two triangles the pre-eruption and post-eruption deformation of El Hierro's surface will be further analyzed.
A simple next-best alternative to seasonal predictions in Europe
NASA Astrophysics Data System (ADS)
Buontempo, Carlo; De Felice, Matteo
2016-04-01
In order to build a climate-proof society, we need to learn how to best use the climate information we have. Having spent time and resources developing complex numerical models has often blinded us to the value some of this information really has in the eyes of a decision maker. An effective way to assess this is to compare the quality of the forecast (and its cost) against the quality of a forecast from a prediction system based on simpler assumptions (and thus cheaper to run). Such a practice is common in marketing analysis, where it is often referred to as the next-best alternative. As a way to facilitate such an analysis, climate service providers should always provide a set of skill scores alongside the predictions. These are usually based on climatological means, anomaly persistence or, more recently, multiple linear regressions. We here present an equally simple benchmark based on a Markov chain process locally trained at a monthly or seasonal time-scale. We demonstrate that in spite of its simplicity the model easily outperforms not only the standard benchmarks but also most of the seasonal prediction systems, at least in Europe. We suggest that a benchmark of this kind could represent a useful next-best alternative for a number of users.
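The abstract does not give the model's details; below is a minimal sketch of a first-order Markov chain benchmark trained on categorized (e.g., tercile) monthly observations, with all data and names illustrative:

```python
import numpy as np

def fit_transition_matrix(states: np.ndarray, n_states: int = 3) -> np.ndarray:
    """Estimate first-order transition probabilities P[i, j] = P(next=j | current=i)
    from a sequence of categorical states (e.g., below/near/above-normal terciles)."""
    counts = np.zeros((n_states, n_states))
    for cur, nxt in zip(states[:-1], states[1:]):
        counts[cur, nxt] += 1
    # Laplace smoothing avoids zero rows for rarely visited states
    counts += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

# Toy example: 30 years of monthly tercile categories at one location
rng = np.random.default_rng(0)
history = rng.integers(0, 3, size=360)
P = fit_transition_matrix(history)
# Probabilistic "forecast" for the next step, given the latest observed category
print(P[history[-1]])
```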
Natto, S A; Lewis, D G; Ryde, S J
1998-01-01
The Monte Carlo computer code MCNP (version 4A) has been used to develop a personal computer-based model of the Swansea in vivo neutron activation analysis (IVNAA) system. The model included specification of the neutron source (252Cf), collimators, reflectors and shielding. The MCNP model was 'benchmarked' against fast neutron and thermal neutron fluence data obtained experimentally from the IVNAA system. The Swansea system allows two irradiation geometries using 'short' and 'long' collimators, which provide alternative dose rates for IVNAA. The data presented here relate to the short collimator, although results of similar accuracy were obtained using the long collimator. The fast neutron fluence was measured in air at a series of depths inside the collimator. The measurements agreed with the MCNP simulation within the statistical uncertainty (5-10%) of the calculations. The thermal neutron fluence was measured and calculated inside the cuboidal water phantom. The depth of maximum thermal fluence was 3.2 cm (measured) and 3.0 cm (calculated). The width of the 50% thermal fluence level across the phantom at its mid-depth was found to be the same by both MCNP and experiment. This benchmarking exercise has given us a high degree of confidence in MCNP as a tool for the design of IVNAA systems.
Stratification of unresponsive patients by an independently validated index of brain complexity
Casarotto, Silvia; Comanducci, Angela; Rosanova, Mario; Sarasso, Simone; Fecchio, Matteo; Napolitani, Martino; Pigorini, Andrea; Casali, Adenauer G.; Trimarchi, Pietro D.; Boly, Melanie; Gosseries, Olivia; Bodart, Olivier; Curto, Francesco; Landi, Cristina; Mariotti, Maurizio; Devalle, Guya; Laureys, Steven; Tononi, Giulio
2016-01-01
Objective Validating objective, brain‐based indices of consciousness in behaviorally unresponsive patients represents a challenge due to the impossibility of obtaining independent evidence through subjective reports. Here we address this problem by first validating a promising metric of consciousness—the Perturbational Complexity Index (PCI)—in a benchmark population who could confirm the presence or absence of consciousness through subjective reports, and then applying the same index to patients with disorders of consciousness (DOCs). Methods The benchmark population encompassed 150 healthy controls and communicative brain‐injured subjects in various states of conscious wakefulness, disconnected consciousness, and unconsciousness. Receiver operating characteristic curve analysis was performed to define an optimal cutoff for discriminating between the conscious and unconscious conditions. This cutoff was then applied to a cohort of noncommunicative DOC patients (38 in a minimally conscious state [MCS] and 43 in a vegetative state [VS]). Results We found an empirical cutoff that discriminated with 100% sensitivity and specificity between the conscious and the unconscious conditions in the benchmark population. This cutoff resulted in a sensitivity of 94.7% in detecting MCS and allowed the identification of a number of unresponsive VS patients (9 of 43) with high values of PCI, overlapping with the distribution of the benchmark conscious condition. Interpretation Given its high sensitivity and specificity in the benchmark and MCS population, PCI offers a reliable, independently validated stratification of unresponsive patients that has important physiopathological and therapeutic implications. In particular, the high‐PCI subgroup of VS patients may retain a capacity for consciousness that is not expressed in behavior. Ann Neurol 2016;80:718–729 PMID:27717082
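The paper's exact cutoff procedure is not reproduced here; as a sketch of the general ROC-based step, one common approach uses scikit-learn's roc_curve on illustrative data (in the study itself the chosen cutoff achieved 100% sensitivity and specificity):

```python
import numpy as np
from sklearn.metrics import roc_curve

# Illustrative PCI-like values: 1 = conscious condition, 0 = unconscious
rng = np.random.default_rng(1)
labels = np.r_[np.ones(75, dtype=int), np.zeros(75, dtype=int)]
scores = np.r_[rng.normal(0.45, 0.05, 75), rng.normal(0.25, 0.05, 75)]

fpr, tpr, thresholds = roc_curve(labels, scores)
youden = tpr - fpr                       # Youden's J statistic
best = np.argmax(youden)
print(f"cutoff={thresholds[best]:.3f}  sensitivity={tpr[best]:.2f}  "
      f"specificity={1 - fpr[best]:.2f}")
```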
Rahman, Sajjad; Salameh, Khalil; Al-Rifai, Hilal; Masoud, Ahmed; Lutfi, Samawal; Salama, Husam; Abdoh, Ghassan; Omar, Fahmi; Bener, Abdulbari
2011-09-01
To analyze and compare current gestational age-specific neonatal survival rates between Qatar and international benchmarks. An analytical comparative study. Women's Hospital, Hamad Medical Corporation, Doha, Qatar, from 2003-2008. Six years' (2003-2008) gestational age-specific neonatal mortality data were stratified for each completed week of gestation at birth, from 24 weeks to term. Data from World Health Statistics by the WHO (2010), the Vermont Oxford Network (VON, 2007) and National Statistics United Kingdom (2006) were used as international benchmarks for comparative analysis. A total of 82,002 babies were born during the study period. Qatar's neonatal mortality rate (NMR) dropped from 6/1000 in 2003 to 4.3/1000 in 2008 (p < 0.05). The overall and gestational age-specific neonatal mortality rates of Qatar were comparable with international benchmarks. The survival of < 27 weeks and term babies was better in Qatar (p=0.01 and p < 0.001, respectively) as compared with VON. The survival of > 32 weeks babies was better in the UK (p=0.01) as compared with Qatar. The relative risk (RR) of death decreased with increasing gestational age (p < 0.0001). Preterm birth (45%) followed by lethal chromosomal and congenital anomalies (26.5%) were the two leading causes of neonatal death in Qatar. The current total and gestational age-specific neonatal survival rates in the State of Qatar are comparable with international benchmarks. In Qatar, persistently high rates of low birth weight and lethal chromosomal and congenital anomalies contribute significantly to neonatal mortality.
Talaminos-Barroso, Alejandro; Estudillo-Valderrama, Miguel A; Roa, Laura M; Reina-Tosina, Javier; Ortega-Ruiz, Francisco
2016-06-01
M2M (Machine-to-Machine) communications represent one of the main pillars of the new paradigm of the Internet of Things (IoT) and are opening new opportunities for the eHealth business. Nevertheless, the large number of M2M protocols currently available hinders the selection of a suitable solution that satisfies the requirements that eHealth applications can demand. The objectives were, in the first place, to develop a tool that provides a benchmarking analysis in order to objectively select among the most relevant M2M protocols for eHealth solutions, and in the second place, to validate the tool with a particular use case: respiratory rehabilitation. A software tool, called Distributed Computing Framework (DFC), was designed and developed to execute the benchmarking tests and facilitate deployment in environments with a large number of machines, independently of the protocol and performance metrics selected. The DDS, MQTT, CoAP, JMS, AMQP and XMPP protocols were evaluated considering different specific performance metrics, including CPU usage, memory usage, bandwidth consumption, latency and jitter. The results obtained allowed validation of a use case: respiratory rehabilitation of chronic obstructive pulmonary disease (COPD) patients in two scenarios with different types of requirements: home-based and ambulatory. The results of the benchmark comparison can guide eHealth developers in the choice of M2M technologies. In this regard, the framework presented is a simple and powerful tool for the deployment of benchmark tests under specific environments and conditions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
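The DFC tool itself is not reproduced here; below is a minimal, protocol-agnostic sketch of how latency and jitter metrics of this kind can be collected, where `send_and_wait_ack` is a hypothetical stand-in for one round trip over the protocol under test:

```python
import time
import statistics

def measure_latency(send_and_wait_ack, n_messages: int = 100) -> dict:
    """Generic harness: time a round trip for each message and report
    latency and jitter. `send_and_wait_ack` is any callable that sends one
    message over the protocol under test and blocks until acknowledgment."""
    latencies = []
    for _ in range(n_messages):
        t0 = time.perf_counter()
        send_and_wait_ack()
        latencies.append(time.perf_counter() - t0)
    return {
        "mean_latency_ms": 1000 * statistics.mean(latencies),
        # jitter reported as the standard deviation of latency samples
        "jitter_ms": 1000 * statistics.stdev(latencies),
    }

# Example with a stand-in transport (replace with an MQTT/CoAP/... round trip)
print(measure_latency(lambda: time.sleep(0.005)))
```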
McDermott, Imelda; Checkland, Kath; Harrison, Stephen; Snow, Stephanie; Coleman, Anna
2013-01-01
The language used by National Health Service (NHS) "commissioning" managers when discussing their roles and responsibilities can be seen as a manifestation of "identity work", defined as a process of identifying. This paper aims to offer a novel approach to analysing "identity work" through the triangulation of multiple analytical methods, combining analysis of the content of text with analysis of its form. Fairclough's discourse analytic methodology is used as a framework. Following Fairclough, the authors use analytical methods associated with Halliday's systemic functional linguistics. While analysis of the content of interviews provides some information about NHS commissioners' perceptions of their roles and responsibilities, analysis of the form of discourse that they use provides a more detailed and nuanced view. Overall, the authors found that commissioning managers have a higher level of certainty about what commissioning is not than about what commissioning is; GP managers have a high level of certainty about their identity as a GP rather than as a manager; and both GP managers and non-GP managers oscillate between multiple identities depending on the different situations they are in. This paper offers a novel approach to triangulation, based not on the usual comparison of multiple data sources, but rather on the application of multiple analytical methods to a single source of data. This paper also shows the latent uncertainty about the nature of the commissioning enterprise in the English NHS.
Lattice Commissioning Strategy Simulation for the B Factory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, M.; Whittum, D.; Yan, Y.
2011-08-26
To prepare for the PEP-II turn-on, we have studied one commissioning strategy with simulated lattice errors. Features such as difference and absolute orbit analysis and correction are discussed. To prepare for the commissioning of the PEP-II injection line and high energy ring (HER), we have developed a system for on-line orbit analysis by merging two existing codes: LEGO and RESOLVE. With the LEGO-RESOLVE system, we can study the problem of finding quadrupole alignment and beam position monitor (BPM) offset errors with simulated data. We have increased the speed and versatility of the orbit analysis process by using a command file written in a script language designed specifically for RESOLVE. In addition, we have interfaced the LEGO-RESOLVE system to the control system of the B-Factory. In this paper, we describe online analysis features of the LEGO-RESOLVE system and present examples of practical applications.
NASA Astrophysics Data System (ADS)
Stern, Luli
2002-11-01
Assessment influences every level of the education system and is one of the most crucial catalysts for reform in science curriculum and instruction. Teachers, administrators, and others who choose, assemble, or develop assessments face the difficulty of judging whether tasks are truly aligned with national or state standards and whether they are effective in revealing what students actually know. Project 2061 of the American Association for the Advancement of Science has developed and field-tested a procedure for analyzing curriculum materials, including their assessments, in terms of how well they are likely to contribute to the attainment of benchmarks and standards. With respect to assessment in curriculum materials, this procedure evaluates whether the assessment has the potential to reveal whether students have attained specific ideas in benchmarks and standards, and whether information gained from students' responses can be used to inform subsequent instruction. Using this procedure, Project 2061 has produced a database of analytical reports on nine widely used middle school science curriculum materials. The analysis of the assessments included in these materials shows that, whereas currently available materials devote significant sections of their instruction to ideas included in national standards documents, students are typically not assessed on those ideas. The analysis results described in the report point to strengths and limitations of these widely used assessments and identify a range of good and poor assessment tasks that can shed light on important characteristics of good assessment.
HS06 Benchmark for an ARM Server
NASA Astrophysics Data System (ADS)
Kluth, Stefan
2014-06-01
We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as its operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence of the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.
PMLB: a large benchmark suite for machine learning evaluation and comparison.
Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H
2017-01-01
The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
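The resource described here is distributed as the `pmlb` Python package; a brief usage sketch follows (package and API as commonly documented; details may have changed since publication):

```python
# pip install pmlb scikit-learn
from pmlb import fetch_data, classification_dataset_names
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# List the curated classification benchmarks, then pull one down
print(len(classification_dataset_names), "classification datasets")
X, y = fetch_data("mushroom", return_X_y=True)

# Score a baseline method the same way across any benchmark in the suite
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())
```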
The General Concept of Benchmarking and Its Application in Higher Education in Europe
ERIC Educational Resources Information Center
Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna
2009-01-01
The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…
Hendricson, William D; Andrieu, Sandra C; Chadwick, D Gregory; Chmar, Jacqueline E; Cole, James R; George, Mary C; Glickman, Gerald N; Glover, Joel F; Goldberg, Jerold S; Haden, N Karl; Meyerowitz, Cyril; Neumann, Laura; Pyle, Marsha; Tedesco, Lisa A; Valachovic, Richard W; Weaver, Richard G; Winder, Ronald L; Young, Stephen K; Kalkwarf, Kenneth L
2006-09-01
This article was developed for the Commission on Change and Innovation in Dental Education (CCI), established by the American Dental Education Association. CCI was created because numerous organizations within organized dentistry and the educational community have initiated studies or proposed modifications to the process of dental education, often working to achieve positive and desirable goals but without coordination or communication. The fundamental mission of CCI is to serve as a focal meeting place where dental educators and administrators, representatives from organized dentistry, the dental licensure community, the Commission on Dental Accreditation, the ADA Council on Dental Education and Licensure, and the Joint Commission on National Dental Examinations can meet and coordinate efforts to improve dental education and the nation's oral health. One of the objectives of the CCI is to provide guidance to dental schools related to curriculum design. In pursuit of that objective, this article summarizes the evidence related to this question: What are educational best practices for helping dental students acquire the capacity to function as an entry-level general dentist or to be a better candidate to begin advanced studies? Three issues are addressed, with special emphasis on the third: 1) What constitutes expertise, and when does an individual become an expert? 2) What are the differences between novice and expert thinking? and 3) What educational best practices can help our students acquire mental capacities associated with expert function, including critical thinking and self-directed learning? The purpose of this review is to provide a benchmark that faculty and academic planners can use to assess the degree to which their curricula include learning experiences associated with development of problem-solving, critical thinking, self-directed learning, and other cognitive skills necessary for dental school graduates to ultimately become expert performers as they develop professionally in the years after graduation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Briggs, J. Blair; Ivanova, Tatiana
2017-02-01
In the past several decades, numerous experiments have been performed worldwide to support reactor operations, measurements, design, and nuclear safety. Those experiments represent an extensive international investment in infrastructure, expertise, and cost, and constitute significantly valuable resources of data supporting past, current, and future research activities. Those valuable assets form the basis for recording, developing, and validating our nuclear methods and integral nuclear data [1]. The loss of these experimental data, which has occurred all too often in recent years, is tragic. The high cost of repeating many of these measurements can be prohibitive, if not impossible, to surmount. Two international projects were developed, and are under the direction of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA), to address the challenges of not just data preservation, but evaluation of the data to determine its merit for modern and future use. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was established to identify and verify comprehensive critical benchmark data sets; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data [2]. Similarly, the International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate or special effects data for nuclear energy and technology applications [3]. Annually, contributors from around the world continue to collaborate in the evaluation and review of select benchmark experiments for preservation and dissemination. The extensively peer-reviewed integral benchmark data can then be utilized to support nuclear design and safety analysts in validating the analytical tools, methods, and data needed for next-generation reactor design, safety analysis requirements, and all other front- and back-end activities contributing to the overall nuclear fuel cycle where quality neutronics calculations are paramount.
Wirtz, Veronika J; Santa-Ana-Tellez, Yared; Trout, Clinton H; Kaplan, Warren A
2012-12-01
Public sector price analyses of antiretroviral (ARV) medicines can provide relevant information to detect ARV procurement procedures that do not obtain competitive market prices. Price benchmarks provide a useful tool for programme managers and policy makers to support such planning and policy measures. The aim of the study was to develop regional and global price benchmarks which can be used to analyse public-sector price variability of ARVs in low- and middle-income countries, using the procurement prices of Latin America and the Caribbean (LAC) countries in 2008 as an example. We used the Global Price Reporting Mechanism (GPRM) database, provided by the World Health Organization (WHO), for 13 LAC countries' ARV procurements to analyse the procurement prices of four first-line and three second-line ARV combinations in 2008. First, a cross-sectional analysis was conducted to compare ARV combination prices. Second, four different price 'benchmarks' were created, and we estimated the additional number of patients who could have been treated in each country if the ARV combinations studied were purchased at the various reference ('benchmark') prices. Large price variations exist for first- and second-line ARV combinations between countries in the LAC region. Most countries in the LAC region could be treating between 1.17 and 3.8 times more patients if procurement prices were closer to the lowest regional generic price. For all second-line combinations, a price closer to the lowest regional innovator prices or to the global median transaction price for lower-middle-income countries would also result in treating up to nearly five times more patients. Some rational allocation of financial resources due, in part, to price benchmarking and careful planning by policy makers and programme managers can assist a country in negotiating lower ARV procurement prices and should form part of a sustainable procurement policy.
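The benchmarking arithmetic reduces to a price ratio; a tiny illustration with hypothetical prices, not figures from the study:

```python
# Illustrative arithmetic only; prices are hypothetical, not from the study.
def additional_patients_factor(actual_price: float, benchmark_price: float) -> float:
    """With a fixed budget, treatable patients scale inversely with unit price:
    factor = actual_price / benchmark_price."""
    return actual_price / benchmark_price

# A country paying $400 per patient-year against a $160 regional benchmark
print(additional_patients_factor(400.0, 160.0))  # -> 2.5x more patients
```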
A Simulation Environment for Benchmarking Sensor Fusion-Based Pose Estimators.
Ligorio, Gabriele; Sabatini, Angelo Maria
2015-12-19
In-depth analysis and performance evaluation of sensor fusion-based estimators may be critical when performed using real-world sensor data. For this reason, simulation is widely recognized as one of the most powerful tools for algorithm benchmarking. In this paper, we present a simulation framework suitable for assessing the performance of sensor fusion-based pose estimators. The systems used for implementing the framework were magnetic/inertial measurement units (MIMUs) and a camera, although the addition of further sensing modalities is straightforward. Typical nuisance factors were also included for each sensor. The proposed simulation environment was validated using real-life sensor data employed for motion tracking. The highest mismatch between real and simulated sensors was about 5% of the measured quantity (for the camera simulation), whereas a lower correlation was found for one axis of the gyroscope (0.90). In addition, a real benchmarking example of an extended Kalman filter for pose estimation from MIMU and camera data is presented.
The EB Factory: Fundamental Stellar Astrophysics with Eclipsing Binary Stars Discovered by Kepler
NASA Astrophysics Data System (ADS)
Stassun, Keivan
Eclipsing binaries (EBs) are key laboratories for determining the fundamental properties of stars. EBs are therefore foundational objects for constraining stellar evolution models, which in turn are central to determinations of stellar mass functions, of exoplanet properties, and many other areas. The primary goal of this proposal is to mine the Kepler mission light curves for: (1) EBs that include a subgiant star, from which precise ages can be derived and which can thus serve as critically needed age benchmarks; and within these, (2) long-period EBs that include low-mass M stars or brown dwarfs, which are increasingly becoming the focus of exoplanet searches, but for which there are the fewest available fundamental mass-radius-age benchmarks. A secondary goal of this proposal is to develop an end-to-end computational pipeline, the Kepler EB Factory, that allows automatic processing of Kepler light curves for EBs, from period finding, to object classification, to determination of EB physical properties for the most scientifically interesting EBs, and finally to accurate modeling of these EBs for detailed tests and benchmarking of theoretical stellar evolution models. We will integrate the most successful algorithms into a single, cohesive workflow environment, and apply this 'Kepler EB Factory' to the full public Kepler dataset to find and characterize new "benchmark grade" EBs, and will disseminate both the enhanced data products from this pipeline and the pipeline itself to the broader NASA science community. The proposed work responds directly to two of the defined Research Areas of the NASA Astrophysics Data Analysis Program (ADAP), specifically Research Area #2 (Stellar Astrophysics) and Research Area #9 (Astrophysical Databases). To be clear, our primary goal is the fundamental stellar astrophysics that will be enabled by the discovery and analysis of relatively rare, benchmark-grade EBs in the Kepler dataset. At the same time, enabling this goal will require bringing a suite of extant and new custom algorithms to bear on the Kepler data, and thus our development of the Kepler EB Factory represents a value-added product that will allow the widest scientific impact of the information locked within the vast reservoir of the Kepler light curves.
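Period finding is the first stage of the proposed pipeline; as an illustration only (not the proposal's actual code), a Lomb-Scargle search with Astropy on a synthetic light curve:

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic, unevenly sampled light curve with a 2.5-day periodic signal
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 90, 2000))            # days
flux = 1.0 - 0.01 * np.sin(2 * np.pi * t / 2.5) + rng.normal(0, 0.002, t.size)

frequency, power = LombScargle(t, flux).autopower(maximum_frequency=5.0)
best_period = 1.0 / frequency[np.argmax(power)]
print(f"best period: {best_period:.3f} d")
```

For eclipse-shaped signals a box-least-squares search is usually preferred; Lomb-Scargle is shown here as the simplest stand-in.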
ERIC Educational Resources Information Center
Zheng, Henry Y.; Stewart, Alice A.
This study explores data envelopment analysis (DEA) as a tool for assessing and benchmarking the performance of public research universities. Using national databases such as those maintained by the National Science Foundation and the National Center for Education Statistics, DEA analysis was conducted of the research and instructional outcomes…
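For readers unfamiliar with DEA, a minimal sketch of the input-oriented CCR envelopment model, solved per decision-making unit with scipy's linprog; the data and names are illustrative, not from the study:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X: np.ndarray, Y: np.ndarray, o: int) -> float:
    """Input-oriented CCR efficiency of unit `o`.
    X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1 .. lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    A_ub, b_ub = [], []
    for i in range(m):   # sum_j lambda_j * x_ji <= theta * x_oi
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(s):   # sum_j lambda_j * y_jr >= y_or
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun  # efficiency score in (0, 1]

# Toy data: 4 universities, inputs = (funding, faculty), outputs = (papers, degrees)
X = np.array([[10, 5], [8, 4], [12, 7], [9, 6]], dtype=float)
Y = np.array([[90, 300], [80, 250], [95, 310], [60, 220]], dtype=float)
print([round(dea_efficiency(X, Y, o), 3) for o in range(4)])
```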
Benchmarking reference services: an introduction.
Marshall, J G; Buchanan, H S
1995-01-01
Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.
Experimental validation of the TOPAS Monte Carlo system for passive scattering proton therapy
Testa, M.; Schümann, J.; Lu, H.-M.; Shin, J.; Faddegon, B.; Perl, J.; Paganetti, H.
2013-01-01
Purpose: TOPAS (TOol for PArticle Simulation) is a particle simulation code recently developed with the specific aim of making Monte Carlo simulations user-friendly for research and clinical physicists in the particle therapy community. The authors present a thorough and extensive experimental validation of Monte Carlo simulations performed with TOPAS in a variety of setups relevant for proton therapy applications. The set of validation measurements performed in this work represents an overall end-to-end testing strategy recommended for all clinical centers planning to rely on TOPAS for quality assurance or patient dose calculation and, more generally, for all the institutions using passive-scattering proton therapy systems. Methods: The authors systematically compared TOPAS simulations with measurements that are performed routinely within the quality assurance (QA) program in our institution as well as experiments specifically designed for this validation study. First, the authors compared TOPAS simulations with measurements of depth-dose curves for spread-out Bragg peak (SOBP) fields. Second, absolute dosimetry simulations were benchmarked against measured machine output factors (OFs). Third, the authors simulated and measured 2D dose profiles and analyzed the differences in terms of field flatness, symmetry, and usable field size. Fourth, the authors designed a simple experiment using a half-beam shifter to assess the effects of multiple Coulomb scattering, beam divergence, and inverse square attenuation on lateral and longitudinal dose profiles measured and simulated in a water phantom. Fifth, TOPAS's capability to simulate time-dependent beam delivery was benchmarked against dose rate functions (i.e., dose per unit time vs time) measured at different depths inside an SOBP field. Sixth, simulations of the charge deposited by protons fully stopping in two different types of multilayer Faraday cups (MLFCs) were compared with measurements to benchmark the nuclear interaction models used in the simulations. Results: SOBP range and modulation width were reproduced, on average, with an accuracy of +1, −2 and ±3 mm, respectively. OF simulations reproduced measured data within ±3%. Simulated 2D dose profiles show field flatness and average field radius within ±3% of measured profiles. The field symmetry resulted, on average, in ±3% agreement with commissioned profiles. TOPAS accuracy in reproducing measured dose profiles downstream of the half-beam shifter is better than 2%. Dose rate function simulations reproduced the measurements within ∼2%, showing that the four-dimensional modeling of the passive modulation system was implemented correctly and that millimeter accuracy can be achieved in reproducing measured data. For the MLFC simulations, 2% agreement was found between TOPAS and both sets of experimental measurements. The overall results show that TOPAS simulations are within the clinically accepted tolerances for all QA measurements performed at our institution. Conclusions: Our Monte Carlo simulations accurately reproduced the experimental data acquired through all the measurements performed in this study. Thus, TOPAS can reliably be applied to quality assurance for proton therapy and also as an input for commissioning of commercial treatment planning systems. This work also provides the basis for routine clinical dose calculations in patients for all passive scattering proton therapy centers using TOPAS. PMID:24320505
Quantitative phenotyping via deep barcode sequencing
Smith, Andrew M.; Heisler, Lawrence E.; Mellor, Joseph; Kaper, Fiona; Thompson, Michael J.; Chee, Mark; Roth, Frederick P.; Giaever, Guri; Nislow, Corey
2009-01-01
Next-generation DNA sequencing technologies have revolutionized diverse genomics applications, including de novo genome sequencing, SNP detection, chromatin immunoprecipitation, and transcriptome analysis. Here we apply deep sequencing to genome-scale fitness profiling to evaluate yeast strain collections in parallel. This method, Barcode analysis by Sequencing, or “Bar-seq,” outperforms the current benchmark barcode microarray assay in terms of both dynamic range and throughput. When applied to a complex chemogenomic assay, Bar-seq quantitatively identifies drug targets, with performance superior to the benchmark microarray assay. We also show that Bar-seq is well-suited for a multiplex format. We completely re-sequenced and re-annotated the yeast deletion collection using deep sequencing, found that ∼20% of the barcodes and common priming sequences varied from expectation, and used this revised list of barcode sequences to improve data quality. Together, this new assay and analysis routine provide a deep-sequencing-based toolkit for identifying gene–environment interactions on a genome-wide scale. PMID:19622793
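A toy sketch of the exact-match counting step at the core of Bar-seq style analysis; the barcodes and reads below are fabricated for illustration:

```python
from collections import Counter

# Map known strain barcodes to strain names (toy values)
barcodes = {"ACGTACGTACGTACGTACGT": "strainA",
            "TTGCAATGCCGGTTAACGGA": "strainB"}

def count_barcodes(reads, barcodes, offset: int = 0, length: int = 20) -> Counter:
    """Tally exact barcode matches at a fixed position in each read."""
    counts = Counter()
    for read in reads:
        tag = read[offset:offset + length]
        if tag in barcodes:
            counts[barcodes[tag]] += 1
    return counts

reads = ["ACGTACGTACGTACGTACGTNNNN", "TTGCAATGCCGGTTAACGGANNNN",
         "ACGTACGTACGTACGTACGTNNNN"]
print(count_barcodes(reads, barcodes))  # Counter({'strainA': 2, 'strainB': 1})
```

Relative strain fitness then follows from count ratios between conditions; real pipelines also handle sequencing errors and priming-site variation, as the abstract notes.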
NASA Technical Reports Server (NTRS)
Waszak, Martin R.
1998-01-01
This report describes the formulation of a model of the dynamic behavior of the Benchmark Active Controls Technology (BACT) wind tunnel model for active control design and analysis applications. The model is formed by combining the equations of motion for the BACT wind tunnel model with actuator models and a model of wind tunnel turbulence. The primary focus of this report is the development of the equations of motion from first principles by using Lagrange's equations and the principle of virtual work. A numerical form of the model is generated by making use of parameters obtained from both experiment and analysis. Comparisons between experimental and analytical data obtained from the numerical model show excellent agreement and suggest that simple coefficient-based aerodynamics are sufficient to accurately characterize the aeroelastic response of the BACT wind tunnel model. The equations of motion developed herein have been used to aid in the design and analysis of a number of flutter suppression controllers that have been successfully implemented.
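The report's detailed equations are not reproduced here; for orientation, the general form of Lagrange's equations with generalized forces obtained from the principle of virtual work is:

```latex
% Generalized Lagrange equations of the kind used to derive aeroelastic
% equations of motion; Q_i collects nonconservative (aerodynamic, actuator)
% generalized forces obtained from the principle of virtual work.
\frac{\mathrm{d}}{\mathrm{d}t}\!\left(\frac{\partial L}{\partial \dot{q}_i}\right)
  - \frac{\partial L}{\partial q_i} = Q_i,
\qquad L = T - V,
\qquad \delta W = \sum_i Q_i \,\delta q_i .
```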
The MCUCN simulation code for ultracold neutron physics
NASA Astrophysics Data System (ADS)
Zsigmond, G.
2018-02-01
Ultracold neutrons (UCN) have very low kinetic energies (0-300 neV) and can therefore be stored in specific material or magnetic confinements for many hundreds of seconds. This makes them a very useful tool for probing fundamental symmetries of nature (for instance, charge-parity violation, by neutron electric dipole moment experiments) and for contributing important parameters to Big Bang nucleosynthesis (neutron lifetime measurements). Improved precision experiments are in construction at new and planned UCN sources around the world. MC simulations play an important role in the optimization of such systems with a large number of parameters, but also in the estimation of systematic effects, in the benchmarking of analysis codes, and as part of the analysis. The MCUCN code written at PSI has been extensively used for the optimization of the UCN source optics and in the optimization and analysis of (test) experiments within the nEDM project based at PSI. In this paper we present the main features of MCUCN and interesting benchmark and application examples.
Taking the Battle Upstream: Towards a Benchmarking Role for NATO
2012-09-01
[Garbled front-matter residue in source: a list-of-figures fragment citing Figure 8, "World Bank Benchmarking Work on Quality of Governance," a reference to "In Search of a Benchmarking Theory for the Public Sector," and a partial sentence noting that McKinsey categorized the Ministries of Defense it works with for comparison purposes.]
ERIC Educational Resources Information Center
Kent State Univ., OH. Ohio Literacy Resource Center.
This document is intended to show the relationship between Ohio's Standards and Competencies, Equipped for the Future's (EFF's) Standards and Components of Performance, and Ohio's Revised Benchmarks. The document is divided into three parts, with Part 1 covering mathematics instruction, Part 2 covering reading instruction, and Part 3 covering…
How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction
NASA Astrophysics Data System (ADS)
Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.
2015-03-01
The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment, but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can best be used to evaluate probabilistic forecasts. In this study, it is shown that the calculated forecast skill can vary depending on the benchmark selected, and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy, the benchmark found to have most utility for EFAS, and to avoid the most naïve skill across different hydrological situations, is meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long-term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system, and their use produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. Recommendations for EFAS are to move to routine use of meteorological persistency, an advanced meteorological benchmark and a simple meteorological benchmark in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used to evaluate skill in probabilistic hydrological forecasts and which benchmarks are most useful for skill discrimination and avoidance of naïve skill in a large-scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to decide which benchmarks to employ, so that forecasters can trust their skill evaluation and be confident that their forecasts are indeed better.
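As a sketch of the underlying skill computation (not the EFAS implementation), the sample-based CRPS and a benchmark-relative skill score, with fabricated ensembles:

```python
import numpy as np

def crps_ensemble(obs: float, ens: np.ndarray) -> float:
    """Sample-based CRPS: E|X - y| - 0.5 * E|X - X'| for ensemble X."""
    term1 = np.mean(np.abs(ens - obs))
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
    return term1 - term2

def skill_score(crps_system: float, crps_benchmark: float) -> float:
    """CRPSS = 1 - CRPS_sys / CRPS_bench; > 0 means the system beats the benchmark."""
    return 1.0 - crps_system / crps_benchmark

rng = np.random.default_rng(3)
obs = 120.0                                  # observed discharge (m^3/s)
sys_ens = rng.normal(118, 8, 51)             # forecast ensemble
bench_ens = rng.normal(100, 25, 51)          # e.g., a climatology benchmark
print(skill_score(crps_ensemble(obs, sys_ens),
                  crps_ensemble(obs, bench_ens)))
```

In practice the CRPS is averaged over many forecast-observation pairs before the skill score is formed.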
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faillace, E.R.; Cheng, J.J.; Yu, C.
A series of benchmarking runs were conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes - GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA - and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, input parameters such as occupancy, shielding, and consumption factors.
General squark flavour mixing: constraints, phenomenology and benchmarks
De Causmaecker, Karen; Fuks, Benjamin; Herrmann, Bjorn; ...
2015-11-19
Here, we present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.
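The scan itself is not specified beyond "Markov Chain Monte Carlo"; below is a generic random-walk Metropolis-Hastings sketch over a parameter vector, with a likelihood standing in for the data and theory constraints (all names and numbers illustrative):

```python
import numpy as np

def metropolis_scan(log_like, x0, step, n_steps=10000, seed=4):
    """Random-walk Metropolis-Hastings over a parameter vector.
    `log_like` should return -inf for points violating hard constraints."""
    rng = np.random.default_rng(seed)
    chain = [np.asarray(x0, dtype=float)]
    ll = log_like(chain[0])
    for _ in range(n_steps):
        prop = chain[-1] + step * rng.standard_normal(len(x0))
        ll_prop = log_like(prop)
        if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject step
            chain.append(prop); ll = ll_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)

# Toy target: two "flavour-violating" parameters favored near 0.1 and 0.0
log_like = lambda x: -0.5 * (((x[0] - 0.1) / 0.02) ** 2 + (x[1] / 0.05) ** 2)
chain = metropolis_scan(log_like, x0=[0.0, 0.0], step=0.02)
print(chain[1000:].mean(axis=0))   # posterior means after burn-in
```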
(U) Analytic First and Second Derivatives of the Uncollided Leakage for a Homogeneous Sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favorite, Jeffrey A.
2017-04-26
The second-order adjoint sensitivity analysis methodology (2nd-ASAM), developed by Cacuci, has been applied by Cacuci to derive second derivatives of a response with respect to input parameters for uncollided particles in an inhomogeneous transport problem. In this memo, we present an analytic benchmark for verifying the derivatives of the 2nd-ASAM. The problem is a homogeneous sphere, and the response is the uncollided total leakage. This memo does not repeat the formulas given in Ref. 2. We are preparing a journal article that will include the derivation of Ref. 2 and the benchmark of this memo.
The Genre of Instructor Feedback in Doctoral Programs: A Corpus Linguistic Analysis
ERIC Educational Resources Information Center
Walters, Kelley Jo; Henry, Patricia; Vinella, Michael; Wells, Steve; Shaw, Melanie; Miller, James
2015-01-01
Providing transparent written feedback to doctoral students is essential to the learning process and preparation for the capstone. The purpose of this study was to conduct a qualitative exploration of faculty feedback on benchmark written assignments across multiple, online doctoral programs. The Corpus for this analysis included 236 doctoral…
MacLachlan, Malcolm; Amin, Mutamad; Mannan, Hasheem; El Tayeb, Shahla; Bedri, Nafisa; Swartz, Leslie; Munthali, Alister; Van Rooy, Gert; McVeigh, Joanne
2012-01-01
While many health services strive to be equitable, accessible and inclusive, people's right to health often goes unrealized, particularly among vulnerable groups. The extent to which health policies explicitly seek to achieve such goals sets the policy context in which services are delivered and evaluated. An analytical framework, EquiFrame, was developed to evaluate 1) the extent to which 21 Core Concepts of human rights were addressed in policy documents, and 2) coverage of 12 Vulnerable Groups who might benefit from such policies. Using this framework, analysis of 51 policies across Malawi, Namibia, South Africa and Sudan confirmed the relevance of all Core Concepts and Vulnerable Groups. Further, our analysis highlighted some very strong policies, serious shortcomings in others, as well as country-specific patterns. If social inclusion and human rights do not underpin policy formation, it is unlikely they will be inculcated in service delivery. EquiFrame facilitates policy analysis and benchmarking, and provides a means for evaluating policy revision and development. PMID:22649488
Morriss, Richard; Marttunnen, Sarah; Garland, Anne; Nixon, Neil; McDonald, Ruth; Sweeney, Tim; Flambert, Heather; Fox, Richard; Kaylor-Hughes, Catherine; James, Marilyn; Yang, Min
2010-11-29
Around 40 per cent of patients with unipolar depressive disorder who are treated in secondary care mental health services do not respond to first- or second-line treatments for depression. Such patients have 20 times the suicide rate of the general population, and treatment response becomes harder to achieve and sustain the longer they remain depressed. Despite this, there are no randomised controlled trials of community-based service delivery interventions delivering both algorithm-based pharmacotherapy and psychotherapy for patients with chronic depressive disorder in secondary care mental health services who remain moderately or severely depressed after six months of treatment. Without such trials, evidence-based guidelines on services for such patients cannot be derived. Single-blind, individually randomised controlled trial of a specialist depression disorder team (psychiatrist and psychotherapist jointly assessing and providing algorithm-based drug and psychological treatment) versus usual secondary care treatment. We will recruit 174 patients with unipolar depressive disorder in secondary mental health services with a Hamilton Depression Rating Scale (HDRS) score ≥ 16 and global assessment of function (GAF) ≤ 60 after ≥ 6 months of treatment. The primary outcome measures will be the HDRS and GAF, supplemented by economic analysis including the EQ-5D, and analysis of barriers to care, implementation and the process of care. Audits to benchmark both treatment arms against national standards of care will aid the interpretation of the results of the study. This trial will be the first to assess the effectiveness and implementation of a community-based specialist depression disorder team. The study has been specially designed as part of the CLAHRC Nottinghamshire, Derbyshire and Lincolnshire joint collaboration between university, health and social care organisations to provide information of direct relevance to decisions on commissioning, service provision and implementation.
Hopewood, Ian
2011-01-01
As the number of elderly Americans requiring oncologic care grows, and as cancer treatment and medicine become more advanced, assessing the quality of cancer care becomes a necessary and advantageous practice for any facility. Such analysis is especially practical in small community hospitals, which may not have the resources of their larger academic counterparts to ensure that the care being provided is current and competitive in terms of both technique and outcome. This study is a comparison of the colorectal cancer care at one such center, Falmouth Community Hospital (FCH), located in Falmouth, Massachusetts, about an hour and a half away from the nearest metropolitan center, to the care provided at a major nearby Boston Tertiary Center (BTC) and at teaching and research facilities across New England and the United States. The metrics used to measure performance encompass both outcome (survival rate data) and technique, including quality of surgery (number of lymph nodes removed) and the administration of adjuvant treatments, chemotherapy, and radiation therapy, as per national guidelines. All data for the comparison between FCH and BTC were culled from those hospitals' tumor registries. Data for the comparison between FCH and national tertiary/referral centers were taken from the American College of Surgeons' Commission on Cancer, namely National Cancer Data Base (NCDB) statistics, Hospital Benchmark Reports and Practice Profile Reports. The results showed that, while patients at FCH were diagnosed at both a higher age and a more advanced stage of colorectal cancer than their BTC counterparts, FCH stands up favorably to BTC and other large centers in terms of the metrics referenced above. Quality assessment such as the analysis conducted here can be used at other community facilities to spotlight, and ultimately eliminate, deficiencies in cancer programs.
A benchmarking method to measure dietary absorption efficiency of chemicals by fish.
Xiao, Ruiyang; Adolfsson-Erici, Margaretha; Åkerman, Gun; McLachlan, Michael S; MacLeod, Matthew
2013-12-01
Understanding the dietary absorption efficiency of chemicals in the gastrointestinal tract of fish is important from both a scientific and a regulatory point of view. However, reported fish absorption efficiencies for well-studied chemicals are highly variable. In the present study, the authors developed and exploited an internal chemical benchmarking method that has the potential to reduce uncertainty and variability and, thus, to improve the precision of measurements of fish absorption efficiency. The authors applied the benchmarking method to measure the gross absorption efficiency for 15 chemicals with a wide range of physicochemical properties and structures. They selected 2,2',5,6'-tetrachlorobiphenyl (PCB53) and decabromodiphenyl ethane as absorbable and nonabsorbable benchmarks, respectively. Quantities of chemicals determined in fish were benchmarked to the fraction of PCB53 recovered in fish, and quantities of chemicals determined in feces were benchmarked to the fraction of decabromodiphenyl ethane recovered in feces. The performance of the benchmarking procedure was evaluated based on the recovery of the test chemicals and precision of absorption efficiency from repeated tests. Benchmarking did not improve the precision of the measurements; after benchmarking, however, the median recovery for 15 chemicals was 106%, and variability of recoveries was reduced compared with before benchmarking, suggesting that benchmarking could account for incomplete extraction of chemical in fish and incomplete collection of feces from different tests. © 2013 SETAC.
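A schematic reading of the benchmarking arithmetic described above, with hypothetical recoveries and amounts; the authors' exact equations are not reproduced here:

```python
# Hypothetical numbers for illustration only; not data from the study.
def benchmark_correct(amount: float, benchmark_recovery: float) -> float:
    """Scale a measured amount by the recovered fraction of the benchmark
    chemical in the same matrix (fish tissue or feces)."""
    return amount / benchmark_recovery

ingested = 100.0                                   # ng of test chemical fed
in_fish = benchmark_correct(40.0, 0.80)            # PCB53 recovery in fish: 80%
in_feces = benchmark_correct(30.0, 0.60)           # DBDPE recovery in feces: 60%
gross_absorption = in_fish / ingested              # one simple efficiency measure
print(gross_absorption, in_feces / ingested)
```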
Bess, John D.; Fujimoto, Nozomu
2014-10-09
Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
Bertzbach, F; Franz, T; Möller, K
2012-01-01
This paper presents the performance improvements achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A large number of changes in operational practice, and the annual savings achieved, can be shown, induced in particular by benchmarking at process level. Examining this question produces some general findings on how to include performance improvement in a benchmarking project and how to communicate its results. We therefore elaborate on the concept of benchmarking at both utility and process level, a distinction that remains necessary for integrating performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking, it should be made quite clear that this outcome depends, on the one hand, on a well-conducted benchmarking programme and, on the other, on the individual situation within each participating utility.
Benchmarking clinical photography services in the NHS.
Arbon, Giles
2015-01-01
Benchmarking is used by services across the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have a program in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.
A Seafloor Benchmark for 3-dimensional Geodesy
NASA Astrophysics Data System (ADS)
Chadwell, C. D.; Webb, S. C.; Nooner, S. L.
2014-12-01
We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone. Using a ROV to place and remove sensors on the benchmarks will significantly reduce the number of sensors required by the community to monitor offshore strain in subduction zones.
NASA Astrophysics Data System (ADS)
Kalyanapu, A. J.; Thames, B. A.
2013-12-01
Dam breach modeling often involves sophisticated yet computationally intensive models that compute flood propagation at high temporal and spatial resolutions. This creates a significant need for computational capacity and has driven the development of newer flood models using multi-processor and graphics processing techniques. Recently, a comprehensive benchmark exercise, the 12th Benchmark Workshop on Numerical Analysis of Dams, was organized by the International Commission on Large Dams (ICOLD) to evaluate the performance of the various tools used for dam break risk assessment. The ICOLD workshop focused on estimating the consequences of failure of a hypothetical dam near a hypothetical populated area with complex demographics and economic activity. The current study uses this hypothetical case study and focuses on evaluating the effects of dam breach methodologies on consequence estimation and analysis. It uses the ICOLD hypothetical data, including the topography, dam geometric and construction information, and land use/land cover data, along with socio-economic and demographic data. The objective of this study is to evaluate the impacts of using four different dam breach methods on the consequence estimates used in risk assessments. The four methodologies are: i) Froehlich (1995), ii) MacDonald and Langridge-Monopolis 1984 (MLM), iii) Von Thun and Gillette 1990 (VTG), and iv) Froehlich (2008). To achieve this objective, three modeling components were used. First, dam breach discharge hydrographs were developed using HEC-RAS v.4.1. These hydrographs were then provided as flow inputs into a two-dimensional flood model named Flood2D-GPU, which leverages the computer's graphics card for much improved computational performance. Lastly, outputs from Flood2D-GPU, including inundated areas, depth grids, velocity grids, and flood wave arrival time grids, were input into HEC-FIA, which provides the consequence assessment for the solution to the problem statement. For the four breach methodologies, a sensitivity analysis of four breach parameters, breach side slope (SS), breach width (Wb), breach invert elevation (Elb), and time of failure (tf), was conducted. In total, 68 simulations were computed to produce breach hydrographs in HEC-RAS for input into Flood2D-GPU. The Flood2D-GPU simulation results were then post-processed in HEC-FIA to evaluate: Total Population at Risk (PAR), 14-yr and Under PAR (PAR14-), 65-yr and Over PAR (PAR65+), Loss of Life (LOL) and Direct Economic Impact (DEI). The MLM approach resulted in wide variability in the simulated minimum and maximum values of the PAR, PAR65+ and LOL estimates. For PAR14- and DEI, Froehlich (1995) resulted in lower values while MLM resulted in higher estimates. This preliminary study demonstrated the relative performance of four commonly used dam breach methodologies and their impacts on consequence estimation.
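For reference, one of the four regressions compared in the study, in the commonly cited form of Froehlich (1995); the coefficients below are quoted from secondary literature and should be verified against the original before use:

```python
def froehlich_1995(V_w: float, h_b: float, overtopping: bool) -> tuple:
    """Commonly cited Froehlich (1995) breach-parameter regressions.
    V_w: reservoir volume at failure (m^3); h_b: breach height (m).
    Returns (average breach width in m, failure time in hours)."""
    K_o = 1.4 if overtopping else 1.0          # failure-mode factor
    B_avg = 0.1803 * K_o * V_w**0.32 * h_b**0.19
    t_f = 0.00254 * V_w**0.53 * h_b**-0.90
    return B_avg, t_f

# Example: 20 hm^3 reservoir, 30 m high breach, overtopping failure
print(froehlich_1995(20e6, 30.0, overtopping=True))
```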
The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool
Provides quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model from which benchmark dose (BMD) and benchmark dose lower bound (BMDL) estimates are derived.
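As a hedged illustration of the model-averaging idea behind such tools (not the MADr-BMD implementation itself), the sketch below combines fitted quantal dose-response curves with Akaike (AIC) weights and reads a benchmark dose off the averaged curve at a 10% extra-risk benchmark response; all model fits and numbers are toy placeholders.

```python
import numpy as np

def aic_weights(aics):
    """Akaike weights: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2)."""
    a = np.asarray(aics, dtype=float)
    w = np.exp(-0.5 * (a - a.min()))
    return w / w.sum()

def averaged_bmd(dose_grid, model_probs, aics, bmr=0.10):
    """Average fitted quantal dose-response curves with AIC weights,
    then read off the dose where extra risk reaches the BMR.
    model_probs: list of arrays P_i(d) evaluated on dose_grid."""
    w = aic_weights(aics)
    p_avg = sum(wi * pi for wi, pi in zip(w, model_probs))
    p0 = p_avg[0]                        # background response
    extra = (p_avg - p0) / (1.0 - p0)    # extra-risk definition
    return float(np.interp(bmr, extra, dose_grid))

# Toy example: two hypothetical fitted models on a dose grid
d = np.linspace(0, 10, 201)
logistic = 1 / (1 + np.exp(-(-3.0 + 0.5 * d)))
probit_like = 1 / (1 + np.exp(-(-2.5 + 0.4 * d)))
print(averaged_bmd(d, [logistic, probit_like], aics=[102.3, 104.1]))
```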
Benchmarking--Measuring and Comparing for Continuous Improvement.
ERIC Educational Resources Information Center
Henczel, Sue
2002-01-01
Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…
78 FR 59982 - Revisions to Radiation Protection
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-30
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0268] Revisions to Radiation Protection AGENCY: Nuclear Regulatory Commission. ACTION: Standard review plan section; issuance. SUMMARY: The U.S. Nuclear Regulatory... for the Review of Safety Analysis Reports for Nuclear Power Plants: LWR Edition'': Section 12.1...
46 CFR 504.5 - Environmental assessments.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 9 2010-10-01 2010-10-01 false Environmental assessments. 504.5 Section 504.5 Shipping FEDERAL MARITIME COMMISSION GENERAL AND ADMINISTRATIVE PROVISIONS PROCEDURES FOR ENVIRONMENTAL POLICY ANALYSIS § 504.5 Environmental assessments. (a) Every Commission action not specifically excluded under...
46 CFR 504.5 - Environmental assessments.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 9 2011-10-01 2011-10-01 false Environmental assessments. 504.5 Section 504.5 Shipping FEDERAL MARITIME COMMISSION GENERAL AND ADMINISTRATIVE PROVISIONS PROCEDURES FOR ENVIRONMENTAL POLICY ANALYSIS § 504.5 Environmental assessments. (a) Every Commission action not specifically excluded under...
78 FR 17205 - Notice of Availability of Service Contract Inventories
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-20
... FEDERAL MARITIME COMMISSION Notice of Availability of Service Contract Inventories AGENCY: Federal Maritime Commission. ACTION: Notice of availability of service contract inventories. FOR FURTHER... Service Contract Inventory Analysis, the FY 2012 Service Contract Inventory, and the FY 2012 Service...
Checkland, Kath; Coleman, Anna; McDermott, Imelda; Segar, Julia; Miller, Rosalind; Petsoulas, Christina; Wallace, Andrew; Harrison, Stephen; Peckham, Stephen
2013-09-01
The current reorganisation of the English NHS is one of the most comprehensive ever seen. This study reports early evidence from the development of clinical commissioning groups (CCGs), a key element in the new structures. To explore the development of CCGs in the context of what is known from previous studies of GP involvement in commissioning. Case study analysis from sites chosen to provide maximum variety across a number of dimensions, from September 2011 to June 2012. A case study analysis was conducted using eight detailed qualitative case studies supplemented by descriptive information from web surveys at two points in time. Data collection involved observation of a variety of meetings, and interviews with key participants. Previous research shows that clinical involvement in commissioning is most effective when GPs feel able to act autonomously. Complicated internal structures, alongside developing external accountability relationships mean that CCGs' freedom to act may be subject to considerable constraint. Effective GP engagement is also important in determining outcomes of clinical commissioning, and there are a number of outstanding issues for CCGs, including: who feels 'ownership' of the CCG; how internal communication is conceptualised and realised; and the role and remit of locality groups. Previous incarnations of GP-led commissioning have tended to focus on local and primary care services. CCGs are keen to act to improve quality in their constituent practices, using approaches that many developed under practice-based commissioning. Constrained managerial support and the need to maintain GP engagement may have an impact. CCGs are new organisations, faced with significant new responsibilities. This study provides early evidence of issues that CCGs and those responsible for CCG development may wish to address.
Checkland, Kath; Coleman, Anna; McDermott, Imelda; Segar, Julia; Miller, Rosalind; Petsoulas, Christina; Wallace, Andrew; Harrison, Stephen; Peckham, Stephen
2013-01-01
Background The current reorganisation of the English NHS is one of the most comprehensive ever seen. This study reports early evidence from the development of clinical commissioning groups (CCGs), a key element in the new structures. Aim To explore the development of CCGs in the context of what is known from previous studies of GP involvement in commissioning. Design and setting Case study analysis from sites chosen to provide maximum variety across a number of dimensions, from September 2011 to June 2012. Method A case study analysis was conducted using eight detailed qualitative case studies supplemented by descriptive information from web surveys at two points in time. Data collection involved observation of a variety of meetings, and interviews with key participants. Results Previous research shows that clinical involvement in commissioning is most effective when GPs feel able to act autonomously. Complicated internal structures, alongside developing external accountability relationships mean that CCGs’ freedom to act may be subject to considerable constraint. Effective GP engagement is also important in determining outcomes of clinical commissioning, and there are a number of outstanding issues for CCGs, including: who feels ‘ownership’ of the CCG; how internal communication is conceptualised and realised; and the role and remit of locality groups. Previous incarnations of GP-led commissioning have tended to focus on local and primary care services. CCGs are keen to act to improve quality in their constituent practices, using approaches that many developed under practice-based commissioning. Constrained managerial support and the need to maintain GP engagement may have an impact. Conclusion CCGs are new organisations, faced with significant new responsibilities. This study provides early evidence of issues that CCGs and those responsible for CCG development may wish to address. PMID:23998841
An Instructional Management Guide to Cost Effective Decision Making,
1974-01-01
Journal, 1968, 34. United States Congress. To improve learning. Report by the Commission on Instructional Technology of the Commission on Education...of cost-benefit analysis and cost-effectiveness analysis has been recognized in public and private education. It was reported by the Commission on...instructional decision-making, however, is a more formidable task. One difficulty is that although much has been reported on the advantages of
ERIC Educational Resources Information Center
Association for Education in Journalism and Mass Communication.
The Commission on the Status of Women section of the proceedings contains the following 8 selected papers: "Perverse Public Panoptican: Content Analysis of Male and Female Patient Images in Reproductive Health News Reports" (Marie Dick); "The Olympic Ideal: A Content Analysis of the Coverage of Olympic Women's Sports in San…
Developing Benchmarks for Solar Radio Bursts
NASA Astrophysics Data System (ADS)
Biesecker, D. A.; White, S. M.; Gopalswamy, N.; Black, C.; Domm, P.; Love, J. J.; Pierson, J.
2016-12-01
Solar radio bursts can interfere with radar, communication, and tracking signals. In severe cases, radio bursts can inhibit the successful use of radio communications and disrupt a wide range of systems that rely on Position, Navigation, and Timing services on timescales ranging from minutes to hours across wide areas on the dayside of Earth. The White House's Space Weather Action Plan has asked for solar radio burst intensity benchmarks for an event occurrence frequency of 1 in 100 years and also a theoretical maximum intensity benchmark. The solar radio benchmark team was also asked to define the wavelength/frequency bands of interest. The benchmark team developed preliminary (phase 1) benchmarks for the VHF (30-300 MHz), UHF (300-3000 MHz), GPS (1176-1602 MHz), F10.7 (2800 MHz), and microwave (4000-20000 MHz) bands. The preliminary benchmarks were derived from previously published work. Limitations in the published work will be addressed in phase 2 of the benchmark process. In addition, deriving theoretical maxima requires further work to determine where doing so is even possible, in order to meet the Action Plan objectives. In this presentation, we will present the phase 1 benchmarks and the basis used to derive them. We will also present the work that needs to be done in order to complete the final, or phase 2, benchmarks.
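One simple way to frame a 1-in-100-year intensity benchmark is to extrapolate empirical exceedance rates from an event catalog. The Python sketch below fits a power-law tail in log-log space and evaluates it at an annual rate of 0.01; the catalog, tail size, and flux values are hypothetical, and the actual benchmark derivation relied on the published literature rather than this simple extrapolation.

```python
import numpy as np

def benchmark_intensity(intensities, years_observed, rate_per_year=0.01, tail=20):
    """Intensity whose exceedance rate equals rate_per_year, from a
    power-law fit to the empirical tail. intensities: peak fluxes of
    all catalogued bursts; years_observed: catalog span in years."""
    x = np.sort(np.asarray(intensities, dtype=float))[::-1]
    rates = np.arange(1, len(x) + 1) / years_observed  # rate of events >= x[i]
    k = min(tail, len(x))
    # fit log10(intensity) vs log10(rate) over the largest `tail` events
    slope, intercept = np.polyfit(np.log10(rates[:k]), np.log10(x[:k]), 1)
    return 10 ** (intercept + slope * np.log10(rate_per_year))

# Hypothetical catalog: 30 years of burst peak fluxes in solar flux units
rng = np.random.default_rng(0)
catalog = 10 ** rng.uniform(2.0, 5.0, size=120)
print(f"~1-in-100-year intensity: {benchmark_intensity(catalog, 30.0):.3g} sfu")
```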
Benchmarking in national health service procurement in Scotland.
Walker, Scott; Masson, Ron; Telford, Ronnie; White, David
2007-11-01
The paper reports the results of a study on benchmarking activities undertaken by the procurement organization within the National Health Service (NHS) in Scotland, namely National Procurement (previously Scottish Healthcare Supplies Contracts Branch). NHS performance is of course politically important, and benchmarking is increasingly seen as a means to improve performance, so the study was carried out to determine whether the current benchmarking approaches could be enhanced. A review of the benchmarking activities used by the private sector, local government and NHS organizations was carried out to establish a framework of the motivations, benefits, problems and costs associated with benchmarking. This framework was used to carry out the research through case studies and a questionnaire survey of NHS procurement organizations both in Scotland and other parts of the UK. Nine of the 16 Scottish Health Boards surveyed reported carrying out benchmarking during the last three years. The findings of the research were that there were similarities in approaches between local government and NHS Scotland, but differences between NHS Scotland and other UK NHS procurement organizations. Benefits were seen as significant, and it was recommended that National Procurement pursue the formation of a benchmarking group with members drawn from NHS Scotland and external benchmarking bodies to establish measures to be used in benchmarking across the whole of NHS Scotland.
Blecher, Evan
2010-08-01
To investigate the appropriateness of tax incidence (the percentage of the retail price occupied by taxes) benchmarking in low-income and middle-income countries (LMICs) with rapidly growing economies, and to explore the viability of an alternative tax policy rule based on the affordability of cigarettes. The paper outlines criticisms of tax incidence benchmarking, particularly in the context of LMICs. It then considers an affordability-based benchmark using the relative income price (RIP) as a measure of affordability. The RIP measures the percentage of annual per capita GDP required to purchase 100 packs of cigarettes. Using South Africa as a case study of an LMIC, future consumption is simulated using both tax incidence benchmarks and affordability benchmarks. I show that a tax incidence benchmark is not an optimal policy tool in South Africa and that an affordability benchmark could be a more effective means of reducing tobacco consumption in the future. Although a tax incidence benchmark was successful in increasing prices and reducing tobacco consumption in South Africa in the past, this approach has drawbacks, particularly in the context of a rapidly growing LMIC economy. An affordability benchmark represents an appropriate alternative that would be more effective in reducing future cigarette consumption.
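The RIP is straightforward to compute, and an affordability benchmark can be inverted to show the pack price it implies as income grows. A minimal sketch, with hypothetical prices and GDP figures:

```python
def relative_income_price(price_per_pack, gdp_per_capita):
    """RIP: percentage of annual per-capita GDP needed to buy 100 packs."""
    return 100.0 * (100 * price_per_pack) / gdp_per_capita

def price_for_target_rip(target_rip, gdp_per_capita):
    """Invert the RIP definition: the pack price an affordability
    benchmark implies at a given income level."""
    return (target_rip / 100.0) * gdp_per_capita / 100.0

# Hypothetical: holding a 5% RIP benchmark while per-capita GDP grows
for gdp in (6000.0, 7000.0):
    print(f"GDP {gdp:.0f}: implied pack price {price_for_target_rip(5.0, gdp):.2f}")
```

Holding the RIP constant while per-capita GDP grows forces the real price upward, which is the mechanism the paper argues makes affordability benchmarks more effective than tax-incidence benchmarks in fast-growing economies.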
Numerical Analysis of Base Flowfield for a Four-Engine Clustered Nozzle Configuration
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1995-01-01
Excessive base heating has been a problem for many launch vehicles. For certain designs such as the direct dump of turbine exhaust inside and at the lip of the nozzle, the potential burning of the turbine exhaust in the base region can be of great concern. Accurate prediction of the base environment at altitudes is therefore very important during the vehicle design phase. Otherwise, undesirable consequences may occur. In this study, the turbulent base flowfield of a cold flow experimental investigation for a four-engine clustered nozzle was numerically benchmarked using a pressure-based computational fluid dynamics (CFD) method. This is a necessary step before the benchmarking of hot flow and combustion flow tests can be considered. Since the medium was unheated air, reasonable prediction of the base pressure distribution at high altitude was the main goal. Several physical phenomena pertaining to the multiengine clustered nozzle base flow physics were deduced from the analysis.
Benchmarking of Improved DPAC Transient Deflagration Analysis Code
Laurinat, James E.; Hensel, Steve J.
2017-09-27
The deflagration pressure analysis code (DPAC) has been upgraded for use in modeling hydrogen deflagration transients. The upgraded code is benchmarked using data from vented hydrogen deflagration tests conducted at the HYDRO-SC Test Facility at the University of Pisa. DPAC originally was written to calculate peak pressures for deflagrations in radioactive waste storage tanks and process facilities at the Savannah River Site. Upgrades include the addition of a laminar flame speed correlation for hydrogen deflagrations and a mechanistic model for turbulent flame propagation, incorporation of inertial effects during venting, and inclusion of the effect of water vapor condensation on vessel walls. In addition, DPAC has been coupled with chemical equilibrium with applications (CEA), a NASA combustion chemistry code. The deflagration tests are modeled as end-to-end deflagrations. As a result, the improved DPAC code successfully predicts both the peak pressures during the deflagration tests and the times at which the pressure peaks.
An evaluation of the accuracy and speed of metagenome analysis tools
Lindgreen, Stinus; Adair, Karen L.; Gardner, Paul P.
2016-01-01
Metagenome studies are becoming increasingly widespread, yielding important insights into microbial communities covering diverse environments from terrestrial and aquatic ecosystems to human skin and gut. With the advent of high-throughput sequencing platforms, the use of large scale shotgun sequencing approaches is now commonplace. However, a thorough independent benchmark comparing state-of-the-art metagenome analysis tools is lacking. Here, we present a benchmark where the most widely used tools are tested on complex, realistic data sets. Our results clearly show that the most widely used tools are not necessarily the most accurate, that the most accurate tool is not necessarily the most time consuming, and that there is a high degree of variability between available tools. These findings are important as the conclusions of any metagenomics study are affected by errors in the predicted community composition and functional capacity. Data sets and results are freely available from http://www.ucbioinformatics.org/metabenchmark.html PMID:26778510
Characteristic and factors of competitive maritime industry clusters in Indonesia
NASA Astrophysics Data System (ADS)
Marlyana, N.; Tontowi, A. E.; Yuniarto, H. A.
2017-12-01
Indonesia is situated in a strategic position between two oceans and is therefore identified as a maritime state. This fact opens a big opportunity to build a competitive maritime industry. However, the potential factors that would boost a competitive maritime industry still need to be explored. The objective of this paper is therefore to determine the main characteristics and potential factors of a competitive maritime industry cluster. A qualitative analysis based on a literature review was carried out in two parts. First, a benchmarking analysis was conducted to distinguish the most relevant factors of maritime clusters in several countries in Europe (Norway, Spain, South West of England) and Asia (China, South Korea, Malaysia). Seven key dimensions are used for this benchmarking. Second, the competitiveness of maritime clusters in Indonesia was diagnosed through a reconceptualization of Porter's Diamond model. Four interlinked advanced factors within and between companies in the clusters were identified, which can be influenced in a proactive way by government.
Benchmarking of Improved DPAC Transient Deflagration Analysis Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurinat, James E.; Hensel, Steve J.
The deflagration pressure analysis code (DPAC) has been upgraded for use in modeling hydrogen deflagration transients. The upgraded code is benchmarked using data from vented hydrogen deflagration tests conducted at the HYDRO-SC Test Facility at the University of Pisa. DPAC originally was written to calculate peak pressures for deflagrations in radioactive waste storage tanks and process facilities at the Savannah River Site. Upgrades include the addition of a laminar flame speed correlation for hydrogen deflagrations and a mechanistic model for turbulent flame propagation, incorporation of inertial effects during venting, and inclusion of the effect of water vapor condensation on vessel walls. In addition, DPAC has been coupled with chemical equilibrium with applications (CEA), a NASA combustion chemistry code. The deflagration tests are modeled as end-to-end deflagrations. As a result, the improved DPAC code successfully predicts both the peak pressures during the deflagration tests and the times at which the pressure peaks.
Simulated annealing with probabilistic analysis for solving traveling salesman problems
NASA Astrophysics Data System (ADS)
Hong, Pei-Yee; Lim, Yai-Fung; Ramli, Razamin; Khalid, Ruzelan
2013-09-01
Simulated Annealing (SA) is a widely used meta-heuristic inspired by the annealing process of recrystallization in metals, and its efficiency is highly affected by the annealing schedule. In this paper, we therefore present an empirical study to identify a good annealing schedule for solving symmetric traveling salesman problems (TSP). A randomized complete block design is also used in this study. The results show that different parameters do affect the efficiency of SA, and we thus propose the best-found annealing schedule based on the Post Hoc test. SA was tested on seven selected benchmark problems of symmetric TSP with the proposed annealing schedule. The performance of SA was evaluated empirically against benchmark solutions, with a simple analysis to validate the quality of the solutions. Computational results show that the proposed annealing schedule provides solutions of good quality.
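A minimal sketch of the approach described, simulated annealing for the symmetric TSP with a geometric cooling schedule and 2-opt moves, is shown below; the schedule parameters (initial temperature, cooling rate, iterations per temperature) are illustrative defaults, not the tuned values proposed in the paper.

```python
import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing_tsp(dist, t0=100.0, alpha=0.95, iters_per_temp=200,
                            t_min=1e-3, seed=42):
    """SA for symmetric TSP: geometric cooling (T <- alpha * T) with
    2-opt segment-reversal moves; worse moves accepted with
    probability exp(-delta / T)."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, dist)
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            i, j = sorted(rng.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
            delta = tour_length(cand, dist) - tour_length(tour, dist)
            if delta < 0 or rng.random() < math.exp(-delta / t):
                tour = cand
                if tour_length(tour, dist) < best_len:
                    best, best_len = tour[:], tour_length(tour, dist)
        t *= alpha
    return best, best_len

# Tiny symmetric instance from 2-D points
pts = [(0, 0), (0, 3), (4, 3), (4, 0), (2, 5)]
d = [[math.dist(p, q) for q in pts] for p in pts]
print(simulated_annealing_tsp(d))
```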
Identifying key genes in glaucoma based on a benchmarked dataset and the gene regulatory network.
Chen, Xi; Wang, Qiao-Ling; Zhang, Meng-Hui
2017-10-01
The current study aimed to identify key genes in glaucoma based on a benchmarked dataset and a gene regulatory network (GRN). Local and global noise was added to the gene expression dataset to produce a benchmarked dataset. Differentially-expressed genes (DEGs) between patients with glaucoma and normal controls were identified utilizing the Linear Models for Microarray Data (Limma) package based on the benchmarked dataset. A total of 5 GRN inference methods, including Zscore, GeneNet, the context likelihood of relatedness (CLR) algorithm, Partial Correlation coefficient with Information Theory (PCIT) and GEne Network Inference with Ensemble of Trees (Genie3), were evaluated using receiver operating characteristic (ROC) and precision and recall (PR) curves. The inference method with the best performance was selected to construct the GRN. Subsequently, topological centrality analysis (degree, closeness and betweenness) was conducted to identify key genes in the GRN of glaucoma. Finally, the key genes were validated by performing reverse transcription-quantitative polymerase chain reaction (RT-qPCR). A total of 176 DEGs were detected from the benchmarked dataset. The ROC and PR curves of the 5 methods were analyzed and it was determined that Genie3 had a clear advantage over the other methods; thus, Genie3 was used to construct the GRN. Following topological centrality analysis, 14 key genes for glaucoma were identified, including IL6, EPHA2 and GSTT1, and 5 of these 14 key genes were validated by RT-qPCR. Therefore, the current study identified 14 key genes in glaucoma, which may be potential biomarkers for use in the diagnosis of glaucoma and may aid in identifying the molecular mechanism of this disease.
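The centrality step generalizes readily; a small sketch using the networkx library ranks genes by the sum of the three centralities named in the study. The edge list and the combined-score ranking here are hypothetical illustrations, not the study's inferred GRN or its exact scoring rule.

```python
import networkx as nx

def rank_hub_genes(edges, top_n=14):
    """Rank genes in a gene network by the three centralities used in
    the study (degree, closeness, betweenness), summed into one score.
    `edges` is an iterable of (gene_a, gene_b) pairs."""
    g = nx.Graph(edges)
    scores = {v: 0.0 for v in g}
    for centrality_fn in (nx.degree_centrality,
                          nx.closeness_centrality,
                          nx.betweenness_centrality):
        for gene, s in centrality_fn(g).items():
            scores[gene] += s
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Toy network; GENE_X is a placeholder node
toy = [("IL6", "EPHA2"), ("IL6", "GSTT1"), ("EPHA2", "GSTT1"), ("IL6", "GENE_X")]
print(rank_hub_genes(toy, top_n=3))
```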
Benchmarking: applications to transfusion medicine.
Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M
2012-10-01
Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Liang, Xiuying; Zhu, Chunyan
2017-11-01
With rising global emphasis on climate change and sustainable development, how to accelerate the transformation of energy efficiency has become an important question. Designing and implementing energy-efficiency policies for super-efficient products represents an important direction for achieving breakthroughs in the field of energy conservation. On December 31, 2014, China's National Development and Reform Commission (NDRC), jointly with six other ministerial agencies, launched the China Leading Energy Efficiency Program (LEP), which identifies top efficiency models for selected product categories. LEP sets the highest energy efficiency benchmark. The design of LEP took into consideration how best to motivate manufacturers to accelerate technical innovation and promote high-efficiency products. This paper explains the core elements of LEP, such as its objectives, selection criteria, implementation method and supportive policies. It also proposes recommendations to further improve LEP through international policy comparison with Japan's Top Runner Program, the U.S. Energy Star Most Efficient program, and the SEAD Global Efficiency Medal.
Standard Energy Efficiency Data Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheifetz, D. Magnus
2014-07-15
The SEED platform is expected to be a building energy performance data management tool that provides federal, state and local governments, building owners and operators with an easy, flexible and cost-effective method to collect information about groups of buildings, oversee compliance with energy disclosure laws and demonstrate the economic and environmental benefits of energy efficiency. It will allow users to leverage a local application to manage data disclosure and large data sets without the IT investment of developing custom applications. The first users of SEED will be agencies that need to collect, store, and report/share large data sets generated by benchmarking, energy auditing, retro-commissioning or retrofitting of many buildings. Similarly, building owners and operators will use SEED to manage their own energy data in a common format and centralized location. SEED users will also control the disclosure of their information for compliance requirements, recognition programs such as ENERGY STAR, or data sharing with the Buildings Performance Database and/or other third parties at their discretion.
Simulations of RF capture with barrier bucket in booster at injection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, C.J.
2012-01-23
As part of the effort to increase the number of ions per bunch in RHIC, a new scheme for RF capture of EBIS ions in Booster at injection has been developed. The scheme was proposed by M. Blaskiewicz and J.M. Brennan. It employs a barrier bucket to hold a half turn of beam in place during capture into two adjacent harmonic 4 buckets. After acceleration, this allows for 8 transfers of 2 bunches from Booster into 16 buckets on the AGS injection porch. During the Fall of 2011 the necessary hardware was developed and implemented by the RF and Controls groups. The scheme is presently being commissioned by K.L. Zeno with Au32+ ions from EBIS. In this note we carry out simulations of the RF capture. These are meant to serve as benchmarks for what can be achieved in practice. They also allow for an estimate of the longitudinal emittance of the bunches on the AGS injection porch.
Stroetmann, Karl A; Middleton, Blackford
2011-01-01
The European Union (EU)-sponsored ARGOS project analysed current eHealth policy thinking in both the EU and the USA, compared strategic challenges and outcomes in selected fields, and drafted roadmaps towards developing advanced global approaches for these issues. This policy brief focuses on better understanding the benefits and costs of eHealth investments, assessing their overall socio-economic impact, identifying challenges and success factors, as well as measuring and globally benchmarking the concrete usage of eHealth solutions. These are by now key policy priorities not only of national governments and the European Commission, but also of international institutions like the WHO or OECD. There is a strongly felt transatlantic need for stocktaking, identifying lessons learned, sharing of experience, and working together to advance these issues for the benefit of health systems. A growing number of national and international activities can be taken advantage of. Recommendations on how to proceed with such transatlantic activities are proposed.
Cairo consensus on the IVF laboratory environment and air quality: report of an expert meeting.
Mortimer, D; Cohen, J; Mortimer, S T; Fawzy, M; McCulloh, D H; Morbeck, D E; Pollet-Villard, X; Mansour, R T; Brison, D R; Doshi, A; Harper, J C; Swain, J E; Gilligan, A V
2018-03-02
This proceedings report presents the outcomes from an international Expert Meeting to establish a consensus on the recommended technical and operational requirements for air quality within modern assisted reproduction technology (ART) laboratories. Topics considered included design and construction of the facility, as well as its heating, ventilation and air conditioning system; control of particulates, micro-organisms (bacteria, fungi and viruses) and volatile organic compounds (VOCs) within critical areas; safe cleaning practices; operational practices to optimize air quality while minimizing physicochemical risks to gametes and embryos (temperature control versus air flow); and appropriate infection-control practices that minimize exposure to VOCs. More than 50 consensus points were established under the general headings of assessing site suitability, basic design criteria for new construction, and laboratory commissioning and ongoing VOC management. These consensus points should be considered as aspirational benchmarks for existing ART laboratories, and as guidelines for the construction of new ART laboratories. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Ice-volume-forced erosion of the Chinese Loess Plateau global Quaternary stratotype site.
Stevens, T; Buylaert, J-P; Thiel, C; Újvári, G; Yi, S; Murray, A S; Frechen, M; Lu, H
2018-03-07
The International Commission on Stratigraphy (ICS) utilises benchmark chronostratigraphies to divide geologic time. The reliability of these records is fundamental to understand past global change. Here we use the most detailed luminescence dating age model yet published to show that the ICS chronology for the Quaternary terrestrial type section at Jingbian, desert marginal Chinese Loess Plateau, is inaccurate. There are large hiatuses and depositional changes expressed across a dynamic gully landform at the site, which demonstrates rapid environmental shifts at the East Asian desert margin. We propose a new independent age model and reconstruct monsoon climate and desert expansion/contraction for the last ~250 ka. Our record demonstrates the dominant influence of ice volume on desert expansion, dust dynamics and sediment preservation, and further shows that East Asian Summer Monsoon (EASM) variation closely matches that of ice volume, but lags insolation by ~5 ka. These observations show that the EASM at the monsoon margin does not respond directly to precessional forcing.
The real time multi point Thomson scattering diagnostic at NSTX-U
NASA Astrophysics Data System (ADS)
Laggner, Florian; Kolemen, Egemen; Diallo, Ahmed; Leblanc, Benoit; Rozenblat, Roman; Tchilinguirian, Greg; NSTX-U Team
2017-10-01
This contribution presents the upgrade of the multi point Thomson scattering (MPTS) diagnostic for real time application. As a key diagnostic at NSTX-U, the MPTS diagnostic simultaneously measures the electron density (ne) and electron temperature (Te) profiles of a plasma discharge, giving direct access to the electron pressure of the plasma. Currently, only post-discharge evaluation of the data is available; however, since the plasma pressure is an important drive for instabilities, real time measurements of these quantities would be beneficial for plasma control. In a first step, ten MPTS channels were equipped with real time electronics, which improve the data acquisition rate by five orders of magnitude. The commissioning of the system is ongoing, and first benchmarks of the real time evaluation routines against the standard, post-discharge evaluation show promising results: the Te and ne profiles from both types of analysis agree within their uncertainties. This work was supported by the US Department of Energy under DE-SC0015878 and DE-SC0015480.
42 CFR 440.330 - Benchmark health benefits coverage.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...
42 CFR 440.330 - Benchmark health benefits coverage.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...
42 CFR 440.330 - Benchmark health benefits coverage.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...
42 CFR 440.330 - Benchmark health benefits coverage.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...
Edwards, Roger A; Dee, Deborah; Umer, Amna; Perrine, Cria G; Shealy, Katherine R; Grummer-Strawn, Laurence M
2014-02-01
A substantial proportion of US maternity care facilities engage in practices that are not evidence-based and that interfere with breastfeeding. The CDC Survey of Maternity Practices in Infant Nutrition and Care (mPINC) showed significant variation in maternity practices among US states. The purpose of this article is to use benchmarking techniques to identify states within relevant peer groups that were top performers on mPINC survey indicators related to breastfeeding support. We used 11 indicators of breastfeeding-related maternity care from the 2011 mPINC survey and benchmarking techniques to organize and compare hospital-based maternity practices across the 50 states and Washington, DC. We created peer categories for benchmarking first by region (grouping states by West, Midwest, South, and Northeast) and then by size (grouping states by the number of maternity facilities and dividing each region into approximately equal halves based on the number of facilities). Thirty-four states had scores high enough to serve as benchmarks, and 32 states had scores low enough to reflect the lowest score gap from the benchmark on at least 1 indicator. No state served as the benchmark on more than 5 indicators and no state was furthest from the benchmark on more than 7 indicators. The small peer group benchmarks in the South, West, and Midwest were better than the large peer group benchmarks on 91%, 82%, and 36% of the indicators, respectively. In the West large, the Midwest large, the Midwest small, and the South large peer groups, 4-6 benchmarks showed that less than 50% of hospitals have ideal practice in all states. The evaluation presents benchmarks for peer group state comparisons that provide potential and feasible targets for improvement.
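The benchmarking computation in the preceding study reduces to a group-wise maximum and a gap from that maximum. A minimal pandas sketch under assumed column names (state, peer_group, and one column per mPINC indicator), which are placeholders rather than the survey's actual field names:

```python
import pandas as pd

def peer_benchmarks(df):
    """For each (peer_group, indicator), the benchmark is the top
    state score; each state's gap is benchmark minus score."""
    long = df.melt(id_vars=["state", "peer_group"],
                   var_name="indicator", value_name="score")
    long["benchmark"] = (long.groupby(["peer_group", "indicator"])["score"]
                             .transform("max"))
    long["gap"] = long["benchmark"] - long["score"]
    return long

# Toy data: one hypothetical indicator across two peer groups
toy = pd.DataFrame({"state": ["OR", "WA", "TX", "GA"],
                    "peer_group": ["West-small"] * 2 + ["South-large"] * 2,
                    "rooming_in": [81, 74, 60, 68]})
print(peer_benchmarks(toy))
```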
Code of Federal Regulations, 2010 CFR
2010-04-01
..., including, but not limited to, those that both before and after the transaction remain under the functional...) Transactions that do not require an Appendix A analysis; 1 and 1 Inquiry Concerning the Commission's Merger...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madduri, Kamesh; Ediger, David; Jiang, Karl
2009-05-29
We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in the HPCS SSCA#2 Graph Analysis benchmark, which has been extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the ThreadStorm processor, and a single-socket Sun multicore server with the UltraSparc T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
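For reference, the serial algorithm such parallel kernels typically build on is Brandes' (2001) dependency-accumulation method; the compact Python sketch below is a textbook baseline for unweighted graphs, not the paper's lock-free parallel implementation.

```python
from collections import deque

def betweenness_centrality(adj):
    """Brandes' algorithm for unweighted graphs. adj maps each node to
    a list of neighbors. For an undirected graph each ordered pair is
    counted, so halve the scores if unordered-pair values are wanted."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}                 # BFS predecessors
        order, q = [], deque([s])
        while q:                                     # BFS from s
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):                    # dependency accumulation
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

toy = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
print(betweenness_centrality(toy))
```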
Source-term development for a contaminant plume for use by multimedia risk assessment models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whelan, Gene; McDonald, John P.; Taira, Randal Y.
1999-12-01
Multimedia modelers from the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: DOE's Multimedia Environmental Pollutant Assessment System (MEPAS), EPA's MMSOILS, EPA's PRESTO, and DOE's RESidual RADioactivity (RESRAD). These models represent typical analytically, semi-analytically, and empirically based tools that are utilized in human risk and endangerment assessments at installations containing radioactive and/or hazardous contaminants. Although the benchmarking exercise traditionally emphasizes the application and comparison of these models, the establishment of a Conceptual Site Model (CSM) should be viewed with equal importance. This paper reviews an approach for developing a CSM of an existing, real-world, Sr-90 plume at DOE's Hanford installation in Richland, Washington, for use in a multimedia-based benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. In an unconventional move for analytically based modeling, the benchmarking exercise will begin with the plume as the source of contamination. The source and release mechanism are developed and described within the context of performing a preliminary risk assessment utilizing these analytical models. By beginning with the plume as the source term, this paper reviews a typical process and procedure an analyst would follow in developing a CSM for use in a preliminary assessment using this class of analytical tool.
Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides
Nowell, Lisa H.; Norman, Julia E.; Ingersoll, Christopher G.; Moran, Patrick W.
2016-01-01
Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics of sediment, and uncertainty in TEB values. Additional evaluations of benchmarks in relation to sediment chemistry and toxicity are ongoing.
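Scoring a mixture as described above is then a one-line computation: sum the concentration-to-benchmark ratios over the detected pesticides. A small sketch with hypothetical concentrations and TEB values:

```python
def mixture_benchmark_quotient(concentrations, benchmarks):
    """Sum of benchmark quotients for a pesticide mixture in one
    sediment sample: sum_i C_i / B_i, where B_i is the TEB (or LEB)
    for pesticide i. Totals above 1 flag potential (or likely)
    toxicity. Names and numbers below are illustrative, not values
    from the paper's benchmark tables."""
    return sum(concentrations[p] / benchmarks[p]
               for p in concentrations if p in benchmarks)

sample = {"bifenthrin": 4.0, "chlorpyrifos": 1.5}   # hypothetical concentrations
teb = {"bifenthrin": 3.0, "chlorpyrifos": 5.0}      # hypothetical TEBs
print(mixture_benchmark_quotient(sample, teb))       # 4/3 + 1.5/5 = 1.63
```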
1987-12-01
occupation group, category (i.e., strength, loss, etc.), years of commissioned service (YCS), grade, occupation, source of commission, education, sex... [Sample of MCORP output: Occupation Group: All; Category: Strength; YCS: 01-09; Grade: All Unrestricted Officers; Occupation: All; Source: All; Education: All; Sex: All] ...source of commission, sex, MOS, GCT, and other pertinent variables such as the performance index. A Probit or Logit model could be utilized. The variables
40 CFR 141.172 - Disinfection profiling and benchmarking.
Code of Federal Regulations, 2011 CFR
2011-07-01
... benchmarking. 141.172 Section 141.172 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Disinfection-Systems Serving 10,000 or More People § 141.172 Disinfection profiling and benchmarking. (a... sanitary surveys conducted by the State. (c) Disinfection benchmarking. (1) Any system required to develop...
42 CFR 440.390 - Assurance of transportation.
Code of Federal Regulations, 2014 CFR
2014-10-01
...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...
42 CFR 440.390 - Assurance of transportation.
Code of Federal Regulations, 2012 CFR
2012-10-01
...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...
42 CFR 440.390 - Assurance of transportation.
Code of Federal Regulations, 2011 CFR
2011-10-01
...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...
42 CFR 440.390 - Assurance of transportation.
Code of Federal Regulations, 2010 CFR
2010-10-01
...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...
42 CFR 440.390 - Assurance of transportation.
Code of Federal Regulations, 2013 CFR
2013-10-01
...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...
The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.
ERIC Educational Resources Information Center
2002
This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)
Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Ye; Ma, Xiaosong; Liu, Qing Gary
2015-01-01
Parallel application benchmarks are indispensable for evaluating/optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, reconfigure, and often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.
Benchmarking in Academic Pharmacy Departments
Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O.; Ross, Leigh Ann
2010-01-01
Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation. PMID:21179251
Benchmarking in academic pharmacy departments.
Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann
2010-10-11
Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.
Benchmarking Commercial Reliability Practices.
1995-07-01
companies (70% of total), and to actually receive completed survey forms from 40 companies (60% of participants, 40% of total identified). Reliability... [Figure legend: A = FMEA, B = FTA, C = Thermal Analysis, D = Sneak Circuit Analysis, E = Worst-Case Circuit Analysis] ...Failure Modes and Effects Analysis (FMEA), will be conducted. c. Commercial companies specify the environmental conditions for their products. In doing
Dong, Nianbo; Lipsey, Mark W
2017-01-01
It is unclear whether propensity score analysis (PSA) based on pretest and demographic covariates will meet the ignorability assumption for replicating the results of randomized experiments. This study applies within-study comparisons to assess whether pre-Kindergarten (pre-K) treatment effects on achievement outcomes estimated using PSA based on a pretest and demographic covariates can approximate those found in a randomized experiment. Data: Four studies with samples of pre-K children each provided data on two math achievement outcome measures with baseline pretests and child demographic variables that included race, gender, age, language spoken at home, and mother's highest education. Research Design and Data Analysis: A randomized study of a pre-K math curriculum provided benchmark estimates of effects on achievement measures. Comparison samples from other pre-K studies were then substituted for the original randomized control and the effects were reestimated using PSA. The correspondence was evaluated using multiple criteria. Results: The effect estimates using PSA were in the same direction as the benchmark estimates, had similar but not identical statistical significance, and did not differ from the benchmarks at statistically significant levels. However, the magnitude of the effect sizes differed and displayed both absolute and relative bias larger than required to show statistical equivalence with formal tests, but those results were not definitive because of the limited statistical power. We conclude that treatment effect estimates based on a single pretest and demographic covariates in PSA correspond to those from a randomized experiment on the most general criteria for equivalence.
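For readers unfamiliar with the method, a minimal PSA sketch using inverse-probability weighting on a propensity score fit to pretest and demographic covariates is shown below; it illustrates the general estimator family, not the specific matching or weighting scheme used in the study, and the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_treatment_effect(X, treated, y):
    """Propensity score analysis via inverse-probability weighting:
    model P(treatment | covariates), then compare weighted outcome
    means between the treated and comparison groups."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    ps = np.clip(ps, 0.01, 0.99)          # trim extreme propensities
    w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))
    mu1 = np.average(y[treated == 1], weights=w[treated == 1])
    mu0 = np.average(y[treated == 0], weights=w[treated == 0])
    return mu1 - mu0

# Synthetic data: a pretest plus two demographic covariates
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
treated = (X[:, 0] + rng.normal(size=500) > 0).astype(int)  # selection on pretest
y = 2.0 * treated + X[:, 0] + rng.normal(size=500)          # true effect = 2
print(ipw_treatment_effect(X, treated, y))
```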
Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen
2017-01-01
Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children’s strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line positively affects third and fifth graders’ NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values would have a positive effect on younger children’s NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders’ NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children’s benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children’s age and familiarity with the number range, these additional external benchmarks might need to be labeled. PMID:28713302
Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen
2017-01-01
Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children's strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line positively affects third and fifth graders' NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values would have a positive effect on younger children's NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders' NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children's benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children's age and familiarity with the number range, these additional external benchmarks might need to be labeled.
ERIC Educational Resources Information Center
Leitner, Karl-Heinz; Prikoszovits, Julia; Schaffhauser-Linzatti, Michaela; Stowasser, Rainer; Wagner, Karin
2007-01-01
This paper explores the performance efficiency of natural and technical science departments at Austrian universities using Data Envelopment Analysis (DEA). We present DEA as an alternative tool for benchmarking and ranking the assignment of decision-making units (organisations and organisational units). The method applies a multiple input and…
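For concreteness, the input-oriented CCR envelopment form of DEA can be solved with one linear program per decision-making unit. The sketch below uses scipy's linprog with hypothetical department inputs and outputs; it is a textbook formulation, not the study's exact model specification.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, unit):
    """Input-oriented CCR DEA efficiency for one decision-making unit.
    X: (n_units, n_inputs), Y: (n_units, n_outputs). Solves
    min theta s.t. sum_j lam_j * x_j <= theta * x_unit,
                   sum_j lam_j * y_j >= y_unit, lam >= 0."""
    n, m = X.shape
    _, s = Y.shape
    c = np.r_[1.0, np.zeros(n)]                  # minimize theta
    a_in = np.c_[-X[unit].reshape(m, 1), X.T]    # inputs:  X^T lam - theta x0 <= 0
    a_out = np.c_[np.zeros((s, 1)), -Y.T]        # outputs: -Y^T lam <= -y0
    res = linprog(c, A_ub=np.vstack([a_in, a_out]),
                  b_ub=np.r_[np.zeros(m), -Y[unit]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun                               # efficiency score in (0, 1]

# Four hypothetical departments: inputs = (staff, budget), output = (papers,)
X = np.array([[10, 100], [8, 120], [12, 90], [9, 110]], dtype=float)
Y = np.array([[50], [45], [40], [60]], dtype=float)
print([round(dea_ccr_efficiency(X, Y, u), 3) for u in range(4)])
```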
Medical school benchmarking - from tools to programmes.
Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T
2015-02-01
Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.
42 CFR 457.430 - Benchmark-equivalent health benefits coverage.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark-equivalent health benefits coverage. 457... STATES State Plan Requirements: Coverage and Benefits § 457.430 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has...
42 CFR 457.430 - Benchmark-equivalent health benefits coverage.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark-equivalent health benefits coverage. 457... STATES State Plan Requirements: Coverage and Benefits § 457.430 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has...
42 CFR 457.430 - Benchmark-equivalent health benefits coverage.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark-equivalent health benefits coverage. 457... STATES State Plan Requirements: Coverage and Benefits § 457.430 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has...
42 CFR 440.335 - Benchmark-equivalent health benefits coverage.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark-equivalent health benefits coverage. 440.335 Section 440.335 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a...
42 CFR 440.335 - Benchmark-equivalent health benefits coverage.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 4 2014-10-01 2014-10-01 false Benchmark-equivalent health benefits coverage. 440.335 Section 440.335 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a...
NASA Technical Reports Server (NTRS)
Stewart, H. E.; Blom, R.; Abrams, M.; Daily, M.
1980-01-01
Satellite synthetic aperture radar (SAR) imagery is evaluated in terms of its geologic applications. The benchmark to which the SAR images are compared is LANDSAT, used for both structural and lithologic interpretations.
Uncertainty Quantification Techniques of SCALE/TSUNAMI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Mueller, Don
2011-01-01
The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as k_eff, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational bias and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. With TSUNAMI, however, correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel. In such cases, analysts sometimes rely on 'expert judgment' to select an additional administrative margin to account for the gap in the validation data or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.
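The propagation step described above, turning cross-section covariances and sensitivity coefficients into a response uncertainty, is commonly written as the "sandwich rule" var_rel(R) = S^T C S. The sketch below illustrates only that rule and is not the SCALE/TSUNAMI implementation; the sensitivity vector and covariance matrix are invented placeholders:

```python
import numpy as np

# Illustrative "sandwich rule": the relative variance of a response such as
# k_eff is S^T C S, where S holds relative sensitivity coefficients (% change
# in response per % change in each cross section) and C is the relative
# covariance matrix of the cross-section data. All values are invented.
S = np.array([0.45, -0.12, 0.08])          # sensitivities to 3 nuclide reactions
C = np.array([[4.0e-4, 1.0e-5, 0.0],
              [1.0e-5, 9.0e-4, 2.0e-5],
              [0.0,    2.0e-5, 2.5e-4]])   # relative covariance matrix

rel_var = S @ C @ S
print(f"relative uncertainty in k_eff: {np.sqrt(rel_var) * 100:.3f}%")
```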
Public Utility Commission manual for Section 210 of PURPA for Vermont
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The Public Utility Regulatory Policies Act of 1978 (PURPA) places obligations on both electric utilities and state regulatory commissions. PURPA requires every electric utility to purchase all energy and capacity made available to it by a qualifying facility, and to sell energy and capacity to a qualifying facility upon the qualifying facility's request. State regulatory commissions must implement and administer these utility obligations and other requirements that were implemented by the Federal Energy Regulatory Commission's (FERC) final rules, which became effective March 20, 1981, and must set fair rates for electric power purchases and sales between utilities and small power producers. This manual provides a concise, annotated explanation of the final FERC rules; a description of federal and state statutory authorizations; court challenges to these authorizations; analysis of the relationship between federal and state laws; analysis of Vermont's implementation of Section 210 of PURPA; and, for comparison, annotations of selected state regulatory authority decisions.
Public Utility Commission manual for Section 210 of PURPA for Montana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The Public Utility Regulatory Policies Act of 1978 (PURPA) places obligations on both electric utilities and state regulatory commissions. PURPA requires every electric utility to purchase all energy and capacity made available to it by a qualifying facility, and to sell energy and capacity to a qualifying facility upon the qualifying facility's request. State regulatory commissions must implement and administer these utility obligations and other requirements that were implemented by the Federal Energy Regulatory Commission's (FERC) final rules, which became effective March 20, 1981, and must set fair rates for electric power purchases and sales between utilities and small power producers. This manual provides a concise, annotated explanation of the final FERC rules; a description of federal and state statutory authorizations; court challenges to these authorizations; analysis of the relationship between federal and state laws; analysis of Montana's implementation of Section 210 of PURPA; and, for comparison, annotations of selected state regulatory authority decisions.
Public Utility Commission manual for Section 210 of PURPA for Arkansas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The Public Utility Regulatory Policies Act of 1978 (PURPA) places obligations on both electric utilities and state regulatory commissions. PURPA requires every electric utility to purchase all energy and capacity made available to it by a qualifying facility, and to sell energy and capacity to a qualifying facility upon the qualifying facility's request. State regulatory commissions must implement and administer these utility obligations and other requirements that were implemented by the Federal Energy Regulatory Commission's (FERC) final rules, which became effective March 20, 1981, and must set fair rates for electric power purchases and sales between utilities and small power producers. This manual provides a concise, annotated explanation of the final FERC rules; a description of federal and state statutory authorizations; court challenges to these authorizations; analysis of the relationship between federal and state laws; analysis of Arkansas' implementation of Section 210 of PURPA; and, for comparison, annotations of selected state regulatory authority decisions.
Benchmarking for Higher Education.
ERIC Educational Resources Information Center
Jackson, Norman, Ed.; Lund, Helen, Ed.
The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…
How Benchmarking and Higher Education Came Together
ERIC Educational Resources Information Center
Levy, Gary D.; Ronco, Sharron L.
2012-01-01
This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…
Benchmark Study of Global Clean Energy Manufacturing
Advanced Manufacturing Research | NREL
Through a first-of-its-kind benchmark study, … The study examined four clean energy technologies: wind turbine components…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zylstra, A. B.; Frenje, J. A.; Gatu Johnson, M.
Few-body nuclear physics often relies upon phenomenological models, with new ab initio efforts reported recently; both need high-quality benchmark data, particularly at low center-of-mass energies. We use high-energy-density plasmas to measure the proton spectra from 3He + T and 3He + 3He fusion. The data disagree with R-matrix predictions constrained by neutron spectra from T + T fusion. Here, we present a new analysis of the 3He + 3He proton spectrum; these benchmarked spectral shapes should be used for interpreting low-resolution data, such as solar fusion cross-section measurements.
NASA Astrophysics Data System (ADS)
Zylstra, A. B.; Frenje, J. A.; Gatu Johnson, M.; Hale, G. M.; Brune, C. R.; Bacher, A.; Casey, D. T.; Li, C. K.; McNabb, D.; Paris, M.; Petrasso, R. D.; Sangster, T. C.; Sayre, D. B.; Séguin, F. H.
2017-12-01
Few-body nuclear physics often relies upon phenomenological models, with new ab initio efforts reported recently; both need high-quality benchmark data, particularly at low center-of-mass energies. We use high-energy-density plasmas to measure the proton spectra from 3He + T and 3He + 3He fusion. The data disagree with R-matrix predictions constrained by neutron spectra from T + T fusion. We present a new analysis of the 3He + 3He proton spectrum; these benchmarked spectral shapes should be used for interpreting low-resolution data, such as solar fusion cross-section measurements.
NASA Astrophysics Data System (ADS)
Alloui, Mebarka; Belaidi, Salah; Othmani, Hasna; Jaidane, Nejm-Eddine; Hochlaf, Majdi
2018-03-01
We performed benchmark studies on the molecular geometry, electronic properties, and vibrational analysis of imidazole using semi-empirical, density functional theory, and post-Hartree-Fock methods. These studies validated the use of AM1 for the treatment of larger systems. We then treated the structural, physical, and chemical relationships for a series of imidazole derivatives acting as angiotensin II AT1 receptor blockers using AM1. QSAR studies were performed for these imidazole derivatives using a combination of various physicochemical descriptors. A multiple linear regression procedure was used to model the relationships between the molecular descriptors and the activity of the imidazole derivatives. The results validate the derived QSAR model.
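The regression step the abstract names, relating physicochemical descriptors to activity by multiple linear regression, can be sketched generically. The descriptors, activities, and column meanings below are hypothetical stand-ins, not the study's data:

```python
import numpy as np

# Hypothetical QSAR data: rows = imidazole derivatives, columns = descriptors
# (e.g., logP, molar refractivity, dipole moment); y = observed activity.
X = np.array([[2.1, 45.2, 3.8],
              [1.7, 42.9, 4.1],
              [2.8, 49.5, 3.2],
              [3.0, 51.0, 2.9],
              [2.4, 47.1, 3.5]])
y = np.array([6.2, 5.8, 7.1, 7.4, 6.6])   # e.g., invented pIC50 values

# Fit y ~ b0 + b1*x1 + b2*x2 + b3*x3 by ordinary least squares.
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

y_hat = A @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"coefficients: {coef.round(3)}, R^2 = {r2:.3f}")
```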
Zylstra, A. B.; Frenje, J. A.; Gatu Johnson, M.; ...
2017-11-29
Few-body nuclear physics often relies upon phenomenological models, with new ab initio efforts reported recently; both need high-quality benchmark data, particularly at low center-of-mass energies. We use high-energy-density plasmas to measure the proton spectra from 3He + T and 3He + 3He fusion. The data disagree with R-matrix predictions constrained by neutron spectra from T + T fusion. Here, we present a new analysis of the 3He + 3He proton spectrum; these benchmarked spectral shapes should be used for interpreting low-resolution data, such as solar fusion cross-section measurements.
The benchmark aeroelastic models program: Description and highlights of initial results
NASA Technical Reports Server (NTRS)
Bennett, Robert M.; Eckstrom, Clinton V.; Rivera, Jose A., Jr.; Dansberry, Bryan E.; Farmer, Moses G.; Durham, Michael H.
1991-01-01
An experimental effort in aeroelasticity, called the Benchmark Models Program, was implemented. The primary purpose of this program is to provide the data needed to evaluate computational fluid dynamics codes for aeroelastic analysis. It also focuses on increasing the understanding of the physics of unsteady flows and on providing data for empirical design. An overview of the program is given, and some results obtained in the initial tests are highlighted. The completed tests include measurement of unsteady pressures during flutter of a rigid wing with a NACA 0012 airfoil section and dynamic response measurements of a flexible rectangular wing with a thick circular-arc airfoil undergoing shock boundary-layer oscillations.
A Benchmark Problem for Development of Autonomous Structural Modal Identification
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Woodard, Stanley E.; Juang, Jer-Nan
1996-01-01
This paper summarizes modal identification results obtained using an autonomous version of the Eigensystem Realization Algorithm on a dynamically complex laboratory structure. The benchmark problem uses 48 of 768 free-decay responses measured in a complete modal survey test. The true modal parameters of the structure are well known from two previous, independent investigations. Without user involvement, the autonomous data analysis identified 24 to 33 structural modes with good to excellent accuracy in 62 seconds of CPU time (on a DEC Alpha 4000 computer). The modal identification technique described in the paper is the baseline algorithm for NASA's Autonomous Dynamics Determination (ADD) experiment, scheduled to fly on International Space Station assembly flights in 1997-1999.
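A bare-bones version of the (non-autonomous) Eigensystem Realization Algorithm can be sketched as follows; the paper's autonomous variant adds mode-selection and accuracy-assessment logic not shown here, and the Hankel dimensions and synthetic test signal below are arbitrary choices:

```python
import numpy as np

def era(response, n_modes, dt, rows=20, cols=20):
    """Minimal ERA: identify frequencies (Hz) and damping ratios from a
    sampled free-decay (or impulse) response."""
    # Hankel matrices built from the response samples.
    H0 = np.array([[response[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[response[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    r = 2 * n_modes                          # retained model order
    S_inv_sqrt = np.diag(1.0 / np.sqrt(s[:r]))
    A = S_inv_sqrt @ U[:, :r].T @ H1 @ Vt[:r].T @ S_inv_sqrt
    z = np.linalg.eigvals(A)                 # discrete-time poles
    lam = np.log(z) / dt                     # continuous-time poles
    freq = np.abs(lam) / (2 * np.pi)
    damp = -lam.real / np.abs(lam)
    keep = lam.imag > 0                      # one of each conjugate pair
    return freq[keep], damp[keep]

# Synthetic single-mode decay: 5 Hz, 2% damping, sampled at 100 Hz.
dt, wn, zeta = 0.01, 2 * np.pi * 5.0, 0.02
t = np.arange(100) * dt
y = np.exp(-zeta * wn * t) * np.sin(wn * np.sqrt(1 - zeta**2) * t)
print(era(y, n_modes=1, dt=dt))  # ~ (array([5.0]), array([0.02]))
```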
Benchmarking and performance analysis of the CM-2. [SIMD computer
NASA Technical Reports Server (NTRS)
Myers, David W.; Adams, George B., II
1988-01-01
A suite of benchmarking routines testing communication, basic arithmetic operations, and selected kernel algorithms, written in LISP and PARIS, was developed for the CM-2. Experiment runs are automated via a software framework that sequences individual tests, allowing unattended overnight operation. Multiple measurements are made and treated statistically to generate well-characterized results from the noisy values given by cm:time. The results obtained provide a comparison with similar, but less extensive, testing done on a CM-1. Tests were chosen to aid the algorithmist in constructing fast, efficient, and correct code on the CM-2, as well as to gain insight into what performance criteria are needed when evaluating parallel processing machines.
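The repeated-measurement approach described, timing each test many times and treating the noisy values statistically, can be mimicked generically. This sketch is a modern stand-in, not the original LISP/PARIS framework or cm:time:

```python
import statistics
import time

def time_repeatedly(fn, runs=30):
    """Run fn several times and summarize the noisy wall-clock timings,
    echoing the abstract's repeated-measurement idea."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

mean_s, std_s = time_repeatedly(lambda: sum(range(100_000)))
print(f"mean {mean_s * 1e3:.3f} ms, std {std_s * 1e3:.3f} ms")
```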
NASA Astrophysics Data System (ADS)
Emde, Claudia; Barlakas, Vasileios; Cornet, Céline; Evans, Frank; Wang, Zhen; Labonotte, Laurent C.; Macke, Andreas; Mayer, Bernhard; Wendisch, Manfred
2018-04-01
Initially unpolarized solar radiation becomes polarized by scattering in the Earth's atmosphere. Molecular (Rayleigh) scattering in particular polarizes electromagnetic radiation, but scattering by aerosols, cloud droplets (Mie scattering), and ice crystals does so as well. Each atmospheric constituent produces a characteristic polarization signal, so spectro-polarimetric measurements are frequently employed for remote sensing of aerosol and cloud properties. Retrieval algorithms require efficient radiative transfer models. Usually, these apply the plane-parallel approximation (PPA), assuming that the atmosphere consists of horizontally homogeneous layers, which allows the vector radiative transfer equation (VRTE) to be solved efficiently. For remote sensing applications, the radiance is considered constant over the instantaneous field of view of the instrument, and each sensor element is treated independently in the plane-parallel approximation, neglecting horizontal radiation transport between adjacent pixels (Independent Pixel Approximation, IPA). Estimating the errors due to the IPA requires three-dimensional (3D) vector radiative transfer models, of which only a few exist so far. Therefore, the International Polarized Radiative Transfer (IPRT) working group of the International Radiation Commission (IRC) has initiated a model intercomparison project to provide benchmark results for polarized radiative transfer. The group has already performed an intercomparison for one-dimensional (1D) multi-layer test cases [phase A, 1]. This paper presents the continuation of the intercomparison project (phase B) for 2D and 3D test cases: a step cloud, a cubic cloud, and a more realistic scenario including a 3D cloud field generated by a Large Eddy Simulation (LES) model and typical background aerosols. The commonly established benchmark results for 3D polarized radiative transfer are available at the IPRT website (http://www.meteo.physik.uni-muenchen.de/~iprt).
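As a minimal illustration of why Rayleigh scattering polarizes, the sketch below applies a Rayleigh scattering phase matrix to an unpolarized Stokes vector. Normalization conventions vary; the common (3/4) form with the depolarization factor neglected is assumed here, and this is illustrative only, unrelated to the benchmark codes compared in the paper:

```python
import numpy as np

def rayleigh_phase_matrix(theta):
    """Rayleigh scattering phase matrix for a Stokes vector (I, Q, U, V),
    (3/4) normalization, depolarization factor neglected."""
    c, s2 = np.cos(theta), np.sin(theta) ** 2
    return 0.75 * np.array([[1 + c**2, -s2,      0,     0],
                            [-s2,      1 + c**2, 0,     0],
                            [0,        0,        2 * c, 0],
                            [0,        0,        0,     2 * c]])

unpolarized = np.array([1.0, 0.0, 0.0, 0.0])
scattered = rayleigh_phase_matrix(np.pi / 2) @ unpolarized
print(scattered)  # at 90 degrees: |Q| = I, i.e., fully polarized light
```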
Cross-industry benchmarking: is it applicable to the operating room?
Marco, A P; Hart, S
2001-01-01
The use of benchmarking has been growing in nonmedical industries. The concept is increasingly being applied to medicine as the industry strives to improve quality and financial performance. Benchmarks can be either internal (set by the institution) or external (using others' performance as a goal). In some industries, benchmarking has crossed industry lines to identify breakthroughs in thinking. In this article, we examine whether the airline industry can be used as a source of external process benchmarking for the operating room.
Overview of TPC Benchmark E: The Next Generation of OLTP Benchmarks
NASA Astrophysics Data System (ADS)
Hogan, Trish
Set to replace the aging TPC-C, the TPC Benchmark E is the next generation OLTP benchmark, which more accurately models client database usage. TPC-E addresses the shortcomings of TPC-C. It has a much more complex workload, requires the use of RAID-protected storage, generates much less I/O, and is much cheaper and easier to set up, run, and audit. After a period of overlap, it is expected that TPC-E will become the de facto OLTP benchmark.
2011-03-01
for operating surface ships at sea, including managing onboard systems and personnel. They are the "ship drivers" of the fleet. The peak point of a...all healthy graduates are commissioned into the Navy's unrestricted line (URL), unhealthy graduates are generally commissioned into the restricted...and professional development are major drivers of retention in the Navy. In order to understand the correlation between compensation and retention
Comparative study on gene set and pathway topology-based enrichment methods.
Bayerlová, Michaela; Jung, Klaus; Kramer, Frank; Klemm, Florian; Bleckmann, Annalen; Beißbarth, Tim
2015-10-22
Enrichment analysis is a popular approach to identify pathways or sets of genes that are significantly enriched in the context of differentially expressed genes. The traditional gene set enrichment approach treats a pathway as a simple gene list, disregarding any knowledge of gene or protein interactions. In contrast, the newer group of so-called pathway topology-based methods integrates the topological structure of a pathway into the analysis. We comparatively investigated gene set and pathway topology-based enrichment approaches, considering three gene set and four topological methods. These methods were compared in two extensive simulation studies and on a benchmark of 36 real datasets, providing the same pathway input data to all methods. In the benchmark data analysis, both types of methods showed a comparable ability to detect enriched pathways. The first simulation study was conducted with KEGG pathways, which showed considerable gene overlaps with each other. In this study with original KEGG pathways, none of the topology-based methods outperformed the gene set approach. Therefore, a second simulation study was performed on non-overlapping pathways created from unique gene IDs. Here, methods accounting for pathway topology reached higher accuracy than the gene set methods, although their sensitivity was lower. We conducted one of the first comprehensive comparative works evaluating gene set against pathway topology-based enrichment methods. The topological methods showed better performance in the simulation scenarios with non-overlapping pathways but were not conclusively better in the other scenarios. This suggests that a simple gene set approach might be sufficient to detect an enriched pathway under realistic circumstances. Nevertheless, more extensive studies and further benchmark data are needed to systematically evaluate these methods and to assess what gains and costs pathway topology information introduces into enrichment analysis. Both types of methods require further improvement to deal with the problem of pathway overlaps.
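The "simple gene list" approach contrasted above is typically an over-representation test on gene counts. Below is a minimal sketch using the hypergeometric distribution, with invented counts; this is a generic illustration, not one of the three gene set methods the study evaluated:

```python
from scipy.stats import hypergeom

# Over-representation test for one pathway: N genes measured in total,
# K of them in the pathway, n differentially expressed (DE), and k DE
# genes falling inside the pathway. Counts below are invented.
N, K, n, k = 20_000, 150, 400, 12

# P(X >= k) under random sampling: survival function evaluated at k - 1.
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p_value:.2e}")
```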
76 FR 2726 - Withdrawal of Regulatory Guide 1.154
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-14
... NUCLEAR REGULATORY COMMISSION [NRC-2011-0010] Withdrawal of Regulatory Guide 1.154 AGENCY: Nuclear Regulatory Commission. ACTION: Withdrawal of Regulatory Guide 1.154, ``Format and Content of Plant-Specific Pressurized Thermal Shock Safety Analysis Reports for Pressurized Water Reactors.'' FOR FURTHER INFORMATION CONTACT: Mekonen M. Bayssie,...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-18
... NUCLEAR REGULATORY COMMISSION [NRC-2011-0109] NUREG/CR-XXXX, Development of Quantitative Software..., ``Development of Quantitative Software Reliability Models for Digital Protection Systems of Nuclear Power Plants... of Risk Analysis, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Will, M.E.; Suter, G.W. II
1995-09-01
An important step in ecological risk assessments is screening the chemicals occurring on a site for contaminants of potential concern. Screening may be accomplished by comparing reported ambient concentrations to a set of toxicological benchmarks. Multiple endpoints have been established for assessing the risks posed by soil-borne contaminants to organisms directly impacted by them. This report presents benchmarks for soil invertebrates and microbial processes and addresses only chemicals found at United States Department of Energy (DOE) sites. No benchmarks for pesticides are presented. After discussing methods, this report presents the results of the literature review and benchmark derivation for toxicity to earthworms (Sect. 3), heterotrophic microbes and their processes (Sect. 4), and other invertebrates (Sect. 5). The final sections compare the benchmarks to other criteria and background values and draw conclusions concerning the utility of the benchmarks.
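The screening step described, comparing reported ambient concentrations against toxicological benchmarks, amounts to a per-analyte threshold comparison. The benchmark and concentration values below are invented placeholders, not the report's derived benchmarks:

```python
# Screening for contaminants of potential concern: flag any analyte whose
# reported ambient soil concentration exceeds its toxicological benchmark.
# All values are invented placeholders (mg/kg).
benchmarks = {"zinc": 200.0, "copper": 60.0, "nickel": 30.0}
measured = {"zinc": 310.0, "copper": 12.0, "nickel": 45.0}

flagged = {analyte: conc for analyte, conc in measured.items()
           if analyte in benchmarks and conc > benchmarks[analyte]}
print(flagged)  # {'zinc': 310.0, 'nickel': 45.0}
```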