Science.gov

Sample records for aer benchmark specification

  1. Simple Benchmark Specifications for Space Radiation Protection

    NASA Technical Reports Server (NTRS)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specifications start with simple, monoenergetic, mono-directional particles on slabs and progress to human models in spacecraft. This report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  2. Benchmark Generation using Domain Specific Modeling

    SciTech Connect

    Bui, Ngoc B.; Zhu, Liming; Gorton, Ian; Liu, Yan

    2007-08-01

    Performance benchmarks are domain-specific applications that are specialized to a certain set of technologies and platforms. The development of a benchmark application requires mapping the performance-specific domain concepts to an implementation and producing complex technology- and platform-specific code. Domain Specific Modeling (DSM) promises to bridge the gap between application domains and implementations by allowing designers to specify solutions in domain-specific abstractions and semantics through Domain Specific Languages (DSL). This allows a final implementation to be generated automatically from high-level models. The modeling and task-automation benefits obtained from this approach usually justify the upfront cost involved. This paper employs a DSM-based approach to create a new DSL, DSLBench, for benchmark generation. DSLBench and its associated code generation facilities allow the design and generation of a completely deployable benchmark application for performance testing from a high-level model. DSLBench is implemented using the Microsoft Domain Specific Language toolkit. It is integrated with the Visual Studio 2005 Team Suite as a plug-in to provide extra modeling capabilities for performance testing. We illustrate the approach using a case study based on .Net and C#.

  3. Specification for the VERA Depletion Benchmark Suite

    SciTech Connect

    Kim, Kang Seog

    2015-12-17

    The CASL neutronics simulator MPACT is under development for coupled neutronics and thermal-hydraulics (T-H) simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data. One alternative is to perform code-to-code comparisons on benchmark problems. In this study a depletion benchmark suite has been developed, and a detailed guideline has been provided for obtaining meaningful computational outcomes that can be used in validating the MPACT depletion capability.

  4. Benchmark specifications for EBR-II shutdown heat removal tests

    SciTech Connect

    Sofu, T.; Briggs, L. L.

    2012-07-01

    Argonne National Laboratory (ANL) is hosting an IAEA-coordinated research project on benchmark analyses of sodium-cooled fast reactor passive safety tests performed at the Experimental Breeder Reactor-II (EBR-II). The benchmark project involves analysis of a protected and an unprotected loss-of-flow test conducted during an extensive testing program within the framework of the U.S. Integral Fast Reactor program to demonstrate the inherent safety features of EBR-II as a pool-type, sodium-cooled fast reactor prototype. The project is intended to improve the participants' design and safety analysis capabilities for sodium-cooled fast reactors through validation and qualification of safety analysis codes and methods. This paper provides a description of the EBR-II tests included in the program and outlines the benchmark specifications being prepared to support the IAEA-coordinated research project.

  5. Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms

    PubMed Central

    Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas

    2016-01-01

    Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context-specific reconstruction based on generic genome-scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context-specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or from arbitrary thresholding. This review describes and analyses common validation methods used for testing model-building algorithms. Two major methods can be distinguished: consistency testing and comparison-based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility of distinguishing between the signal and the background of non-specific binding of probes in a microarray experiment, and with whether distinct sets of input expressed genes corresponding to, e.g., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, or comparison with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640

  6. Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms.

    PubMed

    Pacheco, Maria P; Pfau, Thomas; Sauter, Thomas

    2015-01-01

    Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context-specific reconstruction based on generic genome-scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context-specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or from arbitrary thresholding. This review describes and analyses common validation methods used for testing model-building algorithms. Two major methods can be distinguished: consistency testing and comparison-based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility of distinguishing between the signal and the background of non-specific binding of probes in a microarray experiment, and with whether distinct sets of input expressed genes corresponding to, e.g., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, or comparison with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms.

  7. Benchmark characterization

    NASA Technical Reports Server (NTRS)

    Conte, Thomas M.; Hwu, Wen-Mei W.

    1991-01-01

    An abstract system of benchmark characteristics is presented that makes it possible, at the beginning of the design stage, to design with benchmark performance in mind. The benchmark characteristics for a set of commonly used benchmarks are then shown. The benchmark set used includes some benchmarks from the Systems Performance Evaluation Cooperative (SPEC). The SPEC programs are industry-standard applications that use specific inputs. Processor, memory-system, and operating-system characteristics are addressed.

  8. Benchmark specifications and data requirements for initial modeling of the China experimental fast reactor.

    SciTech Connect

    Fanning, T. H.; Nuclear Engineering Division

    2010-06-04

    A specification is proposed for an initial transient benchmark analysis of the China Experimental Fast Reactor design based on the analysis capabilities of the SAS4A/SASSYS-1 code. For the initial benchmark, a single-channel protected transient overpower accident is defined. Reactivity feedback coefficients will not be required and simplified material properties are recommended. This report also describes the data required for developing the modeling input. This data includes assembly geometry, reactor power distributions, kinetics and decay heat data, and material properties. Comparisons of benchmark results will take place at a future SAS4A/SASSYS-1 training meeting planned to occur at Argonne National Laboratory. Future benchmark specifications will be planned to expand upon this initial model to include more complex reactivity feedback models, material properties, additional assembly geometry, and primary and intermediate coolant systems.

  9. Embedded Volttron specification - benchmarking small footprint compute device for Volttron

    SciTech Connect

    Sanyal, Jibonananda; Fugate, David L.; Woodworth, Ken; Nutaro, James J.; Kuruganti, Teja

    2015-08-17

    An embedded system is a small footprint computing unit that typically serves a specific purpose closely associated with measurements and control of hardware devices. These units are designed for reasonable durability and operations in a wide range of operating conditions. Some embedded systems support real-time operations and can demonstrate high levels of reliability. Many have failsafe mechanisms built to handle graceful shutdown of the device in exception conditions. The available memory, processing power, and network connectivity of these devices are limited due to the nature of their specific-purpose design and intended application. Industry practice is to carefully design the software for the available hardware capability to suit desired deployment needs. Volttron is an open source agent development and deployment platform designed to enable researchers to interact with devices and appliances without having to write drivers themselves. Hosting Volttron on small footprint embeddable devices enables its demonstration for embedded use. This report details the steps required and the experience in setting up and running Volttron applications on three small footprint devices: the Intel Next Unit of Computing (NUC), the Raspberry Pi 2, and the BeagleBone Black. In addition, the report also details preliminary investigation of the execution performance of Volttron on these devices.

  10. Incorporating specificity into optimization: evaluation of SPA using CSAR 2014 and CASF 2013 benchmarks.

    PubMed

    Yan, Zhiqiang; Wang, Jin

    2016-03-01

    Scoring functions of protein-ligand interactions are widely used in computational docking software and structure-based drug discovery. Accurate prediction of the binding energy between the protein and the ligand is the main task of the scoring function. The accuracy of a scoring function is normally evaluated by testing it on benchmarks of protein-ligand complexes. In this work, we report an evaluation analysis of an improved version of the scoring function SPecificity and Affinity (SPA). Testing on two independent benchmarks, Community Structure-Activity Resource (CSAR) 2014 and Comparative Assessment of Scoring Functions (CASF) 2013, shows that SPA is relatively more accurate than the other compared scoring functions in predicting the interactions between the protein and the ligand. We conclude that the inclusion of specificity in the optimization can effectively suppress competing states on the funnel-like binding energy landscape and make SPA more accurate in identifying the "native" conformation and scoring the binding decoys. The evaluation of SPA highlights the importance of binding specificity in improving the accuracy of scoring functions.
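
    The role of specificity can be made concrete with a toy calculation. The Python sketch below scores specificity as the Z-score separation between the native pose and a decoy ensemble; this is a generic energy-landscape measure, not SPA's actual functional form, and the function name and numbers are illustrative.

      import statistics

      def specificity_z(native_score, decoy_scores):
          # Z-score gap between the native pose and the decoy ensemble;
          # with lower scores meaning tighter binding, a more negative Z
          # indicates a more discriminative (specific) scoring function.
          mu = statistics.mean(decoy_scores)
          sd = statistics.stdev(decoy_scores)
          return (native_score - mu) / sd

      decoys = [-5.1, -4.8, -6.0, -5.5, -4.2, -5.9]
      print(f"Z = {specificity_z(-9.3, decoys):.2f}")  # well-separated native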

  11. Reactor Physics Measurements and Benchmark Specifications for Oak Ridge Highly Enriched Uranium Sphere (ORSphere)

    SciTech Connect

    Marshall, Margaret A.

    2014-11-04

    In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an effort to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. Additionally, various material reactivity worths, the surface material worth coefficient, the delayed neutron fraction, the prompt neutron decay constant, relative fission density, and relative neutron importance were all measured. The critical assembly, material reactivity worths, the surface material worth coefficient, and the delayed neutron fraction were all evaluated as benchmark experiment measurements. The reactor physics measurements are the focus of this paper, although for clarity the critical assembly benchmark specifications are briefly discussed.

  12. Reactor Physics Measurements and Benchmark Specifications for Oak Ridge Highly Enriched Uranium Sphere (ORSphere)

    DOE PAGES

    Marshall, Margaret A.

    2014-11-04

    In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an effort to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. Additionally, various material reactivity worths, the surface material worth coefficient, the delayed neutron fraction, the prompt neutron decay constant, relative fission density, and relative neutron importance were all measured. The critical assembly, material reactivity worths, the surface material worth coefficient, and the delayed neutron fraction were all evaluated as benchmark experiment measurements. The reactor physics measurements are the focus of this paper, although for clarity the critical assembly benchmark specifications are briefly discussed.

  13. Weldability of AerMet 100

    SciTech Connect

    Kautz, D.D.; Hoffman, D.E.; Westrich, C.N.

    1991-01-01

    Several test welds were made on AerMet 100 alloy. Both electron beam and pulsed Nd:YAG laser beam welding processes were used to make the welds. All welds were satisfactory, with no cracking or porosity noted in weld cross-sections. 2 refs., 6 figs.

  14. On algorithmic rate-coded AER generation.

    PubMed

    Linares-Barranco, Alejandro; Jimenez-Moreno, Gabriel; Linares-Barranco, Bernabé; Civit-Balcells, Antón

    2006-05-01

    This paper addresses the problem of converting a conventional video stream based on sequences of frames into the spike event-based representation known as the address-event representation (AER). In this paper we concentrate on rate-coded AER. The problem is addressed as an algorithmic one, in which different methods are proposed, implemented and tested through software algorithms. The proposed algorithms are comparatively evaluated according to different criteria. Emphasis is put on the potential of such algorithms for (a) performing the frame-based to event-based conversion in real time, and (b) producing event streams that resemble as much as possible those generated naturally by rate-coded address-event VLSI chips, such as silicon AER retinae. It is found that simple and straightforward algorithms tend to have high potential for real time but produce event distributions that differ considerably from those obtained in AER VLSI chips. On the other hand, sophisticated algorithms that yield better event distributions are not efficient for real-time operation. The methods based on linear-feedback-shift-register (LFSR) pseudorandom number generation are a good compromise: they are feasible in real time and yield reasonably well-distributed events in time. Our software experiments, on a 1.6-GHz Pentium IV, show that at 50% AER bus load the proposed algorithms require between 0.011 and 1.14 ms per 8-bit pixel per frame. One of the proposed LFSR methods is implemented in real-time hardware using a prototyping board that includes a VirtexE 300 FPGA. The demonstration hardware is capable of transforming frames of 64 x 64 pixels of 8-bit depth at a frame rate of 25 frames per second, producing spike events at a peak rate of 10^7 events per second. PMID:16722179
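
    As a rough illustration of the LFSR approach, the following Python sketch converts one 8-bit frame into rate-coded address events, using a 16-bit Galois LFSR as the pseudorandom source. The 256-slot frame period, the (slot, x, y) event format, and both function names are assumptions for illustration, not the paper's implementation.

      def lfsr16(seed=0xACE1):
          # 16-bit Galois LFSR (taps 16,14,13,11 -> mask 0xB400), period 2^16 - 1
          state = seed
          while True:
              lsb = state & 1
              state >>= 1
              if lsb:
                  state ^= 0xB400
              yield state

      def frame_to_aer(frame, slots=256):
          # Each pixel with value v in 0..255 fires in a given slot with
          # probability ~v/256, i.e. about v events per frame, with the
          # LFSR spreading the events pseudorandomly in time.
          rng = lfsr16()
          events = []
          for t in range(slots):
              for y, row in enumerate(frame):
                  for x, v in enumerate(row):
                      if (next(rng) & 0xFF) < v:
                          events.append((t, x, y))
          return events

      print(len(frame_to_aer([[0, 128], [255, 64]])))  # ~447 events on average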

  15. An Approach to Industrial Stormwater Benchmarks: Establishing and Using Site-Specific Threshold Criteria at Lawrence Livermore National Laboratory

    SciTech Connect

    Campbell, C G; Mathews, S

    2006-09-07

    Current regulatory schemes use generic or industrial-sector-specific benchmarks to evaluate the quality of industrial stormwater discharges. While benchmarks can be a useful tool for facility stormwater managers in evaluating the quality of stormwater runoff, benchmarks typically do not take into account site-specific conditions such as soil chemistry, atmospheric deposition, seasonal changes in water source, and upstream land use. Failing to account for these factors may lead to unnecessary costs to trace a source of natural variation, or to missing a significant local water quality problem. Site-specific water quality thresholds, established through statistical evaluation of historic data, take these factors into account; they are a better tool for the direct evaluation of runoff quality and a more cost-effective trigger for investigating anomalous results. Lawrence Livermore National Laboratory (LLNL), a federal facility, established stormwater monitoring programs to comply with the requirements of the industrial stormwater permit and Department of Energy orders, which require the evaluation of the impact of effluent discharges on the environment. LLNL recognized the need for a tool to evaluate and manage stormwater quality that would allow analysts to identify trends in stormwater quality and recognize anomalous results so that trace-back and corrective actions could be initiated. LLNL created the site-specific water quality threshold tool to better understand the nature of the stormwater influent and effluent, to establish a technical basis for determining when facility operations might be impacting the quality of stormwater discharges, and to provide "action levels" to initiate follow-up to analytical results. The threshold criteria were based on a statistical analysis of the historic stormwater monitoring data and a review of relevant water quality objectives.
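
    A threshold of the kind described can be sketched in a few lines of Python. The example below derives an "action level" as the mean plus two standard deviations of historical results; the constituent, the data, and the mean-plus-2-sigma rule are illustrative assumptions, not LLNL's actual criteria.

      import statistics

      def site_threshold(history, k=2.0):
          # Illustrative site-specific action level: mean + k standard
          # deviations of historical monitoring results; a percentile of
          # the historical distribution is a common alternative.
          return statistics.mean(history) + k * statistics.stdev(history)

      zinc_mg_per_L = [0.04, 0.06, 0.05, 0.09, 0.07, 0.05, 0.08]  # hypothetical
      print(f"action level: {site_threshold(zinc_mg_per_L):.3f} mg/L")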

  16. Neutron Reference Benchmark Field Specification: ACRR Free-Field Environment (ACRR-FF-CC-32-CL).

    SciTech Connect

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.; Vehar, David W.

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity free-field reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  17. SMART - Small Motor AerRospace Technology

    NASA Astrophysics Data System (ADS)

    Balucani, M.; Crescenzi, R.; Ferrari, A.; Guarrea, G.; Pontetti, G.; Orsini, F.; Quattrino, L.; Viola, F.

    2004-11-01

    This paper presents the "SMART" (Small Motor AerRospace Technology) propulsion system, consisting of a microthruster array realised with semiconductor technology on silicon wafers. The SMART system is assembled by gluing together three main modules: combustion chambers, igniters and nozzles. The module was then filled with propellant and closed by gluing a piece of silicon wafer to the back side of the combustion chambers. The complete assembled module, composed of 25 micro-thrusters with a 3 x 5 nozzle, is presented. The measurements showed a thrust of 129 mN and an impulse of 56.8 mNs, burning about 70 mg of propellant, for the micro-thruster with a nozzle, and a thrust of 21 mN and an impulse of 8.4 mNs for the micro-thruster without a nozzle.

  18. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  19. A field-based method to derive macroinvertebrate benchmark for specific conductivity adapted for small data sets and demonstrated in the Hun-Tai River Basin, Northeast China.

    PubMed

    Zhao, Qian; Jia, Xiaobo; Xia, Rui; Lin, Jianing; Zhang, Yuan

    2016-09-01

    Ionic mixtures, measured as specific conductivity, have raised increasing concern because of their toxicity to aquatic organisms. However, identifying protective values of specific conductivity for aquatic organisms is challenging, given that laboratory test systems cannot examine the more salt-intolerant species or effects occurring in streams. Large data sets suitable for deriving field-based benchmarks are rarely available. In this study, a field-based method for small data sets was used to derive a specific conductivity benchmark, which is expected to prevent the extirpation of 95% of local taxa from circum-neutral to alkaline waters dominated by a mixture of SO4(2-) and HCO3(-) anions and other dissolved ions. To compensate for the smaller sample size, species-level analyses were combined with genus-level analyses. The benchmark is based on extirpation concentration (XC95) values of specific conductivity for 60 macroinvertebrate genera estimated from 296 sampling sites in the Hun-Tai River Basin. We derived the specific conductivity benchmark using a 2-point interpolation method, which yielded a benchmark of 249 μS/cm. Our study tailored the method developed by USEPA for deriving aquatic life benchmarks for specific conductivity to basin-scale application, and may provide useful information for water pollution control and management.
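
    The 2-point interpolation step can be illustrated directly: to protect 95% of local genera, the benchmark is taken at the 5th percentile of the sorted genus XC95 values, interpolating linearly between the two bracketing values. In the Python sketch below, the rank convention and the sample numbers are assumptions; the paper's exact weighting may differ.

      def hc05(xc95_values):
          # 5th percentile of sorted XC95 values by linear (2-point)
          # interpolation; with 60 genera this falls between the 3rd
          # and 4th smallest values.
          v = sorted(xc95_values)
          pos = 0.05 * (len(v) - 1)   # 0-based fractional rank
          lo = int(pos)
          frac = pos - lo
          return v[lo] + frac * (v[lo + 1] - v[lo])

      xc95 = [210, 235, 248, 252, 280, 305, 330, 360, 400, 455]  # hypothetical
      print(f"benchmark: {hc05(xc95):.0f} uS/cm")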

  20. A field-based method to derive macroinvertebrate benchmark for specific conductivity adapted for small data sets and demonstrated in the Hun-Tai River Basin, Northeast China.

    PubMed

    Zhao, Qian; Jia, Xiaobo; Xia, Rui; Lin, Jianing; Zhang, Yuan

    2016-09-01

    Ionic mixtures, measured as specific conductivity, have raised increasing concern because of their toxicity to aquatic organisms. However, identifying protective values of specific conductivity for aquatic organisms is challenging, given that laboratory test systems cannot examine the more salt-intolerant species or effects occurring in streams. Large data sets suitable for deriving field-based benchmarks are rarely available. In this study, a field-based method for small data sets was used to derive a specific conductivity benchmark, which is expected to prevent the extirpation of 95% of local taxa from circum-neutral to alkaline waters dominated by a mixture of SO4(2-) and HCO3(-) anions and other dissolved ions. To compensate for the smaller sample size, species-level analyses were combined with genus-level analyses. The benchmark is based on extirpation concentration (XC95) values of specific conductivity for 60 macroinvertebrate genera estimated from 296 sampling sites in the Hun-Tai River Basin. We derived the specific conductivity benchmark using a 2-point interpolation method, which yielded a benchmark of 249 μS/cm. Our study tailored the method developed by USEPA for deriving aquatic life benchmarks for specific conductivity to basin-scale application, and may provide useful information for water pollution control and management. PMID:27389551

  21. Benchmark Studies of Induced Radioactivity Produced in LHC Materials, Pt II Specific Activities

    SciTech Connect

    Brugger, M.; Mayer, S.; Roesler, S.; Ulrici, L.; Khater, H.; Prinz, A.; Vincke, H.

    2006-04-12

    A new method to estimate remanent dose rates, to be used with the Monte Carlo code FLUKA, was benchmarked against measurements from an experiment performed at the CERN-EU high-energy reference field facility. An extensive collection of samples of different materials was placed downstream of and lateral to a copper target intercepting a positively charged mixed hadron beam with a momentum of 120 GeV/c. Emphasis was put on the reduction of uncertainties through careful monitoring of the irradiation parameters, the use of different instruments to measure dose rates, detailed elemental analyses of the irradiated materials and detailed simulations of the irradiation experiment. Measured and calculated dose rates are in good agreement.

  22. Comprehensive benchmarking reveals H2BK20 acetylation as a distinctive signature of cell-state-specific enhancers and promoters.

    PubMed

    Kumar, Vibhor; Rayan, Nirmala Arul; Muratani, Masafumi; Lim, Stefan; Elanggovan, Bavani; Xin, Lixia; Lu, Tess; Makhija, Harshyaa; Poschmann, Jeremie; Lufkin, Thomas; Ng, Huck Hui; Prabhakar, Shyam

    2016-05-01

    Although over 35 different histone acetylation marks have been described, the overwhelming majority of regulatory genomics studies focus exclusively on H3K27ac and H3K9ac. In order to identify novel epigenomic traits of regulatory elements, we constructed a benchmark set of validated enhancers by performing 140 enhancer assays in human T cells. We tested 40 chromatin signatures on this unbiased enhancer set and identified H2BK20ac, a little-studied histone modification, as the most predictive mark of active enhancers. Notably, we detected a novel class of functionally distinct enhancers enriched in H2BK20ac but lacking H3K27ac, which was present in all examined cell lines and also in embryonic forebrain tissue. H2BK20ac was also unique in highlighting cell-type-specific promoters. In contrast, other acetylation marks were present in all active promoters, regardless of cell-type specificity. In stimulated microglial cells, H2BK20ac was more correlated with cell-state-specific expression changes than H3K27ac, with TGF-beta signaling decoupling the two acetylation marks at a subset of regulatory elements. In summary, our study reveals a previously unknown connection between histone acetylation and cell-type-specific gene regulation and indicates that H2BK20ac profiling can be used to uncover new dimensions of gene regulation. PMID:26957309

  23. Transient Inhibition of FGFR2b-Ligands Signaling Leads to Irreversible Loss of Cellular β-Catenin Organization and Signaling in AER during Mouse Limb Development

    PubMed Central

    Tabatabai, Reza; Baptista, Sheryl; Tiozzo, Caterina; Carraro, Gianni; Wheeler, Matthew; Barreto, Guillermo; Braun, Thomas; Li, Xiaokun; Hajihosseini, Mohammad K.; Bellusci, Saverio

    2013-01-01

    The vertebrate limbs develop through coordinated series of inductive, growth and patterning events. Fibroblast Growth Factor receptor 2b (FGFR2b) signaling controls the induction of the Apical Ectodermal Ridge (AER) but its putative roles in limb outgrowth and patterning, as well as in AER morphology and cell behavior have remained unclear. We have investigated these roles through graded and reversible expression of soluble dominant-negative FGFR2b molecules at various times during mouse limb development, using a doxycycline/transactivator/tet(O)-responsive system. Transient attenuation (≤24 hours) of FGFR2b-ligands signaling at E8.5, prior to limb bud induction, leads mostly to the loss or truncation of proximal skeletal elements with less severe impact on distal elements. Attenuation from E9.5 onwards, however, has an irreversible effect on the stability of the AER, resulting in a progressive loss of distal limb skeletal elements. The primary consequences of FGFR2b-ligands attenuation is a transient loss of cell adhesion and down-regulation of P63, β1-integrin and E-cadherin, and a permanent loss of cellular β-catenin organization and WNT signaling within the AER. Combined, these effects lead to the progressive transformation of the AER cells from pluristratified to squamous epithelial-like cells within 24 hours of doxycycline administration. These findings show that FGFR2b-ligands signaling has critical stage-specific roles in maintaining the AER during limb development. PMID:24167544

  24. Benchmarking Deep Networks for Predicting Residue-Specific Quality of Individual Protein Models in CASP11

    PubMed Central

    Liu, Tong; Wang, Yiheng; Eickholt, Jesse; Wang, Zheng

    2016-01-01

    Quality assessment of a protein model aims to predict the absolute or relative quality of a protein model using computational methods before the native structure is available. Single-model methods need only one model as input and can predict the absolute residue-specific quality of an individual model. Here, we have developed four novel single-model methods (Wang_deep_1, Wang_deep_2, Wang_deep_3, and Wang_SVM) based on stacked denoising autoencoders (SdAs) and support vector machines (SVMs). We evaluated these four methods along with six other methods participating in CASP11 at the global and local levels using Pearson’s correlation coefficients and ROC analysis. As for residue-specific quality assessment, our four methods achieved better performance than most of the six other CASP11 methods in distinguishing reliably modeled residues from unreliable ones, as measured by ROC analysis; our SdA-based method Wang_deep_1 achieved the highest accuracy, 0.77, compared to the SVM-based methods and our ensemble of an SVM and SdAs. However, we found that Wang_deep_2 and Wang_deep_3, both based on an ensemble of multiple SdAs and an SVM, performed slightly better than Wang_deep_1 in terms of ROC analysis, indicating that integrating an SVM with deep networks works well by certain measures. PMID:26763289
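
    The two evaluation measures named above can be reproduced on toy data. The Python sketch below computes Pearson's r between predicted and observed per-residue errors and a ROC AUC for ranking unreliable residues; the 3.8 Å reliability cutoff and all numbers are illustrative assumptions, not CASP's scoring rules.

      import numpy as np
      from scipy.stats import pearsonr
      from sklearn.metrics import roc_auc_score

      predicted_err = np.array([1.2, 0.8, 3.5, 6.0, 2.1, 7.4])  # Angstroms
      observed_err = np.array([1.0, 1.1, 4.2, 5.1, 2.5, 8.0])

      r, _ = pearsonr(predicted_err, observed_err)

      # call a residue unreliable when its observed error exceeds a
      # cutoff (3.8 A here, an assumption); higher predicted error
      # should then rank unreliable residues first
      unreliable = (observed_err >= 3.8).astype(int)
      auc = roc_auc_score(unreliable, predicted_err)
      print(f"Pearson r = {r:.2f}, ROC AUC = {auc:.2f}")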

  25. Benchmarking Deep Networks for Predicting Residue-Specific Quality of Individual Protein Models in CASP11.

    PubMed

    Liu, Tong; Wang, Yiheng; Eickholt, Jesse; Wang, Zheng

    2016-01-01

    Quality assessment of a protein model aims to predict the absolute or relative quality of a protein model using computational methods before the native structure is available. Single-model methods need only one model as input and can predict the absolute residue-specific quality of an individual model. Here, we have developed four novel single-model methods (Wang_deep_1, Wang_deep_2, Wang_deep_3, and Wang_SVM) based on stacked denoising autoencoders (SdAs) and support vector machines (SVMs). We evaluated these four methods along with six other methods participating in CASP11 at the global and local levels using Pearson's correlation coefficients and ROC analysis. As for residue-specific quality assessment, our four methods achieved better performance than most of the six other CASP11 methods in distinguishing reliably modeled residues from unreliable ones, as measured by ROC analysis; our SdA-based method Wang_deep_1 achieved the highest accuracy, 0.77, compared to the SVM-based methods and our ensemble of an SVM and SdAs. However, we found that Wang_deep_2 and Wang_deep_3, both based on an ensemble of multiple SdAs and an SVM, performed slightly better than Wang_deep_1 in terms of ROC analysis, indicating that integrating an SVM with deep networks works well by certain measures.

  26. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: sensitivity and specificity analysis.

    PubMed

    Kapp, Eugene A; Schütz, Frédéric; Connolly, Lisa M; Chakel, John A; Meza, Jose E; Miller, Christine A; Fenyo, David; Eng, Jimmy K; Adkins, Joshua N; Omenn, Gilbert S; Simpson, Richard J

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens from the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X!Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, PeptideProphet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search, as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X!Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.
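
    The closing suggestion about "consensus scoring" amounts to a vote across engines. The Python sketch below keeps only peptide-spectrum matches reported by at least two engines; the scan identifiers and peptides are made up for illustration.

      from collections import Counter

      def consensus(ids_by_engine, min_engines=2):
          # Keep (spectrum, peptide) pairs reported by at least
          # min_engines search engines; fewer false positives at some
          # cost in sensitivity.
          votes = Counter(pair for ids in ids_by_engine.values()
                          for pair in set(ids))
          return {pair for pair, n in votes.items() if n >= min_engines}

      results = {
          "MASCOT": [("scan12", "LVNELTEFAK"), ("scan17", "AEFVEVTK")],
          "X!Tandem": [("scan12", "LVNELTEFAK"), ("scan19", "YLYEIARR")],
          "SEQUEST": [("scan12", "LVNELTEFAK"), ("scan17", "AEFVEVTK")],
      }
      print(consensus(results))  # scan12 and scan17 survive the vote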

  27. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    SciTech Connect

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens from the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, Peptide Prophet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search, as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.

  28. Benchmarking Tool Kit.

    ERIC Educational Resources Information Center

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  29. Analysis of hot and cold Kritz benchmarks with MCNP5 and temperature-specific nuclear data libraries

    SciTech Connect

    Mosteller, R. D.; MacFarlane, R. E.; White, M. C.

    2003-01-01

    One of the longstanding obstacles to the use of the MCNP Monte Carlo code for reactor physics calculations has been its requirement for nuclear data libraries at the temperature associated with the application of interest. Recently, however, an auxiliary code, named 'doppler', has been developed that uses an existing nuclear data library as the basis for generating a new library at the desired temperature. doppler has simple input and is straightforward to use. Libraries generated with doppler and based on the existing ENDF66 library have been developed for three hot Kritz benchmarks. Results obtained from MCNP5 for those hot benchmarks and their cold (i.e., room-temperature) counterparts are presented herein.

  30. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  31. Neutron Reference Benchmark Field Specifications: ACRR Polyethylene-Lead-Graphite (PLG) Bucket Environment (ACRR-PLG-CC-32-CL).

    SciTech Connect

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.; Vehar, David W.

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the Polyethylene-Lead-Graphite (PLG) bucket reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 37 integral dosimetry measurements in the neutron field are reported.

  32. Members of the PpaA/AerR Antirepressor Family Bind Cobalamin

    PubMed Central

    Vermeulen, Arjan J.

    2015-01-01

    PpaA from Rhodobacter sphaeroides is a member of a family of proteins that are thought to function as antirepressors of PpsR, a widely disseminated repressor of photosystem genes in purple photosynthetic bacteria. PpaA family members exhibit sequence similarity to a previously defined SCHIC (sensor containing heme instead of cobalamin) domain; however, the tetrapyrrole-binding specificity of PpaA family members has been unclear, as R. sphaeroides PpaA has been reported to bind heme while the Rhodobacter capsulatus homolog has been reported to bind cobalamin. In this study, we reinvestigated tetrapyrrole binding of PpaA from R. sphaeroides and show that it is not a heme-binding protein but is instead a cobalamin-binding protein. We also use bacterial two-hybrid analysis to show that PpaA is able to interact with PpsR and activate the expression of photosynthesis genes in vivo. Mutations in PpaA that cause loss of cobalamin binding also disrupt PpaA antirepressor activity in vivo. We also tested a number of PpaA homologs from other purple bacterial species and found that cobalamin binding is a conserved feature among members of this family of proteins. IMPORTANCE: Cobalamin (vitamin B12) has only recently been recognized as a cofactor that affects gene expression by interacting in a light-dependent manner with transcription factors. A group of related antirepressors known as the AppA/PpaA/AerR family are known to control the expression of photosynthesis genes in part by interacting with either heme or cobalamin. The specificity of which tetrapyrroles members of this family interact with has, however, remained cloudy. In this study, we address the tetrapyrrole-binding specificity of the PpaA/AerR subgroup and establish that it preferentially binds cobalamin over heme. PMID:26055116

  33. Signal detection in FDA AERS database using Dirichlet process.

    PubMed

    Hu, Na; Huang, Lan; Tiwari, Ram C

    2015-08-30

    In the past two decades, data mining methods for signal detection have been developed for drug safety surveillance, using large post-market safety data. Several of these methods assume that the number of reports for each drug-adverse event combination is a Poisson random variable with mean proportional to the unknown reporting rate of the drug-adverse event pair. Here, a Bayesian method based on the Poisson-Dirichlet process (DP) model is proposed for signal detection from large databases, such as the Food and Drug Administration's Adverse Event Reporting System (AERS) database. Instead of using a parametric distribution as a common prior for the reporting rates, as is the case with existing Bayesian or empirical Bayesian methods, a nonparametric prior, namely the DP, is used. The precision parameter and the baseline distribution of the DP, which characterize the process, are modeled hierarchically. The performance of the Poisson-DP model is compared with some other models through an intensive simulation study using Bayesian model selection and frequentist performance characteristics such as type-I error, false discovery rate, sensitivity, and power. For illustration, the proposed model and its extension to address a large amount of zero counts are used to analyze statin drugs for signals using the 2006-2011 AERS data. PMID:25924820
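
    The generative side of such a model is easy to sketch. In the Python fragment below, reporting rates are drawn from a truncated stick-breaking DP and counts are Poisson with mean proportional to rate times exposure; the Gamma baseline, the precision value, and the exposure model are illustrative assumptions, not the authors' hierarchical specification.

      import numpy as np

      rng = np.random.default_rng(1)

      def dp_stick_breaking(alpha, base_draw, k=200):
          # Truncated stick-breaking draw from DP(alpha, G0):
          # weights w_i = beta_i * prod_{j<i}(1 - beta_j), atoms ~ G0.
          betas = rng.beta(1.0, alpha, size=k)
          w = betas * np.cumprod(np.concatenate(([1.0], 1.0 - betas[:-1])))
          return w / w.sum(), base_draw(k)

      # baseline G0 for reporting rates: Gamma(0.5, 1) (an assumption)
      w, lam = dp_stick_breaking(2.0, lambda k: rng.gamma(0.5, 1.0, k))

      # simulate counts for 1000 drug-event pairs with varying exposure
      idx = rng.choice(len(w), size=1000, p=w)
      counts = rng.poisson(lam[idx] * rng.uniform(50, 500, size=1000))
      print(counts[:10])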

  34. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  35. Testing (Validating?) Cross Sections with ICSBEP Benchmarks

    SciTech Connect

    Kahler, Albert C. III

    2012-06-28

    We discuss how to use critical benchmarks from the International Handbook of Evaluated Criticality Safety Benchmark Experiments to determine the applicability of specific cross sections to the end-user's problem of interest. Particular attention is paid to making sure the selected suite of benchmarks includes the user's range of applicability (ROA).

  36. FireHose Streaming Benchmarks

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.

  37. FireHose Streaming Benchmarks

    SciTech Connect

    Karl Anderson, Steve Plimpton

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
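
    The generator/analytic split can be mimicked in a few lines of Python. The sketch below plants one over-represented key in an otherwise uniform stream and flags it from running counts; it mirrors the two-part structure described above and is not any actual FireHose benchmark definition.

      import random
      from collections import Counter

      def generator(n, keys=10_000, hot=42, hot_rate=0.01):
          # Emit n "key,value" datums; key `hot` is the planted anomaly.
          for _ in range(n):
              k = hot if random.random() < hot_rate else random.randrange(keys)
              yield f"{k},{random.random():.3f}"

      def analytic(stream, factor=20.0):
          # Flag the first key seen far more often than the mean count.
          counts = Counter()
          for i, datum in enumerate(stream, 1):
              key = int(datum.split(",")[0])
              counts[key] += 1
              if counts[key] > factor * i / len(counts):
                  return key, i

      print(analytic(generator(1_000_000)))  # -> (42, <datum index>)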

  38. SU-E-I-32: Benchmarking Head CT Doses: A Pooled Vs. Protocol Specific Analysis of Radiation Doses in Adult Head CT Examinations

    SciTech Connect

    Fujii, K; Bostani, M; Cagnon, C; McNitt-Gray, M

    2015-06-15

    Purpose: The aim of this study was to collect CT dose index data from adult head exams to establish benchmarks based on either (a) values pooled from all head exams or (b) values for specific protocols. One part of this was to investigate differences in scan frequency and CT dose index data for inpatients versus outpatients. Methods: We collected CT dose index data (CTDIvol) from adult head CT examinations performed at our medical facilities from Jan 1st to Dec 31st, 2014. Four of the scanners were used for inpatients; the other five were used for outpatients. All scanners used Tube Current Modulation. We used X-ray dose management software to mine dose index data and evaluate CTDIvol for 15807 inpatients and 4263 outpatients undergoing Routine Brain, Sinus, Facial/Mandible, Temporal Bone, CTA Brain and CTA Brain-Neck protocols, and combined across all protocols. Results: For inpatients, the Routine Brain series represented 84% of total scans performed. For outpatients, Sinus scans represented the largest fraction (36%). The CTDIvol (mean ± SD) across all head protocols was 39 ± 30 mGy (min-max: 3.3–540 mGy). The CTDIvol for Routine Brain was 51 ± 6.2 mGy (min-max: 36–84 mGy). The values for Sinus were 24 ± 3.2 mGy (min-max: 13–44 mGy) and for Facial/Mandible were 22 ± 4.3 mGy (min-max: 14–46 mGy). The mean CTDIvol for inpatients and outpatients was similar across protocols with one exception (CTA Brain-Neck). Conclusion: There is substantial dose variation when results from all protocols are pooled together; this is primarily a function of the differences in technical factors of the protocols themselves. When protocols are analyzed separately, there is much less variability. While analyzing pooled data affords some utility, reviewing protocols segregated by clinical indication provides greater opportunity for optimization and for establishing useful benchmarks.
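
    The pooled-versus-stratified comparison is straightforward to script. The Python sketch below summarizes CTDIvol both ways with pandas; the column names and the toy values are assumptions about how a dose-monitoring export might look.

      import pandas as pd

      # assumed layout: one row per exam, with its protocol and CTDIvol (mGy);
      # in practice this would come from dose-management software exports
      df = pd.DataFrame({
          "protocol": ["Routine Brain", "Routine Brain", "Sinus",
                       "Sinus", "Temporal Bone", "CTA Brain"],
          "ctdi_vol": [51.0, 49.5, 24.1, 23.0, 45.2, 38.7],
      })

      pooled = df["ctdi_vol"].agg(["count", "mean", "std", "min", "max"])
      by_protocol = df.groupby("protocol")["ctdi_vol"].agg(
          ["count", "mean", "std", "min", "max"])

      print(pooled)        # wide spread: protocols are mixed together
      print(by_protocol)   # tighter spread within each protocol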

  39. Fusion Welding of AerMet 100 Alloy

    SciTech Connect

    ENGLEHART, DAVID A.; MICHAEL, JOSEPH R.; NOVOTNY, PAUL M.; ROBINO, CHARLES V.

    1999-08-01

    A database of mechanical properties for weldment fusion and heat-affected zones was established for AerMet® 100 alloy, and a study of the welding metallurgy of the alloy was conducted. The properties database was developed for a matrix of weld processes (electron beam and gas-tungsten arc), welding parameters (heat inputs) and post-weld heat treatment (PWHT) conditions. In order to ensure commercial utility and acceptance, the matrix was commensurate with commercial welding technology and practice. In addition, the mechanical properties were correlated with a fundamental understanding of microstructure and microstructural evolution in this alloy. Finally, assessments were made of optimal weld process/PWHT combinations for confident application of the alloy in probable service conditions. The database of weldment mechanical properties demonstrated that a wide range of properties can be obtained in welds in this alloy. In addition, it was demonstrated that acceptable welds, some with near base metal properties, could be produced from several different initial heat treatments. This capability provides a means for defining process parameters and PWHTs to achieve appropriate properties for different applications, and provides useful flexibility in design and manufacturing. The database also indicated that an important region in welds is the softened region which develops in the heat-affected zone (HAZ), and analysis within the welding metallurgy studies indicated that the development of this region is governed by a complex interaction of precipitate overaging and austenite formation. Models and experimental data were therefore developed to describe overaging and austenite formation during thermal cycling. These models and experimental data can be applied to essentially any thermal cycle, and provide a basis for predicting the evolution of microstructure and properties during thermal processing.

  40. Benchmarking in Student Affairs.

    ERIC Educational Resources Information Center

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  41. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  42. An introduction to benchmarking in healthcare.

    PubMed

    Benson, H R

    1994-01-01

    Benchmarking--the process of establishing a standard of excellence and comparing a business function or activity, a product, or an enterprise as a whole with that standard--will be used increasingly by healthcare institutions to reduce expenses and simultaneously improve product and service quality. As a component of total quality management, benchmarking is a continuous process by which an organization can measure and compare its own processes with those of organizations that are leaders in a particular area. Benchmarking should be viewed as a part of quality management programs, not as a replacement. There are four kinds of benchmarking: internal, competitive, functional and generic. With internal benchmarking, functions within an organization are compared with each other. Competitive benchmarking partners do business in the same market and provide a direct comparison of products or services. Functional and generic benchmarking are performed with organizations which may have a specific similar function, such as payroll or purchasing, but which otherwise are in a different business. Benchmarking must be a team process because the outcome will involve changing current practices, with effects felt throughout the organization. The team should include members who have subject knowledge; communications and computer proficiency; skills as facilitators and outside contacts; and sponsorship of senior management. Benchmarking requires quantitative measurement of the subject. The process or activity that you are attempting to benchmark will determine the types of measurements used. Benchmarking metrics usually can be classified in one of four categories: productivity, quality, time and cost-related.

  43. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance; the performance impact of optimization was examined in the context of our methodology for CPU performance characterization, based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are summarized more specifically in this report, as well as those smaller in magnitude supported by this grant.

  44. dackel acts in the ectoderm of the zebrafish pectoral fin bud to maintain AER signaling.

    PubMed

    Grandel, H; Draper, B W; Schulte-Merker, S

    2000-10-01

    Classical embryological studies have implied the existence of an apical ectodermal maintenance factor (AEMF) that sustains signaling from the apical ectodermal ridge (AER) during vertebrate limb development. Recent evidence suggests that AEMF activity is composed of different signals involving both a sonic hedgehog (Shh) signal and a fibroblast growth factor 10 (Fgf10) signal from the mesenchyme. In this study we show that the product of the dackel (dak) gene is one of the components that acts in the epidermis of the zebrafish pectoral fin bud to maintain signaling from the apical fold, which is homologous to the AER of tetrapods. dak acts synergistically with Shh to induce fgf4 and fgf8 expression but independently of Shh in promoting apical fold morphogenesis. The failure of dak mutant fin buds to progress from the initial fin induction phase to the autonomous outgrowth phase causes loss of both AER and Shh activity, and subsequently results in a proximodistal truncation of the fin, similar to the result obtained by ridge ablation experiments in the chicken. Further analysis of the dak mutant phenotype indicates that the activity of the transcription factor engrailed 1 (En1) in the ventral non-ridge ectoderm also depends on a maintenance signal probably provided by the ridge. This result uncovers a new interaction between the AER and the dorsoventral organizer in the zebrafish pectoral fin bud.

  45. The usefulness of an independent patient-specific treatment planning verification method using a benchmark plan in high-dose-rate intracavitary brachytherapy for carcinoma of the uterine cervix.

    PubMed

    Takahashi, Yutaka; Koizumi, Masahiko; Sumida, Iori; Isohashi, Fumiaki; Ogata, Toshiyuki; Akino, Yuichi; Yoshioka, Yasuo; Maruoka, Shintaro; Inoue, Shinichi; Konishi, Koji; Ogawa, Kazuhiko

    2012-11-01

    To develop an easy independent patient-specific quality assurance (QA) method using a benchmark plan for high-dose-rate intracavitary brachytherapy for cervix cancer, we conducted benchmark treatment planning with various sizes and combinations of tandem-ovoid and tandem-cylinder applicators with 'ideal' geometry outside the patient. Two-dimensional treatment planning was conducted based on the Manchester method. We predicted the total dwell time of individual treatment plans from the air kerma strength, total dwell time and prescription dose of the benchmark plan. In addition, we recorded the height (dh), width (dw) and thickness (dt) covered by the 100% isodose line. These parameters were compared across 169 tandem-ovoid and 29 tandem-cylinder clinical cases. For tandem-ovoid cases, differences in total dwell time, dh, dt and dw between benchmark and individual plans were on average -0.2% ± 3.8%, -1.0 mm ± 2.6 mm, 0.8 mm ± 1.3 mm and -0.1 mm ± 1.5 mm, respectively. For tandem-cylinder cases, differences in total dwell time, dh(front) (the distance from tandem tip to tandem ring), dt and dw between benchmark and individual plans were on average -1.5% ± 3.1%, -1.5 mm ± 4.9 mm, 0.1 mm ± 1.0 mm and 0.2 mm ± 0.8 mm, respectively. In two cases, differences of more than 13% in total dwell time were observed between the benchmark plans and the clinical cases; these turned out to be due to use of a wrong source position setting. These results suggest that our method is easy and useful for independent verification in patient-specific treatment planning QA.
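
    A sketch of the kind of independent check described above, under an assumed first-order scaling (dwell time proportional to prescribed dose and inversely proportional to air kerma strength); the function name, numbers, and exact formula are illustrative, not taken from the paper.

      def predict_dwell_time(t_bench, sk_bench, d_bench, sk_plan, d_plan):
          """Scale the benchmark plan's total dwell time to a clinical plan.

          Assumes dwell time ~ prescription dose / air kerma strength,
          a plausible first-order model for this illustration only.
          """
          return t_bench * (d_plan / d_bench) * (sk_bench / sk_plan)

      # Example: benchmark plan delivered 6 Gy in 300 s at 40700 U;
      # the clinical source has decayed to 30000 U for the same 6 Gy.
      t = predict_dwell_time(300.0, 40700.0, 6.0, 30000.0, 6.0)
      print(f"predicted total dwell time: {t:.0f} s")  # ~407 s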

  6. Radiative Forcing by Long-Lived Greenhouse Gases: Calculations with the AER Radiative Transfer Models

    SciTech Connect

    Collins, William; Iacono, Michael J.; Delamere, Jennifer S.; Mlawer, Eli J.; Shephard, Mark W.; Clough, Shepard A.; Collins, William D.

    2008-04-01

    A primary component of the observed, recent climate change is the radiative forcing from increased concentrations of long-lived greenhouse gases (LLGHGs). Effective simulation of anthropogenic climate change by general circulation models (GCMs) is strongly dependent on the accurate representation of radiative processes associated with water vapor, ozone and LLGHGs. In the context of the increasing application of the Atmospheric and Environmental Research, Inc. (AER) radiation models within the GCM community, their capability to calculate longwave and shortwave radiative forcing for clear sky scenarios previously examined by the radiative transfer model intercomparison project (RTMIP) is presented. Forcing calculations with the AER line-by-line (LBL) models are very consistent with the RTMIP line-by-line results in the longwave and shortwave. The AER broadband models, in all but one case, calculate longwave forcings within a range of -0.20 to 0.23 W m^-2 of LBL calculations and shortwave forcings within a range of -0.16 to 0.38 W m^-2 of LBL results. These models also perform well at the surface, which RTMIP identified as a level at which GCM radiation models have particular difficulty reproducing LBL fluxes. Heating profile perturbations calculated by the broadband models generally reproduce high-resolution calculations within a few hundredths K d^-1 in the troposphere and within 0.15 K d^-1 in the peak stratospheric heating near 1 hPa. In most cases, the AER broadband models provide radiative forcing results that are in closer agreement with high-resolution calculations than the GCM radiation codes examined by RTMIP, which supports the application of the AER models to climate change research.

  7. Vitamin B12 regulates photosystem gene expression via the CrtJ antirepressor AerR in Rhodobacter capsulatus

    PubMed Central

    Cheng, Zhuo; Li, Keran; Hammad, Loubna A.; Karty, Jonathan A.; Bauer, Carl E.

    2014-01-01

    The tetrapyrroles heme, bacteriochlorophyll and cobalamin (B12) exhibit a complex interrelationship regarding their synthesis. In this study, we demonstrate that AerR functions as an antirepressor of the tetrapyrrole regulator CrtJ. We show that purified AerR contains B12 bound to a conserved histidine (His145) in AerR. The interaction of AerR with CrtJ was further demonstrated in vitro by pull-down experiments using AerR as bait and quantified using microscale thermophoresis. DNase I DNA footprint assays show that AerR containing B12 inhibits CrtJ binding to the bchC promoter. We further show that bchC expression is greatly repressed in a B12 auxotroph of Rhodobacter capsulatus and that B12 regulation of gene expression is mediated by AerR's ability to function as an antirepressor of CrtJ. This study thus provides a mechanism for how the essential tetrapyrrole cobalamin controls the synthesis of bacteriochlorophyll, an essential component of the photosystem. PMID:24329562

  8. Closed-Loop Neuromorphic Benchmarks.

    PubMed

    Stewart, Terrence C; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of "minimal" simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
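
    The following toy closed-loop simulation, with constants and structure of our own choosing (not the paper's benchmark definition), illustrates how an error-driven learning rule can compensate for an unknown external force acting on a controlled system.

      # Toy closed loop: a 1-D unit mass with an unknown constant force,
      # driven by a PD controller plus an adaptive bias w that is updated
      # by an error-driven learning rule. All constants are illustrative.
      dt, target = 0.01, 1.0
      unknown_force = -2.0        # disturbance the controller must learn out
      pos, vel, w = 0.0, 0.0, 0.0
      alpha = 5.0                 # learning rate

      for step in range(2000):    # simulate 20 s
          error = target - pos
          u = 10.0 * error - 4.0 * vel + w   # PD control + learned term
          w += alpha * error * dt            # error-driven learning rule
          acc = u + unknown_force            # unit mass: F = a
          vel += acc * dt
          pos += vel * dt

      print(f"final position: {pos:.3f} (target {target})")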

  10. Benchmarking concentrating photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need for improved economic viability, which requires photovoltaic technology to harness more light at less cost. A large variety of concentrating photovoltaic concepts is being pursued. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, one way to estimate the cost-performance of a complete solar energy system is computer-aided modeling. In this work a benchmark tool based on a modular programming concept is introduced. The overall implementation is done in MATLAB, whereas the Advanced Systems Analysis Program (ASAP) is used for ray-tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely advanced source modeling, including time and location dependence, and advanced optical analysis of various designs, to obtain an evaluation of the figure of merit. An important figure of merit, the energy yield of a given photovoltaic system at a geographical position over a specific period, can thus be calculated.
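
    As a sketch of the energy-yield figure of merit, the fragment below integrates output power from an assumed irradiance time series, aperture area, and efficiencies; every number is a placeholder, not output of the MATLAB/ASAP tool.

      # Energy yield ~ integral of (irradiance x aperture x efficiency)
      # over the period of interest; here, 24 one-hour samples.
      hourly_dni = [0, 0, 0, 0, 0, 100, 300, 500, 700, 800, 850, 900,
                    900, 850, 800, 700, 500, 300, 100, 0, 0, 0, 0, 0]  # W/m^2
      aperture_m2 = 10.0
      optical_eff = 0.80   # concentrator optics
      cell_eff = 0.35      # multi-junction cell

      energy_wh = sum(g * aperture_m2 * optical_eff * cell_eff
                      for g in hourly_dni)   # 1 h steps -> watt-hours
      print(f"daily energy yield: {energy_wh / 1000:.1f} kWh")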

  11. SPEEDES benchmarking analysis

    NASA Astrophysics Data System (ADS)

    Capella, Sebastian J.; Steinman, Jeffrey S.; McGraw, Robert M.

    2002-07-01

    SPEEDES, the Synchronous Parallel Environment for Emulation and Discrete Event Simulation, is a software framework that supports simulation applications across parallel and distributed architectures. SPEEDES is used as a simulation engine in support of numerous defense projects, including the Joint Simulation System (JSIMS), the Joint Modeling And Simulation System (JMASS), the High Performance Computing and Modernization Program's (HPCMP) development of a High Performance Computing (HPC) Run-time Infrastructure, and the Defense Modeling and Simulation Office's (DMSO) development of a Human Behavioral Representation (HBR) Testbed. This work documents some of the performance metrics obtained from benchmarking the SPEEDES Simulation Framework as of the summer of 2001. Specifically, this paper examines the scalability of the SPEEDES time management algorithms and simulation object event queues with respect to the number of objects simulated and events processed.

  12. Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine

    2004-01-01

    We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.

  13. Neutron Reference Benchmark Field Specification: ACRR 44 Inch Lead-Boron (LB44) Bucket Environment (ACRR-LB44-CC-32-CL).

    SciTech Connect

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.; Vehar, David W.

    2015-07-01

    This report was prepared to support the International Atomic Energy Agency (IAEA) REAL-2016 activity, which aims to validate the dosimetry community's ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the 44 inch Lead-Boron (LB44) bucket, a reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum, based on MCNP6 calculations, is reported, together with a subject matter expert (SME) based covariance matrix for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  14. A polishing hybrid AER/UF membrane process for the treatment of a high DOC content surface water.

    PubMed

    Humbert, H; Gallard, H; Croué, J-P

    2012-03-15

    The efficacy of a combined AER/UF (Anion Exchange Resin/Ultrafiltration) process for the polishing treatment of a high DOC (Dissolved Organic Carbon) content (>8 mgC/L) surface water was investigated at lab-scale using a strong base AER. Both resin dose and bead size had a significant impact on the kinetic removal of DOC for short contact times (i.e. <15 min). For resin doses higher than 700 mg/L and median bead sizes below 250 μm, DOC removal remained constant after 30 min of contact time with very high removal rates (80%). Optimum AER treatment conditions were applied in combination with UF membrane filtration on water previously treated by coagulation-flocculation (i.e. 3 mgC/L). A more severe fouling was observed for each filtration run in the presence of AER. This fouling was shown to be mainly reversible and caused by the progressive attrition of the AER through the centrifugal pump, leading to the production of resin particles below 50 μm in diameter. More important, the presence of AER significantly lowered the irreversible fouling (loss of permeability recorded after backwash) and reduced the DOC content of the clarified water to 1.8 mgC/L (40% removal rate), a concentration that remained almost constant throughout the experiment.

  15. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    SciTech Connect

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis, with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued, building heavily on the success of the PBMR-400 exercise.

  16. PyMPI Dynamic Benchmark

    2007-02-16

    Pynamic is a benchmark designed to test a system's ability to handle the Dynamic Linking and Loading (DLL) requirements of Python-based scientific applications. This benchmark was developed to add a workload to our testing environment that represents a newly emerging class of DLL behaviors. Pynamic builds on pyMPI, an MPI extension to Python, with C-extension dummy codes and a glue layer that facilitates linking and loading of the generated dynamic modules into the resulting pyMPI. Pynamic is configurable, enabling modeling of the static properties of a specific code as described in section 5. It does not, however, model any significant computations of the target code, and hence it is not subject to the same level of control as the target code. In fact, HPC computer vendors and tool developers will be encouraged to add it to their testing suites once the code release is completed. An ability to build and run this benchmark is an effective test for validating the capability of a compiler and linker/loader as well as the OS kernel and other runtime systems of HPC computer vendors. In addition, the benchmark is designed as a test case for stressing code development tools. Though Python has recently gained popularity in the HPC community, its heavy DLL operations have hindered certain HPC code development tools, notably parallel debuggers, from performing optimally.

  17. Benchmarks for industrial energy efficiency

    SciTech Connect

    Amarnath, K.R.; Kumana, J.D.; Shah, J.V.

    1996-12-31

    What are the standards for improving energy efficiency in industries such as petroleum refining, chemicals, and glass manufacture? How can industries in emerging markets and developing countries accelerate the pace of improvement? This paper discusses several case studies and experiences relating to this subject, emphasizing the use of energy efficiency benchmarks. Two important benchmarks are discussed. The first is based on the track record of outstanding performers in the related industry segment; the second is based on site-specific factors. Using energy use reduction targets or benchmarks, projects have been implemented in Mexico, Poland, India, Venezuela, Brazil, China, Thailand, Malaysia, the Republic of South Africa and Russia. Improvements identified through these projects include a variety of recommendations: the use of oxy-fuel and electric furnaces in the glass industry in Poland; reconfiguration of process heat recovery systems for refineries in China, Malaysia, and Russia; recycling and reuse of process wastewater in the Republic of South Africa; and a cogeneration plant in Venezuela. The paper discusses three case studies of efforts undertaken in emerging market countries to improve energy efficiency.

  18. Thermal Performance Benchmarking (Presentation)

    SciTech Connect

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  19. Verification and validation benchmarks.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  20. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  1. Uncertainties in modelling Mt. Pinatubo eruption with 2-D AER model and CCM SOCOL

    NASA Astrophysics Data System (ADS)

    Kenzelmann, P.; Weisenstein, D.; Peter, T.; Luo, B. P.; Rozanov, E.; Fueglistaler, S.; Thomason, L. W.

    2009-04-01

    Large volcanic eruptions may introduce a strong forcing on climate and challenge the skills of climate models. In addition to the short-time attenuation of solar light by ash, the formation of stratospheric sulphate aerosols, due to volcanic sulphur dioxide injection into the lower stratosphere, may lead to a significant enhancement of the global albedo. The sulphate aerosols have a residence time of about 2 years. As a consequence of the enhanced sulphate aerosol concentration, both stratospheric chemistry and dynamics are strongly affected. Due to absorption of longwave and near-infrared radiation, the temperature in the lower stratosphere increases. So far chemistry climate models overestimate this warming [Eyring et al. 2006]. We present an extensive validation of extinction measurements and model runs of the eruption of Mt. Pinatubo in 1991. Even though the Mt. Pinatubo eruption is the best-quantified volcanic eruption of this magnitude, the measurements show considerable uncertainties. For instance, estimates of the total amount of sulphur emitted to the stratosphere range from 5 to 12 Mt [e.g. Guo et al. 2004, McCormick, 1992]. The largest uncertainties are in the specification of the main aerosol cloud. SAGE II, for instance, could not measure the peak of the aerosol extinction for about 1.5 years, because optical termination was reached. The gap-filling of the SAGE II data [Thomason and Peter, 2006] using lidar measurements underestimates the total extinctions in the tropics for the first half year after the eruption by 30% compared to AVHRR [Russell et al. 1992]. The same applies to the optical dataset described by Stenchikov et al. [1998]. We compare these extinction data derived from measurements with extinctions derived from AER 2-D aerosol model calculations [Weisenstein et al., 2007]. Full microphysical calculations with injections of 14, 17, 20 and 26 Mt SO2 into the lower stratosphere were performed. The optical aerosol properties derived from SAGE II

  2. Toxicological Benchmarks for Wildlife

    SciTech Connect

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated using a two-tiered process. In the first tier, a screening assessment is performed in which concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) presumed to be nonhazardous to the biota. While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red-tailed hawk
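
    A minimal sketch of the tier-1 screening comparison described above; the chemicals, benchmark values, and measured concentrations are invented for illustration.

      # Screening assessment: retain a chemical as a contaminant of
      # potential concern (COPC) if its measured concentration exceeds
      # its NOAEL-based benchmark; otherwise it may be excluded.
      benchmarks_mg_per_l = {"cadmium": 0.01, "lead": 0.05, "zinc": 0.5}
      measured_mg_per_l = {"cadmium": 0.004, "lead": 0.08, "zinc": 0.3}

      copcs = [chem for chem, conc in measured_mg_per_l.items()
               if conc > benchmarks_mg_per_l[chem]]
      print("retain for further assessment:", copcs)  # -> ['lead']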

  3. Benchmarking neuromorphic systems with Nengo

    PubMed Central

    Bekolay, Trevor; Stewart, Terrence C.; Eliasmith, Chris

    2015-01-01

    Nengo is a software package for designing and simulating large-scale neural models. Nengo is architected such that the same Nengo model can be simulated on any of several Nengo backends with few to no modifications. Backends translate a model to specific platforms, which include GPUs and neuromorphic hardware. Nengo also contains a large test suite that can be run with any backend and focuses primarily on functional performance. We propose that Nengo's large test suite can be used to benchmark neuromorphic hardware's functional performance and simulation speed in an efficient, unbiased, and future-proof manner. We implement four benchmark models and show that Nengo can collect metrics across five different backends that identify situations in which some backends perform more accurately or quickly. PMID:26539076

  4. The General Concept of Benchmarking and Its Application in Higher Education in Europe

    ERIC Educational Resources Information Center

    Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna

    2009-01-01

    The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…

  5. Adiabatic shear band formation in explosively driven AerMet-100 alloy cylinders

    SciTech Connect

    Sunwoo, A J; Becker, R; Goto, D M; Orzechowski, T J; Springer, H K; Syn, C K; Zhou, J

    2006-02-08

    Two differently heat-treated AerMet-100 alloy cylinders were explosively driven to fragmentation. Soft-captured fragments were studied to characterize the deformation and damage induced by high explosive loading. The characterization of the fragments reveals that the dominant failure mechanism appears to be dynamic fracture along adiabatic shear bands. These shear bands differ in size and morphology depending on the heat-treated conditions. Nanoindentation measurements of the adiabatic shear bands in either material condition indicate higher hardness in the bands compared to the matrix regions of the fragments.

  6. TWODANT benchmark. Progress report

    SciTech Connect

    Lee, Sung

    1994-01-11

    The TWODANT (Two-Dimensional, Diffusion-Accelerated, Neutral-Particle Transport) code has been benchmarked against 6 critical experiments (Jezebel plutonium critical assembly), and its k-effective values were compared with those of the KENO and MCNP codes.
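
    A sketch of how such a code-to-code criticality comparison is typically expressed, with invented k-effective values (not the report's results); deviations are given in pcm (1 pcm = 1e-5 in k).

      # Compare each code's k-effective against a benchmark value.
      benchmark_keff = 1.0000   # critical assembly reference
      results = {"TWODANT": 0.9987, "KENO": 1.0004, "MCNP": 0.9995}

      for code, keff in results.items():
          dev_pcm = (keff - benchmark_keff) * 1e5
          print(f"{code}: k-eff = {keff:.4f} ({dev_pcm:+.0f} pcm)")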

  7. Benchmarking for strategic action.

    PubMed

    Jennings, K; Westfall, F

    1992-01-01

    By focusing on three key elements--customer expectations, competitor strengths and vulnerabilities, and organizational competencies--a company's benchmarking effort can be designed to drive the strategic planning process.

  8. Benchmark problems and solutions

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1995-01-01

    The scientific committee, after careful consideration, adopted six categories of benchmark problems for the workshop. These problems do not cover all the important computational issues relevant to Computational Aeroacoustics (CAA). The deciding factor in limiting the number of categories to six was the amount of effort needed to solve these problems. For reference purposes, the benchmark problems are provided here, followed by exact or approximate analytical solutions. At present, an exact solution for the Category 6 problem is not available.

  9. Action-Oriented Benchmarking: Using the CEUS Database to Benchmark Commercial Buildings in California

    SciTech Connect

    Mathew, Paul; Mills, Evan; Bourassa, Norman; Brook, Martha

    2008-02-01

    The 2006 Commercial End Use Survey (CEUS) database developed by the California Energy Commission is a far richer source of energy end-use data for non-residential buildings than has previously been available and opens the possibility of creating new and more powerful energy benchmarking processes and tools. In this article--Part 2 of a two-part series--we describe the methodology and selected results from an action-oriented benchmarking approach using the new CEUS database. This approach goes beyond whole-building energy benchmarking to more advanced end-use and component-level benchmarking that enables users to identify and prioritize specific energy efficiency opportunities - an improvement on benchmarking tools typically in use today.

  10. DOE Commercial Building Benchmark Models: Preprint

    SciTech Connect

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented to use with the EnergyPlus energy simulation program, the benchmark models are publicly available and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

  11. NAS Grid Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.
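
    The sketch below mimics the NGB data-flow-graph structure in miniature: nodes stand for benchmark task instances and edges for initialization data passed between them; the node names and dependencies are ours, not the actual NGB graph definitions.

      # Each node wraps one benchmark task; its predecessors must finish
      # and hand over data before it can run. A topological sort gives a
      # valid execution order (Kahn's algorithm).
      graph = {
          "BT.1": [],
          "MG.1": ["BT.1"],
          "FT.1": ["BT.1"],
          "LU.1": ["MG.1", "FT.1"],
      }

      def run_order(g):
          indeg = {n: len(p) for n, p in g.items()}
          ready = [n for n, d in indeg.items() if d == 0]
          order = []
          while ready:
              n = ready.pop()
              order.append(n)
              for m, preds in g.items():
                  if n in preds:
                      indeg[m] -= 1
                      if indeg[m] == 0:
                          ready.append(m)
          return order

      print(run_order(graph))   # e.g. ['BT.1', 'FT.1', 'MG.1', 'LU.1']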

  12. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  13. HSCT Assessment Calculations with the AER 2-D Model: Sensitivities to Transport Formulation, PSC Formulation, Interannual Temperature Variation. Appendix C

    NASA Technical Reports Server (NTRS)

    Weisenstein, Debra K.; Ko, Malcolm K. W.; Scott, Courtney J.; Shia, Run-Lie; Jackman, Charles; Fleming, Eric; Considine, David; Kinnison, Douglas; Connell, Peter; Rotman, Douglas

    1998-01-01

    In summary: (1) some chemical differences in the background atmosphere are surprisingly large (NOy); (2) differences in model transport explain a majority of the intermodel differences in the absence of PSCs; (3) with PSCs, large differences exist in predicted O3 depletion between models with the same transport; (4) the AER/LLNL model calculates more O3 depletion in the NH than the LLNL model; (5) the AER/GSFC model cannot match the calculated O3 depletion of the GSFC model in the SH; and (6) results are sensitive to interannual temperature variations (at least in the NH).

  14. 42 CFR 422.258 - Calculation of benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    42 CFR § 422.258, Calculation of benchmarks (Title 42, Public Health, 2010 edition): (a) The term “MA area-specific non-drug monthly benchmark amount” means, for a month in a year: (1) For MA local plans with service areas entirely within...

  15. Benchmarking ENDF/B-VII.0

    NASA Astrophysics Data System (ADS)

    van der Marck, Steven C.

    2006-12-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM) to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data, more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  16. AerGOM, an improved algorithm for stratospheric aerosol extinction retrieval from GOMOS observations - Part 1: Algorithm description

    NASA Astrophysics Data System (ADS)

    Vanhellemont, Filip; Mateshvili, Nina; Blanot, Laurent; Étienne Robert, Charles; Bingen, Christine; Sofieva, Viktoria; Dalaudier, Francis; Tétard, Cédric; Fussen, Didier; Dekemper, Emmanuel; Kyrölä, Erkki; Laine, Marko; Tamminen, Johanna; Zehner, Claus

    2016-09-01

    The GOMOS instrument on Envisat has successfully demonstrated that a UV-Vis-NIR spaceborne stellar occultation instrument is capable of delivering quality data on the gaseous and particulate composition of Earth's atmosphere. Still, some problems related to data inversion remained to be examined. In the past, it was found that the aerosol extinction profile retrievals in the upper troposphere and stratosphere are of good quality at a reference wavelength of 500 nm but suffer from anomalous, retrieval-related perturbations at other wavelengths. Identification of algorithmic problems and subsequent improvement was therefore necessary. This work has been carried out; the resulting AerGOM Level 2 retrieval algorithm together with the first data version AerGOMv1.0 forms the subject of this paper. The AerGOM algorithm differs from the standard GOMOS IPF processor in a number of important ways: more accurate physical laws have been implemented, all retrieval-related covariances are taken into account, and the aerosol extinction spectral model is strongly improved. Retrieval examples demonstrate that the previously observed profile perturbations have disappeared, and the obtained extinction spectra look in general more consistent. We present a detailed validation study in a companion paper; here, to give a first idea of the data quality, a worst-case comparison at 386 nm shows SAGE II-AerGOM correlation coefficients that are up to 1 order of magnitude larger than the ones obtained with the GOMOS IPFv6.01 data set.

  17. Surveys and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  18. Python/Lua Benchmarks

    SciTech Connect

    Busby, L.

    2014-08-01

    This is an adaptation of the pre-existing Scimark benchmark code to a variety of Python and Lua implementations. It also measures performance of the Fparser expression parser and C and C++ code on a variety of simple scientific expressions.

  19. Benchmarking the World's Best

    ERIC Educational Resources Information Center

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  20. Monte Carlo Benchmark

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  1. Comparison of five benchmarks

    SciTech Connect

    Huss, J. E.; Pennline, J. A.

    1987-02-01

    Five benchmark programs were obtained and run on the NASA Lewis CRAY X-MP/24. A comparison was made between the program codes and between the methods for calculating performance figures. Several multitasking jobs were run to gain experience in how parallel performance is measured.

  2. Benchmarks: WICHE Region 2012

    ERIC Educational Resources Information Center

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  3. Principles for an ETL Benchmark

    NASA Astrophysics Data System (ADS)

    Wyatt, Len; Caufield, Brian; Pol, Daniel

    Conditions in the marketplace for ETL tools suggest that an industry standard benchmark is needed. The benchmark should provide useful data for comparing the performance of ETL systems, be based on a meaningful scenario, and be scalable over a wide range of data set sizes. This paper gives a general scoping of the proposed benchmark and outlines some key decision points. The Transaction Processing Performance Council (TPC) has formed a development subcommittee to define and produce such a benchmark.

  4. Benchmark Problems for Spacecraft Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Burns, Richard D.; Folta, David C.

    2003-01-01

    To provide high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions.

  5. AerGOM, an improved algorithm for stratospheric aerosol extinction retrieval from GOMOS observations - Part 2: Intercomparisons

    NASA Astrophysics Data System (ADS)

    Étienne Robert, Charles; Bingen, Christine; Vanhellemont, Filip; Mateshvili, Nina; Dekemper, Emmanuel; Tétard, Cédric; Fussen, Didier; Bourassa, Adam; Zehner, Claus

    2016-09-01

    AerGOM is a retrieval algorithm developed for the GOMOS instrument onboard Envisat as an alternative to the operational retrieval (IPF). AerGOM enhances the quality of the stratospheric aerosol extinction retrieval through an extension of the spectral range used, a refined aerosol spectral parameterization, the simultaneous inversion of all atmospheric species, and an improved Rayleigh scattering correction. The retrieval algorithm allows for a good characterization of the stratospheric aerosol extinction over a wide range of wavelengths. In this work, we present the results of stratospheric aerosol extinction comparisons between AerGOM and various spaceborne instruments (SAGE II, SAGE III, POAM III, ACE-MAESTRO and OSIRIS) at different wavelengths. The aerosol extinction intercomparisons for λ < 700 nm and above 20 km show agreement with SAGE II version 7 and SAGE III version 4.0 within ±15% and ±45%, respectively. There is a strong positive bias below 20 km at λ < 700 nm, which suggests that cirrus clouds at these altitudes have a large impact on the extinction values. Comparisons performed with GOMOS IPF v6.01 alongside AerGOM show that at short wavelengths and altitudes below 20 km, IPF retrievals are more accurate when evaluated against SAGE II and SAGE III but are much less precise than AerGOM. A modified aerosol spectral parameterization can improve AerGOM in this spectral and altitude range and leads to results with an accuracy similar to IPF retrievals. Comparisons of AerGOM aerosol extinction coefficients with OSIRIS and SAGE III measurements at wavelengths larger than 700 nm show a very large negative bias at altitudes above 25 km. Therefore, the use of AerGOM aerosol extinction data is not recommended for λ > 700 nm. Due to the unique observational technique of GOMOS, some of the results appear to be dependent on star occultation parameters such as star apparent temperature and magnitude, solar zenith angle

  6. Radiography benchmark 2014

    SciTech Connect

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-31

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, evaluating only the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  7. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  8. Sequoia Messaging Rate Benchmark

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
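
    The rank layout is easy to reproduce; this sketch uses the example's numbers (8 cores, 4 neighbors, 40 ranks total) with variable names of our own choosing.

      # Ranks 0..num_cores-1 live on the core node; each core rank i is
      # paired with a contiguous block of num_nbors neighbor ranks.
      num_cores, num_nbors = 8, 4
      total_ranks = num_cores + num_cores * num_nbors   # 8 + 8 * 4 = 40

      def neighbors(core_rank):
          start = num_cores + core_rank * num_nbors
          return list(range(start, start + num_nbors))

      for i in range(num_cores):
          print(f"core rank {i} <-> neighbor ranks {neighbors(i)}")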

  10. Storage-Intensive Supercomputing Benchmark Study

    SciTech Connect

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80 GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with an NVIDIA graphics card (see Chapter 5 for the full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order-of-magnitude speedups over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid-state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.

  11. MPI Multicore Linktest Benchmark

    2008-01-25

    The MPI Multicore Linktest (LinkTest) measures the aggregate bandwidth from/to a multicore node in a parallel system. It allows the user to specify a variety of different node layout and communication routine variations and reports the maximal observed bandwidth across all specified options. In particular, this benchmark is able to vary the number of tasks on the root node and thereby allows users to study the impact of multicore architectures on MPI communication performance.

  12. Algebraic Multigrid Benchmark

    SciTech Connect

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the Boomer AMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.

  13. Benchmarking HIPAA compliance.

    PubMed

    Wagner, James R; Thoman, Deborah J; Anumalasetty, Karthikeyan; Hardre, Pat; Ross-Lazarov, Tsvetomir

    2002-01-01

    One of the nation's largest academic medical centers is benchmarking its operations using internally developed software to improve privacy/confidentiality of protected health information (PHI) and to enhance data security to comply with HIPAA regulations. It is also coordinating the development of a web-based interactive product that can help hospitals, physician practices, and managed care organizations measure their compliance with HIPAA regulations.

  14. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  15. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons of selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations for moving the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel-site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel-site model may be the most effective and sustainable starting point, although the regional model would be the ideal goal.

  16. Benchmark on outdoor scenes

    NASA Astrophysics Data System (ADS)

    Zhang, Hairong; Wang, Cheng; Chen, Yiping; Jia, Fukai; Li, Jonathan

    2016-03-01

    Depth super-resolution is becoming popular in computer vision, and most test data are based on indoor data sets with ground-truth measurements, such as Middlebury. However, indoor data sets are mainly acquired with structured-light techniques under ideal conditions and cannot represent the real world under natural light. Unlike indoor scenes, the uncontrolled outdoor environment is much more complicated and is rich in both visual and depth texture. For that reason, we develop a more challenging and meaningful outdoor benchmark for depth super-resolution using a state-of-the-art active laser scanning system.

  17. Benchmarking emerging logic devices

    NASA Astrophysics Data System (ADS)

    Nikonov, Dmitri

    2014-03-01

    As complementary metal-oxide-semiconductor field-effect transistors (CMOS FETs) are scaled to ever smaller sizes by the semiconductor industry, the demand is growing for emerging logic devices to supplement CMOS in various special functions. Research directions and concepts for such devices are surveyed, including tunneling devices, graphene-based devices, and spintronic devices. The methodology for estimating the future performance of emerging (beyond-CMOS) devices, and of simple logic circuits based on them, is explained. Benchmarking results are used to identify the more promising concepts and to map pathways for the improvement of beyond-CMOS computing.

  18. Algebraic Multigrid Benchmark

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the BoomerAMG solver in the hypre library, a large library of linear solvers being developed at the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided with the benchmark can build various test problems. The default problem is a Laplace-type problem on an unstructured domain with various jumps and an anisotropy in one part.
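
    To make the idea concrete, here is a minimal geometric stand-in for what a multigrid cycle does (a two-grid correction scheme for the 1-D Laplace problem); it is an illustrative sketch, not hypre's BoomerAMG:

        import numpy as np

        def jacobi(u, f, h, iters=3, w=2/3):
            # weighted Jacobi smoothing for -u'' = f, damping high-frequency error
            for _ in range(iters):
                u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
            return u

        def residual(u, f, h):
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
            return r

        def two_grid(u, f, h):
            u = jacobi(u, f, h)                    # pre-smooth
            r_c = residual(u, f, h)[::2]           # restrict residual to the coarse grid
            e_c = jacobi(np.zeros_like(r_c), r_c, 2 * h, iters=50)  # approximate coarse solve
            u += np.interp(np.arange(len(u)), np.arange(0, len(u), 2), e_c)  # prolongate
            return jacobi(u, f, h)                 # post-smooth

        n, h = 129, 1.0 / 128
        u, f = np.zeros(n), np.ones(n)
        for _ in range(20):
            u = two_grid(u, f, h)
        print(np.abs(residual(u, f, h)).max())     # residual shrinks cycle by cycle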

  19. 2001 benchmarking guide.

    PubMed

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  20. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
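
    As an illustration of the kind of whole-building metric such guides are built around, the sketch below computes Power Usage Effectiveness (PUE), total facility energy divided by IT equipment energy; the figures are invented:

        def pue(total_facility_kwh, it_equipment_kwh):
            # Power Usage Effectiveness: 1.0 is the theoretical ideal; legacy data
            # centers have often measured near 2.0
            return total_facility_kwh / it_equipment_kwh

        # hypothetical annual energy figures for one facility
        print(pue(total_facility_kwh=5_200_000, it_equipment_kwh=2_600_000))  # -> 2.0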

  1. A Bio-Inspired AER Temporal Tri-Color Differentiator Pixel Array.

    PubMed

    Farian, Łukasz; Leñero-Bardallo, Juan Antonio; Häfliger, Philipp

    2015-10-01

    This article investigates the potential of a bio-inspired vision sensor with pixels that detect transients between three primary colors. The in-pixel color processing is inspired by the retinal color opponency found in mammalian retinas. Color transitions in a pixel are represented by voltage spikes, which are akin to a neuron's action potential. These spikes are conveyed off-chip by the Address Event Representation (AER) protocol. To achieve sensitivity to three different color spectra within the visual spectrum, each pixel has three stacked photodiodes at different depths in the silicon substrate. The sensor has been fabricated in the standard TSMC 90 nm CMOS technology. A post-processing method to decode events into color transitions has been proposed and implemented as a custom interface to display real-time color changes in the visual scene. Experimental results are provided. Color transitions can be detected at high speed (up to 2.7 kHz). The sensor has a dynamic range of 58 dB and a power consumption of 22.5 mW. This type of sensor can be of use in industrial, robotics, automotive and other applications where essential information is contained in transient emission shifts within the visual spectrum.
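
    As a rough illustration of the post-processing step, the sketch below decodes a toy AER event stream into color transitions; the event layout (timestamp, pixel address, photodiode channel) and the coincidence window are assumptions, not the paper's actual format:

        from collections import defaultdict

        # hypothetical events: (timestamp_us, x, y, channel), where channel indexes
        # the stacked photodiode (0 = shallow ... 2 = deep) that emitted a spike
        events = [(10, 3, 5, 0), (12, 3, 5, 2), (40, 3, 5, 1)]
        WINDOW_US = 20   # illustrative coincidence window

        last_spike = defaultdict(dict)   # (x, y) -> {channel: last spike time}
        for t, x, y, ch in events:
            seen = last_spike[(x, y)]
            for other, t_prev in seen.items():
                # a spike on one channel shortly after another marks a color transition
                if other != ch and t - t_prev < WINDOW_US:
                    print(f"pixel ({x},{y}): channel {other} -> {ch} at t={t} us")
            seen[ch] = t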

  2. Continued development and validation of the AER two-dimensional interactive model

    NASA Technical Reports Server (NTRS)

    Ko, M. K. W.; Sze, N. D.; Shia, R. L.; Mackay, M.; Weisenstein, D. K.; Zhou, S. T.

    1996-01-01

    Results from two-dimensional chemistry-transport models have been used to predict the future behavior of ozone in the stratosphere. Since the transport circulation, temperature, and aerosol surface area are fixed in these models, they cannot account for the effects of changes in these quantities, which could be modified because of ozone redistribution and/or other changes in the troposphere associated with climate change. Interactive two-dimensional models, which calculate the transport circulation and temperature along with the concentrations of the chemical species, can provide answers that complement the results of three-dimensional model calculations. In this project, we performed the following tasks in pursuit of the respective goals: (1) we continued to refine the 2-D chemistry-transport model; (2) we developed a microphysics model to calculate the aerosol loading and its size distribution; and (3) we refined the treatment of physics in the AER 2-D interactive model in the following areas: the heating rate in the troposphere, and wave forcing from the propagation of planetary waves.

  4. Finding a benchmark for monitoring hospital cleanliness.

    PubMed

    Mulvey, D; Redding, P; Robertson, C; Woodall, C; Kingsmore, P; Bedwell, D; Dancer, S J

    2011-01-01

    This study evaluated three methods for monitoring hospital cleanliness. The aim was to find a benchmark that could indicate risk to patients from a contaminated environment. We performed visual monitoring, ATP bioluminescence and microbiological screening of five clinical surfaces before and after detergent-based cleaning on two wards over a four-week period. Five additional sites that were not featured in the routine domestic specification were also sampled. Measurements from all three methods were integrated and compared in order to choose appropriate levels for routine monitoring. We found that visual assessment reflected neither ATP values nor environmental contamination with microbial flora, including Staphylococcus aureus and meticillin-resistant S. aureus (MRSA). There was a relationship between microbial growth categories and the proportion of ATP values exceeding a chosen benchmark, but neither reliably predicted the presence of S. aureus or MRSA. ATP values occasionally varied widely. Detergent-based cleaning reduced levels of organic soil by 32% (95% confidence interval: 16-44%; P<0.001) but did not necessarily eliminate indicator staphylococci, some of which survived the cleaning process. An ATP benchmark value of 100 relative light units offered the closest correlation with microbial growth levels <2.5 cfu/cm(2) (receiver operating characteristic (ROC) curve sensitivity: 57%; specificity: 57%). In conclusion, microbiological and ATP monitoring confirmed environmental contamination and the persistence of hospital pathogens, and measured the effect of current cleaning practices on the environment. This study has provided provisional benchmarks to assist with future assessment of hospital cleanliness. Further work is required to refine the practical sampling strategy and choice of benchmarks. PMID:21129820
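
    For readers wanting to reproduce the style of analysis, the sketch below scores a 100 RLU ATP benchmark against a 2.5 cfu/cm2 growth threshold on invented paired readings (the study's raw data are not shown here):

        # hypothetical paired surface readings: (ATP in relative light units,
        # aerobic colony count in cfu/cm^2)
        surfaces = [(250, 4.0), (80, 1.0), (120, 3.1), (40, 0.5), (90, 2.8), (300, 2.0)]
        ATP_BENCHMARK, GROWTH_LIMIT = 100, 2.5

        tp = sum(atp > ATP_BENCHMARK and cfu >= GROWTH_LIMIT for atp, cfu in surfaces)
        fn = sum(atp <= ATP_BENCHMARK and cfu >= GROWTH_LIMIT for atp, cfu in surfaces)
        tn = sum(atp <= ATP_BENCHMARK and cfu < GROWTH_LIMIT for atp, cfu in surfaces)
        fp = sum(atp > ATP_BENCHMARK and cfu < GROWTH_LIMIT for atp, cfu in surfaces)

        print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))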

  5. A Benchmarking Model. Benchmarking Quality Performance in Vocational Technical Education.

    ERIC Educational Resources Information Center

    Losh, Charles

    The Skills Standards Projects have provided further emphasis on the need for benchmarking U.S. vocational-technical education (VTE) against international competition. Benchmarking is an ongoing systematic process designed to identify, as quantitatively as possible, those practices that produce world class performance. Metrics are those things that…

  6. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in cleanrooms. This guide is primarily intended for personnel who have responsibility for managing energy use in existing cleanroom facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, cleanroom planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the Laboratories for the 21st Century (Labs21) program, sponsored by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including cleanroom designers and energy managers.

  7. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the Laboratories for the 21st Century (Labs21) program, sponsored by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  8. Simple benchmark for complex dose finding studies.

    PubMed

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
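
    The core idea of the nonparametric benchmark lends itself to a short Monte Carlo sketch (dose labels, toxicity probabilities, and sample size below are illustrative, and only a single binary toxicity endpoint is modeled):

        import numpy as np

        rng = np.random.default_rng(0)
        true_tox = np.array([0.05, 0.12, 0.25, 0.40, 0.55])  # illustrative dose-toxicity curve
        target, n_patients, n_sims = 0.25, 20, 10_000
        best = int(np.argmin(np.abs(true_tox - target)))     # the "correct" dose

        hits = 0
        for _ in range(n_sims):
            u = rng.uniform(size=n_patients)                # latent patient tolerances
            # complete information: each patient's toxicity outcome at EVERY dose
            tox = u[:, None] < true_tox[None, :]
            est = tox.mean(axis=0)                          # empirical toxicity per dose
            hits += int(np.argmin(np.abs(est - target)) == best)

        # no realizable design can beat this complete-information selection rate
        print("benchmark accuracy:", hits / n_sims)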

  9. Benchmarking foreign electronics technologies

    SciTech Connect

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  10. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
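
    Two of the headline metrics are easy to state in code; the sketch below computes the centered root-mean-square error and the linear-trend error for a toy monthly series with one missed break (values are invented):

        import numpy as np

        def centered_rmse(series, truth):
            # subtract each series' own mean so only errors in variability count
            a, b = series - series.mean(), truth - truth.mean()
            return np.sqrt(np.mean((a - b) ** 2))

        def trend_error(series, truth):
            # difference of fitted linear-trend slopes (units per time step)
            t = np.arange(len(series))
            return np.polyfit(t, series, 1)[0] - np.polyfit(t, truth, 1)[0]

        rng = np.random.default_rng(0)
        truth = rng.normal(size=600)                          # 50 years of monthly anomalies
        homogenized = truth + 0.3 * (np.arange(600) > 300)    # an uncorrected 0.3-unit break
        print(centered_rmse(homogenized, truth), trend_error(homogenized, truth))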

  11. Internal Benchmarking for Institutional Effectiveness

    ERIC Educational Resources Information Center

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  12. Benchmark for Strategic Performance Improvement.

    ERIC Educational Resources Information Center

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  13. How to Advance TPC Benchmarks with Dependability Aspects

    NASA Astrophysics Data System (ADS)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact on both the proceeds and the reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring the corresponding performance. While TPC-E measures the recovery time after some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  14. Applications of Integral Benchmark Data

    SciTech Connect

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  15. NASA Software Engineering Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, are items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5

  16. Fuel characteristics pertinent to the design of aircraft fuel systems, Supplement I : additional information on MIL-F-7914(AER) grade JP-5 fuel and several fuel oils

    NASA Technical Reports Server (NTRS)

    Barnett, Henry C; Hibbard, Robert R

    1953-01-01

    Since the release of the first NACA publication on fuel characteristics pertinent to the design of aircraft fuel systems (NACA-RM-E53A21), additional information has become available on MIL-F-7914(AER) grade JP-5 fuel and several of the current grades of fuel oils. In order to make this information available to fuel-system designers as quickly as possible, the present report has been prepared as a supplement to NACA-RM-E53A21. Although JP-5 fuel is of greater interest in current fuel-system problems than the fuel oils, the available data are not as extensive. It is believed, however, that the limited data on JP-5 are sufficient to indicate the variations in stocks that the designer must consider under a given fuel specification. The methods used in the preparation and extrapolation of data presented in the tables and figures of this supplement are the same as those used in NACA-RM-E53A21.

  17. Benchmarking of collimation tracking using RHIC beam loss data.

    SciTech Connect

    Robert-Demolaize, G.; Drees, A.

    2008-06-23

    State-of-the-art tracking tools were recently developed at CERN to study the cleaning efficiency of the Large Hadron Collider (LHC) collimation system. In order to estimate the prediction accuracy of these tools, benchmarking studies can be performed using actual beam loss measurements from a machine that already uses a similar multistage collimation system. This paper reviews the main results from benchmarking studies performed with specific data collected from operations at the Relativistic Heavy Ion Collider (RHIC).

  18. Extraction of pure thermal neutron beam for the proposed PGNAA facility at the TRIGA research reactor of AERE, Savar, Bangladesh

    NASA Astrophysics Data System (ADS)

    Alam, Sabina; Zaman, M. A.; Islam, S. M. A.; Ahsan, M. H.

    1993-10-01

    A study of collimators and filters for the design of a spectrometer for prompt gamma neutron activation analysis (PGNAA) at one of the radial beamports of the TRIGA Mark II reactor at AERE, Savar has been carried out. On the basis of this study a collimator and a filter have been designed for the proposed PGNAA facility. Calculations have been performed to determine the neutron flux at various positions in the core of the reactor using the computer code TRIGAP. The gamma dose in the core of the reactor has also been measured experimentally in the present work, using the TLD technique.

  19. Benchmarking in water project analysis

    NASA Astrophysics Data System (ADS)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.

  20. Phase-covariant quantum benchmarks

    NASA Astrophysics Data System (ADS)

    Calsamiglia, J.; Aspachs, M.; Muñoz-Tapia, R.; Bagan, E.

    2009-05-01

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with the standard state-estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.
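
    To make the classical reference point concrete, the following Monte Carlo sketch (not taken from the paper) estimates the average fidelity of one naive measure-and-prepare strategy over uniformly distributed equatorial qubit states; perfect quantum storage would score 1:

        import numpy as np

        rng = np.random.default_rng(0)
        phi = rng.uniform(0.0, 2 * np.pi, 500_000)  # states (|0> + e^{i phi}|1>)/sqrt(2)

        p_plus = (1 + np.cos(phi)) / 2              # Born probability of the +x outcome
        # measure sigma_x and re-prepare the observed eigenstate; the overlap of the
        # re-prepared state with the true state equals p_plus (or 1 - p_plus)
        avg_fidelity = np.mean(p_plus ** 2 + (1 - p_plus) ** 2)
        print(avg_fidelity)                         # ~0.75, well below ideal storage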

  1. Issues in Benchmark Metric Selection

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
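
    The effect is easy to reproduce: the two means can rank the same pair of systems in opposite order (runtimes below are invented):

        import statistics as st

        # per-query runtimes (seconds) for two hypothetical systems
        a = [1, 1, 1, 1, 100]      # fast everywhere except one pathological query
        b = [10, 10, 10, 10, 10]   # uniformly mediocre

        print(st.mean(a), st.mean(b))                      # arithmetic: 20.8 vs 10.0
        print(st.geometric_mean(a), st.geometric_mean(b))  # geometric: ~2.5 vs 10.0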

  2. A Heterogeneous Medium Analytical Benchmark

    SciTech Connect

    Ganapol, B.D.

    1999-09-27

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 digits of accuracy).

  3. California commercial building energy benchmarking

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the identities of building owners might be revealed and

  4. Phase-covariant quantum benchmarks

    SciTech Connect

    Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.

    2009-05-15

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with the standard state-estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  5. Sequence Stratigraphy of the Lower Cretaceous in Aer Sag, Erlian Basin, North China

    NASA Astrophysics Data System (ADS)

    Yao, Wei; De Batist, Marc; Wu, Chonglong

    2014-05-01

    The concepts of sequence stratigraphy, initially developed for the study of marine depositional systems, are increasingly being applied to sequences deposited in lacustrine basins, particularly in the context of petroleum exploration. However, lacustrine basins differ from marine basins: they are typically smaller, exhibit a strong diversification in sedimentary facies, generally contain thinner sequences, and are characterized by multiple sedimentary source regions. These characteristics should be taken into account when analyzing sequence stratigraphy in lacustrine basins. Aer Sag is a balanced-fill sag in the Erlian basin, North China. During the Early Cretaceous, tectonic subsidence was the main controlling factor for sequence development. Based on the unconformities observed at the top of the different inversion-induced depositional cycles, the 2nd-order sequence of the Lower Cretaceous can be subdivided into six 3rd-order sequences, of which the lower four, which bear most of the hydrocarbon reservoirs, are the focus of this study. Generally, a complete 3rd-order sequence can be partitioned into four systems tracts: lowstand systems tract (LST), transgressive systems tract (TST), highstand systems tract (HST) and forced regression systems tract (FRST). In LSTs, tectonic activity is weak and the subsidence rate is slow; the rate of creation of accommodation space is therefore so slow that coarsening-upward prograding sedimentary units develop. In TSTs, tectonic activity becomes stronger and the rate of creation of accommodation space outpaces the rate of sediment supply. TSTs are characterized by fining-upward retrograding sedimentary units and by onlaps on seismic profiles, caused by the expansion of the lake. In HSTs, tectonic activity slows down again and the rate of creation of accommodation space falls below the rate of sediment supply, which causes the lake to shrink and coarsening-upward prograding sedimentary units to develop.

  6. Benchmarking for Bayesian Reinforcement Learning

    PubMed Central

    Ernst, Damien; Couëtoux, Adrien

    2016-01-01

    In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the collected rewards while interacting with their environment, using some prior knowledge that is accessed beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. The paper addresses this problem, and provides a new BRL comparison methodology along with the corresponding open source library. In this methodology, a comparison criterion that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions is defined. In order to enable the comparison of non-anytime algorithms, our methodology also includes a detailed analysis of the computation time requirement of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each of which has two different prior distributions, and seven state-of-the-art RL algorithms. Finally, our library is illustrated by comparing all the available algorithms and the results are discussed. PMID:27304891

  7. Benchmarking without ground truth

    NASA Astrophysics Data System (ADS)

    Santini, Simone

    2006-01-01

    Many evaluation techniques for content-based image retrieval are based on the availability of a ground truth, that is, on a "correct" categorization of images so that, say, if the query image is of category A, only the returned images in category A will be considered as "hits." Based on such a ground truth, standard information retrieval measures such as precision and recall are given and used to evaluate and compare retrieval algorithms. Coherently, the assemblers of benchmarking databases go to a certain length to have their images categorized. The assumption of the existence of a ground truth is, in many respects, naive. It is well known that the categorization of the images depends on the a priori (from the point of view of such categorization) subdivision of the semantic field in which the images are placed (a trivial observation: a plant subdivision for a botanist is very different from that for a layperson). Even within a given semantic field, however, categorization by human subjects is subject to uncertainty, and it makes little statistical sense to consider the categorization given by one person as the unassailable ground truth. In this paper I propose two evaluation techniques that apply to the case in which the ground truth is subject to uncertainty. In this case, obviously, measures such as precision and recall will be subject to uncertainty as well. The paper will explore the relation between the uncertainty in the ground truth and that in the most commonly used evaluation measures, so that the measurements done on a given system can preserve statistical significance.
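
    One simple way to carry that uncertainty into an evaluation measure, sketched here with invented numbers: treat each returned image's category membership as a probability (say, the fraction of annotators who agree), so precision acquires a mean and a spread rather than a single value:

        import numpy as np

        # hypothetical per-image probabilities of being a true "hit"
        p = np.array([0.9, 0.6, 1.0, 0.3, 0.8])

        k = len(p)
        mean_precision = p.mean()                        # expected precision at k
        sd_precision = np.sqrt((p * (1 - p)).sum()) / k  # spread, assuming independence
        print(mean_precision, sd_precision)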

  8. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.
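
    In the same spirit, a miniature message-passing test is easy to write; the sketch below times a ping-pong exchange between two processes over an OS pipe (a stand-in for hypercube links, not the original benchmark suite):

        import time
        from multiprocessing import Pipe, Process

        def echo(conn):
            # bounce every message back until an empty sentinel arrives
            while True:
                msg = conn.recv_bytes()
                if not msg:
                    break
                conn.send_bytes(msg)

        def pingpong_mbps(size, reps=200):
            parent, child = Pipe()
            worker = Process(target=echo, args=(child,))
            worker.start()
            payload = b"x" * size
            t0 = time.perf_counter()
            for _ in range(reps):
                parent.send_bytes(payload)
                parent.recv_bytes()
            dt = time.perf_counter() - t0
            parent.send_bytes(b"")      # sentinel: stop the echo process
            worker.join()
            return 2 * reps * size / dt / 1e6   # round-trip MB/s

        if __name__ == "__main__":
            for size in (64, 1024, 1024 * 1024):
                print(size, "bytes:", round(pingpong_mbps(size), 1), "MB/s")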

  9. Data-Intensive Benchmarking Suite

    2008-11-26

    The Data-Intensive Benchmark Suite is a set of programs written for the study of data- or storage-intensive science and engineering problems. The benchmark sets cover: general graph searching (basic and Hadoop Map/Reduce breadth-first search), genome sequence searching, HTTP request classification (basic and Hadoop Map/Reduce), low-level data communication, and storage device micro-benchmarking.

  10. Building a knowledge base of severe adverse drug events based on AERS reporting data using semantic web technologies.

    PubMed

    Jiang, Guoqian; Wang, Liwei; Liu, Hongfang; Solbrig, Harold R; Chute, Christopher G

    2013-01-01

    A semantically coded knowledge base of adverse drug events (ADEs) with severity information is critical for clinical decision support systems and translational research applications. However, it remains challenging to measure and identify the severity information of ADEs. The objective of the study is to develop and evaluate a semantic web based approach for building a knowledge base of severe ADEs based on the FDA Adverse Event Reporting System (AERS) reporting data. We utilized a normalized AERS reporting dataset and extracted putative drug-ADE pairs and their associated outcome codes in the domain of cardiac disorders. We validated the drug-ADE associations using ADE datasets from the Side Effect Resource (SIDER) and the UMLS. We leveraged the Common Terminology Criteria for Adverse Events (CTCAE) grading system and classified the ADEs into CTCAE grades represented in the Web Ontology Language (OWL). We identified and validated 2,444 unique drug-ADE pairs in the domain of cardiac disorders, of which 760 pairs are in Grade 5, 775 pairs in Grade 4 and 2,196 pairs in Grade 3.
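
    A minimal sketch of what one semantically coded entry might look like, using the rdflib package with a hypothetical namespace and vocabulary (the project's actual ontology terms are not reproduced here):

        from rdflib import Graph, Literal, Namespace, RDF

        ADE = Namespace("http://example.org/ade#")   # hypothetical namespace
        g = Graph()

        pair = ADE["drugX_cardiac_arrest"]           # one putative drug-ADE pair
        g.add((pair, RDF.type, ADE.DrugADEAssociation))
        g.add((pair, ADE.drug, Literal("drugX")))
        g.add((pair, ADE.adverseEvent, Literal("cardiac arrest")))
        g.add((pair, ADE.ctcaeGrade, Literal(5)))    # CTCAE Grade 5: fatal outcome

        print(g.serialize(format="turtle"))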

  11. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    SciTech Connect

    Thrower, A.W.; Patric, J.; Keister, M.

    2008-07-01

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in

  12. NAS Grid Benchmarks: A Tool for Grid Space Exploration

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We present an approach for benchmarking services provided by computational Grids. It is based on the NAS Parallel Benchmarks (NPB) and is called NAS Grid Benchmark (NGB) in this paper. We present NGB as a data flow graph encapsulating an instance of an NPB code in each graph node, which communicates with other nodes by sending/receiving initialization data. These nodes may be mapped to the same or different Grid machines. Like NPB, NGB will specify several different classes (problem sizes). NGB also specifies the generic Grid services sufficient for running the benchmark. The implementor has the freedom to choose any specific Grid environment. However, we describe a reference implementation in Java, and present some scenarios for using NGB.
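
    The data-flow-graph idea reduces to a few lines; the sketch below wires toy "kernels" into an illustrative dependency graph and runs them in topological order (the graph shape is invented, not one of the official NGB classes):

        # node -> list of dependencies; names echo NPB kernels
        graph = {"SP": [], "MG": ["SP"], "FT": ["SP"], "BT": ["MG", "FT"]}

        def run_kernel(name, inputs):
            # stand-in for launching an NPB code instance on some Grid machine
            return f"{name}({','.join(inputs) or 'init'})"

        results, pending = {}, dict(graph)
        while pending:
            ready = [n for n, deps in pending.items() if all(d in results for d in deps)]
            for n in ready:
                results[n] = run_kernel(n, [results[d] for d in graph[n]])
                del pending[n]

        print(results["BT"])   # BT(MG(SP(init)),FT(SP(init)))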

  13. NHT-1 I/O Benchmarks

    NASA Technical Reports Server (NTRS)

    Carter, Russell; Ciotti, Bob; Fineberg, Sam; Nitzberg, Bill

    1992-01-01

    The NHT-1 benchmarks are a set of three scalable I/O benchmarks suitable for evaluating the I/O subsystems of high-performance distributed-memory computer systems. The benchmarks test application I/O, maximum sustained disk I/O, and maximum sustained network I/O. Sample codes are available which implement the benchmarks.
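
    A maximum-sustained-disk-I/O test of the same flavor can be sketched in a few lines (block size, file size, and the single-stream design are arbitrary choices, not the NHT-1 specification):

        import os
        import time

        def disk_write_mbps(path, total_mb=256, block_kb=1024):
            # sequential write benchmark: stream fixed-size blocks, then fsync so
            # the OS page cache does not inflate the result
            block = os.urandom(block_kb * 1024)
            t0 = time.perf_counter()
            with open(path, "wb") as f:
                for _ in range((total_mb * 1024) // block_kb):
                    f.write(block)
                f.flush()
                os.fsync(f.fileno())
            dt = time.perf_counter() - t0
            os.remove(path)
            return total_mb / dt

        if __name__ == "__main__":
            print(round(disk_write_mbps("bench.tmp"), 1), "MB/s")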

  14. How to avoid 'death by benchmarking'.

    PubMed

    Wofford, Dave; Libby, Darin

    2015-08-01

    Hospitals and health systems should adopt four key principles and practices when applying benchmarks to determine physician compensation: Acknowledge that a lower percentile may be appropriate. Use the median as the all-in benchmark. Use peer benchmarks when available. Use alternative benchmarks.

  15. Benchmark simulation models, quo vadis?

    PubMed

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  16. Effective Communication and File-I/O Bandwidth Benchmarks

    SciTech Connect

    Koniges, A E; Rabenseifner, R

    2001-05-02

    We describe the design and MPI implementation of two benchmarks created to characterize the balanced system performance of high-performance clusters and supercomputers: b_eff, the communication-specific benchmark, examines the parallel message-passing performance of a system, and b_eff_io characterizes the effective I/O bandwidth. Both benchmarks have two goals: (a) to get detailed insight into the performance strengths and weaknesses of different parallel communication and I/O patterns, and, based on this, (b) to obtain a single bandwidth number that characterizes the average performance of the system, namely communication and I/O bandwidth. Both benchmarks use a time-driven approach and loop over a variety of communication and access patterns to characterize a system in an automated fashion. Results of the two benchmarks are given for several systems, including IBM SPs, Cray T3E, NEC SX-5, and Hitachi SR 8000. After a redesign of b_eff_io, I/O bandwidth results for several compute partition sizes were achieved in an appropriate time for rapid benchmarking.
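
    The reduction to "a single bandwidth number" can be caricatured as follows; this is a deliberate simplification (the published b_eff definition also maximizes over methods and weights message sizes on a logarithmic scale), with invented measurements:

        # measured bandwidth (MB/s) per message size, for two communication patterns
        measurements = {
            "ring":   {1_024: 80.0, 65_536: 400.0, 1_048_576: 900.0},
            "random": {1_024: 60.0, 65_536: 350.0, 1_048_576: 850.0},
        }

        # average over (log-spaced) message sizes within each pattern, then over patterns
        per_pattern = [sum(bw.values()) / len(bw) for bw in measurements.values()]
        b_eff = sum(per_pattern) / len(per_pattern)
        print(f"effective bandwidth ~ {b_eff:.1f} MB/s")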

  17. Benchmarking: a method for continuous quality improvement in health.

    PubMed

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  18. Benchmark Problems for Space Mission Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Folta, David C.; Burns, Richard

    2003-01-01

    To provide a high-level focus for distributed space system flight dynamics and control research, several benchmark problems are suggested for space mission formation flying. The problems cover formation flying in low-altitude, near-circular Earth orbit; high-altitude, highly elliptical Earth orbits; and large-amplitude Lissajous trajectories about collinear libration points of the Sun-Earth/Moon system. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions that are of interest to various agencies.

  19. 75 FR 27332 - AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC; Eagle Creek Water Resources, LLC; Eagle Creek Land...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-14

    ... Energy Regulatory Commission AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC; Eagle Creek Water Resources... Creek Hydro Power, LLC, Eagle Creek Water Resources, LLC, and Eagle Creek Land Resources, LLC.... For the transferee: Mr. Paul Ho, Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources, LLC,...

  20. 77 FR 13592 - AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources, LLC, Eagle Creek Land...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-07

    ... Energy Regulatory Commission AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources... Power, LLC, Eagle Creek Water Resources, LLC, and Eagle Creek Land Resources, LLC (transferees) filed an...) 805-1469. Transferees: Mr. Bernard H. Cherry, Eagle Creek Hydro Power, LLC, Eagle Creek...

  1. Radiation Detection Computational Benchmark Scenarios

    SciTech Connect

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  2. SPICE benchmark for global tomographic methods

    NASA Astrophysics Data System (ADS)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high fidelity anisotropic modelling was performed by using state-of-the-art anisotropic anelastic modelling code, that is, coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events and three-component seismograms are recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal

  3. H.B. Robinson-2 pressure vessel benchmark

    SciTech Connect

    Remec, I.; Kam, F.B.K.

    1998-02-01

    The H. B. Robinson Unit 2 Pressure Vessel Benchmark (HBR-2 benchmark) is described and analyzed in this report. Analysis of the HBR-2 benchmark can be used as partial fulfillment of the requirements for the qualification of the methodology for calculating neutron fluence in pressure vessels, as required by the U.S. Nuclear Regulatory Commission Regulatory Guide DG-1053, Calculational and Dosimetry Methods for Determining Pressure Vessel Neutron Fluence. Section 1 of this report describes the HBR-2 benchmark and provides all the dimensions, material compositions, and neutron source data necessary for the analysis. The measured quantities, to be compared with the calculated values, are the specific activities at the end of fuel cycle 9. The characteristic feature of the HBR-2 benchmark is that it provides measurements on both sides of the pressure vessel: in the surveillance capsule attached to the thermal shield and in the reactor cavity. In section 2, the analysis of the HBR-2 benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed with three multigroup libraries based on ENDF/B-VI: BUGLE-93, SAILOR-95 and BUGLE-96. The average ratio of the calculated-to-measured specific activities (C/M) for the six dosimeters in the surveillance capsule was 0.90 ± 0.04 for all three libraries. The average C/Ms for the cavity dosimeters (without the neptunium dosimeter) were 0.89 ± 0.10, 0.91 ± 0.10, and 0.90 ± 0.09 for the BUGLE-93, SAILOR-95 and BUGLE-96 libraries, respectively. It is expected that the agreement of the calculations with the measurements, similar to the agreement obtained in this research, should typically be observed when the discrete-ordinates method and ENDF/B-VI libraries are used for the HBR-2 benchmark analysis.
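
    The reported C/M statistics are simple to reproduce once the activities are tabulated. The following is a minimal sketch in Python; the dosimeter values are hypothetical placeholders, not the HBR-2 benchmark data:

        # Aggregate calculated-to-measured (C/M) specific activities.
        # The activity values below are hypothetical placeholders.
        import statistics

        calculated = [1.82e5, 3.41e4, 7.95e3, 2.10e5, 6.60e4, 1.25e4]
        measured   = [2.02e5, 3.80e4, 8.80e3, 2.33e5, 7.40e4, 1.39e4]

        ratios = [c / m for c, m in zip(calculated, measured)]
        mean = statistics.mean(ratios)
        spread = statistics.stdev(ratios)   # sample standard deviation
        print(f"C/M = {mean:.3f} +/- {spread:.3f}")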

  4. Symbolic manipulation and transport benchmarks

    SciTech Connect

    Ganapol, B.D.

    1986-01-01

    The establishment of reliable benchmark solutions is an integral part of the development of computational algorithms to solve the Boltzmann equation of particle motion. These solutions provide standards by which code developers can assess new numerical algorithms as well as ensure proper programming. A transport benchmark solution, as defined here, is the accurate numerical evaluation (3 to 5 digits) of an analytical solution to the transport equation. The basic elements of such a solution are an analytical representation free from discretization and a numerical evaluation for which an error estimate can be obtained. Symbolic manipulation software such as REDUCE, MACSYMA, and SMP can greatly aid in the generation of benchmark solutions. The benefit of these manipulators lies both in their ability to perform lengthy algebraic calculations and in their ability to generate code that can be incorporated directly into existing programs. Using two fundamental problems from particle transport theory, the author explores the advantages and limitations of applying the REDUCE software package to the generation of time-dependent benchmark solutions.
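
    REDUCE, MACSYMA, and SMP are largely historical; as a hedged modern analogue of the same workflow, the sketch below uses SymPy to hold an analytical, discretization-free representation (an exponential integral that arises in slab-geometry transport) and then evaluates it well beyond the 3 to 5 digit benchmark target:

        # Symbolic-manipulation workflow with SymPy standing in for REDUCE:
        # keep an analytical representation, then evaluate it numerically
        # to controlled precision.
        import sympy as sp

        x, t = sp.symbols('x t', positive=True)
        flux_kernel = sp.Integral(sp.exp(-x * t) / t, (t, 1, sp.oo))

        closed = flux_kernel.doit()   # an exponential-integral form, e.g. expint(1, x)
        print(closed)

        # Benchmark-quality numerical value of E1(1):
        print(closed.subs(x, 1).evalf(12))   # 0.219383934396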

  5. A comparison of five benchmarks

    NASA Technical Reports Server (NTRS)

    Huss, Janice E.; Pennline, James A.

    1987-01-01

    Five benchmark programs were obtained and run on the NASA Lewis CRAY X-MP/24. A comparison was made between the programs' codes and between the methods for calculating performance figures. Several multitasking jobs were run to gain experience in how parallel performance is measured.

  6. Processor Emulator with Benchmark Applications

    SciTech Connect

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  7. Real-Time Benchmark Suite

    1992-01-17

    This software provides a portable benchmark suite for real time kernels. It tests the performance of many of the system calls, as well as the interrupt response time and task response time to interrupts. These numbers provide a baseline for comparing various real-time kernels and hardware platforms.
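
    The suite itself is not reproduced here, but the core measurement idea, tight-loop timing of a kernel round trip, is easy to illustrate. The following Python analogue is illustrative only and is not part of the original suite:

        # Illustrative analogue of a system-call latency measurement:
        # time a cheap kernel round trip in a tight loop.
        import os
        import time

        N = 100_000
        t0 = time.perf_counter_ns()
        for _ in range(N):
            os.getppid()   # a cheap system call on most platforms
        elapsed = time.perf_counter_ns() - t0
        # Includes interpreter loop overhead; a C harness would subtract it.
        print(f"mean per-call time: {elapsed / N:.0f} ns")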

  8. Benchmark Lisp And Ada Programs

    NASA Technical Reports Server (NTRS)

    Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.

    1992-01-01

    Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing efficiency of computer processing via Lisp vs. Ada; comparing efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests the efficiency with which a computer executes routines in each language. Available for any computer equipped with a validated Ada compiler and/or a Common Lisp system.

  9. Austin Community College Benchmarking Update.

    ERIC Educational Resources Information Center

    Austin Community Coll., TX. Office of Institutional Effectiveness.

    Austin Community College contracted with MGT of America, Inc. in spring 1999 to develop a peer and benchmark (best) practices analysis on key indicators. These indicators were updated in spring 2002 using data from eight Texas community colleges and four non-Texas institutions that represent large, comprehensive, urban community colleges, similar…

  10. Benchmarking short sequence mapping tools

    PubMed Central

    2013-01-01

    Background The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, the currently proposed tools make different compromises between the accuracy and the speed of mapping. Moreover, many important aspects are overlooked when comparing the performance of a newly developed tool to the state of the art. Therefore, there is a need for an objective evaluation method that covers all of these aspects. In this work, we introduce a benchmarking suite to extensively analyze mapping tools with respect to various aspects and to provide an objective comparison. Results We applied our benchmarking tests to 9 well-known mapping tools, namely Bowtie, Bowtie2, BWA, SOAP2, MAQ, RMAP, GSNAP, Novoalign, and mrsFAST (mrFAST), using synthetic data and real RNA-Seq data. MAQ and RMAP are based on building hash tables for the reads, whereas the remaining tools are based on indexing the reference genome. The benchmarking tests reveal the strengths and weaknesses of each tool. The results show that no single tool outperforms all others in all metrics. However, Bowtie maintained the best throughput for most of the tests, while BWA performed better for longer read lengths. The benchmarking tests are not restricted to the mentioned tools and can be further applied to others. Conclusion The mapping process is still a hard problem that is affected by many factors. In this work, we provided a benchmarking suite that reveals and evaluates the different factors affecting the mapping process. Still, there is no tool that outperforms all others in all of the tests. Therefore, the end user should clearly specify their needs in order to choose the tool that provides the best results. PMID:23758764
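
    The accuracy measurement at the heart of such a suite can be stated precisely: a simulated read is mapped correctly if it is placed within a small tolerance of the position recorded by the read simulator. A minimal sketch, with illustrative data:

        # Mapping accuracy for simulated reads: correct if the reported
        # position is within `tol` bases of the simulator's truth position.
        # Read IDs and positions below are illustrative.
        def mapping_accuracy(truth, reported, tol=5):
            correct = 0
            for read_id, (chrom, pos) in truth.items():
                hit = reported.get(read_id)   # unmapped reads count as wrong
                if hit and hit[0] == chrom and abs(hit[1] - pos) <= tol:
                    correct += 1
            return correct / len(truth)

        truth    = {"r1": ("chr1", 1000), "r2": ("chr1", 5000), "r3": ("chr2", 42)}
        reported = {"r1": ("chr1", 1002), "r2": ("chr2", 5000)}   # r3 unmapped
        print(mapping_accuracy(truth, reported))   # 0.333...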

  11. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  12. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  13. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  14. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  15. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  16. The Impact Hydrocode Benchmark and Validation Project

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.

    When properly benchmarked and validated against observations, computer models offer a powerful tool for understanding the mechanics of impact crater formation. We present results from a project to benchmark and validate shock physics codes.

  17. Guideline for benchmarking thermal treatment systems for low-level mixed waste

    SciTech Connect

    Hoffman, D.P.; Gibson, L.V. Jr.; Hermes, W.H.; Bastian, R.E.; Davis, W.T.

    1994-01-01

    A process for benchmarking low-level mixed waste (LLMW) treatment technologies has been developed. When used in conjunction with the identification and preparation of surrogate waste mixtures, and with defined quality assurance and quality control procedures, the benchmarking process will effectively streamline the selection of treatment technologies being considered by the US Department of Energy (DOE) for LLMW cleanup and management. Following the quantitative template provided in the benchmarking process will greatly increase the technical information available for the decision-making process. The additional technical information will remove a large part of the uncertainty in the selection of treatment technologies. It is anticipated that the use of the benchmarking process will minimize technology development costs and overall treatment costs. In addition, the benchmarking process will enhance development of the most promising LLMW treatment processes and aid in transferring the technology to the private sector. To instill inherent quality, the benchmarking process is based on defined criteria and a structured evaluation format, which are independent of any specific conventional treatment or emerging process technology. Five categories of benchmarking criteria have been developed for the evaluation: operation/design; personnel health and safety; economics; product quality; and environmental quality. This benchmarking document gives specific guidance on what information should be included and how it should be presented. A standard format for reporting is included in Appendices A and B of this document. Special considerations for LLMW are presented and included in each of the benchmarking categories.

  18. Benchmarking clinical photography services in the NHS.

    PubMed

    Arbon, Giles

    2015-01-01

    Benchmarking is used across services in the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have a program in place and have had to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlighted valuable data and comparisons that can be used to benchmark and improve services throughout the profession. PMID:26828540

  19. Benchmarking New Designs for the Two-Year Institution of Higher Education.

    ERIC Educational Resources Information Center

    Copa, George H.; Ammentorp, William

    This report, which is intended for technical institutions planning to use benchmark processes to facilitate change, contains five benchmarking studies describing future-oriented practices at two-year technical and community colleges that meet the design specifications stated in the report "New Designs for the Two-Year Institution of Higher…

  20. Pro: benchmarking is the absolute prerequisite for timely and significant business process improvement.

    PubMed

    Hill, Bradford T; Workman, Ronald

    2006-11-28

    Benchmarking in industry has been around for nearly a century, helping companies in nearly every sector imaginable improve their overall performance. Benchmarking's importance in health care, and specifically the clinical laboratory, can be summed up in one simple phrase--"If you cannot measure it, you cannot improve it." Here is why.

  1. How Benchmarking and Higher Education Came Together

    ERIC Educational Resources Information Center

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  2. Geothermal Heat Pump Benchmarking Report

    SciTech Connect

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  3. IT-benchmarking of clinical workflows: concept, implementation, and evaluation.

    PubMed

    Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula

    2014-01-01

    Due to the emerging evidence of health IT as both an opportunity and a risk for clinical workflows, health IT must undergo continuous measurement of its efficacy and efficiency. IT-benchmarks are a proven means of providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. 199 chief information officers (CIOs) took part in the benchmarking; their hospitals were assigned to reference groups of similar size and ownership drawn from a total of 259 hospitals. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs for getting a general overview, advancing IT, preparing negotiations with board members, and arguing for new IT projects.

  4. Simplified two and three dimensional HTTR benchmark problems

    SciTech Connect

    Zhan Zhang; Dingkang Zhang; Justin M. Pounders; Abderrafi M. Ougouag

    2011-05-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole core configurations. In this paper we have created two- and three-dimensional numerical benchmark problems typical of high temperature gas-cooled prismatic cores. Additionally, single-cell and single-block benchmark problems are also included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding geometry and material specification of the original experiment have been simplified while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and the pin fission density distribution for selected blocks. Also included are the solutions for the single-cell and single-block problems.

  5. Memory-intensive benchmarks: IRAM vs. cache-based machines

    SciTech Connect

    Gaeke, Brian G.; Husbands, Parry; Kim, Hyun Jin; Li, Xiaoye S.; Moon, Hyun Jin; Oliker, Leonid; Yelick, Katherine A.; Biswas, Rupak

    2001-09-29

    The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic structures, and the ratio of computation to memory operation.
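
    The contrast the authors draw between access patterns can be observed even at small scale. A hedged sketch, with NumPy gathers standing in for the benchmark's actual kernels:

        # Contrast unit-stride and random memory access, the axis along
        # which these benchmarks are characterized.
        import time
        import numpy as np

        n = 1 << 22                        # 4M int64 elements (32 MB)
        data = np.arange(n, dtype=np.int64)
        streams = {
            "sequential": np.arange(n),          # unit-stride indices
            "random": np.random.permutation(n),  # cache-hostile indices
        }
        for name, idx in streams.items():
            t0 = time.perf_counter()
            data[idx].sum()                # gather, then reduce
            print(f"{name:>10}: {time.perf_counter() - t0:.3f} s")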

  6. Benchmark 2 - Springback of a Jaguar Land Rover Aluminium

    NASA Astrophysics Data System (ADS)

    Allen, Martin; Oliveira, Marta; Hazra, Sumit; Adetoro, Oluwamayokun; Das, Abhishek; Cardoso, Rui

    2016-08-01

    The aim of this benchmark is the numerical prediction of the springback of an aluminium panel used in the production of a Jaguar car. The numerical simulation of springback has been very important for the reduction of die try-outs through the design of tools with die compensation, thereby allowing for the production of dimensionally accurate complex parts at a reduced cost. The forming stage of this benchmark includes a single forming operation followed by a trimming operation. Cross-sectional profiles should be reported at specific (provided) sections in the part, before and after springback. The problem description, tool geometries, material properties, and the required simulation reports are summarized in this benchmark briefing.

  7. Memory-Intensive Benchmarks: IRAM vs. Cache-Based Machines

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Gaeke, Brian R.; Husbands, Parry; Li, Xiaoye S.; Oliker, Leonid; Yelick, Katherine A.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic control structures, and the ratio of computation to memory operation.

  9. No free lunch and benchmarks.

    PubMed

    Duéñez-Guzmán, Edgar A; Vose, Michael D

    2013-01-01

    We extend previous results concerning black box search algorithms, presenting new theoretical tools related to no free lunch (NFL) where functions are restricted to some benchmark (that need not be permutation closed), algorithms are restricted to some collection (that need not be permutation closed) or limited to some number of steps, or the performance measure is given. Minimax distinctions are considered from a geometric perspective, and basic results on performance matching are also presented.

  10. Restaurant Energy Use Benchmarking Guideline

    SciTech Connect

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
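
    A metric of the kind the report describes can be assembled from each store's utility bills and a simple normalizer such as floor area. The sketch below uses hypothetical store data:

        # Portfolio benchmarking from utility data: normalize annual
        # electricity use by floor area, then flag outlier stores.
        # All store data below are hypothetical.
        import statistics

        stores = {   # store -> (annual kWh, floor area in ft^2)
            "Store A": (410_000, 2_400),
            "Store B": (385_000, 2_600),
            "Store C": (560_000, 2_500),
        }
        eui = {s: kwh / area for s, (kwh, area) in stores.items()}
        median = statistics.median(eui.values())
        for store, value in sorted(eui.items(), key=lambda kv: -kv[1]):
            flag = "  <-- investigate" if value > 1.2 * median else ""
            print(f"{store}: {value:.0f} kWh/ft2-yr{flag}")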

  11. MPI Multicore Torus Communication Benchmark

    2008-02-05

    The MPI Multicore Torus Communications Benchmark (TorusTest) measures the aggregate bandwidth across all six links from/to any multicore node in a logical torus. It can run in two modes, using either a static or a random mapping of tasks to torus locations. The former can be used to achieve optimal mappings; the latter explores the aggregate bandwidths that can be achieved with varying node mappings.
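
    The underlying measurement can be sketched with a simple pairwise exchange. The mpi4py example below probes a single link rather than the six-link aggregate that TorusTest measures, and is an illustration rather than the benchmark itself:

        # Point-to-point bandwidth probe (one link, not the six-link
        # aggregate). Run with: mpiexec -n 2 python probe.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        peer = 1 - rank                    # assumes exactly two ranks
        nbytes = 1 << 24                   # 16 MB message
        sendbuf = np.zeros(nbytes, dtype=np.uint8)
        recvbuf = np.empty_like(sendbuf)

        comm.Barrier()
        t0 = MPI.Wtime()
        reps = 10
        for _ in range(reps):
            comm.Sendrecv(sendbuf, dest=peer, recvbuf=recvbuf, source=peer)
        dt = MPI.Wtime() - t0
        if rank == 0:
            gbps = 2 * reps * nbytes / dt / 1e9   # bidirectional traffic
            print(f"~{gbps:.2f} GB/s")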

  12. Multisensor benchmark data for riot control

    NASA Astrophysics Data System (ADS)

    Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter

    2008-10-01

    Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to a de-escalation of the situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data with well-known quality are needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, after algorithm development has finished and marketing aspects emerge, compliance with specifications must be proven. This paper describes a multisensor benchmark which serves exactly this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup, and finally we give some insight into quality measures of multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.

  13. The Medical Library Association Benchmarking Network: results*

    PubMed Central

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C.; Smith, Bernie Todd

    2006-01-01

    Objective: This article presents some limited results from the Medical Library Association (MLA) Benchmarking Network survey conducted in 2002. Other uses of the data are also presented. Methods: After several years of development and testing, a Web-based survey opened for data input in December 2001. Three hundred eighty-five MLA members entered data on the size of their institutions and the activities of their libraries. The data from 344 hospital libraries were edited and selected for reporting in aggregate tables and on an interactive site in the Members-Only area of MLANET. The data represent a 16% to 23% return rate and have a 95% confidence level. Results: Specific questions can be answered using the reports. The data can be used to review internal processes, perform outcomes benchmarking, retest a hypothesis, refute a previous survey's findings, or develop library standards. The data can be compared with current surveys or used to look for trends through comparison with past surveys. Conclusions: The impact of this project on MLA will reach into areas of research and advocacy. The data will be useful in the everyday working of small health sciences libraries as well as provide concrete data on the current practices of health sciences libraries. PMID:16636703

  14. RISKIND verification and benchmark comparisons

    SciTech Connect

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the associated dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  15. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as its operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  16. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    SciTech Connect

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted; the future of the two projects is also outlined and discussed.

  17. NASA Software Engineering Benchmarking Effort

    NASA Technical Reports Server (NTRS)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information. (1) We saw potential solutions to some of our "top 10" issues, and (2) we gained an assessment of where NASA stands in relation to other aerospace/defense groups. We also formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes, and (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, and instructors. Finally, we received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures, and (2) a welcomed opportunity to provide feedback on working with NASA.

  18. Benchmarking for the competitive marketplace.

    PubMed

    Clarke, R W; Sucher, T O

    1999-07-01

    One would get little argument these days regarding the importance of performance measurement in the health care industry. The traditional approach has been the straightforward use of measurable units such as financial comparisons and clinical indicators (e.g., length of stay). We in the health care industry have also traditionally benchmarked our performance and strategies against those most like ourselves. Today's competitive market demands a more customer-focused set of performance measures that go beyond traditional approaches such as customer service. The most important task in today's environment is to study customers' emerging priorities and adjust our business to meet those priorities. PMID:11184882

  19. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    SciTech Connect

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

    This document provides an overview of the benchmark, HPGMG, for ranking large scale general purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric, HPL; some background on the Top500 list and the challenges of developing such a metric; a discussion of our design philosophy and methodology; and an overview of the specification of the benchmark. The primary documentation with maintained details on the specification can be found at hpgmg.org, and the wiki and the benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  20. Pynamic: the Python Dynamic Benchmark

    SciTech Connect

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.
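
    Pynamic generates genuine shared libraries; a rough, pure-Python analogue of the stress idea (resolving many small modules at startup) can be sketched as follows:

        # Rough analogue of the Pynamic stress idea using Python modules in
        # place of compiled DLLs: generate many tiny modules, then time the
        # import storm. (The real benchmark exercises the dynamic linker.)
        import importlib
        import pathlib
        import sys
        import tempfile
        import time

        n = 500
        tmp = pathlib.Path(tempfile.mkdtemp())
        for i in range(n):
            (tmp / f"mod_{i}.py").write_text(f"VALUE = {i}\n")
        sys.path.insert(0, str(tmp))

        t0 = time.perf_counter()
        modules = [importlib.import_module(f"mod_{i}") for i in range(n)]
        print(f"imported {n} modules in {time.perf_counter() - t0:.2f} s")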

  1. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  2. Benchmarking for Excellence and the Nursing Process

    NASA Technical Reports Server (NTRS)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  3. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
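
    The patented idea, a fixed time budget with a scalable workload, inverts the usual fixed-work benchmark. A minimal sketch of the control loop, with a toy task standing in for the scalable problem set:

        # Fixed-time benchmark: every machine gets the same interval, and
        # the rating is the degree of progress through an ever-larger task.
        import time

        def rate(task, interval_s=1.0):
            deadline = time.perf_counter() + interval_s
            n, completed = 1, 0
            while True:
                task(n)                       # solve at resolution n
                if time.perf_counter() >= deadline:
                    break                     # do not credit overrun work
                completed = n
                n *= 2
            return completed

        toy = lambda n: sum(i * i for i in range(n))   # placeholder workload
        print("rating:", rate(toy))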

  4. Benchmarks for acute stroke care delivery

    PubMed Central

    Hall, Ruth E.; Khan, Ferhana; Bayley, Mark T.; Asllani, Eriola; Lindsay, Patrice; Hill, Michael D.; O'Callaghan, Christina; Silver, Frank L.; Kapral, Moira K.

    2013-01-01

    Objective Despite widespread interest in many jurisdictions in monitoring and improving the quality of stroke care delivery, benchmarks for most stroke performance indicators have not been established. The objective of this study was to develop data-derived benchmarks for acute stroke quality indicators. Design Nine key acute stroke quality indicators were selected from the Canadian Stroke Best Practice Performance Measures Manual. Participants A population-based retrospective sample of patients discharged from 142 hospitals in Ontario, Canada, between 1 April 2008 and 31 March 2009 (N = 3191) was used to calculate hospital rates of performance and benchmarks. Intervention The Achievable Benchmark of Care (ABC™) methodology was used to create benchmarks based on the performance of the upper 15% of patients in the top-performing hospitals. Main Outcome Measures Benchmarks were calculated for rates of neuroimaging, carotid imaging, stroke unit admission, dysphagia screening and administration of stroke-related medications. Results The following benchmarks were derived: neuroimaging within 24 h, 98%; admission to a stroke unit, 77%; thrombolysis among patients arriving within 2.5 h, 59%; carotid imaging, 93%; dysphagia screening, 88%; antithrombotic therapy, 98%; anticoagulation for atrial fibrillation, 94%; antihypertensive therapy, 92%; and lipid-lowering therapy, 77%. ABC™ acute stroke care benchmarks achieve or exceed the consensus-based targets required by Accreditation Canada, with the exception of dysphagia screening. Conclusions Benchmarks for nine hospital-based acute stroke care quality indicators have been established. These can be used in the development of standards for quality improvement initiatives. PMID:24141011
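
    The ABC™ calculation can be paraphrased as: rank providers by performance, pool the best until they account for the stated fraction of patients, and report the pooled rate. The simplified sketch below omits the published method's adjustment for small denominators, and the hospital data are hypothetical:

        # Simplified Achievable Benchmark of Care (ABC) calculation; the
        # hospital data are hypothetical and the small-denominator
        # adjustment of the published method is omitted.
        def abc_benchmark(hospitals, fraction=0.15):
            # hospitals: list of (events, patients) pairs
            ranked = sorted(hospitals, key=lambda h: h[0] / h[1], reverse=True)
            target = fraction * sum(p for _, p in hospitals)
            events = patients = 0
            for e, p in ranked:
                events += e
                patients += p
                if patients >= target:
                    break
            return events / patients

        data = [(95, 100), (180, 200), (60, 100), (210, 300)]   # hypothetical
        print(f"benchmark rate: {abc_benchmark(data):.1%}")     # 91.7%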

  5. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    ERIC Educational Resources Information Center

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  6. Benchmark for evaluation and validation of reactor simulations (BEAVRS)

    SciTech Connect

    Horelik, N.; Herman, B.; Forget, B.; Smith, K.

    2013-07-01

    Advances in parallel computing have made possible the development of high-fidelity tools for the design and analysis of nuclear reactor cores, and such tools require extensive verification and validation. This paper introduces BEAVRS, a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading patterns, and numerous in-vessel components. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from fifty-eight instrumented assemblies. Initial comparisons between calculations performed with MIT's OpenMC Monte Carlo neutron transport code and measured cycle 1 HZP test data are presented, and these results display an average deviation of approximately 100 pcm for the various critical configurations and control rod worth measurements. Computed HZP radial fission detector flux maps also agree reasonably well with the available measured data. All results indicate that this benchmark will be extremely useful in validation of coupled-physics codes and uncertainty quantification of in-core physics computational predictions. The detailed BEAVRS specification and its associated data package is hosted online at the MIT Computational Reactor Physics Group web site (http://crpg.mit.edu/), where future revisions and refinements to the benchmark specification will be made publicly available. (authors)

  7. Preliminary Benchmarking Efforts and MCNP Simulation Results for Homeland Security

    SciTech Connect

    Robert Hayes

    2008-04-18

    It is shown in this work that basic measurements made from well-defined source-detector configurations can be readily converted into benchmark-quality results against which Monte Carlo N-Particle (MCNP) input stacks can be validated. Specifically, a recent measurement made in support of national security at the Nevada Test Site (NTS) is described with sufficient detail to be submitted to the American Nuclear Society’s (ANS) Joint Benchmark Committee (JBC) for consideration as a radiation measurement benchmark. From this very basic measurement, MCNP input stacks are generated and validated in both predicted signal amplitude and spectral shape. Perturbations from the more recent pulse height light (PHL) tally feature are not modeled at this time, although the spectral deviations that are seen can be largely attributed to not including this small correction. The value of this work is as a proof-of-concept demonstration that well-documented historical testing can be converted into formal radiation measurement benchmarks. This effort would support virtual testing of algorithms and new detector configurations.

  8. NAS Parallel Benchmark Results 11-96. 1.0

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Bailey, David; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The NAS Parallel Benchmarks have been developed at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion. In other words, the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. These results represent the best results that have been reported to us by the vendors for the specific systems listed. In this report, we present new NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, SGI Origin200, and SGI Origin2000. We also report High Performance Fortran (HPF) based NPB results for IBM SP2 Wide Nodes, HP/Convex Exemplar SPP2000, and SGI/CRAY T3D. These results have been submitted by Applied Parallel Research (APR) and Portland Group Inc. (PGI). We also present sustained performance per dollar for Class B LU, SP and BT benchmarks.

  9. Benchmarking: A tool to enhance performance

    SciTech Connect

    Munro, J.F.; Kristal, J.; Thompson, G.; Johnson, T.

    1996-12-31

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors in developing the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff, the ones closest to the work, must take ownership of the studies. This avoids the "check the box" mentality associated with some third-party studies. This workshop will provide participants with a basic level of understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  10. COG validation: SINBAD Benchmark Problems

    SciTech Connect

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the Osaka nickel and aluminum sphere experiments conducted at the OKTAVIAN facility, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few percent of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section database versions are different (MCNP uses ENDF/B-VI 1.1 while COG uses ENDF/B-VI R7), (2) the code implementations are different, and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.

  11. Benchmarking Asteroid-Deflection Experiment

    NASA Astrophysics Data System (ADS)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  12. Benchmarking Multipacting Simulations in VORPAL

    SciTech Connect

    C. Nieter, C. Roark, P. Stoltz, K. Tian

    2009-05-01

    We will present the results of benchmarking simulations run to test the ability of VORPAL to model multipacting processes in Superconducting Radio Frequency structures. VORPAL is an electromagnetic (FDTD) particle-in-cell simulation code originally developed for applications in plasma and beam physics. The addition of conformal boundaries and algorithms for secondary electron emission allow VORPAL to be applied to multipacting processes. We start with simulations of multipacting between parallel plates where there are well understood theoretical predictions for the frequency bands where multipacting is expected to occur. We reproduce the predicted multipacting bands and demonstrate departures from the theoretical predictions when a more sophisticated model of secondary emission is used. Simulations of existing cavity structures developed at Jefferson National Laboratories will also be presented where we compare results from VORPAL to experimental data.

  13. KRITZ-2 Experimental Benchmark Analysis

    SciTech Connect

    Pavlovichev, A.M.

    2001-09-28

    The KRITZ-2 experiment has been adopted by the OECD/NEA Task Force on Reactor-Based Plutonium Disposition for use as a benchmark exercise. The KRITZ-2 experiment consists of three different core configurations (one with near-weapons-grade MOX) with critical conditions at 20 °C and 245 °C. The KRITZ-2 experiment has been calculated with the MCU-REA code, which is a continuous-energy Monte Carlo code system developed at the Russian Research Center Kurchatov Institute and is used extensively in the Fissile Materials Disposition Program. The calculated results for k-eff and fission rate distributions are compared with the experimental data and with the results of other codes. The results are in good agreement with the experimental values.

  14. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    SciTech Connect

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so that they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  15. Benchmarking Learning and Teaching: Developing a Method

    ERIC Educational Resources Information Center

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  16. Benchmark Assessment for Improved Learning. AACC Report

    ERIC Educational Resources Information Center

    Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald

    2010-01-01

    This report describes the purposes of benchmark assessments and provides recommendations for selecting and using benchmark assessments--addressing validity, alignment, reliability, fairness and bias and accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…

  17. Beyond Benchmarking: Value-Adding Metrics

    ERIC Educational Resources Information Center

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  18. Large Core Code Evaluation Working Group Benchmark Problem Four: neutronics and burnup analysis of a large heterogeneous fast reactor. Part 1. Analysis of benchmark results. [LMFBR]

    SciTech Connect

    Cowan, C.L.; Protsik, R.; Lewellen, J.W.

    1984-01-01

    The Large Core Code Evaluation Working Group Benchmark Problem Four was specified to provide a stringent test of the current methods which are used in the nuclear design and analyses process. The benchmark specifications provided a base for performing detailed burnup calculations over the first two irradiation cycles for a large heterogeneous fast reactor. Particular emphasis was placed on the techniques for modeling the three-dimensional benchmark geometry, and sensitivity studies were carried out to determine the performance parameter sensitivities to changes in the neutronics and burnup specifications. The results of the Benchmark Four calculations indicated that a linked RZ-XY (Hex) two-dimensional representation of the benchmark model geometry can be used to predict mass balance data, power distributions, regionwise fuel exposure data and burnup reactivities with good accuracy when compared with the results of direct three-dimensional computations. Most of the small differences in the results of the benchmark analyses by the different participants were attributed to ambiguities in carrying out the regionwise flux renormalization calculations throughout the burnup step.

  19. Benchmark 1 - Nonlinear strain path forming limit of a reverse draw: Part A: Benchmark description

    NASA Astrophysics Data System (ADS)

    Benchmark-1 Committee

    2013-12-01

    The objective of this benchmark is to demonstrate the predictability of forming limits under nonlinear strain paths for a draw panel with a non-axisymmetric reversed dome-shape at the center. It is important to recognize that treating strain forming limits as though they were static during the deformation process may not lead to successful predictions of this benchmark, due to the nonlinearity of the strain paths involved in this benchmark. The benchmark tool is designed to enable a two-stage draw/reverse draw continuous forming process. Three typical sheet materials, AA5182-O Aluminum, and DP600 and TRIP780 Steels, are selected for this benchmark study.

  20. Lawrence Livermore plutonium button critical experiment benchmark

    SciTech Connect

    Trumble, E.F.; Justice, J.B.; Frost, R.L.

    1994-12-31

    The end of the Cold War and the subsequent weapons reductions have led to an increased need for the safe storage of large amounts of highly enriched plutonium. In support of code validation required to address this need, a set of critical experiments involving arrays of weapons-grade plutonium metal that were performed at the Lawrence Livermore National Laboratory (LLNL) in the late 1960s has been revisited. Although these experiments are well documented, discrepancies and omissions have been found in the earlier reports. Many of these have been resolved in the current work, and these data have been compiled into benchmark descriptions. In addition, a computational verification has been performed on the benchmarks using multiple computer codes. These benchmark descriptions are also being made available to the US Department of Energy (DOE)-sponsored Nuclear Criticality Safety Benchmark Evaluation Working Group for dissemination in the DOE Handbook on Evaluated Criticality Safety Benchmark Experiments.

  1. A proposed benchmark problem for cargo nuclear threat monitoring

    NASA Astrophysics Data System (ADS)

    Wesley Holmes, Thomas; Calderon, Adan; Peeples, Cody R.; Gardner, Robin P.

    2011-10-01

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models, for both speed and accuracy, in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991, [1]). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. The benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions in a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials: lead, aluminum, and plywood. The first two materials are in right circular cylindrical form while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration was arranged in such a manner that as a gamma ray moves from the source outward, it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in.×4 in.×16 in. box-style NaI(Tl) detector was placed 1 m from the point source located at the center, with the 4 in.×16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.
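
    Before running full Monte Carlo, the uncollided component of such a layered geometry can be estimated by hand from exponential attenuation, I/I0 = exp(-sum of (mu/rho)*rho*x over the layers). The sketch below uses approximate 662 keV (137Cs) attenuation data and hypothetical layer thicknesses, not the benchmark's:

        # Back-of-envelope uncollided transmission through a layered shield.
        # Mass attenuation coefficients are approximate 662 keV values;
        # thicknesses are hypothetical, not the benchmark specification.
        import math

        layers = [
            # (name, mu/rho [cm^2/g], density [g/cm^3], thickness [cm])
            ("lead",     0.110, 11.35, 1.0),
            ("aluminum", 0.075,  2.70, 2.0),
            ("plywood",  0.085,  0.55, 5.0),
        ]
        depth = sum(mr * rho * x for _, mr, rho, x in layers)
        print(f"optical depth: {depth:.2f} mean free paths")
        print(f"uncollided transmission: {math.exp(-depth):.3f}")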

  2. Increased Uptake of HCV Testing through a Community-Based Educational Intervention in Difficult-to-Reach People Who Inject Drugs: Results from the ANRS-AERLI Study

    PubMed Central

    Roux, Perrine; Rojas Castro, Daniela; Ndiaye, Khadim; Debrus, Marie; Protopopescu, Camélia; Le Gall, Jean-Marie; Haas, Aurélie; Mora, Marion; Spire, Bruno; Suzan-Monti, Marie; Carrieri, Patrizia

    2016-01-01

    Aims The community-based AERLI intervention provided training and education to people who inject drugs (PWID) about HIV and HCV transmission risk reduction, with a focus on drug injecting practices, other injection-related complications, and access to HIV and HCV testing and care. We hypothesized that in such a population, where HCV prevalence is very high and where few know their HCV serostatus, AERLI would lead to increased HCV testing. Methods The national multisite intervention study ANRS-AERLI consisted of assessing the impact of an injection-centered face-to-face educational session offered in volunteer harm reduction (HR) centers (“with intervention”) compared with standard HR centers (“without intervention”). The study included 271 PWID interviewed on three occasions: at enrolment and at 6 and 12 months. Participants in the intervention group received at least one face-to-face educational session during the first 6 months. Measurements The primary outcome of this analysis was reporting having been tested for HCV during the previous 6 months. Statistical analyses used a two-step Heckman approach to account for bias arising from the non-randomized clustering design. This approach identified factors associated with HCV testing during the previous 6 months. Findings Of the 271 participants, 127 and 144 were enrolled in the control and intervention groups, respectively. Of the latter, 113 received at least one educational session. For the present analysis, we selected 114 and 88 participants eligible for HCV testing in the control and intervention groups, respectively. In the intervention group, 44% of participants reported having been tested for HCV during the previous 6 months at enrolment, and 85% at 6 or 12 months. In the control group, these percentages were 51% at enrolment and 78% at 12 months. Multivariable analyses showed that participants who received at least one educational session during follow-up were more likely to report HCV testing

  3. Development of an ICSBEP Benchmark Evaluation, Nearly 20 Years of Experience

    SciTech Connect

    J. Blair Briggs; John D. Bess

    2011-06-01

    The basic structure of all ICSBEP benchmark evaluations is essentially the same and includes (1) a detailed description of the experiment; (2) an evaluation of the experiment, including an exhaustive effort to quantify the effects of uncertainties on measured quantities; (3) a concise presentation of benchmark-model specifications; (4) sample calculation results; and (5) a summary of experimental references. Computer code input listings and other relevant information are generally preserved in appendixes. Details of an ICSBEP evaluation are presented.

  4. Metrics and Benchmarks for Visualization

    NASA Technical Reports Server (NTRS)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    What is a "good" visualization? How can the quality of a visualization be measured? How can one tell whether one visualization is "better" than another? I claim that the true quality of a visualization can only be measured in the context of a particular purpose. The same image generated from the same data may be excellent for one purpose and abysmal for another. A good measure of visualization quality will correspond to the performance of users in accomplishing the intended purpose, so the "gold standard" is user testing. As a user of visualization software (or at least a consultant to such users), I don't expect visualization software to have been tested in this way for every possible use. In fact, scientific visualization (as distinct from more "production oriented" uses of visualization) will continually encounter new data, new questions, and new purposes; user testing can never keep up. Users need software they can trust, and advice on appropriate visualizations for particular purposes. Considering the following four processes, and their impact on visualization trustworthiness, reveals important work needed to create worthwhile metrics and benchmarks for visualization. These four processes are (1) complete system testing (user-in-loop), (2) software testing, (3) software design, and (4) information dissemination. Additional information is contained in the original extended abstract.

  5. Benchmarking Measures of Network Influence

    PubMed Central

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635
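
    As a rough illustration of the TKO idea, the sketch below runs repeated SIR simulations over a time-ordered contact list and scores an agent by the average reduction in outbreak size when that agent is removed. This simplifies the paper's definition (which considers removal at each time step), and all parameter values are assumptions.

      import random

      def sir_spread(contacts_by_time, seed, removed=frozenset(),
                     beta=0.3, gamma=0.1):
          """One SIR realization over a time-ordered list of contact lists;
          returns the number of agents ever infected. contacts_by_time is a
          list (one entry per time step) of (u, v) edge lists."""
          infected = {seed} - set(removed)
          recovered = set()
          for edges in contacts_by_time:
              blocked = infected | recovered | set(removed)
              new_inf = set()
              for u, v in edges:
                  if u in infected and v not in blocked and random.random() < beta:
                      new_inf.add(v)
                  elif v in infected and u not in blocked and random.random() < beta:
                      new_inf.add(u)
              recovered |= {n for n in infected if random.random() < gamma}
              infected = (infected | new_inf) - recovered
          return len(infected | recovered)

      def tko_score(contacts_by_time, agent, seeds, runs=100):
          """Average reduction in outbreak size when `agent` is knocked out."""
          base = sum(sir_spread(contacts_by_time, s)
                     for s in seeds for _ in range(runs))
          ko = sum(sir_spread(contacts_by_time, s, removed={agent})
                   for s in seeds for _ in range(runs))
          return (base - ko) / (len(seeds) * runs)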

  6. Benchmark problems in computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1994-01-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with a high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of codes. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given the variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi-one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10(exp -6). The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for a sound wave to traverse the nozzle from one end to the other).

  7. Benchmarking Measures of Network Influence

    NASA Astrophysics Data System (ADS)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-09-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures.

  8. Benchmark problems in computational aeroacoustics

    NASA Astrophysics Data System (ADS)

    Porter-Locklear, Freda

    1994-12-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with a high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of codes. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given the variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi-one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10(exp -6). The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for a sound wave to traverse the nozzle from one end to the other).

  9. Developing integrated benchmarks for DOE performance measurement

    SciTech Connect

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  10. Cross Section Evaluation Working Group benchmark specifications. Volume 2. Supplement

    SciTech Connect

    Not Available

    1986-09-01

    Neutron and photon flux spectra have been measured and calculated for the case of neutrons produced by D-T reactions streaming through a cylindrical iron duct surrounded by concrete. Measurements and calculations have also been obtained when the iron duct is partially filled by a laminated stainless steel and borated polyethylene shadow bar. Schematic diagrams of the experimental apparatus are included.

  11. Big Data in AER

    NASA Astrophysics Data System (ADS)

    Kregenow, Julia M.

    2016-01-01

    Penn State University teaches Introductory Astronomy to more undergraduates than any other institution in the U.S. Using a standardized assessment instrument, we have pre-/post- tested over 20,000 students in the last 8 years in both resident and online instruction. This gives us a rare opportunity to look for long term trends in the performance of our students during a period in which online instruction has burgeoned.

  12. Updates to the integrated protein-protein interaction benchmarks: Docking benchmark version 5 and affinity benchmark version 2

    PubMed Central

    Vreven, Thom; Moal, Iain H.; Vangone, Anna; Pierce, Brian G.; Kastritis, Panagiotis L.; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A.; Fernandez-Recio, Juan; Bonvin, Alexandre M.J.J.; Weng, Zhiping

    2015-01-01

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were added to the docking benchmark, 35 of which have experimentally-measured binding affinities. These updated docking and affinity benchmarks now contain 230 and 179 entries, respectively. In particular, the number of antibody-antigen complexes has increased significantly, by 67% and 74% in the docking and affinity benchmarks, respectively. We tested previously developed docking and affinity prediction algorithms on the new cases. Considering only the top ten docking predictions per benchmark case, a prediction accuracy of 38% is achieved on all 55 cases, and up to 50% for the 32 rigid-body cases only. Predicted affinity scores are found to correlate with experimental binding energies up to r=0.52 overall, and r=0.72 for the rigid complexes. PMID:26231283

  13. Statistical benchmark for BosonSampling

    NASA Astrophysics Data System (ADS)

    Walschaers, Mattia; Kuipers, Jack; Urbina, Juan-Diego; Mayer, Klaus; Tichy, Malte Christopher; Richter, Klaus; Buchleitner, Andreas

    2016-03-01

    Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church-Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows one to characterise the imparted dynamics through particle type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go well beyond mere bunching or anti-bunching effects.

  14. Resistance and uptake of cadmium by yeast, Pichia hampshirensis 4Aer, isolated from industrial effluent and its potential use in decontamination of wastewater.

    PubMed

    Khan, Zaman; Rehman, Abdul; Hussain, Syed Z

    2016-09-01

    Pichia hampshirensis 4Aer is the first yeast used for the bioremediation of environmental cadmium (Cd(+2)); it maximally removed 22 mM/g and 28 mM/g Cd(+2) from aqueous medium at laboratory and large scales, respectively. Biosorption was found to be a function of temperature, solution pH, initial Cd(+2) concentration, and biomass dosage. Competitive biosorption was investigated in binary and multi-metal systems, which showed a decrease in Cd(+2) biosorption with increasing concentrations of competing metal ions, attributed to their higher electronegativity and larger radii. FTIR analysis revealed the active participation of amide and carbonyl moieties in Cd(+2) adsorption, confirmed by EDX analysis. Electron micrographs further indicated surface adsorption and an increase in cell size due to intracellular Cd(+2) accumulation. Cd(+2) induced some metal-binding proteins as well as a prodigious increase in glutathione and other non-protein thiol levels, which is crucial for the yeast to survive the oxidative stress generated by Cd(+2). The experimental data were consistent with both the Langmuir and Freundlich isotherm models. The yeast obeyed pseudo-second-order kinetics, which makes it an effective biosorbent for Cd(+2). Its high bioremediation potential and the spontaneity and feasibility of the process make P. hampshirensis 4Aer a promising foundation for green chemistry to eliminate environmental Cd(+2).
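
    For readers who wish to repeat the isotherm analysis on their own data, the sketch below fits the Langmuir and Freundlich models with SciPy. The equilibrium data shown are invented for illustration and are not the paper's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical equilibrium data: residual Cd concentration (Ce) vs.
      # amount adsorbed per gram of biomass (qe); illustrative only.
      Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
      qe = np.array([4.1, 7.2, 11.5, 16.0, 19.8, 22.3])

      def langmuir(C, qmax, b):
          # Monolayer adsorption: q = qmax * b * C / (1 + b * C)
          return qmax * b * C / (1.0 + b * C)

      def freundlich(C, K, n):
          # Empirical power law: q = K * C**(1/n)
          return K * C ** (1.0 / n)

      (lq, lb), _ = curve_fit(langmuir, Ce, qe, p0=(25.0, 0.5))
      (fK, fn), _ = curve_fit(freundlich, Ce, qe, p0=(5.0, 2.0))
      print(f"Langmuir:   qmax={lq:.1f}, b={lb:.2f}")
      print(f"Freundlich: K={fK:.1f}, 1/n={1 / fn:.2f}")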

  15. Resistance and uptake of cadmium by yeast, Pichia hampshirensis 4Aer, isolated from industrial effluent and its potential use in decontamination of wastewater.

    PubMed

    Khan, Zaman; Rehman, Abdul; Hussain, Syed Z

    2016-09-01

    Pichia hampshirensis 4Aer is the first yeast used for the bioremediation of environmental cadmium (Cd(+2)); it maximally removed 22 mM/g and 28 mM/g Cd(+2) from aqueous medium at laboratory and large scales, respectively. Biosorption was found to be a function of temperature, solution pH, initial Cd(+2) concentration, and biomass dosage. Competitive biosorption was investigated in binary and multi-metal systems, which showed a decrease in Cd(+2) biosorption with increasing concentrations of competing metal ions, attributed to their higher electronegativity and larger radii. FTIR analysis revealed the active participation of amide and carbonyl moieties in Cd(+2) adsorption, confirmed by EDX analysis. Electron micrographs further indicated surface adsorption and an increase in cell size due to intracellular Cd(+2) accumulation. Cd(+2) induced some metal-binding proteins as well as a prodigious increase in glutathione and other non-protein thiol levels, which is crucial for the yeast to survive the oxidative stress generated by Cd(+2). The experimental data were consistent with both the Langmuir and Freundlich isotherm models. The yeast obeyed pseudo-second-order kinetics, which makes it an effective biosorbent for Cd(+2). Its high bioremediation potential and the spontaneity and feasibility of the process make P. hampshirensis 4Aer a promising foundation for green chemistry to eliminate environmental Cd(+2). PMID:27268792

  16. BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS

    SciTech Connect

    Dewberry, R; Raymond Sigg, R; Vito Casella, V; Nitin Bhatt, N

    2008-09-29

    This report describes compiled benchmark tests conducted to probe and to demonstrate the extensive utility of the Ortec ISOTOPIC {gamma}-ray analysis computer program. The ISOTOPIC program performs analyses of {gamma}-ray spectra applied to specific acquisition configurations in order to apply finite-geometry correction factors and sample-matrix-container photon absorption correction factors. The analysis program provides an extensive set of preset acquisition configurations to which the user can add relevant parameters in order to build the geometry and absorption correction factors that the program determines from calculus and from nuclear {gamma}-ray absorption and scatter data. The Analytical Development Section field nuclear measurement group of the Savannah River National Laboratory uses the Ortec ISOTOPIC analysis program extensively for analyses of solid waste and process holdup applied to passive {gamma}-ray acquisitions. Frequently the results of these {gamma}-ray acquisitions and analyses are used to determine compliance with facility criticality safety guidelines. Another use of the results is to designate 55-gallon drum solid waste as qualified TRU waste or as low-level waste. Other examples of the application of the ISOTOPIC analysis technique to passive {gamma}-ray acquisitions include analyses of standard waste box items and unique solid waste configurations. In many passive {gamma}-ray acquisition circumstances the container and sample have sufficient density that the calculated energy-dependent transmission correction factors have intrinsic uncertainties in the range of 15%-100%. This is frequently the case when assaying 55-gallon drums of solid waste with masses of up to 400 kg and when assaying solid waste in extensive unique containers. Often an accurate assay of the transuranic content of these containers is not required, but rather a good, defensible designation as >100 nCi/g (TRU waste) or <100 nCi/g (low-level solid waste) is required.

  17. Benchmarking of Optical Dimerizer Systems

    PubMed Central

    2015-01-01

    Optical dimerizers are a powerful new class of optogenetic tools that allow light-inducible control of protein–protein interactions. Such tools have been useful for regulating cellular pathways and processes with high spatiotemporal resolution in live cells, and a growing number of dimerizer systems are available. As these systems have been characterized by different groups using different methods, it has been difficult for users to compare their properties. Here, we set about to systematically benchmark the properties of four optical dimerizer systems, CRY2/CIB1, TULIPs, phyB/PIF3, and phyB/PIF6. Using a yeast transcriptional assay, we find significant differences in light sensitivity and fold-activation levels between the red light regulated systems but similar responses between the CRY2/CIB and TULIP systems. Further comparison of the ability of the CRY2/CIB1 and TULIP systems to regulate a yeast MAPK signaling pathway also showed similar responses, with slightly less background activity in the dark observed with CRY2/CIB. In the process of developing this work, we also generated an improved blue-light-regulated transcriptional system using CRY2/CIB in yeast. In addition, we demonstrate successful application of the CRY2/CIB dimerizers using a membrane-tethered CRY2, which may allow for better local control of protein interactions. Taken together, this work allows for a better understanding of the capacities of these different dimerization systems and demonstrates new uses of these dimerizers to control signaling and transcription in yeast. PMID:25350266

  18. Benchmarking of optical dimerizer systems.

    PubMed

    Pathak, Gopal P; Strickland, Devin; Vrana, Justin D; Tucker, Chandra L

    2014-11-21

    Optical dimerizers are a powerful new class of optogenetic tools that allow light-inducible control of protein-protein interactions. Such tools have been useful for regulating cellular pathways and processes with high spatiotemporal resolution in live cells, and a growing number of dimerizer systems are available. As these systems have been characterized by different groups using different methods, it has been difficult for users to compare their properties. Here, we set about to systematically benchmark the properties of four optical dimerizer systems, CRY2/CIB1, TULIPs, phyB/PIF3, and phyB/PIF6. Using a yeast transcriptional assay, we find significant differences in light sensitivity and fold-activation levels between the red light regulated systems but similar responses between the CRY2/CIB and TULIP systems. Further comparison of the ability of the CRY2/CIB1 and TULIP systems to regulate a yeast MAPK signaling pathway also showed similar responses, with slightly less background activity in the dark observed with CRY2/CIB. In the process of developing this work, we also generated an improved blue-light-regulated transcriptional system using CRY2/CIB in yeast. In addition, we demonstrate successful application of the CRY2/CIB dimerizers using a membrane-tethered CRY2, which may allow for better local control of protein interactions. Taken together, this work allows for a better understanding of the capacities of these different dimerization systems and demonstrates new uses of these dimerizers to control signaling and transcription in yeast. PMID:25350266

  19. Numerical methods: Analytical benchmarking in transport theory

    SciTech Connect

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered.

  20. Benchmark calculations from summarized data: an example

    SciTech Connect

    Crump, K. S.; Teeguarden, Justin G.

    2009-03-01

    Benchmark calculations often are made from data extracted from publications. Such data may not be in a form most appropriate for benchmark analysis, and, as a result, suboptimal and/or non-standard benchmark analyses are often applied. This problem can be mitigated in some cases using Monte Carlo computational methods that allow the likelihood of the published data to be calculated while still using an appropriate benchmark dose (BMD) definition. Such an approach is illustrated herein using data from a study of workers exposed to styrene, in which a hybrid BMD calculation is implemented from dose-response data reported only as means and standard deviations of ratios of scores on neuropsychological tests from exposed subjects to corresponding scores from matched controls. The likelihood of the data is computed using a combination of analytic and Monte Carlo integration methods.
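
    A minimal sketch of the Monte Carlo idea: simulate the summary statistics (mean and SD of n ratios) under candidate model parameters and estimate the likelihood of the published summary from the simulated distribution. The model form, window width, and numbers below are assumptions for illustration, not the authors' implementation.

      import numpy as np

      rng = np.random.default_rng(0)

      def mc_log_likelihood(mu, sigma, obs_mean, obs_sd, n, sims=20000, h=0.02):
          """Monte Carlo estimate of the log-likelihood of a published
          (mean, SD) summary of n test-score ratios under a hypothetical
          Normal(mu, sigma) individual-level model: simulate the summary
          statistics and count simulations landing within a small window
          around the observed values."""
          samples = rng.normal(mu, sigma, size=(sims, n))
          means = samples.mean(axis=1)
          sds = samples.std(axis=1, ddof=1)
          hits = np.sum((np.abs(means - obs_mean) < h) &
                        (np.abs(sds - obs_sd) < h))
          return float(np.log(max(hits, 1) / (sims * (2 * h) ** 2)))

      # E.g., a study reporting mean ratio 0.93 (SD 0.12) across 40 subjects:
      print(mc_log_likelihood(mu=0.95, sigma=0.12,
                              obs_mean=0.93, obs_sd=0.12, n=40))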

  1. Benchmarking antimicrobial drug use in hospitals.

    PubMed

    Ibrahim, Omar M; Polk, Ron E

    2012-04-01

    Measuring and monitoring antibiotic use in hospitals is believed to be an important component of the strategies available to antimicrobial stewardship programs to address acquired antimicrobial resistance. Recent efforts to organize large numbers of hospitals into networks allow for interhospital comparisons of a variety of healthcare processes and outcomes, a process often called 'benchmarking'. For comparisons of antimicrobial use to be valid, usage figures must be risk-adjusted to account for differences in patient mix and hospital characteristics. The purpose of this review is to describe recent methods to benchmark antimicrobial drug use and to critically assess the potential advantages and the remaining challenges. While many methodological challenges remain, and the clinical outcomes resulting from benchmarking programs have yet to be determined, recent developments suggest that benchmarking antimicrobial drug use will become an important component of antimicrobial stewardship program activities.

  2. Toward Scalable Benchmarks for Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1996-01-01

    This paper presents guidelines for the design of a mass storage system benchmark suite, along with preliminary suggestions for programs to be included. The benchmarks will measure both peak and sustained performance of the system as well as predicting both short- and long-term behavior. These benchmarks should be both portable and scalable so they may be used on storage systems from tens of gigabytes to petabytes or more. By developing a standard set of benchmarks that reflect real user workload, we hope to encourage system designers and users to publish performance figures that can be compared with those of other systems. This will allow users to choose the system that best meets their needs and give designers a tool with which they can measure the performance effects of improvements to their systems.

  3. A framework for benchmarking land models

    SciTech Connect

    Luo, Yiqi; Randerson, James T.; Hoffman, Forrest; Norby, Richard J

    2012-01-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their skill in simulating ecosystem responses and feedbacks to climate change. Benchmarking is an emerging procedure to measure the performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluating land model performance and highlights major challenges at this early stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate the exchange of water, energy, carbon, and sometimes other trace gases between the atmosphere and the land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks that effectively evaluate land model performance. The second challenge is to develop metrics for measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on the development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models.
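
    One possible form for the mismatch metrics and scoring system called for here is sketched below: each process receives a 0-1 skill score from a normalized RMSE, and the scores are combined with user-chosen weights. The functional forms, weights, and data are assumptions, not part of the proposed framework.

      import numpy as np

      def skill_score(model, obs):
          """Mismatch metric: RMSE normalized by the standard deviation of
          the observations, mapped to a 0-1 score (1 = perfect match)."""
          model, obs = np.asarray(model, float), np.asarray(obs, float)
          rmse = np.sqrt(np.mean((model - obs) ** 2))
          return float(np.exp(-rmse / np.std(obs)))

      def combined_score(scores, weights):
          """Weighted aggregate over processes/scales (e.g., carbon flux,
          energy balance, vegetation dynamics)."""
          w = np.asarray(weights, float)
          return float(np.dot(scores, w / w.sum()))

      gpp_score = skill_score([2.1, 3.4, 2.8], [2.0, 3.0, 3.1])  # carbon
      le_score = skill_score([80, 120, 95], [85, 110, 100])      # energy
      print(combined_score([gpp_score, le_score], weights=[0.6, 0.4]))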

  4. Implementation of NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  5. Benchmarking for Cost Improvement. Final report

    SciTech Connect

    Not Available

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  6. Machine characterization and benchmark performance prediction

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.

    1988-01-01

    From runs of standard benchmarks or benchmark suites, it is not possible to characterize the machine nor to predict the run time of other benchmarks which have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone, and Sieve of Eratosthenes.
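
    The prediction step reduces to an inner product between the two characterizations: the machine's time per abstract operation and the program's dynamic operation counts. A toy sketch with invented numbers, not Saavedra-Barrera's measurements:

      # Machine characterization: time (microseconds) per abstract operation.
      machine = {"fadd": 0.10, "fmul": 0.12, "fdiv": 0.60, "branch": 0.05}

      # Program characterization: dynamic operation counts for one benchmark.
      program = {"fadd": 4.0e8, "fmul": 3.5e8, "fdiv": 2.0e7, "branch": 1.0e8}

      # Predicted run time is the inner product of the two characterizations.
      predicted_us = sum(program[op] * machine[op] for op in program)
      print(f"Predicted run time: {predicted_us / 1e6:.1f} s")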

  7. Action-Oriented Benchmarking: Concepts and Tools

    SciTech Connect

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article--Part 1 of a two-part series--we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool, EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  8. Benchmarking initiatives in the water industry.

    PubMed

    Parena, R; Smeets, E

    2001-01-01

    Customer satisfaction and service care are every day pushing professionals in the water industry to seek to improve their performance, lowering costs and increasing the provided service level. Process Benchmarking is generally recognised as a systematic mechanism of comparing one's own utility with other utilities or businesses with the intent of self-improvement by adopting structures or methods used elsewhere. The IWA Task Force on Benchmarking, operating inside the Statistics and Economics Committee, has been committed to developing a general accepted concept of Process Benchmarking to support water decision-makers in addressing issues of efficiency. In a first step the Task Force disseminated among the Committee members a questionnaire focused on providing suggestions about the kind, the evolution degree and the main concepts of Benchmarking adopted in the represented Countries. A comparison among the guidelines adopted in The Netherlands and Scandinavia has recently challenged the Task Force in drafting a methodology for a worldwide process benchmarking in water industry. The paper provides a framework of the most interesting benchmarking experiences in the water sector and describes in detail both the final results of the survey and the methodology focused on identification of possible improvement areas. PMID:11547972

  9. The Activities of the International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    SciTech Connect

    Briggs, Joseph Blair

    2001-10-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) – Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Spain, and Israel are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled “International Handbook of Evaluated Criticality Safety Benchmark Experiments”. The 2001 Edition of the Handbook contains benchmark specifications for 2642 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data.

  10. BUGLE-93 (ENDF/B-VI) cross-section library data testing using shielding benchmarks

    SciTech Connect

    Hunter, H.T.; Slater, C.O.; White, J.E.

    1994-06-01

    Several integral shielding benchmarks were selected to perform data testing for new multigroup cross-section libraries compiled from ENDF/B-VI data for light water reactor (LWR) shielding and dosimetry. The new multigroup libraries, BUGLE-93 and VITAMIN-B6, were studied to establish their reliability and response to the benchmark measurements using the radiation transport codes ANISN and DORT. Direct comparisons of BUGLE-93 and VITAMIN-B6 to BUGLE-80 (ENDF/B-IV) and VITAMIN-E (ENDF/B-V) were also performed. Some benchmarks involved the nuclides used in LWR shielding and dosimetry applications, and some were sensitive to specific nuclear data, i.e., iron, owing to its dominant use in nuclear reactor systems and its complex set of cross-section resonances. Five shielding benchmarks (four experimental and one calculational) are described, and results are presented.

  11. Raising Quality and Achievement. A College Guide to Benchmarking.

    ERIC Educational Resources Information Center

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  12. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  13. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  14. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  15. 42 CFR 414.1255 - Benchmarks for cost measures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 3 2013-10-01 2013-10-01 false Benchmarks for cost measures. 414.1255 Section 414... Payment Modifier Under the Physician Fee Schedule § 414.1255 Benchmarks for cost measures. The benchmark...-based payment modifier. In calculating the national benchmark, groups of physicians' performance...

  16. 45 CFR 156.110 - EHB-benchmark plan standards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false EHB-benchmark plan standards. 156.110 Section 156... Essential Health Benefits Package § 156.110 EHB-benchmark plan standards. An EHB-benchmark plan must meet..., including oral and vision care. (b) Coverage in each benefit category. A base-benchmark plan not...

  17. 45 CFR 156.110 - EHB-benchmark plan standards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false EHB-benchmark plan standards. 156.110 Section 156... Essential Health Benefits Package § 156.110 EHB-benchmark plan standards. An EHB-benchmark plan must meet..., including oral and vision care. (b) Coverage in each benefit category. A base-benchmark plan not...

  18. Cause‐specific long‐term mortality in survivors of childhood cancer in Switzerland: A population‐based study

    PubMed Central

    Schindler, Matthias; Spycher, Ben D.; Ammann, Roland A.; Ansari, Marc; Michel, Gisela

    2016-01-01

    Survivors of childhood cancer have a higher mortality than the general population. We describe cause‐specific long‐term mortality in a population‐based cohort of childhood cancer survivors. We included all children diagnosed with cancer in Switzerland (1976–2007) at age 0–14 years, who survived ≥5 years after diagnosis and followed survivors until December 31, 2012. We obtained causes of death (COD) from the Swiss mortality statistics and used data from the Swiss general population to calculate age‐, calendar year‐, and sex‐standardized mortality ratios (SMR), and absolute excess risks (AER) for different COD, by Poisson regression. We included 3,965 survivors and 49,704 person years at risk. Of these, 246 (6.2%) died, which was 11 times higher than expected (SMR 11.0). Mortality was particularly high for diseases of the respiratory (SMR 14.8) and circulatory system (SMR 12.7), and for second cancers (SMR 11.6). The pattern of cause‐specific mortality differed by primary cancer diagnosis, and changed with time since diagnosis. In the first 10 years after 5‐year survival, 78.9% of excess deaths were caused by recurrence of the original cancer (AER 46.1). Twenty‐five years after diagnosis, only 36.5% (AER 9.1) were caused by recurrence, 21.3% by second cancers (AER 5.3) and 33.3% by circulatory diseases (AER 8.3). Our study confirms an elevated mortality in survivors of childhood cancer for at least 30 years after diagnosis with an increased proportion of deaths caused by late toxicities of the treatment. The results underline the importance of clinical follow‐up continuing years after the end of treatment for childhood cancer. PMID:26950898
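
    The two headline metrics are straightforward to compute: the SMR is the ratio of observed to expected deaths, and the AER is the excess deaths per unit of person-time. A small sketch using the cohort's observed counts, with an expected count back-calculated from the reported SMR purely for illustration:

      def smr_and_aer(observed, expected, person_years, per=10000):
          """Standardized mortality ratio and absolute excess risk:
          SMR = observed / expected deaths;
          AER = (observed - expected) / person-years, scaled to `per`."""
          smr = observed / expected
          aer = (observed - expected) / person_years * per
          return smr, aer

      # Illustrative: 246 observed deaths over 49,704 person-years; the
      # expected count of 22.4 is back-calculated from the reported SMR.
      smr, aer = smr_and_aer(observed=246, expected=22.4, person_years=49704)
      print(f"SMR = {smr:.1f}, AER = {aer:.1f} per 10,000 person-years")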

  19. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    PubMed

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting for or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control has taken some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons.
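
    Of the risk-adjustment methods listed, indirect standardization is the simplest to sketch: divide the observed infection count by the count expected from patient-level risk predictions, for instance from a logistic model fitted on the benchmark population. A minimal illustration with simulated risks standing in for real model output:

      import numpy as np

      def standardized_infection_ratio(observed, expected_probs):
          """Indirect standardization: observed HAI count divided by the
          count expected from patient-level risk predictions."""
          return observed / float(np.sum(expected_probs))

      # Illustrative: 12 observed infections among 200 patients whose
      # predicted risks are simulated here in place of real model output.
      risks = np.random.default_rng(1).uniform(0.01, 0.15, size=200)
      print(f"SIR = {standardized_infection_ratio(12, risks):.2f}")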

  20. The VTE Benchmarking Model: Benchmarking Quality Performance in Vocational Technical Education.

    ERIC Educational Resources Information Center

    Losh, Charles

    1993-01-01

    Discusses benchmarking--finding and implementing the best practices--in business and industry and describes a model that can be used in vocational-technical education. Suggests that benchmarking is a tool that can be used by vocational-technical educators as they strive for excellence. (JOW)

  1. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    ERIC Educational Resources Information Center

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decisions regarding student performance. More information, however, is needed to understand whether the nationally derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  2. Benchmark 2 - Springback of a draw / re-draw panel: Part C: Benchmark analysis

    NASA Astrophysics Data System (ADS)

    Carsley, John E.; Xia, Cedric; Yang, Lianxiang; Stoughton, Thomas B.; Xu, Siguang; Hartfield-Wünsch, Susan E.; Li, Jingjing

    2013-12-01

    Benchmark analysis is summarized for DP600 and AA 5182-O. Nine simulation results submitted for this benchmark study are compared to the physical measurement results. The details on the codes, friction parameters, mesh technology, CPU, and material models are also summarized at the end of this report with the participant information details.

  3. Toxicological benchmarks for wildlife: 1994 Revision

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks consider contaminant exposure only through oral ingestion of contaminated media; exposure through inhalation or direct dermal contact is not considered in this report.

  4. Energy benchmarking of South Australian WWTPs.

    PubMed

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.
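
    In practice, such benchmarking reduces to comparing each plant's specific energy consumption (kWh per population equivalent per year) against a size-class target from the benchmark manual. A toy sketch with invented plant data and targets, not the German manual's actual figures:

      # Hypothetical plants: (name, annual energy use in kWh, population
      # equivalents served). Targets are illustrative only.
      plants = [("Plant A", 1.2e6, 30000), ("Plant B", 4.0e6, 80000)]
      targets = {"small": 35.0, "large": 28.0}  # kWh per PE per year

      for name, kwh, pe in plants:
          specific = kwh / pe
          target = targets["small"] if pe < 50000 else targets["large"]
          print(f"{name}: {specific:.1f} kWh/PE/yr "
                f"(target {target:.1f}, gap {specific - target:+.1f})")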

  5. Benchmark field study of deep neutron penetration

    SciTech Connect

    Morgan, J.F.; Sale, K.; Gold, R.; Roberts, J.H.; Preston, C.C.

    1991-06-10

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep-penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of monoenergetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described, with emphasis on neutron source characteristics and room-return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) Pressure Vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry. 18 refs.

  6. New Test Set for Video Quality Benchmarking

    NASA Astrophysics Data System (ADS)

    Raventos, Joaquin

    A new test set design and benchmarking approach (US Patent pending) allows a "standard observer" to assess the end-to-end image quality characteristics of video imaging systems operating in day time or low-light conditions. It uses randomized targets based on extensive application of Photometry, Geometrical Optics, and Digital Media. The benchmarking takes into account the target's contrast sensitivity, its color characteristics, and several aspects of human vision such as visual acuity and dynamic response. The standard observer is part of the "extended video imaging system" (EVIS). The new test set allows image quality benchmarking by a panel of standard observers at the same time. The new approach shows that an unbiased assessment can be guaranteed. Manufacturers, system integrators, and end users will assess end-to-end performance by simulating a choice of different colors, luminance levels, and dynamic conditions in the laboratory or in permanent video systems installations.

  7. Standardized benchmarking in the quest for orthologs.

    PubMed

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  8. NAS Parallel Benchmarks, Multi-Zone Versions

    NASA Technical Reports Server (NTRS)

    vanderWijngaart, Rob F.; Jin, Haoqiang

    2003-01-01

    We describe an extension of the NAS Parallel Benchmarks (NPB) suite that involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy, which is common among structured-mesh production flow solver codes in use at NASA Ames and elsewhere, provides relatively easily exploitable coarse-grain parallelism between meshes. Since the individual application benchmarks also allow fine-grain parallelism themselves, this NPB extension, named NPB Multi-Zone (NPB-MZ), is a good candidate for testing hybrid and multi-level parallelization tools and strategies.

  9. Standardized benchmarking in the quest for orthologs.

    PubMed

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods.

  10. Derivation of a benchmark for freshwater ionic strength.

    PubMed

    Cormier, Susan M; Suter, Glenn W; Zheng, Lei

    2013-02-01

    Because increased ionic strength has caused deleterious ecological changes in freshwater streams, thresholds for effects are needed to inform resource-management decisions. In particular, effluents from surface coal mining raise the ionic strength of receiving streams. The authors developed an aquatic life benchmark for specific conductance as a measure of ionic strength that is expected to prevent the local extirpation of 95% of species from neutral to alkaline waters containing a mixture of dissolved ions in which the mass of SO₄²⁻ + HCO₃⁻ ≥ Cl⁻. Extirpation concentrations of specific conductance were estimated from the presence and absence of benthic invertebrate genera from 2,210 stream samples in West Virginia. The extirpation concentration is the 95th percentile of the distribution of the probability of occurrence of a genus with respect to specific conductance. In a region with a background of 116 µS/cm, the 5th percentile of the species sensitivity distribution of extirpation concentrations for 163 genera is 300 µS/cm. Because the benchmark is not protective of all genera and protects against extirpation rather than reduction in abundance, this level may not fully protect sensitive species or higher-quality, exceptional waters. PMID:23161648
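
    The two-step derivation (a per-genus extirpation concentration, XC95, then the 5th percentile of the resulting species sensitivity distribution, HC05) can be caricatured as follows. The occurrence model and data are invented, and the paper's actual method estimates XC95 from a weighted distribution of the probability of occurrence rather than this simple percentile.

    ```python
    # Toy two-step derivation: per-genus XC95 as a 95th percentile of
    # conductance at sites where the genus occurs, then the benchmark
    # (HC05) as the 5th percentile of the XC95 values. Data are invented;
    # the paper used 2,210 West Virginia stream samples and a
    # probability-of-occurrence model.
    import numpy as np

    def xc95(conductance, presence):
        """95th percentile of conductance over samples where the genus occurs."""
        return np.percentile(conductance[presence], 95)

    rng = np.random.default_rng(0)
    conductance = rng.lognormal(mean=5.5, sigma=0.8, size=2000)  # µS/cm
    genus_xc95 = []
    for _ in range(163):                          # one XC95 per genus
        presence = conductance < rng.uniform(200, 2000)  # toy occurrence model
        genus_xc95.append(xc95(conductance, presence))

    hc05 = np.percentile(genus_xc95, 5)           # species sensitivity distribution
    print(f"benchmark (HC05): {hc05:.0f} µS/cm")
    ```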

  11. Toxicological benchmarks for wildlife: 1996 Revision

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  12. Benchmarks for the point kinetics equations

    SciTech Connect

    Ganapol, B.; Picca, P.; Previti, A.; Mostacci, D.

    2013-07-01

    A new numerical algorithm is presented for the solution of the point kinetics equations (PKEs), whose accurate solution has been sought for over 60 years. The method couples the simplest of finite difference methods, backward Euler, with Richardson's extrapolation, also called acceleration. From this coupling, a series of benchmarks has emerged. These include cases from the literature as well as several new ones. The novelty of this presentation lies in the breadth of reactivity insertions considered, covering both prescribed and feedback reactivities, and in the extreme 8- to 9-digit accuracy achievable. The benchmarks presented are intended to provide guidance to those who wish to develop further numerical improvements. (authors)
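
    A minimal sketch of the approach named above, assuming one delayed-neutron group and a constant prescribed reactivity (the paper treats multiple groups and feedback as well): backward Euler plus one level of Richardson extrapolation, which cancels the method's leading O(h) error term. All parameter values are illustrative, not taken from the paper's benchmark set.

    ```python
    # Backward Euler on the point kinetics equations (one delayed group)
    # with one level of Richardson extrapolation. Parameters are illustrative.
    import numpy as np

    BETA, LAMBDA_GEN, LAM = 0.0065, 1e-4, 0.08  # beta, generation time, decay const
    RHO = 0.003                                  # constant prescribed reactivity

    def backward_euler(y0, t_end, h):
        """Implicit Euler for dy/dt = A y with y = (n, C)."""
        A = np.array([[(RHO - BETA) / LAMBDA_GEN, LAM],
                      [BETA / LAMBDA_GEN,        -LAM]])
        M = np.eye(2) - h * A                    # (I - hA) y_{k+1} = y_k
        y = np.array(y0, dtype=float)
        for _ in range(int(round(t_end / h))):
            y = np.linalg.solve(M, y)
        return y

    y0 = (1.0, BETA / (LAMBDA_GEN * LAM))        # equilibrium precursor level
    coarse = backward_euler(y0, t_end=1.0, h=1e-4)
    fine = backward_euler(y0, t_end=1.0, h=5e-5)
    richardson = 2 * fine - coarse               # cancels the O(h) error term
    print(richardson[0])                         # extrapolated neutron density
    ```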

  13. Benchmark testing of {sup 233}U evaluations

    SciTech Connect

    Wright, R.Q.; Leal, L.C.

    1997-07-01

    In this paper we investigate the adequacy of available {sup 233}U cross-section data (ENDF/B-VI and JENDL-3) for calculations of critical experiments. An ad hoc revised {sup 233}U evaluation is also tested and appears to give improved results relative to those obtained with either ENDF/B-VI or JENDL-3 cross sections. Calculations of k{sub eff} were performed for ten fast benchmarks and six thermal benchmarks using the three cross-section sets. Central reaction-rate-ratio calculations were also performed.

  14. Los Alamos National Laboratory computer benchmarking 1982

    SciTech Connect

    Martin, J.L.

    1983-06-01

    Evaluating the performance of computing machinery is a continual effort of the Computer Research and Applications Group of the Los Alamos National Laboratory. This report summarizes the results of the group's benchmarking activities performed between October 1981 and September 1982, presenting compilation and execution times as well as megaflop rates for a set of benchmark codes. Tests were performed on the following computers: Cray Research, Inc. (CRI) Cray-1S; Control Data Corporation (CDC) 7600, 6600, Cyber 73, Cyber 825, Cyber 835, Cyber 855, and Cyber 205; Digital Equipment Corporation (DEC) VAX 11/780 and VAX 11/782; and Apollo Computer, Inc., Apollo.

  15. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6

    NASA Astrophysics Data System (ADS)

    van der Marck, Steven C.

    2012-12-01

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e., MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and Teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such…

  16. Overview of TPC Benchmark E: The Next Generation of OLTP Benchmarks

    NASA Astrophysics Data System (ADS)

    Hogan, Trish

    Set to replace the aging TPC-C, the TPC Benchmark E is the next-generation OLTP benchmark, which more accurately models client database usage. TPC-E addresses the shortcomings of TPC-C: it has a much more complex workload, requires the use of RAID-protected storage, generates much less I/O, and is much cheaper and easier to set up, run, and audit. After a period of overlap, it is expected that TPC-E will become the de facto OLTP benchmark.

  17. VLSI Implementation of a 2.8 Gevent/s Packet-Based AER Interface with Routing and Event Sorting Functionality

    PubMed Central

    Scholze, Stefan; Schiefer, Stefan; Partzsch, Johannes; Hartmann, Stephan; Mayr, Christian Georg; Höppner, Sebastian; Eisenreich, Holger; Henker, Stephan; Vogginger, Bernhard; Schüffny, Rene

    2011-01-01

    State-of-the-art large-scale neuromorphic systems require sophisticated spike event communication between units of the neural network. We present a high-speed communication infrastructure for a waferscale neuromorphic system, based on application-specific neuromorphic communication ICs in a field-programmable gate array (FPGA)-maintained environment. The ICs implement configurable axonal delays, as required for certain types of dynamic processing or for emulating spike-based learning among distant cortical areas. Measurements are presented which show the efficacy of these delays in influencing the behavior of neuromorphic benchmarks. The specialized, dedicated address-event-representation communication in most current systems requires separate, low-bandwidth configuration channels. In contrast, the configuration of the waferscale neuromorphic system is also handled by the digital packet-based pulse channel, which transmits configuration data at the full bandwidth otherwise used for pulse transmission. The overall so-called pulse communication subgroup (ICs and FPGA) delivers a factor of 25–50 higher event transmission rate than other current neuromorphic communication infrastructures. PMID:22016720
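
    The routing and event-sorting role of the packet channel can be illustrated with a toy address-event-representation stream; the packet fields and per-source delay table below are assumptions for illustration, not the wire format of the actual ICs.

    ```python
    # Toy address-event-representation (AER) stream: events carry a neuron
    # address and a timestamp; the router applies a configurable axonal delay
    # per source and re-emits events sorted by delivery time (the 'event
    # sorting' functionality). Fields and delays are illustrative only.
    import heapq
    from collections import namedtuple

    Event = namedtuple("Event", ["time", "address"])

    def route_with_delays(events, delay_table):
        """Apply per-address axonal delays, then yield events in delivery order."""
        heap = [(e.time + delay_table.get(e.address, 0.0), e.address)
                for e in events]
        heapq.heapify(heap)
        while heap:
            yield Event(*heapq.heappop(heap))

    spikes = [Event(0.0, 7), Event(0.1, 3), Event(0.2, 7)]
    delays = {7: 0.5, 3: 0.05}                 # configurable per-source delays
    for ev in route_with_delays(spikes, delays):
        print(ev)                              # delivery order, not emission order
    ```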

  18. DICE: Database for the International Criticality Safety Benchmark Evaluation Program Handbook

    SciTech Connect

    Nouri, Ali; Nagel, Pierre; Briggs, J. Blair; Ivanova, Tatiana

    2003-09-15

    The 2002 edition of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' (ICSBEP Handbook) spans more than 26 000 pages and contains 330 evaluations with benchmark specifications for 2881 critical or near-critical configurations. With such extensive content, it became evident that users needed more than a broad and qualitative classification of experiments to make efficient use of the ICSBEP Handbook. This paper describes the features of DICE, the Database for the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The DICE program contains a relational database loaded with selected information from each configuration and a user interface that enables one to query the database and to extract specific parameters. Summary descriptions of each experimental configuration can also be obtained. In addition, plotting capabilities provide the means of comparing neutron spectra and sensitivity coefficients for a set of configurations.
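
    The kind of parameter query DICE supports might look like the following sketch; the table layout and field names are hypothetical, since the abstract does not publish the actual DICE schema.

    ```python
    # Sketch of a DICE-style parameter query using SQLite and a hypothetical
    # 'configurations' table; the schema and example rows are assumptions.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE configurations (
        evaluation TEXT, case_id INTEGER, fuel TEXT, spectrum TEXT, keff REAL)""")
    con.executemany(
        "INSERT INTO configurations VALUES (?, ?, ?, ?, ?)",
        [("LEU-COMP-THERM-001", 1, "UO2", "thermal", 1.0004),
         ("MIX-MET-FAST-001",   1, "U/Pu metal", "fast", 0.9998)])

    # Extract configurations matching specific parameters, as a DICE user would.
    for row in con.execute(
            "SELECT evaluation, case_id, keff FROM configurations "
            "WHERE spectrum = ? AND fuel LIKE ?", ("thermal", "UO2%")):
        print(row)
    ```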

  19. Developing and Using Benchmarks for Eddy Current Simulation Codes Validation to Address Industrial Issues

    NASA Astrophysics Data System (ADS)

    Mayos, M.; Buvat, F.; Costan, V.; Moreau, O.; Gilles-Pascaud, C.; Reboud, C.; Foucher, F.

    2011-06-01

    Performance demonstration is a legal requirement for the qualification of NDE processes applied to French nuclear power plants, and modeling tools provide valuable support for it, provided the models employed have been validated beforehand. To achieve this, particularly for eddy current modeling, a validation methodology must be defined, based on specific benchmarks close to the actual industrial issue. Nonetheless, given the high variability in code origin and complexity, experience with actual cases has shown that it is critical to also define simpler, generic, public benchmarks for preliminary selection. A dedicated Working Group has been launched within COFREND, the French Association for NDE, resulting in the definition of several benchmark problems. This effort is now ready to be pooled with similar international approaches.

  20. NAS Parallel Benchmarks Results 3-95

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Walter, Howard (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion, i.e., the complete details of the problem are given in a NAS technical document. Except for a few restrictions, benchmark implementors are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: CRAY C90, CRAY T90 and Fujitsu VPP500; (b) Highly Parallel Processors: CRAY T3D, IBM SP2-WN (Wide Nodes), and IBM SP2-TN2 (Thin Nodes 2); and (c) Symmetric Multiprocessors: Convex Exemplar SPP1000, CRAY J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL (75 MHz). We also present sustained performance per dollar for the Class B LU, SP, and BT benchmarks, and we outline future NAS plans for the NPB.

  1. Canadian Language Benchmarks 2000: Theoretical Framework.

    ERIC Educational Resources Information Center

    Pawlikowska-Smith, Grazyna

    This document provides an in-depth study and support of the "Canadian Language Benchmarks 2000" (CLB 2000). In order to make the CLB 2000 usable, the competencies and standards were considerably compressed and simplified, and much of the in-depth discussion of language ability or proficiency was omitted at publication. This document includes: (1)…

  2. Benchmarking 2011: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  3. Benchmarking 2010: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  4. What Is the Impact of Subject Benchmarking?

    ERIC Educational Resources Information Center

    Pidcock, Steve

    2006-01-01

    The introduction of subject benchmarking led to fears of increased external intervention in the activities of universities and a more restrictive view of institutional autonomy, accompanied by an undermining of the academic profession, particularly through the perceived threat of the introduction of a national curriculum for higher education. For…

  5. Benchmarking Year Five Students' Reading Abilities

    ERIC Educational Resources Information Center

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark Year Five students' reading abilities in fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension and a set of indicators to inform…

  6. Issues in Benchmarking and Assessing Institutional Engagement

    ERIC Educational Resources Information Center

    Furco, Andrew; Miller, William

    2009-01-01

    The process of assessing and benchmarking community engagement can take many forms. To date, more than two dozen assessment tools for measuring community engagement institutionalization have been published. These tools vary substantially in purpose, level of complexity, scope, process, structure, and focus. While some instruments are designed to…

  7. Alberta K-12 ESL Proficiency Benchmarks

    ERIC Educational Resources Information Center

    Salmon, Kathy; Ettrich, Mike

    2012-01-01

    The Alberta K-12 ESL Proficiency Benchmarks are organized by division: kindergarten, grades 1-3, grades 4-6, grades 7-9, and grades 10-12. They are descriptors of language proficiency in listening, speaking, reading, and writing. The descriptors are arranged in a continuum of seven language competences across five proficiency levels. Several…

  8. Quality Benchmarks in Undergraduate Psychology Programs

    ERIC Educational Resources Information Center

    Dunn, Dana S.; McCarthy, Maureen A.; Baker, Suzanne; Halonen, Jane S.; Hill, G. William, IV

    2007-01-01

    Performance benchmarks are proposed to assist undergraduate psychology programs in defining their missions and goals as well as documenting their effectiveness. Experienced academic program reviewers compared their experiences to formulate a developmental framework of attributes of undergraduate programs focusing on activity in 8 domains:…

  9. 2010 Recruiting Benchmarks Survey. Research Brief

    ERIC Educational Resources Information Center

    National Association of Colleges and Employers (NJ1), 2010

    2010-01-01

    The National Association of Colleges and Employers conducted its annual survey of employer members from June 15, 2010 to August 15, 2010, to benchmark data relevant to college recruiting. From a base of 861 employers holding organizational membership, there were 268 responses for a response rate of 31 percent. Following are some of the major…

  10. MHEC Survey Establishes Midwest Property Insurance Benchmarks.

    ERIC Educational Resources Information Center

    Midwestern Higher Education Commission Risk Management Institute Research Bulletin, 1994

    1994-01-01

    This publication presents the results of a survey of over 200 midwestern colleges and universities on their property insurance programs and establishes benchmarks to help these institutions evaluate their insurance programs. Findings included the following: (1) 51 percent of respondents currently purchase their property insurance as part of a…

  11. Sequenced Benchmarks for K-8 Science.

    ERIC Educational Resources Information Center

    Kendall, John S.; DeFrees, Keri L.; Richardson, Amy

    This document describes science benchmarks for grades K-8 in Earth and Space Science, Life Science, and Physical Science. Each subject area is divided into topics followed by a short content description and grade level information. Source documents for this paper included science content guides from California, Ohio, South Carolina, and South…

  12. Standardised Benchmarking in the Quest for Orthologs

    PubMed Central

    Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe

    2016-01-01

    The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  13. Administrative benchmarks for Medicare, Medicaid HMOs.

    PubMed

    1998-11-01

    Plus, check out benchmark data on Medicare and Medicaid administrative costs. Every provider knows that HMOs take a slice of the Medicare or Medicaid premium for their administrative costs before they determine provider capitation. But how much does administration really cost? Here's some per-member-per-month (PMPM) data from a study by the Sherlock Company.

  14. Benchmarking Peer Production Mechanisms, Processes & Practices

    ERIC Educational Resources Information Center

    Fischer, Thomas; Kretschmer, Thomas

    2008-01-01

    This deliverable identifies key approaches for quality management in peer production by benchmarking peer production practices and processes in other areas. (Contains 29 footnotes, 13 figures and 2 tables.) [This report has been authored with contributions of: Kaisa Honkonen-Ratinen, Matti Auvinen, David Riley, Jose Pinzon, Thomas Fischer, Thomas…

  15. Benchmark Generation and Simulation at Extreme Scale

    SciTech Connect

    Lagadapati, Mahesh; Mueller, Frank; Engelmann, Christian

    2016-01-01

    The path to extreme-scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures, and the performance impact of different architectural choices, is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating data-intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.
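
    The trace-to-benchmark idea, as opposed to trace replay, can be caricatured in a few lines: recorded communication events are compiled into a standalone benchmark skeleton. The event format and the emitted code below are invented; real ScalaTrace/ScalaBenchGen output is far richer (compressed, scalable traces with full MPI semantics).

    ```python
    # Toy trace-to-benchmark generator: communication events recorded from an
    # application run are turned into a standalone benchmark skeleton instead
    # of being replayed inside a simulator. Event format and output are invented.
    trace = [
        ("send",    0, 1,    1024),    # (op, rank, peer, bytes)
        ("recv",    1, 0,    1024),
        ("compute", 0, None, 5.0e6),   # compute burst between events (flops)
    ]

    def emit_benchmark(events):
        lines = ["#include <mpi.h>", "int main(int argc, char **argv) {",
                 "  MPI_Init(&argc, &argv);"]
        for op, rank, peer, size in events:
            if op == "send":
                lines.append(f"  /* rank {rank} */ MPI_Send(buf, {size}, "
                             f"MPI_BYTE, {peer}, 0, MPI_COMM_WORLD);")
            elif op == "recv":
                lines.append(f"  /* rank {rank} */ MPI_Recv(buf, {size}, "
                             f"MPI_BYTE, {peer}, 0, MPI_COMM_WORLD, "
                             f"MPI_STATUS_IGNORE);")
            else:
                lines.append(f"  /* rank {rank} */ compute({size});")
        lines += ["  MPI_Finalize();", "  return 0;", "}"]
        return "\n".join(lines)

    print(emit_benchmark(trace))       # benchmark skeleton generated from trace
    ```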

  16. Benchmarking in Universities: League Tables Revisited

    ERIC Educational Resources Information Center

    Turner, David

    2005-01-01

    This paper examines the practice of benchmarking universities using a "league table" approach. Taking the example of the "Sunday Times University League Table", the author reanalyses the descriptive data on UK universities. Using a linear programming technique, data envelope analysis (DEA), the author uses the re-analysis to demonstrate the major…

  17. Environmental radiation: risk benchmarks or benchmarking risk assessment.

    PubMed

    Bates, Matthew E; Valverde, L James; Vogel, John T; Linkov, Igor

    2011-07-01

    In the wake of the compound March 2011 nuclear disaster at the Fukushima I nuclear power plant in Japan, international public dialogue has repeatedly turned to questions of the accuracy of current risk assessment processes to assess nuclear risks and the adequacy of existing regulatory risk thresholds to protect us from nuclear harm. We confront these issues with an emphasis on learning from the incident in Japan for future US policy discussions. Without delving into a broader philosophical discussion of the general social acceptance of the risk, the relative adequacy of existing US Nuclear Regulatory Commission (NRC) risk thresholds is assessed in comparison with the risk thresholds of federal agencies not currently under heightened public scrutiny. Existing NRC thresholds are found to be among the most conservative in the comparison, suggesting that the agency's current regulatory framework is consistent with larger societal ideals. In turning to risk assessment methodologies, the disaster in Japan does indicate room for growth. Emerging lessons seem to indicate an opportunity to enhance resilience through systemic levels of risk aggregation. Specifically, we believe bringing systemic reasoning to the risk management process requires a framework that (i) is able to represent risk-based knowledge and information about a panoply of threats; (ii) provides a systemic understanding (and representation) of the natural and built environments of interest and their dependencies; and (iii) allows for the rational and coherent valuation of a range of outcome variables of interest, both tangible and intangible. Rather than revisiting the thresholds themselves, we see the goal of future nuclear risk management in adopting and implementing risk assessment techniques that systemically evaluate large-scale socio-technical systems with a view toward enhancing resilience and minimizing the potential for surprise. PMID:21608107

  18. Benchmark 1 - Failure Prediction after Cup Drawing, Reverse Redrawing and Expansion Part A: Benchmark Description

    NASA Astrophysics Data System (ADS)

    Watson, Martin; Dick, Robert; Huang, Y. Helen; Lockley, Andrew; Cardoso, Rui; Santos, Abel

    2016-08-01

    This Benchmark is designed to predict the fracture of a food can after drawing, reverse redrawing and expansion. The aim is to assess different sheet metal forming difficulties such as plastic anisotropic earing and failure models (strain- and stress-based Forming Limit Diagrams) under complex nonlinear strain paths. To study these effects, two distinct materials, TH330 steel (unstoved) and AA5352 aluminum alloy, are considered in this Benchmark. Problem description, material properties, and simulation reports with experimental data are summarized.

  19. Using Benchmarking To Influence Tuition and Fee Decisions.

    ERIC Educational Resources Information Center

    Hubbell, Loren W. Loomis; Massa, Robert J.; Lapovsky, Lucie

    2002-01-01

    Discusses the use of benchmarking in managing enrollment. Using a case study, illustrates how benchmarking can help administrators develop strategies for planning and implementing admissions and pricing practices. (EV)

  20. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... computing an ACO's fixed historical benchmark that is adjusted for historical growth and beneficiary... program. (2) Makes separate expenditure calculations for each of the following populations of... making up the historical benchmark, determines national growth rates and trends expenditures for...

  1. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computing an ACO's fixed historical benchmark that is adjusted for historical growth and beneficiary... program. (2) Makes separate expenditure calculations for each of the following populations of... making up the historical benchmark, determines national growth rates and trends expenditures for...

  2. The Impact Hydrocode Benchmark and Validation Project: Initial Results

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N.; Asphaug, E.; Cazamias, J.; Coker, R.; Collins, G. S.; Gisler, G.; Holsapple, K. A.; Housen, K. R.; Ivanov, B.; Johnson, C.; Korycansky, D. G.; Melosh, H. J.; Taylor, E. A.; Turtle, E. P.; Wünnemann, K.

    2007-03-01

    This work presents initial results of a validation and benchmarking effort from the impact cratering and explosion community. Several impact codes routinely used to model impact and explosion events are being compared using simple benchmark tests.

  3. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. Its charter is to facilitate effective benchmarking and to leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts, a foundation and infrastructure have been built that allow short-duration benchmarking studies yielding readily implementable results gleaned from world-class partners. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  4. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.163 Section 1952.163... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  5. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.113 Section 1952.113... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  6. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.213 Section 1952.213... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  7. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.343 Section 1952.343... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, Compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  8. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.373 Section 1952.373... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  9. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93....93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  10. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.353 Section 1952.353... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  11. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.233 Section 1952.233... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  12. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.223 Section 1952.223... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  13. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.293 Section 1952.293... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  14. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...

  15. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.323 Section 1952.323... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  16. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    ERIC Educational Resources Information Center

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  17. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.163 Section 1952.163... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  18. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93....93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  19. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.323 Section 1952.323... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  20. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.353 Section 1952.353... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  1. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.353 Section 1952.353... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  2. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.113 Section 1952.113... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  3. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.223 Section 1952.223... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  4. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.353 Section 1952.353... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  5. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.323 Section 1952.323... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  6. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.213 Section 1952.213... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  7. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.293 Section 1952.293... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  8. 45 CFR 156.100 - State selection of benchmark.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false State selection of benchmark. 156.100 Section 156... Essential Health Benefits Package § 156.100 State selection of benchmark. Each State may identify a single EHB-benchmark plan according to the selection criteria described below: (a) State selection of...

  9. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.343 Section 1952.343... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, Compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  10. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.373 Section 1952.373... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  11. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.293 Section 1952.293... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  12. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.113 Section 1952.113... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  13. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.233 Section 1952.233... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  14. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.343 Section 1952.343... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, Compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  15. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.293 Section 1952.293... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  16. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93....93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  17. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.323 Section 1952.323... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  18. 41 CFR 60-300.45 - Benchmarks for hiring.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 41 Public Contracts and Property Management 1 2014-07-01 2014-07-01 false Benchmarks for hiring... VETERANS, AND ARMED FORCES SERVICE MEDAL VETERANS Affirmative Action Program § 60-300.45 Benchmarks for hiring. The benchmark is not a rigid and inflexible quota which must be met, nor is it to be...

  19. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...

  20. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...

  1. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93....93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  2. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.343 Section 1952.343... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, Compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  3. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.113 Section 1952.113... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  4. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.213 Section 1952.213... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  5. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.373 Section 1952.373... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  6. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93....93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  7. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...

  8. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.233 Section 1952.233... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  9. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.233 Section 1952.233... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  10. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.293 Section 1952.293... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  11. 45 CFR 156.100 - State selection of benchmark.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false State selection of benchmark. 156.100 Section 156... Essential Health Benefits Package § 156.100 State selection of benchmark. Each State may identify a single EHB-benchmark plan according to the selection criteria described below: (a) State selection of...

  12. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.113 Section 1952.113... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  13. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.343 Section 1952.343... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, Compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  14. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.163 Section 1952.163... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  15. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.373 Section 1952.373... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  16. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...

  17. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.213 Section 1952.213... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  18. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.223 Section 1952.223... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  19. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.213 Section 1952.213... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  20. 42 CFR 414.1255 - Benchmarks for cost measures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 3 2014-10-01 2014-10-01 false Benchmarks for cost measures. 414.1255 Section 414... Payment Modifier Under the Physician Fee Schedule § 414.1255 Benchmarks for cost measures. (a) For the CY 2015 payment adjustment period, the benchmark for each cost measure is the national mean of...

  1. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.223 Section 1952.223... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  2. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.353 Section 1952.353... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  3. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.373 Section 1952.373... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  4. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.323 Section 1952.323... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  5. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.163 Section 1952.163... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  6. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.223 Section 1952.223... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  7. Con: current laboratory benchmarking options are not good enough.

    PubMed

    Reynolds, Debbie

    2006-01-01

    In an ideal world, benchmarking performance in the clinical laboratory would improve performance, quality, and overall patient satisfaction. However, there is a reason why laboratory managers continue to be on the lookout for the perfect benchmarking product--it doesn't exist. As a result, benchmarking performance in the laboratory is inherently flawed. Here is why.

  8. Experimental Criticality Benchmarks for SNAP 10A/2 Reactor Cores

    SciTech Connect

    Krass, A.W.

    2005-12-19

    This report describes computational benchmark models for nuclear criticality derived from descriptions of the Systems for Nuclear Auxiliary Power (SNAP) Critical Assembly (SCA)-4B experimental criticality program conducted by Atomics International during the early 1960's. The selected experimental configurations consist of fueled SNAP 10A/2-type reactor cores subject to varied conditions of water immersion and reflection under experimental control to measure neutron multiplication. SNAP 10A/2-type reactor cores are compact volumes fueled and moderated with the hydride of highly enriched uranium-zirconium alloy. Specifications for the materials and geometry needed to describe a given experimental configuration for a model using MCNP5 are provided. The material and geometry specifications are adequate to permit user development of input for alternative nuclear safety codes, such as KENO. A total of 73 distinct experimental configurations are described.

  9. Using Grid Benchmarks for Dynamic Scheduling of Grid Applications

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Hood, Robert

    2003-01-01

    Navigation or dynamic scheduling of applications on computational grids can be improved through the use of an application-specific characterization of grid resources. Current grid information systems provide a description of the resources, but do not contain any application-specific information. We define a GridScape as the dynamic state of the grid resources. We measure the dynamic performance of these resources using the grid benchmarks. Then we use the GridScape for automatic assignment of the tasks of a grid application to grid resources. The scalability of the system is achieved by limiting the navigation overhead to a few percent of the application resource requirements. Our task submission and assignment protocol guarantees that the navigation system does not cause grid congestion. On a synthetic data mining application we demonstrate that GridScape-based task assignment reduces the application turnaround time.
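
    The abstract does not spell out the assignment algorithm, so the following is only a minimal sketch of benchmark-informed task assignment; the names (assign_tasks, scores, load) are hypothetical, not the authors' API. It ranks resources by a measured benchmark score and sends each task to the resource with the smallest estimated finish time.

        # Hypothetical sketch, not the GridScape implementation: greedy
        # assignment of tasks to resources using measured benchmark scores.
        def assign_tasks(tasks, resources, scores):
            """scores: dict resource -> benchmark score (higher = faster)."""
            load = {r: 0.0 for r in resources}  # queued work per resource
            assignment = {}
            for task in tasks:
                # Estimated finish time: (queued work + this task) / speed.
                best = min(resources, key=lambda r: (load[r] + 1.0) / scores[r])
                assignment[task] = best
                load[best] += 1.0
            return assignment

        print(assign_tasks(["t1", "t2", "t3"], ["nodeA", "nodeB"],
                           {"nodeA": 2.0, "nodeB": 1.0}))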

  10. Two-dimensional benchmark calculations for PNL-30 through PNL-35

    SciTech Connect

    Mosteller, R.D.

    1997-09-01

    Interest in critical experiments with lattices of mixed-oxide (MOX) fuel pins has been revived by the possibility that light water reactors will be used for disposition of weapons-grade plutonium. A series of six experiments with MOX lattices, designated PNL-30 through PNL-35, was performed at Pacific Northwest Laboratories in 1975 and 1976, and a set of benchmark specifications for these experiments subsequently was adopted by the Cross Section Evaluation Working Group (CSEWG). Although there appear to be some problems with these experiments, they remain the only CSEWG benchmarks for MOX lattices. The number of fuel pins in these experiments is relatively low, corresponding to fewer than 4 typical pressurized-water-reactor fuel assemblies. Accordingly, they are more appropriate as benchmarks for lattice-physics codes than for reactor-core simulator codes. Unfortunately, the CSEWG specifications retain the full three-dimensional (3D) detail of the experiments, while lattice-physics codes almost universally are limited to two dimensions (2D). This paper proposes an extension of the benchmark specifications to include a 2D model, and it justifies that extension by comparing results from the MCNP Monte Carlo code for the 2D and 3D specifications.

  11. Developing of Indicators of an E-Learning Benchmarking Model for Higher Education Institutions

    ERIC Educational Resources Information Center

    Sae-Khow, Jirasak

    2014-01-01

    This study was the development of e-learning indicators used as an e-learning benchmarking model for higher education institutes. Specifically, it aimed to: 1) synthesize the e-learning indicators; 2) examine content validity by specialists; and 3) explore appropriateness of the e-learning indicators. Review of related literature included…

  12. Two-dimensional benchmark calculations for PNL-30 through PNL-35

    SciTech Connect

    Mosteller, R.D.

    1997-12-01

    Interest in critical experiments with lattices of mixed-oxide (MOX) fuel pins has been revived by the possibility that light water reactors will be used for the disposition of weapons-grade plutonium. A series of six experiments with MOX lattices, designated PNL-30 through PNL-35, was performed at Pacific Northwest Laboratories in 1975 and 1976, and a set of benchmark specifications for these experiments subsequently was adopted by the Cross Section Evaluation Working Group (CSEWG). Although there appear to be some problems with these experiments, they remain the only CSEWG benchmarks for MOX lattices.

  13. Review of the GMD Benchmark Event in TPL-007-1

    SciTech Connect

    Backhaus, Scott N.; Rivera, Michael Kelly

    2015-07-21

    Los Alamos National Laboratory (LANL) examined the approaches suggested in NERC Standard TPL-007-1 for defining the geo-electric field for the Benchmark Geomagnetic Disturbance (GMD) Event. Specifically: 1. estimating the 100-year exceedance geo-electric field magnitude; 2. the scaling of the GMD Benchmark Event to geomagnetic latitudes below 60 degrees north; and 3. the effect of uncertainties in earth conductivity data on the conversion from geomagnetic field to geo-electric field. This document summarizes the review and presents recommendations for consideration.

  14. Upcoming rules and benchmarks concerning the monitoring of and the payment for surgical infections.

    PubMed

    Sajankila, Nitin; Como, John J; Claridge, Jeffrey A

    2014-12-01

    There has been a good deal of dialogue about pay for performance and linking outcomes with reimbursement, especially given the recent national health care legislation. Many such concerns are caused by upcoming changes that have been outlined in the Affordable Care Act. This article discusses these upcoming changes and reviews some of the literature that supports them, specifically those related to surgical infections. Likewise, the lack of support for some of these changes in the academic literature is discussed. Finally, some of the proposed key benchmarks and the methodologies behind the design of those benchmarks are discussed. PMID:25440120

  15. CFD validation in OECD/NEA t-junction benchmark.

    SciTech Connect

    Obabko, A. V.; Fischer, P. F.; Tautges, T. J.; Karabasov, S.; Goloviznin, V. M.; Zaytsev, M. A.; Chudanov, V. V.; Pervichko, V. A.; Aksenova, A. E.

    2011-08-23

    When streams of rapidly moving flow merge in a T-junction, the potential arises for large oscillations at the scale of the diameter, D, with a period scaling as O(D/U), where U is the characteristic flow velocity. If the streams are of different temperatures, the oscillations result in thermal fluctuations (thermal striping) at the pipe wall in the outlet branch that can accelerate thermal-mechanical fatigue and ultimately cause pipe failure. The importance of this phenomenon has prompted the nuclear energy modeling and simulation community to establish a benchmark to test the ability of computational fluid dynamics (CFD) codes to predict thermal striping. The benchmark is based on thermal and velocity data measured in an experiment designed specifically for this purpose. Thermal striping is intrinsically unsteady and hence not accessible to steady-state simulation approaches such as steady-state Reynolds-averaged Navier-Stokes (RANS) models. Consequently, one must consider either unsteady RANS or large eddy simulation (LES). This report compares the results for three LES codes: Nek5000, developed at Argonne National Laboratory (USA), and Cabaret and Conv3D, developed at the Moscow Institute of Nuclear Energy Safety (IBRAE) in Russia. Nek5000 is based on the spectral element method (SEM), which is a high-order weighted residual technique that combines the geometric flexibility of the finite element method (FEM) with the tensor-product efficiencies of spectral methods. Cabaret is a 'compact accurately boundary-adjusting high-resolution technique' for fluid dynamics simulation. The method is second-order accurate on nonuniform grids in space and time, and has a small dispersion error and computational stencil defined within one space-time cell. The scheme is equipped with a conservative nonlinear correction procedure based on the maximum principle. CONV3D is based on the immersed boundary method and is validated on a wide set of the experimental and
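
    As a worked instance of the O(D/U) scaling quoted above (the numbers here are illustrative, not benchmark values): for a pipe diameter D = 0.1 m and a bulk velocity U = 1 m/s, the expected oscillation period is of order

        $T \sim \frac{D}{U} = \frac{0.1\ \text{m}}{1\ \text{m/s}} = 0.1\ \text{s},$

    i.e., thermal striping fluctuations on the order of 10 Hz.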

  16. Measurements and ALE3D Simulations for Violence in a Scaled Thermal Explosion Experiment with LX-10 and AerMet 100 Steel

    SciTech Connect

    McClelland, M A; Maienschein, J L; Yoh, J J; deHaven, M R; Strand, O T

    2005-06-03

    We completed a Scaled Thermal Explosion Experiment (STEX) and performed ALE3D simulations for the HMX-based explosive, LX-10, confined in an AerMet 100 (iron-cobalt-nickel alloy) vessel. The explosive was heated at 1 C/h until cookoff at 182 C using a controlled temperature profile. During the explosion, the expansion of the tube and fragment velocities were measured with strain gauges, Photonic-Doppler-Velocimeters (PDVs), and micropower radar units. These results were combined to produce a single curve describing 15 cm of tube wall motion. A majority of the metal fragments were captured and cataloged. A fragment size distribution was constructed, and a typical fragment had a length scale of 2 cm. Based on these results, the explosion was considered to be a violent deflagration. ALE3D models for chemical, thermal, and mechanical behavior were developed for the heating and explosive processes. A four-step chemical kinetics model is employed for the HMX while a one-step model is used for the Viton. A pressure-dependent deflagration model is employed during the expansion. The mechanical behavior of the solid constituents is represented by a Steinberg-Guinan model while polynomial and gamma-law expressions are used for the equation of state of the solid and gas species, respectively. A gamma-law model is employed for the air in gaps, and a mixed material model is used for the interface between air and explosive. A Johnson-Cook model with an empirical rule for failure strain is used to describe fracture behavior. Parameters for the kinetics model were specified using measurements of the One-Dimensional-Time-to-Explosion (ODTX), while measurements for burn rate were employed to determine parameters in the burn front model. The ALE3D models provide good predictions for the thermal behavior and time to explosion, but the predicted wall expansion curve is higher than the measured curve. Possible contributions to this discrepancy include inaccuracies in the chemical models

  17. The PROOF benchmark suite measuring PROOF performance

    NASA Astrophysics Data System (ADS)

    Ryu, S.; Ganis, G.

    2012-06-01

    The PROOF benchmark suite is a new utility suite for PROOF that measures performance and scalability. The primary goal of the benchmark suite is to determine optimal configuration parameters for a set of machines to be used as a PROOF cluster. The suite measures the performance of the cluster for a set of standard tasks as a function of the number of effective processes. Cluster administrators can use the suite to measure the performance of the cluster and find optimal configuration parameters. PROOF developers can also utilize the suite to measure performance, identify problems, and improve their software. In this paper, the new tool is explained in detail and use cases are presented to illustrate the new tool.

  18. Toxicological benchmarks for wildlife. Environmental Restoration Program

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  19. Benchmark On Sensitivity Calculation (Phase III)

    SciTech Connect

    Ivanova, Tatiana; Laville, Cedric; Dyrda, James; Mennerdahl, Dennis; Golovko, Yury; Raskach, Kirill; Tsiboulia, Anatoly; Lee, Gil Soo; Woo, Sweng-Woong; Bidaud, Adrien; Patel, Amrit; Bledsoe, Keith C; Rearden, Bradley T; Gulliford, J.

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
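
    For reference, the sensitivity coefficients exercised in this benchmark are conventionally defined as the relative change in k_eff per relative change in a cross section sigma:

        $S_{k,\sigma} = \frac{\sigma}{k_{\text{eff}}}\,\frac{\partial k_{\text{eff}}}{\partial \sigma}$

    The 'implicit' contributions mentioned above arise because perturbing sigma also changes the self-shielded multigroup data, not just the explicit transport solution.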

  20. Assessing and benchmarking multiphoton microscopes for biologists.

    PubMed

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F

    2014-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways already described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can either be used within a multiphoton facility or by a prospective purchaser to benchmark performance. This can both assist in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs.

  1. Thermodynamic benchmark study using Biacore technology.

    PubMed

    Navratilova, Iva; Papalia, Giuseppe A; Rich, Rebecca L; Bedinger, Daniel; Brophy, Susan; Condon, Brad; Deng, Ta; Emerick, Anne W; Guan, Hann-Wen; Hayden, Tanya; Heutmekers, Thomas; Hoorelbeke, Bart; McCroskey, Mark C; Murphy, Mary M; Nakagawa, Terry; Parmeggiani, Fabio; Qin, Xiaochun; Rebe, Sabina; Tomasevic, Nenad; Tsang, Tiffany; Waddell, M Brett; Zhang, Fred Feiyu; Leavitt, Stephanie; Myszka, David G

    2007-05-01

    A total of 22 individuals participated in this benchmark study to characterize the thermodynamics of small-molecule inhibitor-enzyme interactions using Biacore instruments. Participants were provided with reagents (the enzyme carbonic anhydrase II, which was immobilized onto the sensor surface, and four sulfonamide-based inhibitors) and were instructed to collect response data from 6 to 36 degrees C. van't Hoff enthalpies and entropies were calculated from the temperature dependence of the binding constants. The equilibrium dissociation and thermodynamic constants determined from the Biacore analysis matched the values determined using isothermal titration calorimetry. These results demonstrate that immobilization of the enzyme onto the sensor surface did not alter the thermodynamics of these interactions. This benchmark study also provides insights into the opportunities and challenges in carrying out thermodynamic studies using optical biosensors.
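
    The van't Hoff analysis described here reduces to a linear fit of ln K against 1/T. A minimal sketch with synthetic numbers (not the study's data), assuming NumPy is available:

        import numpy as np

        R = 8.314  # gas constant, J/(mol K)

        # Synthetic example: temperatures (K) and association constants K_A (1/M).
        T = np.array([279.15, 289.15, 299.15, 309.15])
        KA = np.array([1.2e6, 9.0e5, 6.8e5, 5.2e5])

        # van't Hoff: ln K_A = -dH/(R T) + dS/R, linear in 1/T.
        slope, intercept = np.polyfit(1.0 / T, np.log(KA), 1)
        dH = -slope * R       # enthalpy, J/mol
        dS = intercept * R    # entropy, J/(mol K)
        print(f"dH = {dH / 1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")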

  2. ASBench: benchmarking sets for allosteric discovery.

    PubMed

    Huang, Wenkang; Wang, Guanqiao; Shen, Qiancheng; Liu, Xinyi; Lu, Shaoyong; Geng, Lv; Huang, Zhimin; Zhang, Jian

    2015-08-01

    Allostery allows for the fine-tuning of protein function. Targeting allosteric sites is gaining increasing recognition as a novel strategy in drug design. The key challenge in the discovery of allosteric sites has strongly motivated the development of computational methods and thus high-quality, publicly accessible standard data have become indispensable. Here, we report benchmarking data for experimentally determined allosteric sites through a complex process, including a 'Core set' with 235 unique allosteric sites and a 'Core-Diversity set' with 147 structurally diverse allosteric sites. These benchmarking sets can be exploited to develop efficient computational methods to predict unknown allosteric sites in proteins and reveal unique allosteric ligand-protein interactions to guide allosteric drug design.

  3. Recommendations for Benchmarking Preclinical Studies of Nanomedicines.

    PubMed

    Dawidczyk, Charlene M; Russell, Luisa M; Searson, Peter C

    2015-10-01

    Nanoparticle-based delivery systems provide new opportunities to overcome the limitations associated with traditional small-molecule drug therapy for cancer and to achieve both therapeutic and diagnostic functions in the same platform. Preclinical trials are generally designed to assess therapeutic potential and not to optimize the design of the delivery platform. Consequently, progress in developing design rules for cancer nanomedicines has been slow, hindering progress in the field. Despite the large number of preclinical trials, several factors restrict comparison and benchmarking of different platforms, including variability in experimental design, reporting of results, and the lack of quantitative data. To solve this problem, we review the variables involved in the design of preclinical trials and propose a protocol for benchmarking that we recommend be included in in vivo preclinical studies of drug-delivery platforms for cancer therapy. This strategy will contribute to building the scientific knowledge base that enables development of design rules and accelerates the translation of new technologies.

  4. Utilizing benchmark data from the ANL-ZPR diagnostic cores program

    SciTech Connect

    Schaefer, R. W.; McKnight, R. D.

    2000-02-15

    The support of the criticality safety community is allowing the production of benchmark descriptions of several assemblies from the ZPR Diagnostic Cores Program. The assemblies have high sensitivities to nuclear data for a few isotopes. This can highlight limitations in nuclear data for selected nuclides or in standard methods used to treat these data. The present work extends the use of the simplified model of the U9 benchmark assembly beyond the validation of k{sub eff}. Further simplifications have been made to produce a data testing benchmark in the style of the standard CSEWG benchmark specifications. Calculations for this data testing benchmark are compared to results obtained with more detailed models and methods to determine their biases. These biases or correction factors can then be applied in the use of the less refined methods and models. Data testing results using Versions IV, V, and VI of the ENDF/B nuclear data are presented for k{sub eff}, f{sup 28}/f{sup 25}, c{sup 28}/f{sup 25}, and {beta}{sub eff}. These limited results demonstrate the importance of studying other integral parameters in addition to k{sub eff} in trying to improve nuclear data and methods and the importance of accounting for methods and/or modeling biases when using data testing results to infer the quality of the nuclear data files.
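
    The bias-correction idea can be written compactly (a sketch of the approach described above, not the report's exact notation): if a detailed reference model gives a detailed eigenvalue and the simplified benchmark model gives a simplified one for the same nuclear data, the ratio

        $c = \frac{k_{\text{eff}}^{\text{detailed}}}{k_{\text{eff}}^{\text{simplified}}}$

    is evaluated once with the reference method, and subsequent results computed with the less refined model are multiplied by c before being compared against the benchmark value. The same construction applies to the other integral parameters listed above.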

  5. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    DOE PAGES

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  6. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    SciTech Connect

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  7. EXPERIMENTAL BENCHMARKING OF THE MAGNETIZED FRICTION FORCE.

    SciTech Connect

    FEDOTOV, A.V.; GALNANDER, B.; LITVINENKO, V.N.; LOFNES, T.; SIDORIN, A.O.; SMIRNOV, A.V.; ZIEMANN, V.

    2005-09-18

    High-energy electron cooling, presently considered an essential tool for several applications in high-energy and nuclear physics, requires an accurate description of the friction force. A series of measurements were performed at CELSIUS with the goal of providing accurate data needed for the benchmarking of theories and simulations. Some results of an accurate comparison of experimental data with the friction force formulas are presented.

  8. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    NASA Technical Reports Server (NTRS)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency, and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  9. Collection of Neutronic VVER Reactor Benchmarks.

    2002-01-30

    Version 00. A system of computational neutronic benchmarks has been developed. In this CD-ROM report, the data generated in the course of the project are reproduced in their entirety with minor corrections. The editing that was performed on the various documents comprising this report was primarily meant to facilitate the production of the CD-ROM and to enable electronic retrieval of the information. The files are electronically navigable.

  10. Cray performance data from five benchmarks

    NASA Technical Reports Server (NTRS)

    Pennline, James A.

    1991-01-01

    The five benchmark programs discussed in TM-88956, February 1987, were run on the CRAY X-MP/24 under different operating systems and compilers. Performance data is reported for runs under early versions of UNICOS and CFT77. The most recent data includes a system configuration for an X-MP hardware upgrade. Performance figures for the Y-MP are shown for comparison. Differences in the figures are analyzed and discussed.

  11. A Simplified HTTR Diffusion Theory Benchmark

    SciTech Connect

    Rodolfo M. Ferrer; Abderrafi M. Ougouag; Farzad Rahnema

    2010-10-01

    The Georgia Institute of Technology (GA-Tech) recently developed a transport theory benchmark based closely on the geometry and the features of the HTTR reactor that is operational in Japan. Though simplified, the benchmark retains all the principal physical features of the reactor and thus provides a realistic and challenging test for the codes. The purpose of this paper is twofold. The first goal is an extension of the benchmark to diffusion theory applications by generating the additional data not provided in the GA-Tech prior work. The second goal is to use the benchmark on the HEXPEDITE code available to the INL. The HEXPEDITE code is a Green’s function-based neutron diffusion code in 3D hexagonal-z geometry. The results showed that the HEXPEDITE code accurately reproduces the effective multiplication factor of the reference HELIOS solution. A secondary, but no less important, conclusion is that when a full sequence of codes including HEXPEDITE is tested against actual HTTR data, the portion of the inevitable discrepancies between experiment and models attributable to HEXPEDITE is expected to be modest. If large discrepancies are observed, they would have to be explained by errors in the data fed into HEXPEDITE. Results based on a fully realistic model of the HTTR reactor are presented in a companion paper. The suite of codes used in that paper also includes HEXPEDITE. The results shown here should help that effort in the decision making process for refining the modeling steps in the full sequence of codes.

  12. The rotating movement of three immiscible fluids - A benchmark problem

    USGS Publications Warehouse

    Bakker, M.; Oude, Essink G.H.P.; Langevin, C.D.

    2004-01-01

    A benchmark problem involving the rotating movement of three immiscible fluids is proposed for verifying the density-dependent flow component of groundwater flow codes. The problem consists of a two-dimensional strip in the vertical plane filled with three fluids of different densities separated by interfaces. Initially, the interfaces between the fluids make a 45° angle with the horizontal. Over time, the fluids rotate to the stable position whereby the interfaces are horizontal; all flow is caused by density differences. Two cases of the problem are presented, one resulting in a symmetric flow field and one resulting in an asymmetric flow field. An exact analytical solution for the initial flow field is presented by application of the vortex theory and complex variables. Numerical results are obtained using three variable-density groundwater flow codes (SWI, MOCDENS3D, and SEAWAT). Initial horizontal velocities of the interfaces, as simulated by the three codes, compare well with the exact solution. The three codes are used to simulate the positions of the interfaces at two times; the three codes produce nearly identical results. The agreement between the results is evidence that the specific rotational behavior predicted by the models is correct. It also shows that the proposed problem may be used to benchmark variable-density codes. It is concluded that the three models can be used to model accurately the movement of interfaces between immiscible fluids, and have little or no numerical dispersion. © 2003 Elsevier B.V. All rights reserved.

  13. Performance Comparison of HPF and MPI Based NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash

    1997-01-01

    Compilers supporting High Performance Fortran (HPF) features first appeared in late 1994 and early 1995 from Applied Parallel Research (APR), Digital Equipment Corporation, and The Portland Group (PGI). IBM introduced an HPF compiler for the IBM RS/6000 SP2 in April of 1996. Over the past two years, these implementations have shown steady improvement in terms of both features and performance. The performance of various hardware/programming model (HPF and MPI) combinations will be compared, based on the latest NAS Parallel Benchmark results, thus providing a cross-machine and cross-model comparison. Specifically, HPF based NPB results will be compared with MPI based NPB results to provide perspective on performance currently obtainable using HPF versus MPI or versus hand-tuned implementations such as those supplied by the hardware vendors. In addition, we would also present NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, and SGI Origin2000. We would also present sustained performance per dollar for Class B LU, SP and BT benchmarks.

  14. Introduction to the HPC Challenge Benchmark Suite

    SciTech Connect

    Luszczek, Piotr; Dongarra, Jack J.; Koester, David; Rabenseifner,Rolf; Lucas, Bob; Kepner, Jeremy; McCalpin, John; Bailey, David; Takahashi, Daisuke

    2005-04-25

    The HPC Challenge benchmark suite has been released by the DARPA HPCS program to help define the performance boundaries of future Petascale computing systems. HPC Challenge is a suite of tests that examine the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance Linpack (HPL) benchmark used in the Top500 list. Thus, the suite is designed to augment the Top500 list, providing benchmarks that bound the performance of many real applications as a function of memory access characteristics, e.g., spatial and temporal locality, and providing a framework for including additional tests. In particular, the suite is composed of several well known computational kernels (STREAM, HPL, matrix multiply--DGEMM, parallel matrix transpose--PTRANS, FFT, RandomAccess, and bandwidth/latency tests--b{sub eff}) that attempt to span high and low spatial and temporal locality space. By design, the HPC Challenge tests are scalable with the size of data sets being a function of the largest HPL matrix for the tested system.
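
    The suite's focus on memory access patterns is easy to illustrate. Here is a toy STREAM-style triad in Python/NumPy (illustrative only; the actual suite uses compiled kernels and problem sizes scaled to the machine under test):

        import numpy as np
        import time

        n = 10_000_000
        a = np.zeros(n)
        b = np.random.rand(n)
        c = np.random.rand(n)

        t0 = time.perf_counter()
        a[:] = b + 3.0 * c          # STREAM "triad": two reads, one write per element
        dt = time.perf_counter() - t0

        bytes_moved = 3 * n * a.itemsize  # ignores the temporary NumPy allocates
        print(f"triad bandwidth ~ {bytes_moved / dt / 1e9:.2f} GB/s")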

  15. A PWR Thorium Pin Cell Burnup Benchmark

    SciTech Connect

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2), and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalue and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code vs code differences are analyzed and discussed.

  16. Benchmarking and accounting for the (private) cloud

    NASA Astrophysics Data System (ADS)

    Belleman, J.; Schwickerath, U.

    2015-12-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible; the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to have an estimation of the performance of worker nodes also in a very dynamic farm with worker nodes coming and going at a high rate, without the need to benchmark each new node again. An extension to public cloud resources is possible if all conditions under which the benchmark numbers have been obtained are fulfilled.
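
    The classification scheme itself is not detailed in the abstract; a minimal sketch of the idea, with invented bin boundaries and node names, would bucket nodes by a per-core benchmark score so that new virtual machines can be classified without re-benchmarking the whole farm:

        # Hypothetical sketch: bucket worker nodes into performance classes
        # from a per-core benchmark score (bin edges are invented).
        BINS = [(0.0, 8.0, "slow"), (8.0, 12.0, "standard"), (12.0, float("inf"), "fast")]

        def classify(score_per_core):
            for lo, hi, label in BINS:
                if lo <= score_per_core < hi:
                    return label

        nodes = {"vm-001": 7.4, "vm-002": 10.9, "vm-003": 14.2}
        for name, score in sorted(nodes.items()):
            print(name, "->", classify(score))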

  17. Perspective: Selected benchmarks from commercial CFD codes

    SciTech Connect

    Freitas, C.J.

    1995-06-01

    This paper summarizes the results of a series of five benchmark simulations which were completed using commercial Computational Fluid Dynamics (CFD) codes. These simulations were performed by the vendors themselves, and then reported by them in ASME's CFD Triathlon Forum and CFD Biathlon Forum. The first group of benchmarks consisted of three laminar flow problems. These were the steady, two-dimensional flow over a backward-facing step, the low Reynolds number flow around a circular cylinder, and the unsteady three-dimensional flow in a shear-driven cubical cavity. The second group of benchmarks consisted of two turbulent flow problems. These were the two-dimensional flow around a square cylinder with periodic separated flow phenomena, and the steady, three-dimensional flow in a 180-degree square bend. All simulation results were evaluated against existing experimental data and thereby satisfied item 10 of the Journal's policy statement for numerical accuracy. The objective of this exercise was to provide the engineering and scientific community with a common reference point for the evaluation of commercial CFD codes.

  18. Middleware Evaluation and Benchmarking for Use in Mission Operations Centers

    NASA Technical Reports Server (NTRS)

    Antonucci, Rob; Waktola, Waka

    2005-01-01

    Middleware technologies have been promoted as timesaving, cost-cutting alternatives to the point-to-point communication used in traditional mission operations systems. However, missions have been slow to adopt the new technology. The lack of existing middleware-based missions has given rise to uncertainty about middleware's ability to perform in an operational setting. Most mission architects are also unfamiliar with the technology and do not know the benefits and detriments to architectural choices - or even what choices are available. We will present the findings of a study that evaluated several middleware options specifically for use in a mission operations system. We will address some common misconceptions regarding the applicability of middleware-based architectures, and we will identify the design decisions and tradeoffs that must be made when choosing a middleware solution. The Middleware Comparison and Benchmark Study was conducted at NASA Goddard Space Flight Center to comprehensively evaluate candidate middleware products, compare and contrast the performance of middleware solutions with the traditional point- to-point socket approach, and assess data delivery and reliability strategies. The study focused on requirements of the Global Precipitation Measurement (GPM) mission, validating the potential use of middleware in the GPM mission ground system. The study was jointly funded by GPM and the Goddard Mission Services Evolution Center (GMSEC), a virtual organization for providing mission enabling solutions and promoting the use of appropriate new technologies for mission support. The study was broken into two phases. To perform the generic middleware benchmarking and performance analysis, a network was created with data producers and consumers passing data between themselves. The benchmark monitored the delay, throughput, and reliability of the data as the characteristics were changed. Measurements were taken under a variety of topologies, data demands

  19. Expression of the AsbA1, OXA-12, and AsbM1 beta-lactamases in Aeromonas jandaei AER 14 is coordinated by a two-component regulon.

    PubMed Central

    Alksne, L E; Rasmussen, B A

    1997-01-01

    Aeromonas jandaei AER 14 (formerly Aeromonas sobria AER 14) expresses three inducible beta-lactamases, AsbA1, OXA-12 (AsbB1), and AsbM1. Mutant strains that constitutively overexpress all three enzymes simultaneously, suggesting that they share a common regulatory pathway, have been isolated. Detectable expression of the cloned genes of AsbA1 and OXA-12 in some Escherichia coli K-12 laboratory strains is achieved only in the presence of a blp mutation. These mutations map to the cre operon at 0 min, which encodes a classical two-component regulatory system of unknown function. Two regulatory elements from A. jandaei which permit high-level constitutive expression of OXA-12 in E. coli were cloned. Both loci encode proteins with characteristics of response regulator proteins of two-component regulatory systems. One of these loci, designated blrA, bestowed constitutive expression of all three beta-lactamases in A. jandaei AER 14 when present on a multicopy plasmid, confirming its role in the regulatory pathway of beta-lactamase production in this organism. PMID:9068648

  20. Analytical Benchmark Test Set for Criticality Code Verification

    SciTech Connect

    Sood, A.; Forster, R.A.; Parson, D.K.

    1999-09-20

    A number of published numerical solutions to analytic eigenvalue (k{sub eff}) and eigenfunction equations are summarized for the purpose of creating a criticality verification benchmark test set. The 75-problem test set allows the user to verify the correctness of a criticality code for infinite medium and simple geometries in one- and two-energy groups, one- and two-media, and both isotropic and linearly anisotropic neutron scattering. Three- and six-energy-group infinite medium problems are also included in the test set. The problem specifications will produce both k{sub eff}=1 and the quoted k{sub {infinity}} to at least five decimal places. Additional uses of the test set for code verification are also discussed. Los Alamos report LA-13511 contains the details of all 75 test problems.
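
    For orientation, the simplest quantity such a test set verifies is the one-group, infinite-medium multiplication factor, which has the closed form

        $k_\infty = \frac{\nu \Sigma_f}{\Sigma_a}$

    where $\nu \Sigma_f$ is the fission neutron production cross section and $\Sigma_a$ is the absorption cross section; the test problems fix the data so that the quoted $k_\infty$ is reproduced to at least five decimal places.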

  1. Toward real-time performance benchmarks for Ada

    NASA Technical Reports Server (NTRS)

    Clapp, Russell M.; Duchesneau, Louis; Volz, Richard A.; Mudge, Trevor N.; Schultze, Timothy

    1986-01-01

    The issue of real-time performance measurements for the Ada programming language through the use of benchmarks is addressed. First, the Ada notion of time is examined and a set of basic measurement techniques are developed. Then a set of Ada language features believed to be important for real-time performance are presented and specific measurement methods discussed. In addition, other important time-related features which are not explicitly part of the language but are part of the run-time system are also identified, and measurement techniques for them are developed. The measurement techniques are applied to the language and run-time system features and the results are presented.

  2. Analytical Benchmark Test Set for Criticality Code Verification

    SciTech Connect

    Avneet Sood; D. K. Parsons; R. A. Forster

    1999-07-01

    A number of published numerical solutions to analytic eigenvalue (k{sub eff}) and eigenfunction equations are summarized for the purpose of creating a criticality verification benchmark test set. The 75-problem test set allows the user to verify the correctness of a criticality code for infinite medium and simple geometries in one- and two-energy groups, one- and two-media, and both isotropic and anisotropic neutron scattering. The problem specifications will produce both k{sub eff} = 1 and the quoted k{sub {infinity}} to at least five decimal places. Additional uses of the test set for code verification are also discussed. A list of 45 references and an appendix with k{sub {infinity}} derivations is also included.

  3. A Uranium Bioremediation Reactive Transport Benchmark

    SciTech Connect

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10-day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50-day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
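
    The Monod-type rate laws mentioned above have the generic hyperbolic form (written here schematically; the benchmark's actual parameterization is given in the problem specification):

        $r = k_{\max}\,\frac{[\text{ED}]}{K_{\text{ED}} + [\text{ED}]}\;\frac{[\text{EA}]}{K_{\text{EA}} + [\text{EA}]}$

    where ED is the electron donor (acetate), EA is one of the terminal electron acceptors (goethite, phyllosilicate Fe(III), U(VI), or sulfate), and the K terms are half-saturation constants.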

  4. Benchmarking NSP Reactors with CORETRAN-01

    SciTech Connect

    Hines, Donald D.; Grow, Rodney L.; Agee, Lance J

    2004-10-15

    As part of an overall verification and validation effort, the Electric Power Research Institute's (EPRI's) CORETRAN-01 has been benchmarked against Northern States Power's Prairie Island and Monticello reactors through 12 cycles of operation. The two Prairie Island reactors are Westinghouse 2-loop units with 121 asymmetric 14 x 14 lattice assemblies utilizing up to 8 wt% gadolinium while Monticello is a General Electric 484 bundle boiling water reactor. All reactor cases were executed in full core utilizing 24 axial nodes per assembly in the fuel with 1 additional reflector node above, below, and around the perimeter of the core. Cross-section sets used in this benchmark effort were generated by EPRI's CPM-3 as well as Studsvik's CASMO-3 and CASMO-4 to allow for separation of the lattice calculation effect from the nodal simulation method. These cases exercised the depletion-shuffle-depletion sequence through four cycles for each unit using plant data to follow actual operations. Flux map calculations were performed for comparison to corresponding measurement statepoints. Additionally, start-up physics testing cases were used to predict cycle physics parameters for comparison to existing plant methods and measurements. These benchmark results agreed well with both current analysis methods and plant measurements, indicating that CORETRAN-01 may be appropriate for steady-state physics calculations of both the Prairie Island and Monticello reactors. However, only the Prairie Island results are discussed in this paper since Monticello results were of similar quality and agreement. No attempt was made in this work to investigate CORETRAN-01 kinetics capability by analyzing plant transients, but these steady-state results form a good foundation for moving in that direction.

  5. Towards Systematic Benchmarking of Climate Model Performance

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine

  6. Benchmarking boiler tube failures - Part 1

    SciTech Connect

    Patrick, J.; Oldani, R.; von Behren, D.

    2005-10-01

    Boiler tube failures continue to be the leading cause of downtime for steam power plants. That should not be a surprise; a typical steam generator has miles of tubes that operate at high temperatures and pressures. Are your experiences comparable to those of your peers? Could you learn something from tube-leak benchmarking data that could improve the operation of your plant? The Electric Utility Cost Group (EUCG) recently completed a boiler-tube failure study that is available only to its members. But Power magazine has been given exclusive access to some of the results, published in this article. 4 figs.

  7. NAS Parallel Benchmarks. 2.4

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We describe a new problem size, called Class D, for the NAS Parallel Benchmarks (NPB), whose MPI source code implementation is being released as NPB 2.4. A brief rationale is given for how the new class is derived. We also describe the modifications made to the MPI (Message Passing Interface) implementation to allow the new class to be run on systems with 32-bit integers, and with moderate amounts of memory. Finally, we give the verification values for the new problem size.

  8. Benchmarking East Tennessee's economic capacity

    SciTech Connect

    1995-04-20

    This presentation comprises viewgraphs delineating major economic factors operating in 15 counties in East Tennessee. The purpose of the information presented is to provide a benchmark analysis of economic conditions for use in guiding economic growth in the region. The emphasis of the presentation is economic infrastructure, which is classified into six categories: human resources, technology, financial resources, physical infrastructure, quality of life, and tax and regulation. Data for analysis of key indicators in each of the categories are presented. Preliminary analyses, in the form of strengths and weaknesses and comparison to reference groups, are given.

  9. Learning Through Benchmarking: Developing a Relational, Prospective Approach to Benchmarking ICT in Learning and Teaching

    ERIC Educational Resources Information Center

    Ellis, Robert A.; Moore, Roger R.

    2006-01-01

    This study discusses benchmarking the use of information and communication technologies (ICT) in teaching and learning between two universities with different missions: one an Australian campus-based metropolitan university and the other a British distance-education provider. It argues that the differences notwithstanding, it is possible to…

  10. Building with Benchmarks: The Role of the District in Philadelphia's Benchmark Assessment System

    ERIC Educational Resources Information Center

    Bulkley, Katrina E.; Christman, Jolley Bruce; Goertz, Margaret E.; Lawrence, Nancy R.

    2010-01-01

    In recent years, interim assessments have become an increasingly popular tool in districts seeking to improve student learning and achievement. Philadelphia has been at the forefront of this change, implementing a set of Benchmark assessments aligned with its Core Curriculum district-wide in 2004. In this article, we examine the overall context…

  11. Using the Canadian Language Benchmarks (CLB) to Benchmark College Programs/Courses and Language Proficiency Tests.

    ERIC Educational Resources Information Center

    Epp, Lucy; Stawychny, Mary

    2001-01-01

    Describes a process developed by the Language Training Centre at Red River College (RRC) to use the Canadian language benchmarks in analyzing the language levels used in programs and courses at RRC to identify appropriate entry-level language proficiency and the levels that second language students need in order to meet college or university…

  12. Benchmarking in healthcare: selecting and working with partners.

    PubMed

    Benson, H R

    1995-01-01

    The process of selecting a benchmarking partner begins with gathering information to establish industry standards, identifying potential partners and supplying data on the subject to be benchmarked. Suggested sources of information are business and trade publications; investment industry analysts; journalists; trade associations and professional organizations; government research reports; disclosure documents; current and former employees; and product and service providers. Potential partners should be approached only after careful preparation of a project plan that includes information about the benchmarking team's organization and purpose, description of the subject and a statement of benefits for the prospective partner. After obtaining a commitment from the benchmarking partner, relevant comparative data is gathered and analyzed, using some of the following methods: library research, questionnaires, telephone surveys, site visits and consultants. Because benchmarking often involves sharing information with competitors, a code of ethical conduct has been developed by the International Benchmarking Clearinghouse.

  13. Development of a California commercial building benchmarking database

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources including modeled data and individual buildings to expand the database.

  14. Simple mathematical law benchmarks human confrontations.

    PubMed

    Johnson, Neil F; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-10

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.
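
    The abstract does not reproduce the law itself; in the published work it takes the form of a power-law "progress curve" for the timing of successive attacks, which can be restated (as a reconstruction, not a quotation) as

        \tau_n = \tau_1 \, n^{-b}

    where \tau_n is the interval between attack n and attack n+1, \tau_1 sets the initial pace, and an escalation exponent b > 0 corresponds to accelerating attacks while b < 0 corresponds to de-escalation.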

  15. EVA Health and Human Performance Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness for duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  16. Simple mathematical law benchmarks human confrontations

    PubMed Central

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-01-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another – from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a ‘lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds. PMID:24322528

  17. Simple mathematical law benchmarks human confrontations.

    PubMed

    Johnson, Neil F; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-01-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds. PMID:24322528

  18. Computational Thermochemistry and Benchmarking of Reliable Methods

    SciTech Connect

    Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

    2006-06-20

    During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Møller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats-of-formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.

  19. CALOR89: Calorimetry analysis and benchmarking

    SciTech Connect

    Gabriel, T.A.; Alsmiller, R.G. Jr.; Bishop, B.L.; Fu, C.Y.; Handler, T.; Panakkal, J.K.; Proudfoot, J.; Cremaldi, L.; Moore, B.; Reidy, J.J.

    1990-01-01

    The CALOR89 code system has been utilized for extensive calorimeter benchmarking and design calculations. Even though this code system has previously demonstrated its power in the design of calorimeters, major revisions in the form of better collision models and cross-section data bases have expanded its capabilities. The benchmarking has been done with respect to the ZEUS and D0 calorimeters. For the most part, good agreement with experimental data has been obtained. The design calculations presented here were done for a variety of absorbers (depleted uranium, lead, and iron) of various thicknesses, for a given scintillator thickness, and for a fixed absorber thickness using various thicknesses for the scintillator. These studies indicate that a compensating calorimeter can be built using depleted uranium or lead as the absorber, whereas a purely iron calorimeter would be non-compensating. One possibly major problem exists with the depleted uranium calorimeter due to the large number of neutrons produced and the large capture cross-section of uranium. These captured neutrons will produce a signal in the scintillator due to secondary gamma rays for many hundreds of nanoseconds, and this may contribute substantially to background noise and pile up. 14 figs.
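
    For readers unfamiliar with the term, "compensating" here has a compact meaning: if e and h denote the calorimeter's responses to the electromagnetic and purely hadronic components of a shower, compensation is the condition

        e/h = 1

    which removes the event-to-event response fluctuations that otherwise degrade hadronic energy resolution; the abstract's finding is that depleted uranium or lead absorbers can satisfy this condition while a purely iron design cannot.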

  20. Direct data access protocols benchmarking on DPM

    NASA Astrophysics Data System (ADS)

    Furano, Fabrizio; Devresse, Adrien; Keeble, Oliver; Mancinelli, Valentina

    2015-12-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went though a considerable technical evolution in the last years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring information about any data access protocol to the same monitoring infrastructure that is used to monitor the Xrootd deployments. Our goal is to evaluate under which circumstances the HTTP-based protocols can be good enough for batch or interactive data access. In this contribution we show and discuss the results that our test systems have collected under the circumstances that include ROOT analyses using TTreeCache and stress tests on the metadata performance.

  1. Benchmarking hospital laboratory financial and operational performance.

    PubMed

    Portugal, B

    1993-12-01

    The movement toward more integrated delivery systems requires hospital administrators, medical staffs, and health care network organizations to consider strategies that will meet the future challenges facing laboratory services. Many health care experts predict that the number of hospital inpatient days, staffed acute care beds, and length of stay will continue their precipitous decline, and then stabilize during the next four to five years. Hospitals should carefully evaluate how their laboratories might be affected as a result of the decline in inpatient services and the integration of health care services at all levels. Hospital executive management must find a way to manage staffing levels and technical resources in order to maintain quality patient services in the face of declining test volume. This Special Report discusses relevant benchmarks intended to help hospital administrators and laboratory directors identify "best practices" in hospital laboratories so that comparisons of patterns of care and financial operations can be made. Benchmarking the relative financial and operational performance of hospital laboratories allows health care planners to design the most appropriate laboratory services delivery system for future hospital inpatient and outpatient market demands. Factors influencing financial and operational performance will be investigated, including utilization, testing costs, staffing mix, productivity, and organizational structure. This will be followed by a discussion on the future of laboratories and the trend toward regional laboratories owned by hospital consortiums.

  2. REVISED STREAM CODE AND WASP5 BENCHMARK

    SciTech Connect

    Chen, K

    2005-05-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one-dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked with the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by the WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by the WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls.
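
    The governing equation being approximated is, in a standard one-dimensional form (the exact WASP5 formulation adds further kinetic terms),

        \frac{\partial C}{\partial t} + U \frac{\partial C}{\partial x} = E \frac{\partial^2 C}{\partial x^2} - k C

    where C is pollutant concentration, U the stream velocity, E the longitudinal dispersion coefficient, and k a first-order decay rate; pure advective transport is the special case E = k = 0.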

  3. Benchmarking database performance for genomic data.

    PubMed

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, there is at present no comprehensive built-in database algorithm to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm, pair-wise overlaps of >1000 datasets of transcription factor binding sites and histone marks collected from previous publications were reported, and it was found that HNF4G significantly co-locates with cohesin subunit STAG1 (SA1).
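
    The core genomic operation described above is the interval-overlap test. A minimal sketch of that test (illustrative only, not the published RegMap SQL):

        # Half-open genomic intervals [start, end) on the same chromosome
        # overlap exactly when each one starts before the other ends.
        def overlaps(a_start, a_end, b_start, b_end):
            """True if [a_start, a_end) and [b_start, b_end) intersect."""
            return a_start < b_end and b_start < a_end

        print(overlaps(100, 200, 150, 250))  # True: 50 bp shared
        print(overlaps(100, 200, 200, 300))  # False: adjacent, half-open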

  4. Simple mathematical law benchmarks human confrontations

    NASA Astrophysics Data System (ADS)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.

  5. Improving Mass Balance Modeling of Benchmark Glaciers

    NASA Astrophysics Data System (ADS)

    van Beusekom, A. E.; March, R. S.; O'Neel, S.

    2009-12-01

    The USGS monitors long-term glacier mass balance at three benchmark glaciers in different climate regimes. The coastal and continental glaciers are represented by Wolverine and Gulkana Glaciers in Alaska, respectively. Field measurements began in 1966 and continue. We have reanalyzed the published balance time series with more modern methods and recomputed reference surface and conventional balances. Addition of the most recent data shows a continuing trend of mass loss. We compare the updated balances to the previously accepted balances and discuss differences. Not all balance quantities can be determined from the field measurements. For surface processes, we model missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernize the traditional degree-day model as well as derive new degree-day factors in an effort to closer match the balance time series and thus better predict the future state of the benchmark glaciers. For subsurface processes, we model the refreezing of meltwater for internal accumulation. We examine the sensitivity of the balance time series to the subsurface process of internal accumulation, with the goal of determining the best way to include internal accumulation into balance estimates.
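
    A classical degree-day model of the kind being modernized here is small enough to state directly. A minimal sketch, with a hypothetical degree-day factor rather than a USGS calibration:

        # Ablation (mm water equivalent) = factor * sum of positive
        # daily mean temperatures ("positive degree-days").
        def ablation_mm_we(daily_mean_temps_c, degree_day_factor=4.5):
            pdd = sum(t for t in daily_mean_temps_c if t > 0)
            return degree_day_factor * pdd

        melt = ablation_mm_we([-2.0, 1.5, 3.0, 4.2, -0.5, 6.1])
        print(f"Modelled ablation: {melt:.1f} mm w.e.")  # 66.6 mm w.e.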

  6. Parallel Ada benchmarks for the SVMS

    NASA Technical Reports Server (NTRS)

    Collard, Philippe E.

    1990-01-01

    The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through the tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently the Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R and D effort was to provide the SVMS project team with a version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed to measure Ada tasking efficiency on parallel architectures and to determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools in the development of the SVMS architecture.

  7. Experimentally Relevant Benchmarks for Gyrokinetic Codes

    NASA Astrophysics Data System (ADS)

    Bravenec, Ronald

    2010-11-01

    Although benchmarking of gyrokinetic codes has been performed in the past, e.g., The Numerical Tokamak, The Cyclone Project, The Plasma Microturbulence Project, and various informal activities, these efforts have typically employed simple plasma models. For example, the Cyclone "base case" assumed shifted-circle flux surfaces, no magnetic transport, adiabatic electrons, no collisions nor impurities, ρi << a (ρi the ion gyroradius and a the minor radius), and no ExB flow shear. This work presents comparisons of linear frequencies and nonlinear fluxes from GYRO and GS2 with none of the above approximations except ρi << a and no ExB flow shear. The comparisons are performed at two radii of a DIII-D plasma, one in the confinement region (r/a = 0.5) and the other closer to the edge (r/a = 0.7). Many of the plasma parameters differ by a factor of two between these two locations. Good agreement between GYRO and GS2 is found when neglecting collisions. However, differences are found when including e-i collisions (Lorentz model). The sources of the discrepancy are unknown as of yet. Nevertheless, two collisionless benchmarks have been formulated with considerably different plasma parameters. Acknowledgements to J. Candy, E. Belli, and M. Barnes.

  8. The NAS Parallel Benchmarks 2.1 Results

    NASA Technical Reports Server (NTRS)

    Saphir, William; Woo, Alex; Yarrow, Maurice

    1996-01-01

    We present performance results for version 2.1 of the NAS Parallel Benchmarks (NPB) on the following architectures: IBM SP2/66 MHz; SGI Power Challenge Array/90 MHz; Cray Research T3D; and Intel Paragon. The NAS Parallel Benchmarks are a widely-recognized suite of benchmarks originally designed to compare the performance of highly parallel computers with that of traditional supercomputers.

  9. An annotated bibliography of selected books and articles on benchmarking

    SciTech Connect

    Allan, F.C.

    1992-01-01

    This bibliography contains 34 references concerning utilizing benchmarking in the management of businesses. Books and articles are both cited. Methods for gathering and utilizing information are emphasized. (GHH)

  10. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; Fatoohi, Rod

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.

  11. Hospital Energy Benchmarking Guidance - Version 1.0

    SciTech Connect

    Singer, Brett C.

    2009-09-08

    This document describes an energy benchmarking framework for hospitals. The document is organized as follows. The introduction provides a brief primer on benchmarking and its application to hospitals. The next two sections discuss special considerations including the identification of normalizing factors. The presentation of metrics is preceded by a description of the overall framework and the rationale for the grouping of metrics. Following the presentation of metrics, a high-level protocol is provided. The next section presents draft benchmarks for some metrics; benchmarks are not available for many metrics owing to a lack of data. This document ends with a list of research needs for further development.

  12. Using benchmarks for radiation testing of microprocessors and FPGAs

    SciTech Connect

    Quinn, Heather; Robinson, William H.; Rech, Paolo; Aguirre, Miguel; Barnard, Arno; Desogus, Marco; Entrena, Luis; Garcia-Valderas, Mario; Guertin, Steven M.; Kaeli, David; Kastensmidt, Fernanda Lima; Kiddie, Bradley T.; Sanchez-Clemente, Antonio; Reorda, Matteo Sonza; Sterpone, Luca; Wirthlin, Michael

    2015-12-01

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. We describe the development process and report neutron test data for the hardware and software benchmarks.

  13. Benchmarking in national health service procurement in Scotland.

    PubMed

    Walker, Scott; Masson, Ron; Telford, Ronnie; White, David

    2007-11-01

    The paper reports the results of a study on benchmarking activities undertaken by the procurement organization within the National Health Service (NHS) in Scotland, namely National Procurement (previously Scottish Healthcare Supplies Contracts Branch). NHS performance is of course politically important, and benchmarking is increasingly seen as a means to improve performance, so the study was carried out to determine if the current benchmarking approaches could be enhanced. A review of the benchmarking activities used by the private sector, local government and NHS organizations was carried out to establish a framework of the motivations, benefits, problems and costs associated with benchmarking. This framework was used to carry out the research through case studies and a questionnaire survey of NHS procurement organizations both in Scotland and other parts of the UK. Nine of the 16 Scottish Health Boards surveyed reported carrying out benchmarking during the last three years. The findings of the research were that there were similarities in approaches between local government and NHS Scotland Health, but differences between NHS Scotland and other UK NHS procurement organizations. Benefits were seen as significant and it was recommended that National Procurement should pursue the formation of a benchmarking group with members drawn from NHS Scotland and external benchmarking bodies to establish measures to be used in benchmarking across the whole of NHS Scotland. PMID:17958971

  14. Towards Automated Benchmarking of Atomistic Forcefields: Neat Liquid Densities and Static Dielectric Constants from the ThermoML Data Archive

    PubMed Central

    Beauchamp, Kyle A.; Behr, Julie M.; Rustenburg, Ariën S.; Bayly, Christopher I.; Kroenlein, Kenneth; Chodera, John D.

    2015-01-01

    Atomistic molecular simulations are a powerful way to make quantitative predictions, but the accuracy of these predictions depends entirely on the quality of the forcefield employed. While experimental measurements of fundamental physical properties offer a straightforward approach for evaluating forcefield quality, the bulk of this information has been tied up in formats that are not machine-readable. Compiling benchmark datasets of physical properties from non-machine-readable sources requires substantial human effort and is prone to the accumulation of human errors, hindering the development of reproducible benchmarks of forcefield accuracy. Here, we examine the feasibility of benchmarking atomistic forcefields against the NIST ThermoML data archive of physicochemical measurements, which aggregates thousands of experimental measurements in a portable, machine-readable, self-annotating IUPAC-standard format. As a proof of concept, we present a detailed benchmark of the generalized Amber small molecule forcefield (GAFF) using the AM1-BCC charge model against experimental measurements (specifically bulk liquid densities and static dielectric constants at ambient pressure) automatically extracted from the archive, and discuss the extent of data available for use in larger scale (or continuously performed) benchmarks. The results of even this limited initial benchmark highlight a general problem with fixed-charge forcefields in the representation of low dielectric environments such as those seen in binding cavities or biological membranes. PMID:26339862
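
    The comparison step in such a benchmark is straightforward once the experimental values are machine-readable. A minimal sketch with invented numbers (not ThermoML or GAFF results):

        import math

        # Hypothetical experimental vs. simulated liquid densities, g/cm3.
        expt = {"hexane": 0.6548, "ethanol": 0.7893, "toluene": 0.8669}
        sim = {"hexane": 0.6601, "ethanol": 0.7712, "toluene": 0.8805}

        # RMS relative error across the benchmark set.
        errs = [(sim[m] - expt[m]) / expt[m] for m in expt]
        rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
        print(f"RMS relative density error: {100 * rmse:.2f}%")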

  15. Pediatric Drug Safety Surveillance in FDA-AERS: A Description of Adverse Events from GRiP Project.

    PubMed

    de Bie, Sandra; Ferrajolo, Carmen; Straus, Sabine M J M; Verhamme, Katia M C; Bonhoeffer, Jan; Wong, Ian C K; Sturkenboom, Miriam C J M

    2015-01-01

    Individual case safety reports (ICSRs) are a cornerstone in drug safety surveillance. The knowledge on using these data specifically for children is limited. We studied characteristics of pediatric ICSRs reported to the US Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS). Publicly available ICSRs reported in children (0-18 years) to FAERS were downloaded from the FDA website for the period Jan 2004-Dec 2011. Characteristics of these ICSRs, including the reported drugs and events, were described and stratified by age-groups. We included 106,122 pediatric ICSRs (55% boys and 58% from United States) with a median of 1 drug [range 1-3] and 1 event [1-2] per ICSR. Mean age was 9.1 years. 90% were submitted through expedited 15-day reporting (65%) or periodic reporting (25%), and 10% by non-manufacturers. The proportion and type of pediatric ICSRs reported were relatively stable over time. Most commonly reported drug classes by decreasing frequency were 'nervous system drugs' (58%), 'antineoplastics' (32%) and 'anti-infectives' (25%). Most commonly reported system organ classes were 'general' (13%), 'nervous system' (12%) and 'psychiatric' (11%) disorders. Duration of use could be calculated for 19.7% of the reported drugs, of which 14.5% concerned drugs being used long-term (>6 months). Knowledge of the distribution of the drug classes and events within FAERS is a key first step in developing pediatric specific methods for drug safety surveillance. Because of several differences in terms of drugs and events among age-categories, drug safety signal detection analysis in children needs to be stratified by each age group. PMID:26090678
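
    The abstract does not name a disproportionality statistic, but a common choice for spontaneous-report data, computed per age stratum as the authors recommend, is the reporting odds ratio (ROR). A hypothetical sketch:

        # ROR from a 2x2 table: a = drug & event, b = drug & other events,
        # c = other drugs & event, d = other drugs & other events.
        def ror(a, b, c, d):
            return (a * d) / (b * c)

        # Invented counts for two age strata, for illustration only.
        strata = {"0-2 years": (12, 340, 55, 9800),
                  "12-18 years": (30, 610, 140, 15200)}
        for age, counts in strata.items():
            print(f"{age}: ROR = {ror(*counts):.2f}")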

  16. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    PubMed

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

    Face recognition with still face images has been widely studied, while research on video-based face recognition is relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively taking video or still image as query or target. To the best of our knowledge, few datasets and evaluation protocols have been benchmarked for all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more efforts, and our COX Face DB is a good benchmark database for evaluation.
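
    The abstract does not detail PSCL, but the generic idea of point-to-set matching can be illustrated with a least-squares projection: treat a video as the span of its frame features and score a still image by its projection residual. A sketch under those assumptions (not the published method):

        import numpy as np

        rng = np.random.default_rng(0)
        frames = rng.normal(size=(50, 128))  # hypothetical video: 50 frames, 128-d features
        query = rng.normal(size=128)         # hypothetical still-image feature

        # Distance from the query point to the linear span of the frame set.
        coeffs, *_ = np.linalg.lstsq(frames.T, query, rcond=None)
        residual = np.linalg.norm(query - frames.T @ coeffs)
        print(f"Point-to-set distance: {residual:.3f}")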

  17. Benchmarking novel approaches for modelling species range dynamics

    PubMed Central

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.

    2016-01-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches

  18. Benchmarking and improving microbial-explicit soil biogeochemistry models

    NASA Astrophysics Data System (ADS)

    Wieder, W. R.; Bonan, G. B.; Hartman, M. D.; Sulman, B. N.; Wang, Y.

    2015-12-01

    Earth system models that are designed to project future carbon (C) cycle - climate feedbacks exhibit notably poor representation of soil biogeochemical processes and generate highly uncertain projections about the fate of the largest terrestrial C pool on Earth. Given these shortcomings there has been intense interest in soil biogeochemical model development, but parallel efforts to create the analytical tools to characterize, improve and benchmark these models have thus far lagged behind. A long-term goal of this work is to develop a framework to compare, evaluate and improve the process-level representation of soil biogeochemical models that could be applied in global land surface models. Here, we present a newly developed global model test bed that is built on the Carnegie Ames Stanford Approach model (CASA-CNP) that can rapidly integrate different soil biogeochemical models that are forced with consistent driver datasets. We focus on evaluation of two microbial explicit soil biogeochemical models that function at global scales: the MIcrobial-MIneral Carbon Stabilization model (MIMICS) and Carbon, Organisms, Rhizosphere, and Protection in the Soil Environment (CORPSE) model. Using the global model test bed coupled to MIMICS and CORPSE we quantify the uncertainty in potential C cycle - climate feedbacks that may be expected with these microbial explicit models, compared with a conventional first-order, linear model. By removing confounding variation of climate and vegetation drivers, our model test bed allows us to isolate key differences among different soil model structure and parameterizations that can be evaluated with further study. Specifically, the global test bed also identifies key parameters that can be estimated using cross-site observations. In global simulations model results are evaluated with steady state litter, microbial biomass, and soil C pools and benchmarked against independent globally gridded data products.

  19. Benchmarking novel approaches for modelling species range dynamics.

    PubMed

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches

  20. Benchmarking Big Data Systems and the BigData Top100 List.

    PubMed

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TCP), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  1. Benchmarking Big Data Systems and the BigData Top100 List.

    PubMed

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TCP), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process. PMID:27447039

  2. LHC benchmarks from flavored gauge mediation

    NASA Astrophysics Data System (ADS)

    Ierushalmi, N.; Iwamoto, S.; Lee, G.; Nepomnyashy, V.; Shadmi, Y.

    2016-07-01

    We present benchmark points for LHC searches from flavored gauge mediation models, in which messenger-matter couplings give flavor-dependent squark masses. Our examples include spectra in which a single squark — stop, scharm, or sup — is much lighter than all other colored superpartners, motivating improved quark flavor tagging at the LHC. Many examples feature flavor mixing; in particular, large stop-scharm mixing is possible. The correct Higgs mass is obtained in some examples by virtue of the large stop A-term. We also revisit the general flavor and CP structure of the models. Even though the A-terms can be substantial, their contributions to EDMs are very suppressed, because of the particular dependence of the A-terms on the messenger coupling. This holds regardless of the messenger-coupling texture. More generally, the special structure of the soft terms often leads to stronger suppression of flavor- and CP-violating processes, compared to naive estimates.

  3. Gatemon Benchmarking and Two-Qubit Operation

    NASA Astrophysics Data System (ADS)

    Casparis, Lucas; Larsen, Thorvald; Olsen, Michael; Petersson, Karl; Kuemmeth, Ferdinand; Krogstrup, Peter; Nygard, Jesper; Marcus, Charles

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize field effect tunability singular to semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors of ~0.5 % for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent SWAP operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of ~91 %, demonstrating the potential of gatemon qubits for building scalable quantum processors. We acknowledge financial support from Microsoft Project Q and the Danish National Research Foundation.

  4. NASA Indexing Benchmarks: Evaluating Text Search Engines

    NASA Technical Reports Server (NTRS)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing the index and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small- to medium-sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.

  5. Gatemon Benchmarking and Two-Qubit Operations.

    PubMed

    Casparis, L; Larsen, T W; Olsen, M S; Kuemmeth, F; Krogstrup, P; Nygård, J; Petersson, K D; Marcus, C M

    2016-04-15

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize field effect tunability characteristic of semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors below 0.7% for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent swap operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of 91%, demonstrating the potential of gatemon qubits for building scalable quantum processors. PMID:27127949
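
    The randomized benchmarking analysis behind numbers like these follows a standard form, restated here as background rather than quoted from the paper: the average fidelity over random Clifford sequences of length m decays as

        F(m) = A p^m + B

    where A and B absorb state-preparation and measurement errors, and the average error per gate for a single qubit (Hilbert-space dimension d = 2) is r = (1 - p)(1 - 1/d) = (1 - p)/2.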

  6. A method for benchmarking CT scanners.

    PubMed

    Al-Farsi, A; Michael, G; Thiele, D

    2005-09-01

    This study involved the development of an objective method to compare the performance of five CT scanners for the purpose of benchmarking. The method used to assess the scanners was to determine the dose-normalised noise at a spatial resolution of 5.5 cm⁻¹. This gave a dose-normalised percent noise between 0.37% and 0.76%. The scanners were also assessed for radiation dose to patients undergoing abdomen and head CT examinations. Patients' dose-length product (DLP) for the abdomen clinical examinations varied from 305 to 685 mGy-cm, and for the head clinical examinations from 333 to 900 mGy-cm. The study results demonstrated that the comparison of dose and spatial resolution normalised percent noise levels is a useful method of comparing CT scanner performance.
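
    The abstract does not give the normalization explicitly; one natural choice, assuming quantum-noise-limited imaging where noise scales as the inverse square root of dose, is the dose-independent figure of merit

        \sigma_{\mathrm{norm}} = \sigma \sqrt{D}

    with \sigma the percent image noise measured at the matched spatial resolution and D a dose surrogate such as CTDIvol; the paper's exact definition may differ.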

  7. Gatemon Benchmarking and Two-Qubit Operations

    NASA Astrophysics Data System (ADS)

    Casparis, L.; Larsen, T. W.; Olsen, M. S.; Kuemmeth, F.; Krogstrup, P.; Nygård, J.; Petersson, K. D.; Marcus, C. M.

    2016-04-01

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize field effect tunability characteristic of semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors below 0.7% for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent swap operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of 91%, demonstrating the potential of gatemon qubits for building scalable quantum processors.

  8. Ground truth and benchmarks for performance evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.

    2003-09-01

    Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The range of fundamental problems includes a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Position System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.

  9. Benchmarking Competitiveness: Is America's Technological Hegemony Waning?

    NASA Astrophysics Data System (ADS)

    Lubell, Michael S.

    2006-03-01

    For more than half a century, by almost every standard, the United States has been the world's leader in scientific discovery, innovation and technological competitiveness. To a large degree, that dominant position stemmed from the circumstances our nation inherited at the conclusion of World War Two: we were, in effect, the only major nation left standing that did not have to repair serious war damage. And we found ourselves with an extraordinary science and technology base that we had developed for military purposes. We had the laboratories -- industrial, academic and government -- as well as the scientific and engineering personnel -- many of them immigrants who had escaped from war-time Europe. What remained was to convert the wartime machinery into peacetime uses. We adopted private and public policies that accomplished the transition remarkably well, and we have prospered ever since. Our higher education system, our protection of intellectual property rights, our venture capital system, our entrepreneurial culture and our willingness to commit government funds for the support of science and engineering have been key components to our success. But recent competitiveness benchmarks suggest that our dominance is waning rapidly, in part because other nations have begun to emulate our successful model, in part because globalization has "flattened" the world and in part because we have been reluctant to pursue the public policies that are necessary to ensure our leadership. We will examine these benchmarks and explore the policy changes that are needed to keep our nation's science and technology enterprise vibrant and our economic growth on an upward trajectory.

  10. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    Madhavan, Raj; Messina, Elena; Tunstel, Edward

    2009-09-01

    To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; and furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. The book presents

  11. A Benchmark Study on Casting Residual Stress

    SciTech Connect

    Johnson, Eric M.; Watkins, Thomas R; Schmidlin, Joshua E; Dutler, S. A.

    2012-01-01

    Stringent regulatory requirements, such as Tier IV norms, have pushed cast iron for automotive applications to its limit. Castings need to be designed with closer tolerances by incorporating hitherto-neglected factors, such as residual stresses arising from thermal gradients and from phase and microstructural changes during solidification. Residual stresses were earlier neglected in casting designs by incorporating large factors of safety. Experimental measurement of residual stress in a casting through neutron or X-ray diffraction, sectioning or hole drilling, or magnetic, electric or photoelastic measurements is a very difficult and time-consuming exercise. A detailed multi-physics model, incorporating thermo-mechanical and phase transformation phenomena, provides an attractive alternative for assessing the residual stresses generated during casting. However, before relying on the simulation methodology, it is important to rigorously validate the prediction capability by comparing it to experimental measurements. In the present work, a benchmark study was undertaken for casting residual stress measurements through neutron diffraction, which was subsequently used to validate the accuracy of simulation prediction. The stress lattice specimen geometry was designed such that subsequent castings would generate adequate residual stresses during solidification and cooling, without any cracks. The residual stresses in the cast specimen were measured using neutron diffraction. Considering the difficulty in accessing the neutron diffraction facility, these measurements can be considered as a benchmark for casting simulation validations. Simulations were performed using the identical specimen geometry and casting conditions for prediction of residual stresses. The simulation predictions were found to agree well with the experimentally measured residual stresses. The experimentally validated model can be subsequently used to predict residual stresses in different cast

  12. Benchmarking homogenization algorithms for monthly data

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2012-01-01

    The COST (European Cooperation in Science and Technology) Action ES0601, Advances in Homogenization Methods of Climate Series: An Integrated Approach (HOME), has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing-data periods, and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline, at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous values at various averaging scales, (ii) the error in linear trend estimates, and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered; contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data…
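
    Two of the metrics named above are simple to state. As a minimal sketch (not the HOME project's actual scoring code), the centered root mean square error and the linear-trend error of a homogenized series against its true counterpart can be computed as follows:

        import numpy as np

        def centered_rmse(homogenized, truth):
            """RMS difference after removing each series' own mean."""
            a = homogenized - np.mean(homogenized)
            b = truth - np.mean(truth)
            return float(np.sqrt(np.mean((a - b) ** 2)))

        def trend_error(homogenized, truth):
            """Difference in OLS linear trend, in units per time step."""
            t = np.arange(len(truth))
            return float(np.polyfit(t, homogenized, 1)[0] - np.polyfit(t, truth, 1)[0])

        # Usage: score each station series, or the network-average regional series.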

  13. Information-Theoretic Benchmarking of Land Surface Models

    NASA Astrophysics Data System (ADS)

    Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong

    2016-04-01

    Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al. [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. We here extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and boundary conditions, the last of which describe the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC), looking specifically at their ability to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed…
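
    The information-theoretic machinery rests on estimates of mutual information between time series. The following histogram-based estimator is a generic sketch of that one ingredient, not the benchmarking formalism of the cited references, which involves further conditioning and sampling considerations:

        import numpy as np

        def mutual_information(x, y, bins=20):
            """Histogram estimate of I(X;Y) in nats between two 1-D samples."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = pxy / pxy.sum()
            px = pxy.sum(axis=1, keepdims=True)   # marginal of x
            py = pxy.sum(axis=0, keepdims=True)   # marginal of y
            mask = pxy > 0
            return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))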

  14. Benchmarks for Psychotherapy Efficacy in Adult Major Depression

    ERIC Educational Resources Information Center

    Minami, Takuya; Wampold, Bruce E.; Serlin, Ronald C.; Kircher, John C.; Brown, George S.

    2007-01-01

    This study estimates pretreatment-posttreatment effect size benchmarks for the treatment of major depression in adults that may be useful in evaluating psychotherapy effectiveness in clinical practice. Treatment efficacy benchmarks for major depression were derived for 3 different types of outcome measures: the Hamilton Rating Scale for Depression…
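
    The benchmarks rest on pretreatment-posttreatment effect sizes. A common definition, assumed here for illustration (the study's exact estimator also involves aggregation across studies and bias corrections), divides the mean symptom change by the pretreatment standard deviation:

        import statistics

        def prepost_effect_size(pre, post):
            """d = (mean_pre - mean_post) / sd_pre; positive means improvement."""
            return (statistics.mean(pre) - statistics.mean(post)) / statistics.stdev(pre)

        # Hypothetical Hamilton Rating Scale for Depression scores:
        print(round(prepost_effect_size([24, 22, 27, 25], [12, 15, 10, 14]), 2))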

  15. Nomenclatural Benchmarking: The roles of digital typification and telemicroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The process of nomenclatural benchmarking is the examination of type specimens of all available names to ascertain which currently accepted species the specimen bearing the name falls within. We propose a strategy for addressing four challenges for nomenclatural benchmarking. First, there is the mat...

  16. Higher Education Ranking and Leagues Tables: Lessons Learned from Benchmarking

    ERIC Educational Resources Information Center

    Proulx, Roland

    2007-01-01

    The paper intends to contribute to the debate on ranking and league tables by adopting a critical approach to ranking methodologies from the point of view of a university benchmarking exercise. The absence of a strict benchmarking exercise in the ranking process has been, in the opinion of the author, one of the major problems encountered in the…

  17. Practical Considerations when Using Benchmarking for Accountability in Higher Education

    ERIC Educational Resources Information Center

    Achtemeier, Sue D.; Simpson, Ronald D.

    2005-01-01

    The qualitative study on which this article is based examined key individuals' perceptions, both within a research university community and beyond in its external governing board, of how to improve benchmarking as an accountability method in higher education. Differing understanding of benchmarking revealed practical implications for using it as…

  18. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... benefits available under base benchmark plans described in 45 CFR 156.100. (1) States wishing to elect... 42 Public Health 4 2014-10-01 2014-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND...

  19. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... benchmark plans described in 45 CFR 156.100. (1) States wishing to elect Secretary-approved coverage should... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND...

  20. The Learning Organisation: Results of a Benchmarking Study.

    ERIC Educational Resources Information Center

    Zairi, Mohamed

    1999-01-01

    Learning in corporations was assessed using these benchmarks: core qualities of creative organizations, characteristic of organizational creativity, attributes of flexible organizations, use of diversity and conflict, creative human resource management systems, and effective and successful teams. These benchmarks are key elements of the learning…

  1. 29 CFR 1952.263 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.263 Section 1952.263... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  2. 29 CFR 1952.153 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.153 Section 1952.153....153 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were...

  3. 29 CFR 1952.103 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.103 Section 1952.103... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  4. What Are ACT's College Readiness Benchmarks? Issues in College Readiness

    ERIC Educational Resources Information Center

    ACT, Inc., 2010

    2010-01-01

    This article presents questions and answers about ACT's College Readiness Benchmarks. ACT's College Readiness Benchmarks are the minimum ACT test scores required for students to have a high probability of success in credit-bearing college courses--English Composition, social sciences courses, College Algebra, or Biology. Colleges can use the…

  5. 29 CFR 1952.363 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.363 Section 1952.363... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  6. Developing Benchmarks to Measure Teacher Candidates' Performance

    ERIC Educational Resources Information Center

    Frazier, Laura Corbin; Brown-Hobbs, Stacy; Palmer, Barbara Martin

    2013-01-01

    This paper traces the development of teacher candidate benchmarks at one liberal arts institution. Begun as a classroom assessment activity over ten years ago, the benchmarks, through collaboration with professional development school partners, now serve as a primary measure of teacher candidates' performance in the final phases of the…

  7. What Are the ACT College Readiness Benchmarks? Information Brief

    ERIC Educational Resources Information Center

    ACT, Inc., 2013

    2013-01-01

    The ACT College Readiness Benchmarks are the minimum ACT® college readiness assessment scores required for students to have a high probability of success in credit-bearing college courses--English Composition, social sciences courses, College Algebra, or Biology. This report identifies the College Readiness Benchmarks on the ACT Compass scale…

  8. The Use of Educational Standards and Benchmarks in Indicator Publications

    ERIC Educational Resources Information Center

    Thomas, Sally; Peng, Wen-Jung

    2004-01-01

    This paper examines the use of educational standards and benchmarks in international indicator and other relevant policy publications, particularly those originating in the UK. The authors first examine what is meant by educational standards and benchmarks and how these concepts are defined. Then, they address the use of standards and benchmarks…

  9. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the...

  10. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 11 2011-01-01 2011-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the...

  11. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 11 2014-01-01 2014-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the...

  12. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 11 2013-01-01 2013-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the...

  13. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 11 2012-01-01 2012-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the...

  14. Benchmarking Mentoring Practices: A Case Study in Turkey

    ERIC Educational Resources Information Center

    Hudson, Peter; Usak, Muhammet; Savran-Gencer, Ayse

    2010-01-01

    Throughout the world standards have been developed for teaching in particular key learning areas. These standards also present benchmarks that can assist to measure and compare results from one year to the next. There appears to be no benchmarks for mentoring. An instrument devised to measure mentees' perceptions of their mentoring in primary…

  15. Enhancing knowledge of rangeland ecological processes with benchmark ecological sites

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A benchmark ecological site is one that has the greatest potential to yield data and information about ecological functions, processes, and the effects of management or climate changes on a broad area or critical ecological zone. A benchmark ecological site represents other similar sites in a major ...

  16. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    DOE PAGES

    Bess, John D.; Montierth, Leland; Köberl, Oliver; Snoj, Luka

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
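
    The acceptance statements above ("within 1%", "within 3σ") reduce to two ratios. A minimal sketch with illustrative numbers, not values from the evaluation:

        def compare_keff(k_calc, k_bench, sigma_bench):
            """Relative deviation (C/E - 1) and the deviation in benchmark sigmas."""
            rel = k_calc / k_bench - 1.0
            z = (k_calc - k_bench) / sigma_bench
            return rel, z

        rel, z = compare_keff(k_calc=1.0112, k_bench=1.0050, sigma_bench=0.0038)
        print(f"C/E - 1 = {rel:+.4%}, deviation = {z:+.1f} sigma")  # +0.62%, +1.6 sigma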

  17. 29 CFR 1952.363 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.363 Section 1952.363... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  18. 29 CFR 1952.263 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.263 Section 1952.263... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  19. 29 CFR 1952.263 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.263 Section 1952.263... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  20. 29 CFR 1952.153 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.153 Section 1952.153....153 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were...

  1. 29 CFR 1952.153 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.153 Section 1952.153....153 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were...

  2. 29 CFR 1952.363 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.363 Section 1952.363... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  3. 29 CFR 1952.363 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.363 Section 1952.363... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  4. 29 CFR 1952.153 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.153 Section 1952.153....153 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were...

  5. 29 CFR 1952.103 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.103 Section 1952.103... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  6. 29 CFR 1952.363 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.363 Section 1952.363... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  7. 29 CFR 1952.263 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.263 Section 1952.263... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  8. 29 CFR 1952.103 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.103 Section 1952.103... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  9. 29 CFR 1952.153 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.153 Section 1952.153....153 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were...

  10. 29 CFR 1952.103 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.103 Section 1952.103... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  11. 29 CFR 1952.263 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.263 Section 1952.263... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  12. 29 CFR 1952.103 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.103 Section 1952.103... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  13. A Competitive Benchmarking Study of Noncredit Program Administration.

    ERIC Educational Resources Information Center

    Alstete, Jeffrey W.

    1996-01-01

    A benchmarking project to measure administrative processes and financial ratios received 57 usable replies from 300 noncredit continuing education programs. Programs with strong financial surpluses were identified and their processes benchmarked (including response to inquiries, registrants, registrant/staff ratio, new courses, class size,…

  14. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    SciTech Connect

    Bess, John D.; Montierth, Leland; Köberl, Oliver; Snoj, Luka

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  15. Benchmarking with the BLASST Sessional Staff Standards Framework

    ERIC Educational Resources Information Center

    Luzia, Karina; Harvey, Marina; Parker, Nicola; McCormack, Coralie; Brown, Natalie R.

    2013-01-01

    Benchmarking as a type of knowledge-sharing around good practice within and between institutions is increasingly common in the higher education sector. More recently, benchmarking as a process that can contribute to quality enhancement has been deployed across numerous institutions with a view to systematising frameworks to assure and enhance the…

  16. Teaching Benchmark Strategy for Fifth-Graders in Taiwan

    ERIC Educational Resources Information Center

    Yang, Der-Ching; Lai, M. L.

    2013-01-01

    The key purpose of this study was to examine how we taught the use of the benchmark strategy for comparing fractions to fifth-graders in Taiwan. Twenty-six fifth graders from a public elementary school in southern Taiwan were selected to join this study. Results of this case study showed that students made considerable progress in the use of the benchmark strategy when comparing fractions…

  17. Benchmarking Soft Costs for PV Systems in the United States (Presentation)

    SciTech Connect

    Ardani, K.

    2012-06-01

    This paper presents results from the first U.S.-based data collection effort to quantify non-hardware, business-process costs for PV systems at the residential and commercial scales, using a bottom-up approach. Annual expenditure and labor-hour productivity data are analyzed to benchmark business process costs in the specific areas of (1) customer acquisition; (2) permitting, inspection, and interconnection; (3) labor costs of third-party financing; and (4) installation labor.

  18. Benchmark 2 - Springback of a draw / re-draw panel: Part A: Benchmark description

    NASA Astrophysics Data System (ADS)

    Carsley, John E.; Xia, Cedric; Yang, Lianxiang; Stoughton, Thomas B.; Xu, Siguang; Hartfield-Wünsch, Susan E.; Li, Jingjing; Chen, Zhong

    2013-12-01

    Numerical methods have been effectively implemented to predict the springback behavior of complex stampings, reducing die tryout through compensation and producing dimensionally accurate products after forming and trimming. However, accurately predicting the sprung shape of a panel formed by an initial draw followed by a restrike forming step remains a difficult challenge. The objective of this benchmark was to predict the sprung shape after stamping, restriking, and trimming a sheet metal panel. A simple, rectangular draw die was used to draw sheet metal to a set depth with a "larger" tooling radius, followed by additional drawing to a greater depth with a "smaller" tooling radius. Panels were sectioned along a centerline and released to allow measurement of thickness strain and of the position of the trim line in the sprung condition. Smaller radii were used in the restrike step in order to significantly alter the deformation and the sprung shape. These measurements were used to evaluate the numerical analysis predictions submitted by benchmark participants. Additional panels were drawn to "failure" during both the first draw and the re-draw in order to set the parameters for the springback trials and to demonstrate that sheet metal going through a restrike operation can exceed the conventional forming limits of a simple draw operation. Two sheet metals were used for this benchmark study: DP600 steel sheet and aluminum alloy 5182-O.

  19. REVIEW OF RESULTS FOR THE OECD/NEA PHASE VII BENCHMARK: STUDY OF SPENT FUEL COMPOSITIONS FOR LONG TERM DISPOSAL

    SciTech Connect

    Radulescu, Georgeta; Wagner, John C

    2011-01-01

    This paper summarizes the problem specification and compares participants' results for the OECD/NEA/WPNCS Expert Group on Burn-up Credit Criticality Safety Phase VII Benchmark, a study of spent fuel compositions for long-term disposal. The Phase VII benchmark was developed to study the ability of relevant computer codes and associated nuclear data to predict spent fuel isotopic compositions and corresponding keff values in a cask configuration over the time duration relevant to spent nuclear fuel (SNF) disposal. The benchmark was divided into two sets of calculations: (1) decay calculations out to 1,000,000 years for provided pressurized-water-reactor (PWR) UO2 discharged fuel compositions and (2) burnup credit criticality calculations for a representative cask model at selected time steps. Contributions from 15 organizations and companies in 10 countries were submitted to the Phase VII benchmark exercise. This paper provides a description of the Phase VII benchmark and detailed comparisons of the participants' isotopic compositions and keff values, which were calculated with a diversity of computer codes and nuclear data sets. Differences observed in the calculated time-dependent nuclide densities are attributed to different decay data or code-specific numerical approximations. The variability of the keff results is consistent with the evaluated uncertainty associated with cross-section data.
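
    The first set of calculations is, at bottom, repeated application of the radioactive-decay law over very long times. As a toy illustration of the time scale involved, for a single nuclide without decay chains or in-growth (unlike the ORIGEN-class codes used by participants):

        import math

        def decayed_density(n0, half_life_y, t_y):
            """N(t) = N0 * exp(-ln2 * t / T_half)."""
            return n0 * math.exp(-math.log(2.0) * t_y / half_life_y)

        # Relative Pu-239 density (half-life 2.411e4 y) out to the 1,000,000-year end point:
        for t in (0.0, 1e3, 1e5, 1e6):
            print(f"t = {t:9.0f} y   N/N0 = {decayed_density(1.0, 2.411e4, t):.3e}")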

  20. A new numerical benchmark of a freshwater lens

    NASA Astrophysics Data System (ADS)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale, and describes the formation and degradation over time of a freshwater lens of the kind found under real-world islands. An error analysis gave appropriate spatial and temporal discretizations of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and that includes realistic features of coastal aquifers or freshwater lenses had been lacking; this new benchmark was developed to fill that gap and is demonstrated to be suitable for testing variable-density groundwater models applied to saltwater intrusion investigations.
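
    The reported sensitivity of the interface position to density differences is consistent with the classical Ghyben-Herzberg relation, under which the interface depth below sea level scales with the water-table elevation by ρf/(ρs − ρf). A quick check of that amplification factor (not part of the benchmark itself):

        def ghyben_herzberg_depth(h_watertable_m, rho_f=1000.0, rho_s=1025.0):
            """Interface depth below sea level (m) for water-table elevation h (m)."""
            return rho_f / (rho_s - rho_f) * h_watertable_m

        print(ghyben_herzberg_depth(0.5))  # 20.0 m: a 2.5% density contrast amplifies 40x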

  1. Quantum benchmarks for pure single-mode Gaussian states.

    PubMed

    Chiribella, Giulio; Adesso, Gerardo

    2014-01-10

    Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large-scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian single-mode states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments. PMID:24483875

  2. A benchmark investigation on cleaning photomasks using wafer cleaning technologies

    NASA Astrophysics Data System (ADS)

    Kindt, Louis; Burnham, Jay; Marmillion, Pat

    2004-12-01

    As new technologies are developed for smaller linewidths, the specifications for mask cleanliness become much stricter: not only must the particle removal efficiency increase, but the largest allowable particle size decreases. Specifications for film thickness and surface roughness are becoming tighter, and consequently the integrity of these films must be maintained in order to preserve the functionality of the masks. Residual contamination remaining on the surface of the mask after cleaning can lead to subpellicle defect growth once the mask is exposed in a stepper environment. Only during the last several years has an increased focus been put on improving mask cleaning. Over the years, considerably more effort has been put into developing advanced wafer cleaning technologies; however, because of the small market involved with mask cleaning, wafer cleaning equipment vendors have been reluctant to invest time and effort into developing cleaning processes and adapting their toolsets to accommodate masks. With the advent of 300 mm processing, wafer cleaning tools are now more easily adapted to processing masks. These wafer cleaning technologies may offer a solution to the difficulties of mask cleaning and need to be investigated to determine whether they warrant continued development. This paper focuses on benchmarking advanced wafer cleaning technologies applied to mask cleaning. Ozonated water, hydrogenated water, supercritical fluids, and cryogenic cleaning were investigated with regard to stripping resist and cleaning particles from masks. Results including film thickness changes, surface contamination, and particle removal efficiency are discussed.

  3. A Benchmarking Initiative for Reactive Transport Modeling Applied to Subsurface Environmental Applications

    NASA Astrophysics Data System (ADS)

    Steefel, C. I.

    2015-12-01

    Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of the subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences; as a result, the complexity and detail of the subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed-form solutions are necessary and useful, but limited to situations far simpler than typical applications, which combine many physical and chemical processes, often in coupled form. In the absence of closed-form yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable for qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code (the reactive transport codes play a supporting role in this regard) but rather to use the codes to verify that a common solution of each problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally relevant benchmark problem that tests conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include (1) microbially mediated reactions, (2) isotopes, (3) multi-component diffusion, (4) uranium fate and transport, (5) metal mobility in mining-affected systems, and (6) waste repositories and related aspects.

  4. The challenge of benchmarking health systems: is ICT innovation capacity more systemic than organizational dependent?

    PubMed

    Lapão, Luís Velez

    2015-01-01

    The article by Catan et al. presents a benchmarking exercise comparing Israel and Portugal on the implementation of Information and Communication Technologies in the healthcare sector, with special attention given to e-Health and m-Health. The authors collected information via a set of interviews with key stakeholders. They compared two different cultures and societies, which have reached slightly different implementation outcomes. Although the comparison is very enlightening, it is also challenging. Benchmarking exercises present a set of challenges, such as the choice of methodologies and the assessment of the impact on organizational strategy. Precise benchmarking methodology is a valid tool for eliciting information about alternatives for improving health systems. However, many beneficial interventions, which benchmark as effective, fail to translate into meaningful healthcare outcomes across contexts. There is a relationship between results and the innovational and competitive environments. Differences in healthcare governance and financing models are well known, but little is known about their impact on Information and Communication Technology implementation; the article by Catan et al. provides interesting clues about this issue. Public systems (such as those of Portugal, the UK, Sweden, and Spain) present specific advantages and disadvantages concerning Information and Communication Technology development and implementation, while private systems based fundamentally on insurance packages (such as those of Israel, Germany, the Netherlands, and the USA) present a different set of advantages and disadvantages, especially a more open context for innovation. Challenging issues from both the Portuguese and Israeli cases are addressed. Clearly, more research is needed both on benchmarking methodologies and on ICT implementation strategies. PMID:26301085

  5. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...-benchmark plans described in 45 CFR 156.100. ... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark-equivalent health benefits coverage. 440... HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS SERVICES: GENERAL PROVISIONS Benchmark...

  6. The role of the hospital registry in achieving outcome benchmarks in cancer care.

    PubMed

    Greene, Frederick L; Gilkerson, Sharon; Tedder, Paige; Smith, Kathy

    2009-06-15

    The hospital registry is a valuable tool for evaluating quality benchmarks in cancer care. As pay-for-performance standards are adopted, the registry will assume a more dynamic and economically important role in the hospital setting. At Carolinas Medical Center, the registry has been a key instrument for comparison against state and national benchmarks and for program improvement in meeting standards in the care of breast and colon cancer. One of the significant successes of the American College of Surgeons Commission on Cancer (CoC) Hospital Approvals Program is its support of hospital registries, especially in small and midsized community hospitals throughout the United States. To become a member of the Hospital Approvals Program, a registry must be staffed appropriately and include analytic data for patients who have their primary diagnosis or treatment at the facility [1]. The current challenge for most hospitals is to prove that the registry has specific worth when many facets of care are not compensated. Unfortunately, a small number of hospitals have disbanded their registries because of the short-sighted conclusion that the registry and its personnel are a drain on the hospital system and do not generate revenue. In the present era of meeting benchmarks for care as a prelude to being paid by third-party and governmental agencies [2,3], a primary argument is that the registry can be revenue-enhancing by quantifying specific outcomes in cancer care. Without appropriate registry and abstracting capability, hospital leadership cannot measure the specific outcome benchmarks required in the era of "pay for performance" or "pay for participation".

  7. Benchmark Evaluation of Plutonium Nitrate Solution Arrays

    SciTech Connect

    M. A. Marshall; J. D. Bess

    2011-09-01

    In October and November of 1981, thirteen approach-to-critical experiments were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and critical array spacing with a tight-fitting Plexiglas® reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free acid molarity (H+) of 5.1. The plutonium was of low 240Pu content (2.9 wt.%). These experiments were performed to fill a gap in experimental data regarding criticality limits for storing and handling arrays of Pu solution in reprocessing facilities. Of the thirteen approach-to-critical experiments, eleven resulted in extrapolations to critical configurations. Four of the approaches were extrapolated to the critical number of bottles; these were not evaluated further due to the large uncertainty associated with modeling a fraction of a bottle. The remaining seven approaches were extrapolated to critical array spacings of 3×4 and 4×4 arrays; these seven critical configurations were evaluated for inclusion as acceptable benchmark experiments in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Detailed and simple models of these configurations were created, and the associated bias of these simplifications was determined to range from 0.00116 to 0.00162 ± 0.00006 Δkeff. Monte Carlo analysis of all models was completed using MCNP5 with ENDF/B-VII.0 neutron cross-section libraries. A thorough uncertainty analysis of all critical, geometric, and material parameters was performed using parameter…

  8. Benchmarking thermal neutron scattering in graphite

    NASA Astrophysics Data System (ADS)

    Zhou, Tong

    A slowing-down-time experiment was designed and performed at the Oak Ridge National Laboratory (ORNL), using the Oak Ridge Electron Linear Accelerator (ORELA) as a neutron source, to study neutron thermalization in graphite at room and higher temperatures. The MCNP5 code was utilized to simulate the detector responses and help optimize the experimental design, including the size of the graphite assembly, the furnace, the shielding system, and the detector positions. To facilitate this analysis, MCNP5 version 1.30 was modified to enable perturbation calculations using point-detector-type tallies. Using the modified MCNP5 code, the sensitivity of the experimental models to the graphite total thermal neutron cross sections was studied to optimize the design of the experiment. Measurements of the slowing-down-time spectrum in graphite were performed at room temperature for a 70×70×70 cm graphite pile using a Li-6 scintillator and a U-235 fission counter at different locations. The measurements were directly compared to Monte Carlo simulations using different graphite thermal neutron scattering cross-section libraries; simulations based on the ENDF/B-VI graphite library were found to have a 30%-40% disagreement with the measurements. In addition to the graphite SDT experiment, which provided data in the energy region above the graphite Bragg-cutoff energy, transmission experiments were performed for different types of graphite samples using the NIST 8.9 Å beam (located at NG-6) to investigate the energy region below the Bragg cutoff. Measurements confirmed that reactor-grade graphite, which is a two-phase material (crystalline graphite and binder (amorphous-like) carbon), has a different thermal neutron scattering cross section from pyrolytic graphite (crystalline graphite). The experiments presented in this work complement each other and provide an experimental data set which can be used to benchmark graphite thermal neutron scattering cross-section libraries that…
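
    Transmission measurements of this kind recover the total cross section from the attenuation law T = exp(−Nσt). A minimal sketch with placeholder numbers, not the NIST analysis chain:

        import math

        def total_cross_section_barns(transmission, n_per_cm3, thickness_cm):
            """sigma_tot from T = exp(-N * sigma * t); 1 barn = 1e-24 cm^2."""
            return -math.log(transmission) / (n_per_cm3 * thickness_cm) * 1e24

        # Pyrolytic graphite has roughly 1.13e23 atoms/cm^3; hypothetical T = 0.60
        # through a 1 cm sample:
        print(f"{total_cross_section_barns(0.60, 1.13e23, 1.0):.2f} barns")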

  9. Guidebook for Using the Tool BEST Cement: Benchmarking and Energy Savings Tool for the Cement Industry

    SciTech Connect

    Galitsky, Christina; Price, Lynn; Zhou, Nan; Fuqiu , Zhou; Huawen, Xiong; Xuemin, Zeng; Lan, Wang

    2008-07-30

    The Benchmarking and Energy Savings Tool (BEST) Cement is a process-based tool based on commercially available efficiency technologies used anywhere in the world that are applicable to the cement industry. This version has been designed for use in China. No actual cement facility with every single efficiency measure included in the benchmark is likely to exist; however, the benchmark sets a reasonable standard against which plants striving to be the best can compare themselves. The energy consumption of the benchmark facility differs due to differences in processing at a given cement facility; the tool accounts for most of these variables and allows the user to adapt the model to operational variables specific to his or her cement facility. Figure 1 of the guidebook shows the boundaries included in a plant modeled by BEST Cement. In order to model the benchmark, i.e., the most energy-efficient cement facility, so that it represents a facility similar to the user's cement plant, the user is first required to enter production variables in the input sheet (see Section 6 for more information on how to input variables). These variables allow the tool to estimate a benchmark facility that is similar to the user's cement plant, giving a better picture of the potential for that particular facility rather than benchmarking against a generic one. The required input variables include the following: (1) the amount of raw materials used in tonnes per year (limestone, gypsum, clay minerals, iron ore, blast furnace slag, fly ash, slag from other industries, natural pozzolans, limestone powder (used at the post-clinker stage), municipal wastes, and others), along with the amount of raw materials that are preblended (prehomogenized and proportioned) and crushed (in tonnes per year); (2) the amount of additives that are dried and ground (in tonnes per year); (3) the production of clinker (in tonnes per year) from each kiln by kiln type; (4) the amount of raw materials, coal and clinker that is ground by mill type (in tonnes per year); (5…
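
    The enumerated production variables amount to a structured configuration for the tool. Purely as an illustration of that structure, with hypothetical field names rather than the tool's actual input schema:

        from dataclasses import dataclass, field

        @dataclass
        class BestCementInputs:
            """Hypothetical subset of BEST-Cement-style inputs (tonnes per year)."""
            raw_materials: dict = field(default_factory=dict)
            preblended_t: float = 0.0      # prehomogenized and proportioned
            crushed_t: float = 0.0
            additives_dried_ground_t: float = 0.0
            clinker_by_kiln: dict = field(default_factory=dict)

        inputs = BestCementInputs(
            raw_materials={"limestone": 1.2e6, "gypsum": 5.0e4, "fly_ash": 8.0e4},
            preblended_t=1.0e6, crushed_t=1.2e6,
            additives_dried_ground_t=6.0e4,
            clinker_by_kiln={"preheater_precalciner": 9.0e5},
        )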

  10. Consistency and Magnitude of Differences in Reading Curriculum-Based Measurement Slopes in Benchmark versus Strategic Monitoring

    ERIC Educational Resources Information Center

    Mercer, Sterett H.; Keller-Margulis, Milena A.

    2015-01-01

    Differences in oral reading curriculum-based measurement (R-CBM) slopes based on two commonly used progress monitoring practices in field-based data were compared in this study. Semester-specific R-CBM slopes were calculated for 150 Grade 1 and 2 students who completed benchmark (i.e., 3 R-CBM probes collected 3 times per year) and strategic…

  11. Effect of noise correlations on randomized benchmarking

    NASA Astrophysics Data System (ADS)

    Ball, Harrison; Stace, Thomas M.; Flammia, Steven T.; Biercuk, Michael J.

    2016-02-01

    Among the most popular and well-studied quantum characterization, verification, and validation techniques is randomized benchmarking (RB), an important statistical tool used to characterize the performance of physical logic operations useful in quantum information processing. In this work we provide a detailed mathematical treatment of the effect of temporal noise correlations on the outcomes of RB protocols. We provide a fully analytic framework capturing the accumulation of error in RB expressed in terms of a three-dimensional random walk in "Pauli space." Using this framework we derive the probability density function describing RB outcomes (averaged over noise) for both Markovian and correlated errors, which we show is generally described by a Γ distribution with shape and scale parameters depending on the correlation structure. Long temporal correlations impart large nonvanishing variance and skew in the distribution towards high-fidelity outcomes—consistent with existing experimental data—highlighting potential finite-sampling pitfalls and the divergence of the mean RB outcome from worst-case errors in the presence of noise correlations. We use the filter-transfer function formalism to reveal the underlying reason for these differences in terms of effective coherent averaging of correlated errors in certain random sequences. We conclude by commenting on the impact of these calculations on the utility of single-metric approaches to quantum characterization, verification, and validation.
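
    In the standard zeroth-order RB model that such analyses build on, the sequence-averaged fidelity decays as F(m) = A p^m + B with sequence length m, and the average error per Clifford is r = (1 − p)(d − 1)/d. A minimal fitting sketch on synthetic single-qubit data; the correlated-noise treatment developed in the paper goes well beyond this:

        import numpy as np
        from scipy.optimize import curve_fit

        def rb_decay(m, A, p, B):
            """Zeroth-order randomized-benchmarking model F(m) = A * p**m + B."""
            return A * p**m + B

        rng = np.random.default_rng(0)
        m = np.arange(1, 200, 10)
        f_obs = rb_decay(m, 0.5, 0.995, 0.5) + rng.normal(0.0, 0.005, m.size)

        (A, p, B), _ = curve_fit(rb_decay, m, f_obs, p0=(0.5, 0.99, 0.5))
        r = (1.0 - p) * (2 - 1) / 2  # average error per Clifford for a qubit (d = 2)
        print(f"p = {p:.4f}, r = {r:.2e}")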

  12. Geant4 Computing Performance Benchmarking and Monitoring

    DOE PAGES

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.
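
    The multi-threaded figures of merit mentioned here, event throughput and memory gain versus thread count, are straightforward ratios. A minimal sketch with made-up profiling numbers, independent of the profiling tools (FAST, IgProf, Open|Speedshop) named above:

        def scaling_report(runs):
            """runs: {n_threads: (events, wall_seconds, resident_memory_mb)}."""
            ev1, t1, rss1 = runs[1]
            base_tp = ev1 / t1
            for n, (ev, t, rss) in sorted(runs.items()):
                tp = ev / t
                gain = n * rss1 / rss   # memory saved vs. n independent processes
                print(f"{n:2d} threads: {tp:7.1f} ev/s, speedup {tp / base_tp:4.1f}x, "
                      f"memory gain {gain:4.1f}x")

        scaling_report({1: (1000, 500.0, 900.0), 8: (8000, 540.0, 1600.0)})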

  13. Geant4 Computing Performance Benchmarking and Monitoring

    SciTech Connect

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  14. Geant4 Computing Performance Benchmarking and Monitoring

    NASA Astrophysics Data System (ADS)

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-01

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. The scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  15. Cove benchmark calculations using SAGUARO and FEMTRAN

    SciTech Connect

    Eaton, R.R.; Martinez, M.J.

    1986-10-01

    Three small-scale, time-dependent, benchmarking calculations have been made using the finite element codes SAGUARO, to determine hydraulic head and water velocity profiles, and FEMTRAN, to predict the solute transport. Sand and hard rock porous materials were used. Time scales for the problems, which ranged from tens of hours to thousands of years, have posed no particular diffculty for the two codes. Studies have been performed to determine the effects of computational mesh, boundary conditions, velocity formulation and SAGUARO/FEMTRAN code-coupling on water and solute transport. Results showed that mesh refinement improved mass conservation. Varying the drain-tile size in COVE 1N had a weak effect on the rate at which the tile field drained. Excellent agreement with published COVE 1N data was obtained for the hydrological field and reasonable agreement for the solute-concentration predictions. The question remains whether these types of calculations can be carried out on repository-scale problems using material characteristic curves representing tuff with fractures.

  16. European Lean Gasoline Direct Injection Vehicle Benchmark

    SciTech Connect

    Chambon, Paul H; Huff, Shean P; Edwards, Kevin Dean; Norman, Kevin M; Prikhodko, Vitaly Y; Thomas, John F

    2011-01-01

    Lean Gasoline Direct Injection (LGDI) combustion is a promising technical path for achieving significant improvements in fuel efficiency while meeting future emissions requirements. Though Stoichiometric Gasoline Direct Injection (SGDI) technology is commercially available in a few vehicles on the American market, LGDI vehicles are not, but they can be found in Europe. Oak Ridge National Laboratory (ORNL) obtained a European BMW 1-series fitted with a 2.0 L LGDI engine. The vehicle was instrumented and commissioned on a chassis dynamometer. The engine and aftertreatment performance and emissions were characterized over US drive cycles (the Federal Test Procedure (FTP), the Highway Fuel Economy Test (HFET), and the US06 Supplemental Federal Test Procedure (US06)) and steady-state mappings. The vehicle's micro-hybrid features (engine stop-start and intelligent alternator) were benchmarked as well during the course of the study. The data were analyzed to quantify the benefits and drawbacks of the lean gasoline direct injection and micro-hybrid technologies from fuel economy and emissions perspectives with respect to the US market. Additionally, the data will be formatted to develop, substantiate, and exercise vehicle simulations with conventional and advanced powertrains.

  17. Benchmarking of Planning Models Using Recorded Dynamics

    SciTech Connect

    Huang, Zhenyu; Yang, Bo; Kosterev, Dmitry

    2009-03-15

    Power system planning extensively uses model simulation to understand the dynamic behaviors and determine the operating limits of a power system. Model quality is key to the safety and reliability of electricity delivery. Planning model benchmarking, or model validation, has been one of the central topics in power engineering studies for years. Because model validation aims at obtaining models that reasonably represent the dynamic behavior of power system components, it is essential to validate models against actual measurements. The development of phasor technology provides such measurements and represents a new opportunity for model validation, as phasor measurements can capture power system dynamics with high-speed, time-synchronized data. Previously, methods for rigorous comparison of model simulation and recorded dynamics were developed and applied to quantify the model quality of power plants in the Western Electricity Coordinating Council (WECC). These methods can locate model components that need improvement. Recent work continues this effort and focuses on how model parameters may be calibrated to match recorded dynamics after the problematic model components are identified. A calibration method using the Extended Kalman Filter technique is being developed. This paper provides an overview of prior work on model validation and presents new development of the calibration method along with initial results of model parameter calibration.
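
    The paper's calibration method is not reproduced here, but the core idea of Extended Kalman Filter parameter calibration can be sketched on a scalar toy model: augment the state with the unknown parameter and let the filter update both against recorded measurements. Everything below (model, noise levels, values) is synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        dt, n_steps = 0.05, 200
        a_true = 1.8                          # the "recorded" system's parameter

        # Synthetic recorded response of x' = -a*x with measurement noise.
        x, zs = 1.0, []
        for _ in range(n_steps):
            x += dt * (-a_true * x)
            zs.append(x + rng.normal(0.0, 0.01))

        # EKF on the augmented state s = [x, a]; a starts at a wrong guess.
        s = np.array([1.0, 1.0])
        P = np.diag([0.1, 1.0])
        Q = np.diag([1e-6, 1e-6])
        R = np.array([[0.01 ** 2]])
        H = np.array([[1.0, 0.0]])            # we only measure x

        for z in zs:
            x_est, a_est = s
            s = np.array([x_est + dt * (-a_est * x_est), a_est])   # predict
            F = np.array([[1.0 - dt * a_est, -dt * x_est],
                          [0.0, 1.0]])                             # Jacobian
            P = F @ P @ F.T + Q
            y = z - H @ s                                          # innovation
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)           # gain
            s = s + (K @ y).ravel()
            P = (np.eye(2) - K @ H) @ P

        print(f"calibrated a = {s[1]:.3f} (true value {a_true})")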

  18. Benchmark notch test for life prediction

    NASA Technical Reports Server (NTRS)

    Domas, P. A.; Sharpe, W. N.; Ward, M.; Yau, J. F.

    1982-01-01

    The laser Interferometric Strain Displacement Gage (ISDG) was used to measure local strains in notched Inconel 718 test bars subjected to six different load histories at 649 C (1200 F), including the effects of tensile and compressive hold periods. The measurements were compared to simplified Neuber notch analysis predictions of notch-root stress and strain. The study determined the actual strains incurred at the root of a discontinuity in cyclically loaded test samples undergoing inelastic deformation at high temperature, where creep deformations readily occur, and analyzed the steady-state cyclic stress-strain response at the root of the discontinuity. Flat, double-notched, uniaxially loaded fatigue specimens manufactured from the nickel-base superalloy Inconel 718 were used. The ISDG provided cycle-by-cycle recordings of notch-root strain during continuous and hold-time cycling at 649 C. Comparisons were made to Neuber and finite-element analyses. The results provide a benchmark data set for high-technology design in which notch fatigue life is the predominant limitation on component service life.
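
    For readers unfamiliar with the Neuber analysis referenced above: Neuber's rule equates the product of notch-root stress and strain to (Kt*S)^2/E, and combining it with a cyclic stress-strain curve reduces the notch-root response to a one-dimensional root solve. The sketch below uses a Ramberg-Osgood curve with hypothetical constants, not Inconel 718 data from the study.

        # Solve sigma * eps(sigma) = (Kt*S)**2 / E by bisection, where
        # eps(sigma) = sigma/E + (sigma/Kp)**(1/n) is the Ramberg-Osgood curve.
        E, Kp, n = 200e3, 1530.0, 0.07       # MPa, MPa, dimensionless (hypothetical)
        Kt, S = 2.5, 400.0                   # elastic concentration factor, nominal MPa

        target = (Kt * S) ** 2 / E           # Neuber constant (MPa, since eps is unitless)

        def ramberg_osgood(sigma):
            return sigma / E + (sigma / Kp) ** (1.0 / n)

        lo, hi = 1.0, Kt * S                 # notch stress lies below the elastic value
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if mid * ramberg_osgood(mid) < target:
                lo = mid
            else:
                hi = mid
        sigma = 0.5 * (lo + hi)
        print(f"notch stress ~ {sigma:.0f} MPa, strain ~ {ramberg_osgood(sigma):.4%}")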

  19. Benchmarking Calculations of Excitonic Couplings between Bacteriochlorophylls.

    PubMed

    Kenny, Elise P; Kassal, Ivan

    2016-01-14

    Excitonic couplings between (bacterio)chlorophyll molecules are necessary for simulating energy transport in photosynthetic complexes. Many techniques for calculating the couplings are in use, from the simple (but inaccurate) point-dipole approximation to fully quantum-chemical methods. We compared several approximations to determine their range of applicability, noting that the propagation of experimental uncertainties poses a fundamental limit on the achievable accuracy. In particular, the uncertainty in crystallographic coordinates yields an uncertainty of about 20% in the calculated couplings. Because quantum-chemical corrections are smaller than 20% in most biologically relevant cases, their considerable computational cost is rarely justified. We therefore recommend the electrostatic TrEsp method across the entire range of molecular separations and orientations because its cost is minimal and it generally agrees with quantum-chemical calculations to better than the geometric uncertainty. Understanding these uncertainties can guard against striving for unrealistic precision; at the same time, detailed benchmarks allow important qualitative questions, which do not depend on the precise values of the simulation parameters, to be addressed with greater confidence in the conclusions. PMID:26651217
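
    For reference, the point-dipole approximation discussed above is a closed-form expression, which is part of why it is so widely used. The sketch below evaluates it for two hypothetical BChl-like transition dipoles, using the approximate vacuum conversion factor of 5.04e3 cm^-1 A^3/D^2; the geometry and dipole magnitudes are invented.

        import numpy as np

        def point_dipole_coupling(mu1, mu2, r1, r2):
            """Transition dipoles in Debye, positions in Angstrom; returns cm^-1."""
            mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
            r = np.asarray(r2, float) - np.asarray(r1, float)
            R = np.linalg.norm(r)
            rhat = r / R
            # Orientational part: mu1.mu2 - 3 (mu1.rhat)(mu2.rhat)
            orient = np.dot(mu1, mu2) - 3.0 * np.dot(mu1, rhat) * np.dot(mu2, rhat)
            return 5034.0 * orient / R ** 3   # approximate vacuum factor

        # Two ~6 D dipoles about 10 A apart, roughly parallel:
        J = point_dipole_coupling([6.0, 0.0, 0.0], [5.8, 1.5, 0.0],
                                  [0.0, 0.0, 0.0], [0.0, 10.0, 0.0])
        print(f"point-dipole coupling ~ {J:.0f} cm^-1")   # ~175 cm^-1 here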

  20. Benchmarking homogenization algorithms for monthly data

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.

    2013-09-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including i) the centered root mean square error relative to the true homogeneous values at various averaging scales, ii) the error in linear trend estimates, and iii) traditional contingency skill scores. The metrics were computed both for the individual station series and for the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can now perform as well as manual ones.
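
    Two of the metrics listed above are easy to state precisely. The sketch below computes the centered RMSE against the true homogeneous series and the error in the linear trend, for a hypothetical monthly series with one residual break (synthetic data, not the HOME benchmark).

        import numpy as np

        def centered_rmse(homogenized, truth):
            """RMSE after removing each series' mean (constant offsets not penalized)."""
            h = homogenized - np.mean(homogenized)
            t = truth - np.mean(truth)
            return float(np.sqrt(np.mean((h - t) ** 2)))

        def trend_error(homogenized, truth, months_per_decade=120):
            """Difference in fitted linear trends, expressed per decade."""
            x = np.arange(len(truth))
            slope_h = np.polyfit(x, homogenized, 1)[0]
            slope_t = np.polyfit(x, truth, 1)[0]
            return (slope_h - slope_t) * months_per_decade

        # Hypothetical 50-year monthly series: trend + noise, plus one
        # uncorrected break left in the "homogenized" contribution.
        rng = np.random.default_rng(1)
        truth = 0.0005 * np.arange(600) + rng.normal(0.0, 0.5, 600)
        contribution = truth.copy()
        contribution[300:] += 0.3
        print(f"centered RMSE: {centered_rmse(contribution, truth):.3f}")
        print(f"trend error:   {trend_error(contribution, truth):+.3f} units/decade")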