Science.gov

Sample records for aer benchmark specification

  1. Simple Benchmark Specifications for Space Radiation Protection

    NASA Technical Reports Server (NTRS)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specifications start with simple, monoenergetic, mono-directional particles on slabs and progress to human models in spacecraft. This report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  2. Benchmark Generation using Domain Specific Modeling

    SciTech Connect

    Bui, Ngoc B.; Zhu, Liming; Gorton, Ian; Liu, Yan

    2007-08-01

    Performance benchmarks are domain-specific applications that are specialized to a certain set of technologies and platforms. The development of a benchmark application requires mapping the performance-specific domain concepts to an implementation and producing complex technology- and platform-specific code. Domain Specific Modeling (DSM) promises to bridge the gap between application domains and implementations by allowing designers to specify solutions in domain-specific abstractions and semantics through Domain Specific Languages (DSLs). This allows a final implementation to be generated automatically from high-level models. The modeling and task automation benefits obtained from this approach usually justify the upfront cost involved. This paper employs a DSM-based approach to invent a new DSL, DSLBench, for benchmark generation. DSLBench and its associated code generation facilities allow the design and generation of a completely deployable benchmark application for performance testing from a high-level model. DSLBench is implemented using the Microsoft Domain Specific Language toolkit. It is integrated with the Visual Studio 2005 Team Suite as a plug-in to provide extra modeling capabilities for performance testing. We illustrate the approach using a case study based on .Net and C#.
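
    The paper's generator maps a high-level model to deployable C# via the Microsoft DSL toolkit; as a rough, language-neutral illustration of that model-to-code idea, the Python sketch below expands a toy "domain model" into a runnable load-test script through a template. All names and parameters here are invented, not part of DSLBench.

      # Hypothetical sketch of DSM-style benchmark generation: a high-level
      # "model" (a plain dict) is expanded by a template into runnable
      # benchmark code. This stands in for, and is not, the DSLBench generator.
      import textwrap
      from string import Template

      model = {"name": "OrderService", "clients": 8, "requests_per_client": 200}

      code = Template(textwrap.dedent('''\
          import time, concurrent.futures

          def call_service():
              time.sleep(0.001)  # stand-in for a real service invocation

          t0 = time.perf_counter()
          with concurrent.futures.ThreadPoolExecutor(max_workers=$clients) as ex:
              for _ in range($clients * $requests_per_client):
                  ex.submit(call_service)
          print("$name benchmark:", time.perf_counter() - t0, "s")
          '''))

      with open("bench_generated.py", "w") as f:
          f.write(code.substitute(model))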

  3. Specification for the VERA Depletion Benchmark Suite

    SciTech Connect

    Kim, Kang Seog

    2015-12-17

    The CASL neutronics simulator MPACT is under development for neutronics and thermal-hydraulics coupled simulation of the pressurized water reactor. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data. One indirect way to validate it is to perform code-to-code comparisons for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in validating the MPACT depletion capability.

  4. AER image filtering

    NASA Astrophysics Data System (ADS)

    Gómez-Rodríguez, F.; Linares-Barranco, A.; Paz, R.; Miró-Amarante, L.; Jiménez, G.; Civit, A.

    2007-05-01

    Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity among huge numbers of neurons located on different chips [1]. By exploiting high speed digital communication circuits (with nanosecond timing), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timing) are sampled at low frequencies. Neurons generate "events" according to their activity levels. That is, more active neurons generate more events per unit time and access the interchip communication channel more frequently than neurons with low activity. In neuromorphic system development, AER brings several advantages for real-time image processing systems: (1) AER represents the information as a continuous stream in time, not as frames; (2) AER sends the most important information first (although this depends on the sender); (3) AER allows information to be processed as soon as it is received. When AER is used in the artificial vision field, each pixel is treated as a neuron, so a pixel's intensity is represented as a sequence of events; by modifying the number and frequency of these events, it is possible to perform image filtering. In this paper we present four image filters using AER: (a) noise addition and suppression, (b) brightness modification, (c) single moving object tracking and (d) geometrical transformations (rotation, translation, reduction and magnification). For testing and debugging, we use the USB-AER board developed by the Robotic and Technology of Computers Applied to Rehabilitation (RTCAR) research group. This board is based on an FPGA devoted to managing the AER functionality, and also includes a microcontroller for USB communication, 2 Mbytes of RAM and 2 AER ports (one for input and one for output).
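
    The pixel-to-event mapping the authors describe is easy to sketch: a pixel's intensity sets its event rate, and a brightness filter is just a rescaling of event rates. The sketch below is ours (NumPy, invented parameters), not the RTCAR USB-AER firmware.

      # Toy rate-coded AER stream: each pixel emits events at a rate
      # proportional to its intensity; brightness modification then amounts
      # to scaling event rates. Parameters are illustrative only.
      import numpy as np

      rng = np.random.default_rng(0)
      frame = rng.integers(0, 256, size=(4, 4))          # toy 4x4 8-bit image

      def frame_to_events(frame, t_window=1.0, max_rate=100.0):
          """Return a time-sorted list of (timestamp, (row, col)) events."""
          events = []
          for (r, c), intensity in np.ndenumerate(frame):
              rate = max_rate * intensity / 255.0        # events per second
              n = rng.poisson(rate * t_window)           # events in the window
              events += [(float(t), (r, c)) for t in rng.uniform(0, t_window, n)]
          return sorted(events)                          # time-multiplexed stream

      def brighten(events, gain=1.5):
          """Brightness filter: replicate/drop events to scale rates by 'gain'."""
          out = []
          for t, addr in events:
              k = int(gain) + (rng.random() < gain - int(gain))
              out += [(t, addr)] * k
          return sorted(out)

      stream = frame_to_events(frame)
      print(len(stream), "events;", len(brighten(stream)), "after brightening")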

  5. Benchmark specifications for EBR-II shutdown heat removal tests

    SciTech Connect

    Sofu, T.; Briggs, L. L.

    2012-07-01

    Argonne National Laboratory (ANL) is hosting an IAEA-coordinated research project on benchmark analyses of sodium-cooled fast reactor passive safety tests performed at the Experimental Breeder Reactor-II (EBR-II). The benchmark project involves analysis of a protected and an unprotected loss-of-flow test conducted during an extensive testing program within the framework of the U.S. Integral Fast Reactor program to demonstrate the inherent safety features of EBR-II as a pool-type, sodium-cooled fast reactor prototype. The project is intended to improve the participants' design and safety analysis capabilities for sodium-cooled fast reactors through validation and qualification of safety analysis codes and methods. This paper provides a description of the EBR-II tests included in the program, and outlines the benchmark specifications being prepared to support the IAEA-coordinated research project.

  6. Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms

    PubMed Central

    Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas

    2016-01-01

    Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context-specific reconstruction based on generic genome-scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context-specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or from arbitrary thresholding. This review describes and analyses common validation methods used for testing model-building algorithms. Two major methods can be distinguished: consistency testing and comparison-based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility of distinguishing between the signal and the background of non-specific binding of probes in a microarray experiment, and with whether distinct sets of input expressed genes corresponding to, e.g., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, or comparing against existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640
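
    The "consistency testing" idea can be made concrete with a small sketch: models built from distinct input gene sets (e.g., different tissues) should yield distinct reaction sets. The Jaccard index below is our choice of similarity measure, and the reaction sets are invented; the review itself does not prescribe this exact metric.

      # Illustrative consistency check: context-specific models built from
      # different tissues should differ; one simple measure is the Jaccard
      # index of their reaction sets (hypothetical identifiers below).
      def jaccard(a, b):
          a, b = set(a), set(b)
          return len(a & b) / len(a | b) if a | b else 1.0

      model_liver = {"R_glycolysis", "R_urea_cycle", "R_oxphos"}
      model_brain = {"R_glycolysis", "R_glutamate", "R_oxphos"}

      print(f"Jaccard similarity: {jaccard(model_liver, model_brain):.2f}")
      # low similarity across tissues suggests the algorithm preserves context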

  7. Embedded Volttron specification - benchmarking small footprint compute device for Volttron

    SciTech Connect

    Sanyal, Jibonananda; Fugate, David L.; Woodworth, Ken; Nutaro, James J.; Kuruganti, Teja

    2015-08-17

    An embedded system is a small footprint computing unit that typically serves a specific purpose closely associated with measurements and control of hardware devices. These units are designed for reasonable durability and operations in a wide range of operating conditions. Some embedded systems support real-time operations and can demonstrate high levels of reliability. Many have failsafe mechanisms built to handle graceful shutdown of the device in exception conditions. The available memory, processing power, and network connectivity of these devices are limited due to the nature of their specific-purpose design and intended application. Industry practice is to carefully design the software for the available hardware capability to suit desired deployment needs. Volttron is an open source agent development and deployment platform designed to enable researchers to interact with devices and appliances without having to write drivers themselves. Hosting Volttron on small footprint embeddable devices enables its demonstration for embedded use. This report details the steps required and the experience in setting up and running Volttron applications on three small footprint devices: the Intel Next Unit of Computing (NUC), the Raspberry Pi 2, and the BeagleBone Black. In addition, the report also details preliminary investigation of the execution performance of Volttron on these devices.

  9. Reactor Physics Measurements and Benchmark Specifications for Oak Ridge Highly Enriched Uranium Sphere (ORSphere)

    SciTech Connect

    Marshall, Margaret A.

    2014-11-04

    In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an effort to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. Additionally, various material reactivity worths, the surface material worth coefficient, the delayed neutron fraction, the prompt neutron decay constant, relative fission density, and relative neutron importance were all measured. The critical assembly, material reactivity worths, the surface material worth coefficient, and the delayed neutron fraction were all evaluated as benchmark experiment measurements. The reactor physics measurements are the focus of this paper, although for clarity the critical assembly benchmark specifications are briefly discussed.

  10. On algorithmic rate-coded AER generation.

    PubMed

    Linares-Barranco, Alejandro; Jimenez-Moreno, Gabriel; Linares-Barranco, Bernabé; Civit-Balcells, Antón

    2006-05-01

    This paper addresses the problem of converting a conventional video stream based on sequences of frames into the spike event-based representation known as the address-event representation (AER). In this paper we concentrate on rate-coded AER. The problem is addressed as an algorithmic one, for which different methods are proposed, implemented and tested through software algorithms. The proposed algorithms are comparatively evaluated according to different criteria. Emphasis is put on the potential of such algorithms for (a) performing the frame-based to event-based conversion in real time, and (b) producing event streams that resemble as much as possible those generated naturally by rate-coded address-event VLSI chips, such as silicon AER retinae. It is found that simple and straightforward algorithms tend to have high potential for real time operation but produce event distributions that differ considerably from those obtained in AER VLSI chips. On the other hand, sophisticated algorithms that yield better event distributions are not efficient for real time operation. Methods based on linear-feedback-shift-register (LFSR) pseudorandom number generation are a good compromise: they are feasible in real time and yield reasonably well distributed events in time. Our software experiments, on a 1.6-GHz Pentium IV, show that at 50% AER bus load the proposed algorithms require between 0.011 and 1.14 ms per 8-bit pixel per frame. One of the proposed LFSR methods is implemented in real-time hardware using a prototyping board that includes a VirtexE 300 FPGA. The demonstration hardware is capable of transforming frames of 64 x 64 pixels of 8-bit depth at a frame rate of 25 frames per second, producing spike events at a peak rate of 10^7 events per second. PMID:16722179
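
    The LFSR compromise is simple to illustrate: a maximal-length LFSR visits pixel addresses in a pseudorandom order, so rate-coded events spread out in time instead of clustering in scan order. Below is a minimal 16-bit Galois LFSR (the taps shown are one standard maximal-length choice; the paper's exact hardware configuration may differ).

      # Minimal 16-bit Galois LFSR; taps 0xB400 give a maximal-length sequence
      # (x^16 + x^14 + x^13 + x^11 + 1). Successive states serve as
      # pseudorandom pixel addresses for event emission.
      def lfsr16(seed=0xACE1, taps=0xB400):
          state = seed
          while True:
              lsb = state & 1
              state >>= 1
              if lsb:
                  state ^= taps
              yield state

      gen = lfsr16()
      addresses = [next(gen) % (64 * 64) for _ in range(5)]  # map into 64x64 array
      print(addresses)   # pseudorandom order of pixel addresses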

  11. An Approach to Industrial Stormwater Benchmarks: Establishing and Using Site-Specific Threshold Criteria at Lawrence Livermore National Laboratory

    SciTech Connect

    Campbell, C G; Mathews, S

    2006-09-07

    Current regulatory schemes use generic or industrial-sector-specific benchmarks to evaluate the quality of industrial stormwater discharges. While benchmarks can be a useful tool for facility stormwater managers in evaluating the quality of stormwater runoff, benchmarks typically do not take into account site-specific conditions, such as soil chemistry, atmospheric deposition, seasonal changes in water source, and upstream land use. Failing to account for these factors may lead to unnecessary costs to trace a source of natural variation, or to missing a significant local water quality problem. Site-specific water quality thresholds, established through statistical evaluation of historic data that take these factors into account, are a better tool for the direct evaluation of runoff quality, and a more cost-effective trigger for investigating anomalous results. Lawrence Livermore National Laboratory (LLNL), a federal facility, established stormwater monitoring programs to comply with the requirements of the industrial stormwater permit and Department of Energy orders, which require the evaluation of the impact of effluent discharges on the environment. LLNL recognized the need for a tool to evaluate and manage stormwater quality that would allow analysts to identify trends in stormwater quality and recognize anomalous results so that trace-back and corrective actions could be initiated. LLNL created the site-specific water quality threshold tool to better understand the nature of the stormwater influent and effluent, to establish a technical basis for determining when facility operations might be impacting the quality of stormwater discharges, and to provide "action levels" to initiate follow-up to analytical results. The threshold criteria were based on a statistical analysis of the historic stormwater monitoring data and a review of relevant water quality objectives.
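
    The abstract does not state which statistic LLNL adopted, so the sketch below shows two common ways a site-specific "action level" can be derived from historic monitoring data: a high percentile, and the mean plus two standard deviations. The concentrations are invented for illustration.

      # Two common site-specific threshold choices (the report's exact
      # statistic is not given in the abstract); data values are invented.
      import statistics

      historic_zn_ug_per_L = [12, 18, 9, 22, 15, 30, 11, 19, 25, 14]

      xs = sorted(historic_zn_ug_per_L)
      p95 = xs[int(0.95 * (len(xs) - 1))]          # crude 95th percentile
      mean_2sd = statistics.mean(xs) + 2 * statistics.stdev(xs)

      print(f"action level (95th percentile): {p95} ug/L")
      print(f"action level (mean + 2 SD):     {mean_2sd:.1f} ug/L")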

  12. Neutron Reference Benchmark Field Specification: ACRR Free-Field Environment (ACRR-FF-CC-32-CL).

    SciTech Connect

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.; Vehar, David W.

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity free-field reference neutron benchmark field. The field is described, and an “a priori” calculated neutron spectrum, based on MCNP6 calculations, is reported together with a subject matter expert (SME) based covariance matrix for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  13. A field-based method to derive macroinvertebrate benchmark for specific conductivity adapted for small data sets and demonstrated in the Hun-Tai River Basin, Northeast China.

    PubMed

    Zhao, Qian; Jia, Xiaobo; Xia, Rui; Lin, Jianing; Zhang, Yuan

    2016-09-01

    Ionic mixtures, measured as specific conductivity, have raised increasing concern because of their toxicity to aquatic organisms. However, identifying protective values of specific conductivity for aquatic organisms is challenging, given that laboratory test systems can examine neither the more salt-intolerant species nor effects occurring in streams. Large data sets of the kind used for deriving field-based benchmarks are rarely available. In this study, a field-based method for small data sets was used to derive a specific conductivity benchmark, which is expected to prevent the extirpation of 95% of local taxa from circum-neutral to alkaline waters dominated by a mixture of SO4(2-) and HCO3(-) anions and other dissolved ions. To compensate for the smaller sample size, species-level analyses were combined with genus-level analyses. The benchmark is based on extirpation concentration (XC95) values of specific conductivity for 60 macroinvertebrate genera estimated from 296 sampling sites in the Hun-Tai River Basin. We derived the specific conductivity benchmark by using a 2-point interpolation method, which yielded a benchmark of 249 μS/cm. Our study tailored the method developed by USEPA for deriving aquatic life benchmarks for specific conductivity to basin-scale application, and may provide useful information for water pollution control and management. PMID:27389551
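
    Protecting 95% of local genera amounts to taking the 5th percentile (an HC05) of the genus-level XC95 distribution, and the paper's "2-point interpolation" reads naturally as interpolating between the two XC95 values that bracket that rank. The sketch below follows that reading; the XC95 values are invented (the paper's data yield 249 μS/cm).

      # Sketch of a 2-point interpolation for the 5th percentile (HC05) of
      # genus-level XC95 values -- our reading of the USEPA-style method;
      # the XC95 values below are invented.
      def hc05(xc95_values):
          xs = sorted(xc95_values)
          rank = 0.05 * (len(xs) - 1)          # fractional rank of the 5th pct
          lo, frac = int(rank), rank - int(rank)
          return xs[lo] + frac * (xs[lo + 1] - xs[lo])

      xc95 = [180, 220, 260, 310, 350, 420, 500, 640, 800, 1000]
      print(f"specific conductivity benchmark ~ {hc05(xc95):.0f} uS/cm")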

  14. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  15. SMART- Small Motor AerRospace Technology

    NASA Astrophysics Data System (ADS)

    Balucani, M.; Crescenzi, R.; Ferrari, A.; Guarrea, G.; Pontetti, G.; Orsini, F.; Quattrino, L.; Viola, F.

    2004-11-01

    This paper presents the "SMART" (Small Motor AerRospace Technology) propulsion system, consisting of microthruster arrays realised by semiconductor technology on silicon wafers. The SMART system is obtained by gluing three main modules: combustion chambers, igniters and nozzles. The module was then filled with propellant and closed by gluing a piece of silicon wafer to the back side of the combustion chambers. The complete assembled module, composed of 25 micro-thrusters with a 3 x 5 nozzle array, is presented. The measurements showed a thrust of 129 mN and an impulse of 56.8 mNs, burning about 70 mg of propellant, for the micro-thruster with a nozzle, and a thrust of 21 mN and an impulse of 8.4 mNs for the micro-thruster without a nozzle.

  16. Comprehensive benchmarking reveals H2BK20 acetylation as a distinctive signature of cell-state-specific enhancers and promoters.

    PubMed

    Kumar, Vibhor; Rayan, Nirmala Arul; Muratani, Masafumi; Lim, Stefan; Elanggovan, Bavani; Xin, Lixia; Lu, Tess; Makhija, Harshyaa; Poschmann, Jeremie; Lufkin, Thomas; Ng, Huck Hui; Prabhakar, Shyam

    2016-05-01

    Although over 35 different histone acetylation marks have been described, the overwhelming majority of regulatory genomics studies focus exclusively on H3K27ac and H3K9ac. In order to identify novel epigenomic traits of regulatory elements, we constructed a benchmark set of validated enhancers by performing 140 enhancer assays in human T cells. We tested 40 chromatin signatures on this unbiased enhancer set and identified H2BK20ac, a little-studied histone modification, as the most predictive mark of active enhancers. Notably, we detected a novel class of functionally distinct enhancers enriched in H2BK20ac but lacking H3K27ac, which was present in all examined cell lines and also in embryonic forebrain tissue. H2BK20ac was also unique in highlighting cell-type-specific promoters. In contrast, other acetylation marks were present in all active promoters, regardless of cell-type specificity. In stimulated microglial cells, H2BK20ac was more correlated with cell-state-specific expression changes than H3K27ac, with TGF-beta signaling decoupling the two acetylation marks at a subset of regulatory elements. In summary, our study reveals a previously unknown connection between histone acetylation and cell-type-specific gene regulation and indicates that H2BK20ac profiling can be used to uncover new dimensions of gene regulation. PMID:26957309

  17. Transient Inhibition of FGFR2b-ligands signaling leads to irreversible loss of cellular β-catenin organization and signaling in AER during mouse limb development.

    PubMed

    Danopoulos, Soula; Parsa, Sara; Al Alam, Denise; Tabatabai, Reza; Baptista, Sheryl; Tiozzo, Caterina; Carraro, Gianni; Wheeler, Matthew; Barreto, Guillermo; Braun, Thomas; Li, Xiaokun; Hajihosseini, Mohammad K; Bellusci, Saverio

    2013-01-01

    The vertebrate limbs develop through a coordinated series of inductive, growth and patterning events. Fibroblast Growth Factor receptor 2b (FGFR2b) signaling controls the induction of the Apical Ectodermal Ridge (AER) but its putative roles in limb outgrowth and patterning, as well as in AER morphology and cell behavior, have remained unclear. We have investigated these roles through graded and reversible expression of soluble dominant-negative FGFR2b molecules at various times during mouse limb development, using a doxycycline/transactivator/tet(O)-responsive system. Transient attenuation (≤ 24 hours) of FGFR2b-ligands signaling at E8.5, prior to limb bud induction, leads mostly to the loss or truncation of proximal skeletal elements, with less severe impact on distal elements. Attenuation from E9.5 onwards, however, has an irreversible effect on the stability of the AER, resulting in a progressive loss of distal limb skeletal elements. The primary consequences of FGFR2b-ligands attenuation are a transient loss of cell adhesion and down-regulation of P63, β1-integrin and E-cadherin, and a permanent loss of cellular β-catenin organization and WNT signaling within the AER. Combined, these effects lead to the progressive transformation of the AER cells from pluristratified to squamous epithelial-like cells within 24 hours of doxycycline administration. These findings show that FGFR2b-ligands signaling has critical stage-specific roles in maintaining the AER during limb development. PMID:24167544

  19. A Hospital-Specific Template for Benchmarking its Cost and Quality

    PubMed Central

    Silber, Jeffrey H; Rosenbaum, Paul R; Ross, Richard N; Ludwig, Justin M; Wang, Wei; Niknam, Bijan A; Saynisch, Philip A; Even-Shoshan, Orit; Kelz, Rachel R; Fleisher, Lee A

    2014-01-01

    Objective: Develop an improved method for auditing hospital cost and quality tailored to a specific hospital’s patient population. Data Sources/Setting: Medicare claims in general, gynecologic and urologic surgery, and orthopedics from Illinois, New York, and Texas between 2004 and 2006. Study Design: A template of 300 representative patients from a single index hospital was constructed and used to match 300 patients at each of 43 hospitals that had a minimum of 500 patients over a 3-year study period. Data Collection/Extraction Methods: From each of the 43 hospitals we chose the 300 patients most resembling the template using multivariate matching. Principal Findings: We found close matches on procedures and patient characteristics, far more balanced than would be expected in a randomized trial. There were little to no differences between the index hospital’s template and the 43 hospitals on most patient characteristics, yet large and significant differences in mortality, failure-to-rescue, and cost. Conclusion: Matching can produce fair, directly standardized audits. From the perspective of the index hospital, “hospital-specific” template matching provides the fairness of direct standardization with the specific institutional relevance of indirect standardization. Using this approach, hospitals will be better able to examine their performance, and better determine why they are achieving the results they observe. PMID:25201167
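
    The matching step can be sketched simply: from each hospital's pool, pick the patients closest to the template patients in covariate space. The greedy nearest-neighbor version below (Mahalanobis distance, invented covariates) is our illustration of multivariate matching, not the paper's optimal-matching algorithm.

      # Toy "template matching": greedily match each template patient to its
      # nearest unused candidate by Mahalanobis distance. Illustration only.
      import numpy as np

      rng = np.random.default_rng(3)
      template = rng.normal(0.0, 1, size=(5, 3))   # 5 template patients, 3 covariates
      hospital = rng.normal(0.2, 1, size=(50, 3))  # candidate pool at one hospital

      cov_inv = np.linalg.inv(np.cov(hospital.T))
      def mahalanobis(x, y):
          d = x - y
          return float(d @ cov_inv @ d)

      used, matches = set(), []
      for t in template:
          dists = [(mahalanobis(t, h), j) for j, h in enumerate(hospital)
                   if j not in used]
          d, j = min(dists)
          used.add(j)
          matches.append(j)
      print("matched candidate indices:", matches)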

  1. Benchmarking Deep Networks for Predicting Residue-Specific Quality of Individual Protein Models in CASP11

    PubMed Central

    Liu, Tong; Wang, Yiheng; Eickholt, Jesse; Wang, Zheng

    2016-01-01

    Quality assessment of a protein model aims to predict the absolute or relative quality of a protein model using computational methods before the native structure is available. Single-model methods only need one model as input and can predict the absolute residue-specific quality of an individual model. Here, we have developed four novel single-model methods (Wang_deep_1, Wang_deep_2, Wang_deep_3, and Wang_SVM) based on stacked denoising autoencoders (SdAs) and support vector machines (SVMs). We evaluated these four methods along with six other methods participating in CASP11 at the global and local levels using Pearson’s correlation coefficients and ROC analysis. As for residue-specific quality assessment, our four methods achieved better performance than most of the six other CASP11 methods in distinguishing reliably modeled residues from unreliable ones, as measured by ROC analysis; our SdA-based method Wang_deep_1 achieved the highest accuracy, 0.77, compared to the SVM-based methods and our ensemble of an SVM and SdAs. However, we found that Wang_deep_2 and Wang_deep_3, both based on an ensemble of multiple SdAs and an SVM, performed slightly better than Wang_deep_1 in terms of ROC analysis, indicating that integrating an SVM with deep networks works well by certain measures. PMID:26763289
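
    The local-level ROC evaluation can be sketched compactly: residues are labeled reliable or not (e.g., by a distance-error cutoff), predictions are ranked, and AUC follows from the Mann-Whitney identity. Labels and scores below are invented; this shows the evaluation idea, not the CASP11 pipeline.

      # Toy residue-level ROC/AUC via the Mann-Whitney rank identity
      # (assumes no tied scores); labels and scores are invented.
      def auc(labels, scores):
          pairs = sorted(zip(scores, labels))
          rank_sum = sum(i + 1 for i, (_, y) in enumerate(pairs) if y == 1)
          n_pos = sum(labels)
          n_neg = len(labels) - n_pos
          return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

      labels = [1, 1, 0, 1, 0, 0, 1, 0]                  # residue reliable?
      scores = [0.9, 0.8, 0.3, 0.7, 0.4, 0.2, 0.6, 0.5]  # predicted quality
      print(f"residue-level AUC = {auc(labels, scores):.2f}")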

  2. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    SciTech Connect

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Given their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens from the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, Peptide Prophet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search, as demonstrated in this study on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.
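
    The reversed-sequence idea the authors advocate can be sketched in a few lines: search a reversed (decoy) database alongside the real one, then raise the score cutoff until the fraction of decoy hits passing falls below the target FP rate. The scores below are invented stand-ins for search engine output.

      # Score threshold from a reversed-sequence (decoy) search: raise the
      # cutoff until the decoy pass rate is at or below the target FP rate.
      # Scores are invented; real ones would come from a search engine.
      def threshold_for_fp(target_scores, decoy_scores, fp_rate=0.05):
          for cut in sorted(target_scores + decoy_scores):
              if sum(s >= cut for s in decoy_scores) / len(decoy_scores) <= fp_rate:
                  return cut
          return float("inf")

      forward = [3.1, 2.8, 1.9, 2.2, 3.6, 1.2, 2.9, 3.3]  # forward-database scores
      decoy   = [1.0, 1.4, 2.1, 1.1, 0.9, 1.6, 1.3, 1.8]  # reversed-database scores
      print("score cutoff:", threshold_for_fp(forward, decoy))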

  3. Benchmarking HRD.

    ERIC Educational Resources Information Center

    Ford, Donald J.

    1993-01-01

    Discusses benchmarking, the continuous process of measuring one's products, services, and practices against those recognized as leaders in that field to identify areas for improvement. Examines ways in which benchmarking can benefit human resources functions. (JOW)

  4. Multicasting mesh AER: a scalable assembly approach for reconfigurable neuromorphic structured AER systems. Application to ConvNets.

    PubMed

    Zamarreno-Ramos, C; Linares-Barranco, A; Serrano-Gotarredona, T; Linares-Barranco, B

    2013-02-01

    This paper presents a modular, scalable approach to assembling hierarchically structured neuromorphic Address Event Representation (AER) systems. The method consists of arranging modules in a 2D mesh, each communicating bidirectionally with all four neighbors. Address events include a module label. Each module includes an AER router which decides how to route address events. Two routing approaches have been proposed, analyzed and tested, using either destination or source module labels. Our analyses reveal that, depending on traffic conditions and network topologies, either one or the other approach may result in better performance. Experimental results are given after testing the approach using high-end Virtex-6 FPGAs. The approach is proposed for both single and multiple FPGAs, in which case a special bidirectional parallel-serial AER link with flow control is exploited, using the FPGA Rocket-I/O interfaces. Extensive test results are provided exploiting convolution modules of 64 × 64 pixels with kernels with sizes up to 11 × 11, which process real sensory data from a Dynamic Vision Sensor (DVS) retina. One single Virtex-6 FPGA can hold up to 64 of these convolution modules, which is equivalent to a neural network with 262 × 10^3 neurons and almost 32 million synapses. PMID:23853282
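
    A router's per-event decision is easy to sketch for the destination-label scheme: each event carries a destination module label and the router picks one of the four neighbor ports. Dimension-order (X-then-Y) routing below is one simple policy chosen for illustration; the paper analyzes both destination- and source-label schemes.

      # Illustrative destination-label routing on a 2D mesh of AER modules
      # using dimension-order (X then Y) hops; a stand-in policy, not
      # necessarily the paper's router logic.
      def route(current, dest):
          """Return the neighbor port that forwards the event."""
          (cx, cy), (dx, dy) = current, dest
          if dx != cx:
              return "EAST" if dx > cx else "WEST"
          if dy != cy:
              return "NORTH" if dy > cy else "SOUTH"
          return "LOCAL"   # event has reached its destination module

      step = {"EAST": (1, 0), "WEST": (-1, 0), "NORTH": (0, 1), "SOUTH": (0, -1)}
      pos, hops = (0, 0), []
      while (port := route(pos, (2, 1))) != "LOCAL":
          hops.append(port)
          pos = (pos[0] + step[port][0], pos[1] + step[port][1])
      print(hops)   # ['EAST', 'EAST', 'NORTH']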

  5. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  6. Neutron Reference Benchmark Field Specifications: ACRR Polyethylene-Lead-Graphite (PLG) Bucket Environment (ACRR-PLG-CC-32-CL).

    SciTech Connect

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.; Vehar, David W.

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the Polyethylene-Lead-Graphite (PLG) bucket reference neutron benchmark field. The field is described, and an “a priori” calculated neutron spectrum, based on MCNP6 calculations, is reported together with a subject matter expert (SME) based covariance matrix for this “a priori” spectrum. The results of 37 integral dosimetry measurements in the neutron field are reported.

  7. Signal detection in FDA AERS database using Dirichlet process.

    PubMed

    Hu, Na; Huang, Lan; Tiwari, Ram C

    2015-08-30

    In the past two decades, data mining methods for signal detection have been developed for drug safety surveillance, using large post-market safety data. Several of these methods assume that the number of reports for each drug-adverse event combination is a Poisson random variable with mean proportional to the unknown reporting rate of the drug-adverse event pair. Here, a Bayesian method based on the Poisson-Dirichlet process (DP) model is proposed for signal detection from large databases, such as the Food and Drug Administration's Adverse Event Reporting System (AERS) database. Instead of using a parametric distribution as a common prior for the reporting rates, as is the case with existing Bayesian or empirical Bayesian methods, a nonparametric prior, namely the DP, is used. The precision parameter and the baseline distribution of the DP, which characterize the process, are modeled hierarchically. The performance of the Poisson-DP model is compared with some other models through an intensive simulation study, using Bayesian model selection and frequentist performance characteristics such as type-I error, false discovery rate, sensitivity, and power. For illustration, the proposed model and its extension to address a large number of zero counts are used to analyze statin drugs for signals using the 2006-2011 AERS data. PMID:25924820
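
    The generative side of the model is easy to sketch with a truncated stick-breaking approximation: reporting rates are drawn from a discrete DP realization and report counts are Poisson with mean proportional to those rates. This toy simulation (NumPy, invented sizes, Gamma base measure) illustrates the model class only; the paper's hierarchical priors and inference procedure are beyond it.

      # Toy generative sketch of a Poisson-DP model: a truncated
      # stick-breaking draw approximates the DP prior over reporting rates.
      import numpy as np

      rng = np.random.default_rng(1)
      alpha, K, n_pairs = 2.0, 50, 10          # DP precision, truncation, pairs

      beta = rng.beta(1.0, alpha, size=K)      # stick-breaking fractions
      w = beta * np.cumprod(np.concatenate(([1.0], 1.0 - beta[:-1])))
      atoms = rng.gamma(shape=1.0, scale=1.0, size=K)   # base measure Gamma(1,1)

      lam = rng.choice(atoms, p=w / w.sum(), size=n_pairs)  # per-pair rates
      E = rng.uniform(5, 50, size=n_pairs)                  # expected counts
      counts = rng.poisson(E * lam)                         # observed reports
      print(np.round(lam, 2), counts)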

  8. Benchmarking Using Basic DBMS Operations

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. However, over time the TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
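
    In the spirit of XMarq's basic-operation queries, the sketch below times a scan, an aggregation, a join and an index lookup on an in-memory SQLite database. Schema, row counts and queries are invented for illustration; they are not the XMarq workload.

      # Micro-benchmark of basic DBMS operations (scan, aggregate, join,
      # index access) on SQLite; schema and data are invented.
      import sqlite3, time

      con = sqlite3.connect(":memory:")
      con.execute("CREATE TABLE orders(id INTEGER PRIMARY KEY, cust INT, amt REAL)")
      con.execute("CREATE TABLE cust(id INTEGER PRIMARY KEY, region TEXT)")
      con.executemany("INSERT INTO cust VALUES(?, ?)",
                      [(i, f"R{i % 5}") for i in range(1000)])
      con.executemany("INSERT INTO orders VALUES(?, ?, ?)",
                      [(i, i % 1000, i * 0.1) for i in range(100_000)])
      con.execute("CREATE INDEX idx_amt ON orders(amt)")

      queries = {
          "scan":      "SELECT COUNT(*) FROM orders WHERE amt > 500",
          "aggregate": "SELECT cust, SUM(amt) FROM orders GROUP BY cust",
          "join":      "SELECT region, SUM(amt) FROM orders "
                       "JOIN cust ON orders.cust = cust.id GROUP BY region",
          "index":     "SELECT * FROM orders WHERE amt BETWEEN 777.0 AND 778.0",
      }
      for name, sql in queries.items():
          t0 = time.perf_counter()
          con.execute(sql).fetchall()
          print(f"{name:9s} {1e3 * (time.perf_counter() - t0):7.2f} ms")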

  9. PAS/Poly-HAMP Signaling in Aer-2, a Soluble Heme-Based Sensor

    PubMed Central

    Watts, Kylie J; Taylor, Barry L; Johnson, Mark S

    2011-01-01

    Poly-HAMP domains are widespread in bacterial chemoreceptors, but previous studies have focused on receptors with single HAMP domains. The Pseudomonas aeruginosa chemoreceptor, Aer-2, has an unusual domain architecture consisting of a PAS sensing domain sandwiched between three N-terminal and two C-terminal HAMP domains, followed by a conserved kinase control module. The structure of the N-terminal HAMP domains was recently solved, making Aer-2 the first protein with resolved poly-HAMP structure. The role of Aer-2 in P. aeruginosa is unclear, but here we show that Aer-2 can interact with the chemotaxis system of Escherichia coli to mediate repellent responses to oxygen, carbon monoxide and nitric oxide. Using this model system to investigate signaling and poly-HAMP function, we determined that the Aer-2 PAS domain binds penta-coordinated b-type heme and that reversible signaling requires four of the five HAMP domains. Deleting HAMP 2 and/or 3 resulted in a kinase-off phenotype, whereas deleting HAMP 4 and/or 5 resulted in a kinase-on phenotype. Overall, these data support a model in which ligand-bound Aer-2 PAS and HAMP 2 and 3 act together to relieve inhibition of the kinase control module by HAMP 4 and 5, resulting in the kinase-on state of the Aer-2 receptor. PMID:21255112

  10. PAS/poly-HAMP signalling in Aer-2, a soluble haem-based sensor.

    PubMed

    Watts, Kylie J; Taylor, Barry L; Johnson, Mark S

    2011-02-01

    Poly-HAMP domains are widespread in bacterial chemoreceptors, but previous studies have focused on receptors with single HAMP domains. The Pseudomonas aeruginosa chemoreceptor, Aer-2, has an unusual domain architecture consisting of a PAS-sensing domain sandwiched between three N-terminal and two C-terminal HAMP domains, followed by a conserved kinase control module. The structure of the N-terminal HAMP domains was recently solved, making Aer-2 the first protein with resolved poly-HAMP structure. The role of Aer-2 in P. aeruginosa is unclear, but here we show that Aer-2 can interact with the chemotaxis system of Escherichia coli to mediate repellent responses to oxygen, carbon monoxide and nitric oxide. Using this model system to investigate signalling and poly-HAMP function, we determined that the Aer-2 PAS domain binds penta-co-ordinated b-type haem and that reversible signalling requires four of the five HAMP domains. Deleting HAMP 2 and/or 3 resulted in a kinase-off phenotype, whereas deleting HAMP 4 and/or 5 resulted in a kinase-on phenotype. Overall, these data support a model in which ligand-bound Aer-2 PAS and HAMP 2 and 3 act together to relieve inhibition of the kinase control module by HAMP 4 and 5, resulting in the kinase-on state of the Aer-2 receptor. PMID:21255112

  11. Developing Financial Benchmarks for Critical Access Hospitals

    PubMed Central

    Pink, George H.; Holmes, George M.; Slifkin, Rebecca T.; Thompson, Roger E.

    2009-01-01

    This study developed and applied benchmarks for five indicators included in the CAH Financial Indicators Report, an annual, hospital-specific report distributed to all critical access hospitals (CAHs). An online survey of Chief Executive Officers and Chief Financial Officers was used to establish benchmarks. Indicator values for 2004, 2005, and 2006 were calculated for 421 CAHs and hospital performance was compared to the benchmarks. Although many hospitals performed better than benchmark on one indicator in 1 year, very few performed better than benchmark on all five indicators in all 3 years. The probability of performing better than benchmark differed among peer groups. PMID:19544935

  12. Testing (Validating?) Cross Sections with ICSBEP Benchmarks

    SciTech Connect

    Kahler, Albert C. III

    2012-06-28

    We discuss how to use critical benchmarks from the International Handbook of Evaluated Criticality Safety Benchmark Experiments to determine the applicability of specific cross sections to the end-user's problem of interest. Particular attention is paid to making sure the selected suite of benchmarks includes the user's range of applicability (ROA).

  14. FireHose Streaming Benchmarks

    SciTech Connect

    Anderson, Karl; Plimpton, Steve

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
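
    The two-part structure is easy to mimic: a generator streams keyed datums (with rare planted anomalies) and an analytic flags them. The toy below is our illustration of that division of labor, not one of the suite's defined benchmarks; formats and thresholds are invented.

      # Toy FireHose-style pair: a generator that plants rare anomalous
      # datums, and an analytic that flags them. Not the suite's spec.
      import random

      def generator(n=10_000, seed=42):
          rnd = random.Random(seed)
          for _ in range(n):
              key = rnd.randrange(100)
              value = 999.0 if rnd.random() < 0.001 else rnd.gauss(50, 5)
              yield key, value                 # one "datum" per iteration

      def analytic(stream, cutoff=90.0):
          return [(k, v) for k, v in stream if v > cutoff]   # flag outliers

      found = analytic(generator())
      print(f"flagged {len(found)} anomalous datums:", found[:3])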

  15. Time-recovering PCI-AER interface for bio-inspired spiking systems

    NASA Astrophysics Data System (ADS)

    Paz-Vicente, R.; Linares-Barranco, A.; Cascado, D.; Vicente, S.; Jimenez, G.; Civit, A.

    2005-06-01

    Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Also, neurons generate 'events' according to their activity levels. More active neurons generate more events per unit time, and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. When building multi-chip multi-layered AER systems it is absolutely necessary to have a computer interface that allows (a) reading AER interchip traffic into the computer and visualizing it on screen, and (b) injecting a sequence of events at some point of the AER structure. This is necessary for testing and debugging complex AER systems. This paper presents a PCI to AER interface that dispatches a sequence of events received from the PCI bus, with embedded timing information establishing when each event will be delivered. A set of specialized state machines has been introduced to recover from the possible time delays introduced by the asynchronous AER bus. On the input channel, the interface captures events, assigns a timestamp, and delivers them through the PCI bus to MATLAB applications. It has been implemented in real-time hardware using VHDL and tested in a PCI-AER board, developed by the authors, that includes a Spartan II 200 FPGA. The demonstration hardware is currently capable of sending and receiving events at a peak rate of 8.3 Mev/sec, and a typical rate of 1 Mev/sec.
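
    The dispatch logic the interface implements in hardware can be sketched in software: each event carries an embedded timestamp, the dispatcher sleeps until the event is due, and if the asynchronous bus has made it late the event is sent immediately so later events recover the delay. Timestamps and addresses below are invented.

      # Software sketch of timestamped event dispatch with delay recovery
      # (the role played in hardware by the paper's state machines).
      import time

      events = [(0.00, 0x12), (0.05, 0x34), (0.10, 0x56)]  # (seconds, address)

      def dispatch(events):
          t0 = time.perf_counter()
          for due, address in events:
              lag = due - (time.perf_counter() - t0)
              if lag > 0:
                  time.sleep(lag)      # on time: wait until the event is due
              # if lag <= 0 we are late: send immediately, catching up
              print(f"t={time.perf_counter() - t0:.3f}s  send event {address:#x}")

      dispatch(events)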

  16. SU-E-I-32: Benchmarking Head CT Doses: A Pooled Vs. Protocol Specific Analysis of Radiation Doses in Adult Head CT Examinations

    SciTech Connect

    Fujii, K; Bostani, M; Cagnon, C; McNitt-Gray, M

    2015-06-15

    Purpose: The aim of this study was to collect CT dose index data from adult head exams to establish benchmarks based on either: (a) values pooled from all head exams or (b) values for specific protocols. One part of this was to investigate differences in scan frequency and CT dose index data for inpatients versus outpatients. Methods: We collected CT dose index data (CTDIvol) from adult head CT examinations performed at our medical facilities from Jan 1st to Dec 31st, 2014. Four of the scanners were used for inpatients; the other five were used for outpatients. All scanners used Tube Current Modulation. We used X-ray dose management software to mine dose index data and evaluate CTDIvol for 15807 inpatients and 4263 outpatients undergoing Routine Brain, Sinus, Facial/Mandible, Temporal Bone, CTA Brain and CTA Brain-Neck protocols, and combined across all protocols. Results: For inpatients, Routine Brain series represented 84% of total scans performed. For outpatients, Sinus scans represented the largest fraction (36%). The CTDIvol (mean ± SD) across all head protocols was 39 ± 30 mGy (min-max: 3.3–540 mGy). The CTDIvol for Routine Brain was 51 ± 6.2 mGy (min-max: 36–84 mGy). The values for Sinus were 24 ± 3.2 mGy (min-max: 13–44 mGy) and for Facial/Mandible were 22 ± 4.3 mGy (min-max: 14–46 mGy). The mean CTDIvol for inpatients and outpatients was similar across protocols with one exception (CTA Brain-Neck). Conclusion: There is substantial dose variation when results from all protocols are pooled together; this is primarily a function of the differences in technical factors of the protocols themselves. When protocols are analyzed separately, there is much less variability. While analyzing pooled data affords some utility, reviewing protocols segregated by clinical indication provides greater opportunity for optimization and establishing useful benchmarks.
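
    The pooled-versus-protocol point is easy to see numerically: pooling mixes protocols with very different dose levels, inflating the spread, while per-protocol summaries are tight. The CTDIvol samples below are invented, loosely echoing the reported means.

      # Pooled vs. protocol-specific CTDIvol summaries (invented samples).
      import statistics

      ctdivol = {   # mGy
          "Routine Brain":   [51, 49, 56, 50, 47, 53],
          "Sinus":           [24, 22, 27, 23, 25, 26],
          "Facial/Mandible": [22, 20, 26, 21, 23, 19],
      }

      pooled = [v for vals in ctdivol.values() for v in vals]
      print(f"pooled           mean {statistics.mean(pooled):5.1f}"
            f"  sd {statistics.stdev(pooled):4.1f} mGy")
      for proto, vals in ctdivol.items():
          print(f"{proto:16s} mean {statistics.mean(vals):5.1f}"
                f"  sd {statistics.stdev(vals):4.1f} mGy")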

  17. Benchmarking in Student Affairs.

    ERIC Educational Resources Information Center

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  18. Fusion Welding of AerMet 100 Alloy

    SciTech Connect

    Englehart, David A.; Michael, Joseph R.; Novotny, Paul M.; Robino, Charles V.

    1999-08-01

    A database of mechanical properties for weldment fusion and heat-affected zones was established for AerMet® 100 alloy, and a study of the welding metallurgy of the alloy was conducted. The properties database was developed for a matrix of weld processes (electron beam and gas-tungsten arc), welding parameters (heat inputs) and post-weld heat treatment (PWHT) conditions. In order to ensure commercial utility and acceptance, the matrix was commensurate with commercial welding technology and practice. Second, the mechanical properties were correlated with a fundamental understanding of microstructure and microstructural evolution in this alloy. Finally, assessments of optimal weld process/PWHT combinations for confident application of the alloy in probable service conditions were made. The database of weldment mechanical properties demonstrated that a wide range of properties can be obtained in welds in this alloy. In addition, it was demonstrated that acceptable welds, some with near base metal properties, could be produced from several different initial heat treatments. This capability provides a means for defining process parameters and PWHTs to achieve appropriate properties for different applications, and provides useful flexibility in design and manufacturing. The database also indicated that an important region in welds is the softened region which develops in the heat-affected zone (HAZ), and analysis within the welding metallurgy studies indicated that the development of this region is governed by a complex interaction of precipitate overaging and austenite formation. Models and experimental data were therefore developed to describe overaging and austenite formation during thermal cycling. These models and experimental data can be applied to essentially any thermal cycle, and provide a basis for predicting the evolution of microstructure and properties during thermal processing.

  19. Delineating PAS-HAMP interaction surfaces and signalling-associated changes in the aerotaxis receptor Aer.

    PubMed

    Garcia, Darysbel; Watts, Kylie J; Johnson, Mark S; Taylor, Barry L

    2016-04-01

    The Escherichia coli aerotaxis receptor, Aer, monitors cellular oxygen and redox potential via FAD bound to a cytosolic PAS domain. Here, we show that Aer-PAS controls aerotaxis through direct, lateral interactions with a HAMP domain. This contrasts with most chemoreceptors where signals propagate along the protein backbone from an N-terminal sensor to HAMP. We mapped the interaction surfaces of the Aer PAS, HAMP and proximal signalling domains in the kinase-off state by probing the solvent accessibility of 129 cysteine substitutions. Inaccessible PAS-HAMP surfaces overlapped with a cluster of PAS kinase-on lesions and with cysteine substitutions that crosslinked the PAS β-scaffold to the HAMP AS-2 helix. A refined Aer PAS-HAMP interaction model is presented. Compared to the kinase-off state, the kinase-on state increased the accessibility of HAMP residues (apparently relaxing PAS-HAMP interactions), but decreased the accessibility of proximal signalling domain residues. These data are consistent with an alternating static-dynamic model in which oxidized Aer-PAS interacts directly with HAMP AS-2, enforcing a static HAMP domain that in turn promotes a dynamic proximal signalling domain, resulting in a kinase-off output. When PAS-FAD is reduced, PAS interaction with HAMP is relaxed and a dynamic HAMP and static proximal signalling domain convey a kinase-on output. PMID:26713609

  20. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  1. An introduction to benchmarking in healthcare.

    PubMed

    Benson, H R

    1994-01-01

    Benchmarking--the process of establishing a standard of excellence and comparing a business function or activity, a product, or an enterprise as a whole with that standard--will be used increasingly by healthcare institutions to reduce expenses and simultaneously improve product and service quality. As a component of total quality management, benchmarking is a continuous process by which an organization can measure and compare its own processes with those of organizations that are leaders in a particular area. Benchmarking should be viewed as a part of quality management programs, not as a replacement. There are four kinds of benchmarking: internal, competitive, functional and generic. With internal benchmarking, functions within an organization are compared with each other. Competitive benchmarking partners do business in the same market and provide a direct comparison of products or services. Functional and generic benchmarking are performed with organizations which may have a specific similar function, such as payroll or purchasing, but which otherwise are in a different business. Benchmarking must be a team process because the outcome will involve changing current practices, with effects felt throughout the organization. The team should include members who have subject knowledge; communications and computer proficiency; skills as facilitators and outside contacts; and sponsorship of senior management. Benchmarking requires quantitative measurement of the subject. The process or activity that you are attempting to benchmark will determine the types of measurements used. Benchmarking metrics usually can be classified in one of four categories: productivity, quality, time and cost-related. PMID:10139084

  2. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance: the performance impact of optimization was examined in the context of our methodology for CPU performance characterization, based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are summarized more specifically in this report, along with smaller efforts supported by this grant.

  3. 75 FR 27332 - AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC; Eagle Creek Water Resources, LLC; Eagle Creek Land...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-14

    ... Energy Regulatory Commission AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC; Eagle Creek Water Resources... Comments and Motions To Intervene May 7, 2010. On April 30, 2010, AER NY-Gen, LLC (transferor) and Eagle.... Joseph Klimaszewski, AER NY- Gen, LLC, 613 Plank Road, Forestburgh, New York, 12777; phone (845)...

  4. 77 FR 13592 - AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources, LLC, Eagle Creek Land...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-07

    ... Energy Regulatory Commission AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources... Comments and Motions To Intervene On February 24, 2012, AER NY-Gen, LLC (transferor), Eagle Creek Hydro...' Contact: Transferor: Mr. Joseph Klimaszewski, AER NY- Gen, LLC, P.O. Box 876, East Aurora, NY 14052,...

  5. Structure of CARB-4 and AER-1 Carbenicillin-Hydrolyzing β-Lactamases

    PubMed Central

    Sanschagrin, François; Bejaoui, Noureddine; Levesque, Roger C.

    1998-01-01

    We determined the nucleotide sequences of blaCARB-4 encoding CARB-4 and deduced a polypeptide of 288 amino acids. The gene was characterized as a variant of group 2c carbenicillin-hydrolyzing β-lactamases such as PSE-4, PSE-1, and CARB-3. The level of DNA homology between the bla genes for these β-lactamases varied from 98.7 to 99.9%, while that between these genes and blaCARB-4 encoding CARB-4 was 86.3%. The blaCARB-4 gene was acquired from some other source because it has a G+C content of 39.1%, compared to a G+C content of 67% for typical Pseudomonas aeruginosa genes. DNA sequencing revealed that blaAER-1 shared 60.8% DNA identity with blaPSE-3 encoding PSE-3. The deduced AER-1 β-lactamase peptide was compared to class A, B, C, and D enzymes and had 57.6% identity with PSE-3, including an STHK tetrad at the active site. For CARB-4 and AER-1, conserved canonical amino acid boxes typical of class A β-lactamases were identified in a multiple alignment. Analysis of the DNA sequences flanking blaCARB-4 and blaAER-1 confirmed the importance of gene cassettes acquired via integrons in bla gene distribution. PMID:9687391

  6. Radiative Forcing by Long-Lived Greenhouse Gases: Calculations with the AER Radiative Transfer Models

    SciTech Connect

    Collins, William; Iacono, Michael J.; Delamere, Jennifer S.; Mlawer, Eli J.; Shephard, Mark W.; Clough, Shepard A.; Collins, William D.

    2008-04-01

    A primary component of the observed, recent climate change is the radiative forcing from increased concentrations of long-lived greenhouse gases (LLGHGs). Effective simulation of anthropogenic climate change by general circulation models (GCMs) is strongly dependent on the accurate representation of radiative processes associated with water vapor, ozone and LLGHGs. In the context of the increasing application of the Atmospheric and Environmental Research, Inc. (AER) radiation models within the GCM community, their capability to calculate longwave and shortwave radiative forcing for clear sky scenarios previously examined by the radiative transfer model intercomparison project (RTMIP) is presented. Forcing calculations with the AER line-by-line (LBL) models are very consistent with the RTMIP line-by-line results in the longwave and shortwave. The AER broadband models, in all but one case, calculate longwave forcings within a range of -0.20 to 0.23 W m^-2 of LBL calculations and shortwave forcings within a range of -0.16 to 0.38 W m^-2 of LBL results. These models also perform well at the surface, which RTMIP identified as a level at which GCM radiation models have particular difficulty reproducing LBL fluxes. Heating profile perturbations calculated by the broadband models generally reproduce high-resolution calculations within a few hundredths K d^-1 in the troposphere and within 0.15 K d^-1 in the peak stratospheric heating near 1 hPa. In most cases, the AER broadband models provide radiative forcing results that are in closer agreement with high-resolution calculations than the GCM radiation codes examined by RTMIP, which supports the application of the AER models to climate change research.

  7. Vitamin B12 regulates photosystem gene expression via the CrtJ antirepressor AerR in Rhodobacter capsulatus

    PubMed Central

    Cheng, Zhuo; Li, Keran; Hammad, Loubna A.; Karty, Jonathan A.; Bauer, Carl E.

    2014-01-01

    The tetrapyrroles heme, bacteriochlorophyll and cobalamin (B12) exhibit a complex interrelationship regarding their synthesis. In this study, we demonstrate that AerR functions as an antirepressor of the tetrapyrrole regulator CrtJ. We show that purified AerR contains B12 that is bound to a conserved histidine (His145) in AerR. The interaction of AerR with CrtJ was further demonstrated in vitro by pull-down experiments using AerR as bait and quantified using microscale thermophoresis. DNase I DNA footprint assays show that AerR containing B12 inhibits CrtJ binding to the bchC promoter. We further show that bchC expression is greatly repressed in a B12 auxotroph of Rhodobacter capsulatus and that B12 regulation of gene expression is mediated by AerR’s ability to function as an antirepressor of CrtJ. This study thus provides a mechanism for how the essential tetrapyrrole cobalamin controls the synthesis of bacteriochlorophyll, an essential component of the photosystem. PMID:24329562

  8. Benchmarking for Higher Education.

    ERIC Educational Resources Information Center

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson and Helen…

  9. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
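
    To make the closed-loop idea concrete, here is a minimal Python sketch of a benchmark harness: a controller drives a simulated one-joint plant with an unknown constant external force, and an error-driven update improves control online. The plant, gains, and learning rule are invented for illustration and are far simpler than the paper's randomized simulation family.

        import numpy as np

        rng = np.random.default_rng(0)

        mass, dt = 1.0, 0.001
        external_force = rng.uniform(-1.0, 1.0)   # unknown to the controller

        kp, kd, lr = 50.0, 5.0, 0.5   # fixed PD gains and learning rate
        w = 0.0                       # adaptive feedforward term, learned online
        pos, vel, target = 0.0, 0.0, 1.0

        errors = []
        for step in range(5000):
            err = target - pos
            u = kp * err - kd * vel + w    # control signal = PD + learned term
            w += lr * err * dt             # error-driven learning rule
            vel += (u + external_force) / mass * dt
            pos += vel * dt
            errors.append(abs(err))

        # Benchmark metric: mean absolute tracking error over the run.
        print(f"mean |error| = {np.mean(errors):.4f}")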

  10. Closed-Loop Neuromorphic Benchmarks.

    PubMed

    Stewart, Terrence C; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of "minimal" simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820

  11. Research Reactor Benchmarks

    SciTech Connect

    Ravnik, Matjaz; Jeraj, Robert

    2003-09-15

    A criticality benchmark experiment performed at the Jozef Stefan Institute TRIGA Mark II research reactor is described. This experiment and its evaluation are given as examples of benchmark experiments at research reactors. For this reason the differences and possible problems compared to other benchmark experiments are particularly emphasized. General guidelines for performing criticality benchmarks in research reactors are given. The criticality benchmark experiment was performed in a normal operating reactor core using commercially available fresh 20% enriched fuel elements containing 12 wt% uranium in uranium-zirconium hydride fuel material. Experimental conditions to minimize experimental errors and to enhance computer modeling accuracy are described. Uncertainties in multiplication factor due to fuel composition and geometry data are analyzed by sensitivity analysis. The simplifications in the benchmark model compared to the actual geometry are evaluated. Sample benchmark calculations with the MCNP and KENO Monte Carlo codes are given.
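
    The sensitivity analysis mentioned above can be summarized by first-order uncertainty propagation: the variance of the multiplication factor is S^T C S, where S holds the sensitivities of k-effective to each input parameter and C is the covariance matrix of those inputs. The numbers in this Python sketch are placeholders, not values from the TRIGA evaluation.

        import numpy as np

        # Sensitivities dk/dp (relative) for three illustrative inputs:
        # uranium mass, hydrogen-to-zirconium ratio, fuel element radius.
        S = np.array([0.12, -0.05, 0.08])
        sigma = np.array([0.004, 0.010, 0.002])   # 1-sigma input uncertainties
        C = np.diag(sigma**2)                     # inputs assumed uncorrelated

        var_k = S @ C @ S
        print(f"1-sigma uncertainty in k-effective: {np.sqrt(var_k):.5f}")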

  12. 42 CFR 422.258 - Calculation of benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Calculation of benchmarks. 422.258 Section 422.258... and Plan Approval § 422.258 Calculation of benchmarks. (a) The term “MA area-specific non-drug monthly... the plan bids. (c) Calculation of MA regional non-drug benchmark amount. CMS calculates the...

  13. Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine

    2004-01-01

    We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.

  14. Neutron Reference Benchmark Field Specification: ACRR 44 Inch Lead-Boron (LB44) Bucket Environment (ACRR-LB44-CC-32-CL).

    SciTech Connect

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.; Vehar, David W.

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the 44 inch Lead-Boron (LB44) bucket reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  15. A polishing hybrid AER/UF membrane process for the treatment of a high DOC content surface water.

    PubMed

    Humbert, H; Gallard, H; Croué, J-P

    2012-03-15

    The efficacy of a combined AER/UF (Anion Exchange Resin/Ultrafiltration) process for the polishing treatment of a high DOC (Dissolved Organic Carbon) content (>8 mgC/L) surface water was investigated at lab-scale using a strong base AER. Both resin dose and bead size had a significant impact on the kinetics of DOC removal for short contact times (i.e. <15 min). For resin doses higher than 700 mg/L and median bead sizes below 250 μm, DOC removal remained constant after 30 min of contact time with very high removal rates (80%). Optimum AER treatment conditions were applied in combination with UF membrane filtration on water previously treated by coagulation-flocculation (i.e. 3 mgC/L). More severe fouling was observed for each filtration run in the presence of AER. This fouling was shown to be mainly reversible and caused by the progressive attrition of the AER through the centrifugal pump, leading to the production of resin particles below 50 μm in diameter. More importantly, the presence of AER significantly lowered the irreversible fouling (loss of permeability recorded after backwash) and reduced the DOC content of the clarified water to 1.8 mgC/L (40% removal rate), a concentration that remained almost constant throughout the experiment. PMID:22200260

  16. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    SciTech Connect

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  17. PyMPI Dynamic Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2007-02-16

    Pynamic is a benchmark designed to test a system's ability to handle the Dynamic Linking and Loading (DLL) requirements of Python-based scientific applications. This benchmark was developed to add a workload to our testing environment, a workload that represents a newly emerging class of DLL behaviors. Pynamic builds on pyMPI, an MPI extension to Python, with C-extension dummy codes and a glue layer that facilitates linking and loading of the generated dynamic modules into the resulting pyMPI. Pynamic is configurable, enabling modeling of the static properties of a specific code as described in section 5. It does not, however, model any significant computations of the target and hence is not subjected to the same level of control as the target code. In fact, HPC computer vendors and tool developers will be encouraged to add it to their testing suites once the code release is completed. An ability to produce and run this benchmark is an effective test for validating the capability of a compiler and linker/loader as well as an OS kernel and other runtime systems of HPC computer vendors. In addition, the benchmark is designed as a test case for stressing code development tools. Though Python has recently gained popularity in the HPC community, its heavy DLL operations have hindered certain HPC code development tools, notably parallel debuggers, from performing optimally.
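
    Pynamic itself generates C-extension shared libraries; as a much smaller stand-in, the Python sketch below times the import of many generated pure-Python modules to show the shape of such a dynamic-loading stress test. All names and counts are invented, and this is not the Pynamic code.

        import importlib, pathlib, sys, tempfile, time

        workdir = pathlib.Path(tempfile.mkdtemp())
        n_modules = 500
        for i in range(n_modules):
            # Each generated module stands in for one dummy dynamic library.
            (workdir / f"dummy_{i}.py").write_text(f"def entry():\n    return {i}\n")

        sys.path.insert(0, str(workdir))
        start = time.perf_counter()
        for i in range(n_modules):
            importlib.import_module(f"dummy_{i}").entry()
        print(f"loaded {n_modules} modules in {time.perf_counter() - start:.2f} s")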

  18. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  19. ICSBEP Benchmarks For Nuclear Data Applications

    NASA Astrophysics Data System (ADS)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  20. Making Benchmark Testing Work

    ERIC Educational Resources Information Center

    Herman, Joan L.; Baker, Eva L.

    2005-01-01

    Many schools are moving to develop benchmark tests to monitor their students' progress toward state standards throughout the academic year. Benchmark tests can provide the ongoing information that schools need to guide instructional programs and to address student learning problems. The authors discuss six criteria that educators can use to…

  1. Thermal Performance Benchmarking (Presentation)

    SciTech Connect

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  2. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark, allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark, a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently, models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone.

  3. Verification and validation benchmarks.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
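
    A code verification benchmark based on manufactured (or classical analytical) solutions can be reduced to checking the observed order of accuracy under mesh refinement. The Python sketch below does this for a second-order finite-difference solve of -u'' = f on (0,1), with the manufactured solution u = sin(pi x); it is a generic illustration, not one of the paper's recommended benchmarks.

        import numpy as np

        def solve_poisson(n):
            """Second-order FD solve of -u'' = f on (0,1) with u(0)=u(1)=0."""
            h = 1.0 / (n + 1)
            x = np.linspace(h, 1.0 - h, n)
            f = np.pi**2 * np.sin(np.pi * x)   # source from u = sin(pi x)
            A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
                 - np.diag(np.ones(n - 1), -1)) / h**2
            return x, np.linalg.solve(A, f)

        errs = []
        for n in (15, 31, 63):                 # h halves at each refinement
            x, u = solve_poisson(n)
            errs.append(np.max(np.abs(u - np.sin(np.pi * x))))

        # The observed order should approach 2 if the coding is correct.
        orders = [np.log2(errs[i] / errs[i + 1]) for i in range(len(errs) - 1)]
        print("observed orders of accuracy:", orders)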

  4. The General Concept of Benchmarking and Its Application in Higher Education in Europe

    ERIC Educational Resources Information Center

    Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna

    2009-01-01

    The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…

  5. Benchmarking neuromorphic systems with Nengo

    PubMed Central

    Bekolay, Trevor; Stewart, Terrence C.; Eliasmith, Chris

    2015-01-01

    Nengo is a software package for designing and simulating large-scale neural models. Nengo is architected such that the same Nengo model can be simulated on any of several Nengo backends with few to no modifications. Backends translate a model to specific platforms, which include GPUs and neuromorphic hardware. Nengo also contains a large test suite that can be run with any backend and focuses primarily on functional performance. We propose that Nengo's large test suite can be used to benchmark neuromorphic hardware's functional performance and simulation speed in an efficient, unbiased, and future-proof manner. We implement four benchmark models and show that Nengo can collect metrics across five different backends that identify situations in which some backends perform more accurately or quickly. PMID:26539076
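
    A minimal example of the workflow described above, using the public Nengo API: build one model, run it under a backend's Simulator, and score functional performance. The x**2 task and all parameters are illustrative choices, not one of the paper's four benchmark models; swapping in another backend's Simulator class is assumed to follow the usual Nengo convention.

        import numpy as np
        import nengo

        with nengo.Network(seed=1) as model:
            stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
            ens = nengo.Ensemble(n_neurons=100, dimensions=1)
            out = nengo.Node(size_in=1)
            nengo.Connection(stim, ens)
            nengo.Connection(ens, out, function=lambda x: x**2)  # decode x**2
            p_in = nengo.Probe(stim, synapse=0.01)
            p_out = nengo.Probe(out, synapse=0.01)

        # Another backend would replace nengo.Simulator on the next line.
        with nengo.Simulator(model) as sim:
            sim.run(1.0)

        rmse = np.sqrt(np.mean((sim.data[p_out] - sim.data[p_in] ** 2) ** 2))
        print(f"functional error (RMSE): {rmse:.4f}")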

  6. A performance geodynamo benchmark

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Heien, E. M.

    2014-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. However, to approach the parameter regime of the Earth's outer core, we need a massively parallel computational environment for extremely large spatial resolutions. Local methods are expected to be more suitable for massively parallel computation because they need less data communication than the spherical harmonics expansion, but only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, some numerical dynamo models using spherical harmonics expansion have performed successfully with thousands of processes. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of the present benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare among as many numerical methods as possible, we consider the model with the insulated magnetic boundary by Christensen et al. (2001) and with the pseudo-vacuum magnetic boundary, because the pseudo-vacuum boundaries are easier to implement with local methods than the magnetically insulated boundaries. We consider two kinds of benchmarks, the so-called accuracy benchmark and performance benchmark, and here report the results of the performance benchmark. We run the participating dynamo models under the same computational environment (XSEDE TACC Stampede) and investigate computational performance. To simplify the problem, we choose the same model and parameter regime as the accuracy benchmark test, but perform the simulations with spatial resolutions as fine as possible to investigate computational capability

  7. Uncertainties in modelling Mt. Pinatubo eruption with 2-D AER model and CCM SOCOL

    NASA Astrophysics Data System (ADS)

    Kenzelmann, P.; Weisenstein, D.; Peter, T.; Luo, B. P.; Rozanov, E.; Fueglistaler, S.; Thomason, L. W.

    2009-04-01

    Large volcanic eruptions may introduce a strong forcing on climate and challenge the skills of climate models. In addition to the short-time attenuation of solar light by ashes, the formation of stratospheric sulphate aerosols, due to volcanic sulphur dioxide injection into the lower stratosphere, may lead to a significant enhancement of the global albedo. The sulphate aerosols have a residence time of about 2 years. As a consequence of the enhanced sulphate aerosol concentration, both the stratospheric chemistry and dynamics are strongly affected. Due to absorption of longwave and near-infrared radiation, the temperature in the lower stratosphere increases. So far, chemistry climate models have overestimated this warming [Eyring et al. 2006]. We present an extensive validation of extinction measurements and model runs of the eruption of Mt. Pinatubo in 1991. Even though the Mt. Pinatubo eruption is the best quantified volcanic eruption of this magnitude, the measurements show considerable uncertainties. For instance, the total amount of sulphur emitted to the stratosphere ranges from 5 to 12 Mt [e.g. Guo et al. 2004; McCormick, 1992]. The largest uncertainties are in the specification of the main aerosol cloud. SAGE II, for instance, could not measure the peak of the aerosol extinction for about 1.5 years, because optical termination was reached. The gap-filling of the SAGE II data [Thomason and Peter, 2006] using lidar measurements underestimates the total extinctions in the tropics for the first half year after the eruption by 30% compared to AVHRR [Russell et al. 1992]. The same applies to the optical dataset described by Stenchikov et al. [1998]. We compare these extinction data derived from measurements with extinctions derived from AER 2-D aerosol model calculations [Weisenstein et al., 2007]. Full microphysical calculations with injections of 14, 17, 20 and 26 Mt SO2 into the lower stratosphere were performed. The optical aerosol properties derived from SAGE II

  8. Toxicological Benchmarks for Wildlife

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red-tailed hawk
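
    The first-tier screening comparison lends itself to a one-line rule: retain a contaminant as a COPC when its measured concentration exceeds the benchmark. The Python sketch below shows the mechanics; the chemicals and values are placeholders, not benchmarks from the report.

        # NOAEL-based benchmarks and measured media concentrations (mg/L),
        # purely illustrative values.
        benchmark = {"cadmium": 0.002, "zinc": 0.1, "toluene": 0.5}
        measured = {"cadmium": 0.005, "zinc": 0.03, "toluene": 0.7}

        copcs = [chem for chem, conc in measured.items() if conc > benchmark[chem]]
        print("retained as COPCs:", copcs)   # ['cadmium', 'toluene']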

  9. Benchmarking expert system tools

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  10. Action-Oriented Benchmarking: Using the CEUS Database to Benchmark Commercial Buildings in California

    SciTech Connect

    Mathew, Paul; Mills, Evan; Bourassa, Norman; Brook, Martha

    2008-02-01

    The 2006 Commercial End Use Survey (CEUS) database developed by the California Energy Commission is a far richer source of energy end-use data for non-residential buildings than has previously been available and opens the possibility of creating new and more powerful energy benchmarking processes and tools. In this article--Part 2 of a two-part series--we describe the methodology and selected results from an action-oriented benchmarking approach using the new CEUS database. This approach goes beyond whole-building energy benchmarking to more advanced end-use and component-level benchmarking that enables users to identify and prioritize specific energy efficiency opportunities - an improvement on benchmarking tools typically in use today.

  11. BENCHMARKING SUSTAINABILITY ENGINEERING EDUCATION

    EPA Science Inventory

    The goals of this project are to develop and apply a methodology for benchmarking curricula in sustainability engineering and to identify individuals active in sustainability engineering education.

  12. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  13. TWODANT benchmark. Progress report

    SciTech Connect

    Lee, Sung

    1994-01-11

    TWODANT (Two-Dimensional, Diffusion-Accelerated, Neutral-Particle Transport) code has been benchmarked against six critical experiments (Jezebel plutonium critical assembly), and the resulting k-effective values have been compared with those of the KENO and MCNP codes.

  14. NAS Grid Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.
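
    The data-flow-graph structure of NGB can be sketched with a toy runner: each node waits for its predecessors' initialization data, runs its task, and the harness reports turnaround time. The node names echo NPB task names, but the graph and tasks here are invented for illustration.

        import time

        graph = {"BT": [], "SP": [], "LU": ["BT", "SP"], "FT": ["LU"]}

        def run_node(name, inputs):
            return sum(inputs) + len(name)   # stand-in for an NPB-like task

        start = time.perf_counter()
        results, pending = {}, dict(graph)
        while pending:
            for name, deps in list(pending.items()):
                if all(d in results for d in deps):   # initialization data ready
                    results[name] = run_node(name, [results[d] for d in deps])
                    del pending[name]
        print(f"turnaround time: {time.perf_counter() - start:.6f} s")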

  15. DOE Commercial Building Benchmark Models: Preprint

    SciTech Connect

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented to use with the EnergyPlus energy simulation program, the benchmark models are publicly available and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

  16. XWeB: The XML Warehouse Benchmark

    NASA Astrophysics Data System (ADS)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  17. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  18. PNNL Information Technology Benchmarking

    SciTech Connect

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  19. The FTIO Benchmark

    NASA Technical Reports Server (NTRS)

    Fagerstrom, Frederick C.; Kuszmaul, Christopher L.; Woo, Alex C. (Technical Monitor)

    1999-01-01

    We introduce a new benchmark for measuring the performance of parallel input/output. This benchmark has flexible initialization, size, and scaling properties that allow it to satisfy seven criteria for practical parallel I/O benchmarks. We obtained performance results while running on an SGI Origin2000 computer with various numbers of processors: with 4 processors, the performance was 68.9 Mflop/s with 0.52 of the time spent on I/O; with 8 processors, the performance was 139.3 Mflop/s with 0.50 of the time spent on I/O; with 16 processors, the performance was 173.6 Mflop/s with 0.43 of the time spent on I/O; and with 32 processors, the performance was 259.1 Mflop/s with 0.47 of the time spent on I/O.

  20. Benchmarking. It's the future.

    PubMed

    Fazzi, Robert A; Agoglia, Robert V; Harlow, Lynn

    2002-11-01

    You can't go to a state conference, read a home care publication or log on to an Internet listserv ... without hearing or reading someone ... talk about benchmarking. What are your average case mix weights? How many visits are your nurses averaging per day? What is your average caseload for full time nurses in the field? What is your profit or loss per episode? The benchmark systems now available in home care potentially can serve as an early warning and partial protection for agencies. Agencies can collect data, analyze the outcomes, and through comparative benchmarking, determine where they are competitive and where they need to improve. These systems clearly provide agencies with the opportunity to be more proactive. PMID:12436898

  1. Accelerated randomized benchmarking

    NASA Astrophysics Data System (ADS)

    Granade, Christopher; Ferrie, Christopher; Cory, D. G.

    2015-01-01

    Quantum information processing offers promising advances for a wide range of fields and applications, provided that we can efficiently assess the performance of the control applied in candidate systems. That is, we must be able to determine whether we have implemented a desired gate, and refine accordingly. Randomized benchmarking reduces the difficulty of this task by exploiting symmetries in quantum operations. Here, we bound the resources required for benchmarking and show that, with prior information, we can achieve several orders of magnitude better accuracy than in traditional approaches to benchmarking. Moreover, by building on state-of-the-art classical algorithms, we reach these accuracies with near-optimal resources. Our approach requires an order of magnitude less data to achieve the same accuracies and to provide online estimates of the errors in the reported fidelities. We also show that our approach is useful for physical devices by comparing to simulations.
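
    In standard randomized benchmarking, the average sequence fidelity decays as F(m) = A p^m + B with sequence length m, and the decay constant p yields an average error rate r = (1 - p)(d - 1)/d. The Python sketch below fits that curve to synthetic single-qubit data; it shows the conventional analysis, not the accelerated Bayesian method of the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(2)
        m = np.arange(1, 200, 10)
        F = 0.5 * 0.985**m + 0.5 + rng.normal(0.0, 0.005, m.size)  # synthetic data

        decay = lambda m, A, p, B: A * p**m + B
        (A, p, B), _ = curve_fit(decay, m, F, p0=(0.5, 0.98, 0.5))

        d = 2                        # single-qubit Hilbert space dimension
        r = (1 - p) * (d - 1) / d    # average gate error rate
        print(f"estimated p = {p:.4f}, error rate r = {r:.5f}")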

  2. Benchmarking ENDF/B-VII.0

    NASA Astrophysics Data System (ADS)

    van der Marck, Steven C.

    2006-12-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data, more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  3. Dual requirement of ectodermal Smad4 during AER formation and termination of feedback signaling in mouse limb buds.

    PubMed

    Benazet, Jean-Denis; Zeller, Rolf

    2013-09-01

    BMP signaling is pivotal for normal limb bud development in vertebrate embryos and genetic analysis of receptors and ligands in the mouse revealed their requirement in both mesenchymal and ectodermal limb bud compartments. In this study, we genetically assessed the potential essential functions of SMAD4, a mediator of canonical BMP/TGFβ signal transduction, in the mouse limb bud ectoderm. Msx2-Cre was used to conditionally inactivate Smad4 in the ectoderm of fore- and hindlimb buds. In hindlimb buds, the Smad4 inactivation disrupts the establishment and signaling by the apical ectodermal ridge (AER) from early limb bud stages onwards, which results in severe hypoplasia and/or aplasia of zeugo- and autopodal skeletal elements. In contrast, the developmentally later inactivation of Smad4 in forelimb buds does not alter AER formation and signaling, but prolongs epithelial-mesenchymal feedback signaling in advanced limb buds. The late termination of SHH and AER-FGF signaling delays distal progression of digit ray formation and inhibits interdigit apoptosis. In summary, our genetic analysis reveals the temporally and functionally distinct dual requirement of ectodermal Smad4 during initiation and termination of AER signaling. PMID:23818325

  4. A Suite of Criticality Benchmarks for Validating Nuclear Data

    SciTech Connect

    Stephanie C. Frankle

    1999-04-01

    The continuous-energy neutron data library ENDF60 for use with MCNP™ was released in the fall of 1994, and was based on ENDF/B-VI evaluations through Release 2. As part of the data validation process for this library, a number of criticality benchmark calculations were performed. The original suite of nine criticality benchmarks used to test ENDF60 has now been expanded to 86 benchmarks. This report documents the specifications for the suite of 86 criticality benchmarks that have been developed for validating nuclear data.

  5. NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks, and mention NAS's future plans for the NPB.

  6. Benchmarks: WICHE Region 2012

    ERIC Educational Resources Information Center

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  7. Python/Lua Benchmarks

    SciTech Connect

    Busby, L.

    2014-08-01

    This is an adaptation of the pre-existing Scimark benchmark code to a variety of Python and Lua implementations. It also measures performance of the Fparser expression parser and C and C++ code on a variety of simple scientific expressions.

  8. Changes in Benchmarked Training.

    ERIC Educational Resources Information Center

    Bassi, Laurie J.; Cheney, Scott

    1996-01-01

    Comparisons of the training practices of large companies confirm that the delivery and financing of training is changing rapidly. Companies in the American Society for Training and Development Benchmarking Forum are delivering less training with permanent staff and more with strategic use of technology, contract staff, and external providers,…

  9. Monte Carlo Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  10. Benchmarks Momentum on Increase

    ERIC Educational Resources Information Center

    McNeil, Michele

    2008-01-01

    No longer content with the patchwork quilt of assessments used to measure states' K-12 performance, top policy groups are pushing states toward international benchmarking as a way to better prepare students for a competitive global economy. The National Governors Association, the Council of Chief State School Officers, and the standards-advocacy…

  11. Benchmarking the World's Best

    ERIC Educational Resources Information Center

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  12. Sp6 and Sp8 Transcription Factors Control AER Formation and Dorsal-Ventral Patterning in Limb Development

    PubMed Central

    Haro, Endika; Delgado, Irene; Junco, Marisa; Yamada, Yoshihiko; Mansouri, Ahmed; Oberg, Kerby C.; Ros, Marian A.

    2014-01-01

    The formation and maintenance of the apical ectodermal ridge (AER) is critical for the outgrowth and patterning of the vertebrate limb. The induction of the AER is a complex process that relies on integrated interactions among the Fgf, Wnt, and Bmp signaling pathways that operate within the ectoderm and between the ectoderm and the mesoderm of the early limb bud. The transcription factors Sp6 and Sp8 are expressed in the limb ectoderm and AER during limb development. Sp6 mutant mice display a mild syndactyly phenotype while Sp8 mutants exhibit severe limb truncations. Both mutants show defects in AER maturation and in dorsal-ventral patterning. To gain further insights into the role Sp6 and Sp8 play in limb development, we have produced mice lacking both Sp6 and Sp8 activity in the limb ectoderm. Remarkably, the elimination or significant reduction in Sp6;Sp8 gene dosage leads to tetra-amelia; initial budding occurs, but neither Fgf8 nor En1 are activated. Mutants bearing a single functional allele of Sp8 (Sp6−/−;Sp8+/−) exhibit a split-hand/foot malformation phenotype with double dorsal digit tips probably due to an irregular and immature AER that is not maintained in the center of the bud and on the abnormal expansion of Wnt7a expression to the ventral ectoderm. Our data are compatible with Sp6 and Sp8 working together and in a dose-dependent manner as indispensable mediators of Wnt/βcatenin and Bmp signaling in the limb ectoderm. We suggest that the function of these factors links proximal-distal and dorsal-ventral patterning. PMID:25166858

  13. Sp6 and Sp8 transcription factors control AER formation and dorsal-ventral patterning in limb development.

    PubMed

    Haro, Endika; Delgado, Irene; Junco, Marisa; Yamada, Yoshihiko; Mansouri, Ahmed; Oberg, Kerby C; Ros, Marian A

    2014-08-01

    The formation and maintenance of the apical ectodermal ridge (AER) is critical for the outgrowth and patterning of the vertebrate limb. The induction of the AER is a complex process that relies on integrated interactions among the Fgf, Wnt, and Bmp signaling pathways that operate within the ectoderm and between the ectoderm and the mesoderm of the early limb bud. The transcription factors Sp6 and Sp8 are expressed in the limb ectoderm and AER during limb development. Sp6 mutant mice display a mild syndactyly phenotype while Sp8 mutants exhibit severe limb truncations. Both mutants show defects in AER maturation and in dorsal-ventral patterning. To gain further insights into the role Sp6 and Sp8 play in limb development, we have produced mice lacking both Sp6 and Sp8 activity in the limb ectoderm. Remarkably, the elimination or significant reduction in Sp6;Sp8 gene dosage leads to tetra-amelia; initial budding occurs, but neither Fgf8 nor En1 are activated. Mutants bearing a single functional allele of Sp8 (Sp6-/-;Sp8+/-) exhibit a split-hand/foot malformation phenotype with double dorsal digit tips probably due to an irregular and immature AER that is not maintained in the center of the bud and on the abnormal expansion of Wnt7a expression to the ventral ectoderm. Our data are compatible with Sp6 and Sp8 working together and in a dose-dependent manner as indispensable mediators of Wnt/βcatenin and Bmp signaling in the limb ectoderm. We suggest that the function of these factors links proximal-distal and dorsal-ventral patterning. PMID:25166858

  14. Benchmark Problems for Spacecraft Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Burns, Richard D.; Folta, David C.

    2003-01-01

    To provide high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions.

  15. Storage-Intensive Supercomputing Benchmark Study

    SciTech Connect

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order of magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.
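
    The "fraction of time in I/O" figures quoted above can be measured by timing the I/O and compute phases separately. The Python sketch below does this for a synthetic 16 MiB file and a stand-in compute kernel; the file name, size, and kernel are arbitrary choices for illustration.

        import os, time

        path = "testdata.bin"
        with open(path, "wb") as f:       # create synthetic test data
            f.write(os.urandom(1 << 24))  # 16 MiB

        t0 = time.perf_counter()
        with open(path, "rb") as f:       # I/O phase
            data = f.read()
        t_io = time.perf_counter() - t0

        t1 = time.perf_counter()
        checksum = sum(data[::4096])      # stand-in compute phase
        t_cpu = time.perf_counter() - t1

        print(f"I/O fraction: {t_io / (t_io + t_cpu):.2f}")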

  16. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty-eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  17. Radiography benchmark 2014

    SciTech Connect

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-31

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  18. Radiography benchmark 2014

    NASA Astrophysics Data System (ADS)

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-01

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  19. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty-eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  20. Sequoia Messaging Rate Benchmark

    SciTech Connect

    Friedley, Andrew

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
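
    A minimal sketch of the rank layout described above (plain Python, not part of the benchmark distribution; the function and variable names are illustrative):

        def sequoia_layout(num_cores, num_nbors):
            """Map MPI ranks to roles: ranks 0..num_cores-1 sit on the
            'core' node; core rank i then gets num_nbors neighbor ranks
            placed on separate nodes."""
            core_ranks = list(range(num_cores))
            neighbors = {}
            for i in core_ranks:
                start = num_cores + i * num_nbors
                neighbors[i] = list(range(start, start + num_nbors))
            return core_ranks, neighbors, num_cores + num_cores * num_nbors

        core, nbrs, total = sequoia_layout(8, 4)
        print(total)    # 40, i.e. 8 + 8 * 4
        print(nbrs[0])  # [8, 9, 10, 11] -- neighbors of core rank 0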

  1. Sequoia Messaging Rate Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.

  2. MPI Multicore Linktest Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2008-01-25

    The MPI Multicore Linktest (LinkTest) measures the aggregate bandwidth from/to a multicore node in a parallel system. It allows the user to specify a variety of different node layout and communication routine variations and reports the maximal observed bandwidth across all specified options. In particular, this benchmark is able to vary the number of tasks on the root node and thereby allows users to study the impact of multicore architectures on MPI communication performance.

  3. Benchmarking the billing office.

    PubMed

    Woodcock, Elizabeth W; Williams, A Scott; Browne, Robert C; King, Gerald

    2002-09-01

    Benchmarking data related to human and financial resources in the billing process allows an organization to allocate its resources more effectively. Analyzing human resources used in the billing process helps determine cost-effective staffing. The deployment of human resources in a billing office affects timeliness of payment and the ability to maximize revenue potential. Analyzing financial resources helps an organization allocate those resources more effectively. PMID:12235973

  4. Algebraic Multigrid Benchmark

    SciTech Connect

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the Boomer AMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.
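
    The "Laplace type" default problem can be pictured with a toy analogue; the sketch below (NumPy/SciPy, unrelated to the hypre code base) assembles a small five-point Laplacian and solves it directly, which is the step an algebraic multigrid solver replaces at scale:

        import numpy as np
        from scipy.sparse import diags, identity, kron
        from scipy.sparse.linalg import spsolve

        def laplacian_2d(n):
            """Five-point finite-difference Laplacian on an n-by-n grid."""
            T = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
            return kron(identity(n), T) + kron(T, identity(n))

        n = 32
        A = laplacian_2d(n).tocsr()
        b = np.ones(n * n)   # uniform source term
        x = spsolve(A, b)    # AMG would replace this direct solve at scale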

  5. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage.

  6. Simvastatin maintains steady patterns of GFR and improves AER and expression of slit diaphragm proteins in type II diabetes.

    PubMed

    Tonolo, G; Velussi, M; Brocco, E; Abaterusso, C; Carraro, A; Morgia, G; Satta, A; Faedda, R; Abhyankar, A; Luthman, H; Nosadini, R

    2006-07-01

    The factors determining the course of glomerular filtration rate (GFR) and albumin excretion rate (AER) and the expression of mRNA of slit diaphragm (SD) and podocyte proteins in microalbuminuric, hypertensive type II diabetic patients are not fully understood. GFR, AER, and SD protein mRNA were studied in 86 microalbuminuric, hypertensive, type II diabetics at baseline and after 4-year random double-blind treatment either with 40 mg simvastatin (Group 1) or with 30 g cholestyramine (Group 2) per day. At baseline, both groups had shown a GFR decay of 3 ml/min/1.73 m(2) per year over the previous 2-4 years. Both Groups 1 and 2 showed a significant decrease of low-density lipoprotein cholesterol levels after simvastatin and cholestyramine treatment (P<0.01). No change from baseline values was observed for hs-C-reactive protein and interleukin-6. A significant decrease of 8-hydroxydeoxyguanosine urinary excretion was observed after simvastatin treatment. GFR did not change from baseline with simvastatin, whereas a decrease was observed with cholestyramine treatment (simvastatin vs cholestyramine: -0.21 vs -2.75 ml/min/1.73 m(2), P<0.01). AER decreased in Group 1 (P<0.01), but not in Group 2 patients. Real-time polymerase chain reaction measurements of mRNA for SD proteins (CD2AP, FAT, Actn 4, NPHS1, and NPHS2) in kidney biopsy specimens showed a significant increase after simvastatin, but not cholestyramine, treatment. Four-year treatment with simvastatin, but not cholestyramine, maintains steady patterns of GFR and improves AER and the expression of SD proteins in type II diabetes, despite similar hypocholesterolemic effects in the circulation. PMID:16710349

  7. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
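
    As an illustration of the kind of whole-facility metric such guides track, the sketch below computes power usage effectiveness (PUE), a standard data-center ratio; the function and field names are invented, not taken from the guide's spreadsheet templates:

        def pue(total_facility_kwh, it_equipment_kwh):
            """Power usage effectiveness: total facility energy divided by
            energy delivered to IT equipment. 1.0 is the ideal floor."""
            return total_facility_kwh / it_equipment_kwh

        # Example: 1.8 GWh total site energy vs 1.0 GWh to IT loads
        print(pue(1_800_000, 1_000_000))  # 1.8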

  8. Benchmark on outdoor scenes

    NASA Astrophysics Data System (ADS)

    Zhang, Hairong; Wang, Cheng; Chen, Yiping; Jia, Fukai; Li, Jonathan

    2016-03-01

    Depth super-resolution is becoming popular in computer vision, and most test data are based on indoor data sets with ground-truth measurements, such as Middlebury. However, indoor data sets are mainly acquired with structured-light techniques under ideal conditions, which cannot represent the real world under natural light. Unlike indoor scenes, the uncontrolled outdoor environment is much more complicated and is rich in both visual and depth texture. For that reason, we develop a more challenging and meaningful outdoor benchmark for depth super-resolution using the state-of-the-art active laser scanning system.

  9. Algebraic Multigrid Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the Boomer AMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.

  10. Benchmarking emerging logic devices

    NASA Astrophysics Data System (ADS)

    Nikonov, Dmitri

    2014-03-01

    As complementary metal-oxide-semiconductor field-effect transistors (CMOS FETs) are scaled to ever smaller sizes by the semiconductor industry, the demand is growing for emerging logic devices to supplement CMOS in various special functions. Research directions and concepts for such devices are overviewed. They include tunneling devices, graphene-based devices, spintronic devices, etc. The methodology to estimate the future performance of emerging (beyond-CMOS) devices and of simple logic circuits based on them is explained. Results of benchmarking are used to identify the more promising concepts and to map pathways for the improvement of beyond-CMOS computing.

  11. A Bio-Inspired AER Temporal Tri-Color Differentiator Pixel Array.

    PubMed

    Farian, Łukasz; Leñero-Bardallo, Juan Antonio; Häfliger, Philipp

    2015-10-01

    This article investigates the potential of a bio-inspired vision sensor with pixels that detect transients between three primary colors. The in-pixel color processing is inspired by the color opponency found in mammalian retinas. Color transitions in a pixel are represented by voltage spikes, which are akin to a neuron's action potential. These spikes are conveyed off-chip by the Address Event Representation (AER) protocol. To achieve sensitivity to three different color spectra within the visual spectrum, each pixel has three stacked photodiodes at different depths in the silicon substrate. The sensor has been fabricated in the standard TSMC 90 nm CMOS technology. A post-processing method to decode events into color transitions has been proposed and implemented as a custom interface to display real-time color changes in the visual scene. Experimental results are provided. Color transitions can be detected at high speed (up to 2.7 kHz). The sensor has a dynamic range of 58 dB and a power consumption of 22.5 mW. This type of sensor can be of use in industrial, robotic, automotive and other applications where essential information is contained in transient emission shifts within the visual spectrum. PMID:26540694
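
    A rough sketch of the post-processing idea, assuming an event format of (pixel address, color channel, timestamp); the chip's actual event encoding and decoding rules are not reproduced here:

        # Each AER event is (x, y, channel, t): a spike from one of the
        # three stacked photodiodes. A color transition is inferred when
        # activity at a pixel shifts from one channel to another.
        def decode_transitions(events):
            last_channel = {}   # (x, y) -> most recently active channel
            transitions = []
            for x, y, channel, t in events:
                prev = last_channel.get((x, y))
                if prev is not None and prev != channel:
                    transitions.append((x, y, prev, channel, t))
                last_channel[(x, y)] = channel
            return transitions

        events = [(3, 5, 'R', 10), (3, 5, 'R', 12), (3, 5, 'B', 15)]
        print(decode_transitions(events))  # [(3, 5, 'R', 'B', 15)]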

  12. Continued development and validation of the AER two-dimensional interactive model

    NASA Technical Reports Server (NTRS)

    Ko, M. K. W.; Sze, N. D.; Shia, R. L.; Mackay, M.; Weisenstein, D. K.; Zhou, S. T.

    1996-01-01

    Results from two-dimensional chemistry-transport models have been used to predict the future behavior of ozone in the stratosphere. Since the transport circulation, temperature, and aerosol surface area are fixed in these models, they cannot account for the effects of changes in these quantities, which could be modified because of ozone redistribution and/or other changes in the troposphere associated with climate change. Interactive two-dimensional models, which calculate the transport circulation and temperature along with the concentrations of the chemical species, can provide answers to complement the results from three-dimensional model calculations. In this project, we performed the following tasks in pursuit of the respective goals: (1) we continued to refine the 2-D chemistry-transport model; (2) we developed a microphysics model to calculate the aerosol loading and its size distribution; and (3) the treatment of physics in the AER 2-D interactive model was refined in the following areas: the heating rate in the troposphere, and wave forcing from the propagation of planetary waves.

  13. Core Benchmarks Descriptions

    SciTech Connect

    Pavlovichev, A.M.

    2001-05-24

    Current regulations for the design of new fuel cycles for nuclear power installations require a calculational justification performed with certified computer codes. This guarantees that the calculational results obtained will be within the limits of the declared uncertainties indicated in the certificate for the corresponding computer code issued by Gosatomnadzor of the Russian Federation (GAN). A formal justification of the declared uncertainties is the comparison of calculational results obtained by a commercial code with the results of experiments, or of calculational tests computed with a defined uncertainty by certified precision codes such as those of the MCU type. The present level of international cooperation is enlarging the bank of experimental and calculational benchmarks acceptable for the certification of commercial codes used for the design of fuel loadings with MOX fuel. In particular, work is practically finished on forming the list of calculational benchmarks for the certification of the code TVS-M as applied to MOX fuel assembly calculations. The results of these activities are presented.

  14. Finding a benchmark for monitoring hospital cleanliness.

    PubMed

    Mulvey, D; Redding, P; Robertson, C; Woodall, C; Kingsmore, P; Bedwell, D; Dancer, S J

    2011-01-01

    This study evaluated three methods for monitoring hospital cleanliness. The aim was to find a benchmark that could indicate risk to patients from a contaminated environment. We performed visual monitoring, ATP bioluminescence and microbiological screening of five clinical surfaces before and after detergent-based cleaning on two wards over a four-week period. Five additional sites that were not featured in the routine domestic specification were also sampled. Measurements from all three methods were integrated and compared in order to choose appropriate levels for routine monitoring. We found that visual assessment reflected neither ATP values nor environmental contamination with microbial flora, including Staphylococcus aureus and meticillin-resistant S. aureus (MRSA). There was a relationship between microbial growth categories and the proportion of ATP values exceeding a chosen benchmark, but neither reliably predicted the presence of S. aureus or MRSA. ATP values were occasionally highly variable. Detergent-based cleaning reduced levels of organic soil by 32% (95% confidence interval: 16-44%; P<0.001) but did not necessarily eliminate indicator staphylococci, some of which survived the cleaning process. An ATP benchmark value of 100 relative light units offered the closest correlation with microbial growth levels <2.5 cfu/cm(2) (receiver operating characteristic (ROC) curve sensitivity: 57%; specificity: 57%). In conclusion, microbiological and ATP monitoring confirmed environmental contamination and the persistence of hospital pathogens, and measured the effect of current cleaning practices on the environment. This study has provided provisional benchmarks to assist with future assessment of hospital cleanliness. Further work is required to refine the practical sampling strategy and choice of benchmarks. PMID:21129820
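
    To make the benchmark logic concrete, here is a small sketch of how a cut-off such as 100 RLU can be scored against paired microbiology results (illustrative data only, not the study's):

        def score_cutoff(samples, cutoff=100.0):
            """samples: (atp_rlu, contaminated) pairs, contaminated being
            True when growth >= 2.5 cfu/cm^2. Returns the sensitivity and
            specificity of flagging surfaces with atp_rlu > cutoff."""
            tp = sum(1 for rlu, c in samples if c and rlu > cutoff)
            fn = sum(1 for rlu, c in samples if c and rlu <= cutoff)
            tn = sum(1 for rlu, c in samples if not c and rlu <= cutoff)
            fp = sum(1 for rlu, c in samples if not c and rlu > cutoff)
            return tp / (tp + fn), tn / (tn + fp)

        data = [(250, True), (80, True), (60, False), (140, False)]
        print(score_cutoff(data))  # (0.5, 0.5)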

  15. Closed benchmarks for network community structure characterization

    NASA Astrophysics Data System (ADS)

    Aldecoa, Rodrigo; Marín, Ignacio

    2012-02-01

    Characterizing the community structure of complex networks is a key challenge in many scientific fields. Very diverse algorithms and methods have been proposed to this end, many working reasonably well in specific situations. However, no consensus has emerged on which of these methods is the best to use in practice. In part, this is due to the fact that testing their performance requires the generation of a comprehensive, standard set of synthetic benchmarks, a goal not yet fully achieved. Here, we present a type of benchmark that we call “closed,” in which an initial network of known community structure is progressively converted into a second network whose communities are also known. This approach differs from all previously published ones, in which networks evolve toward randomness. The use of this type of benchmark allows us to monitor the transformation of the community structure of a network. Moreover, we can predict the optimal behavior of the variation of information, a measure of the quality of the partitions obtained, at any moment of the process. This enables us in many cases to determine the best partition among those suggested by different algorithms. Also, since any network can be used as a starting point, extensive studies and comparisons can be performed using a heterogeneous set of structures, including random ones. These properties make our benchmarks a general standard for comparing community detection algorithms.
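
    The partition-quality measure mentioned above, the variation of information, is simple to compute from label co-occurrence counts; a compact sketch assuming partitions are given as equal-length label lists:

        from collections import Counter
        from math import log

        def variation_of_information(labels_a, labels_b):
            """VI(A, B) = H(A) + H(B) - 2 I(A; B), in nats; zero when
            the two partitions are identical."""
            n = len(labels_a)
            pa, pb = Counter(labels_a), Counter(labels_b)
            pab = Counter(zip(labels_a, labels_b))
            h_a = -sum(c / n * log(c / n) for c in pa.values())
            h_b = -sum(c / n * log(c / n) for c in pb.values())
            mi = sum(c / n * log((c / n) / ((pa[a] / n) * (pb[b] / n)))
                     for (a, b), c in pab.items())
            return h_a + h_b - 2 * mi

        print(variation_of_information([0, 0, 1, 1], [0, 0, 1, 1]))  # ~0.0
        print(variation_of_information([0, 0, 1, 1], [0, 1, 0, 1]))  # ~1.386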

  16. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  17. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in cleanrooms. This guide is primarily intended for personnel who have responsibility for managing energy use in existing cleanroom facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, cleanroom planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  18. How to Advance TPC Benchmarks with Dependability Aspects

    NASA Astrophysics Data System (ADS)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring the performance of such recovery. While TPC-E measures the recovery time from some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses why and how this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  19. Benchmarking foreign electronics technologies

    SciTech Connect

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  20. Internal Benchmarking for Institutional Effectiveness

    ERIC Educational Resources Information Center

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  1. Benchmarking. A Guide for Educators.

    ERIC Educational Resources Information Center

    Tucker, Sue

    This book offers strategies for enhancing a school's teaching and learning by using benchmarking, a team-research and data-driven process for increasing school effectiveness. Benchmarking enables professionals to study and know their systems and continually improve their practices. The book is designed to lead a team step by step through the…

  2. Applications of Integral Benchmark Data

    SciTech Connect

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  3. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. PMID:22237134

  4. NASA Software Engineering Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, are items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5

  5. Fuel characteristics pertinent to the design of aircraft fuel systems, Supplement I : additional information on MIL-F-7914(AER) grade JP-5 fuel and several fuel oils

    NASA Technical Reports Server (NTRS)

    Barnett, Henry C; Hibbard, Robert R

    1953-01-01

    Since the release of the first NACA publication on fuel characteristics pertinent to the design of aircraft fuel systems (NACA-RM-E53A21), additional information has become available on MIL-F-7914(AER) grade JP-5 fuel and several of the current grades of fuel oils. In order to make this information available to fuel-system designers as quickly as possible, the present report has been prepared as a supplement to NACA-RM-E53A21. Although JP-5 fuel is of greater interest in current fuel-system problems than the fuel oils, the available data are not as extensive. It is believed, however, that the limited data on JP-5 are sufficient to indicate the variations in stocks that the designer must consider under a given fuel specification. The methods used in the preparation and extrapolation of data presented in the tables and figures of this supplement are the same as those used in NACA-RM-E53A21.

  6. Benchmarking of collimation tracking using RHIC beam loss data.

    SciTech Connect

    Robert-Demolaize, G.; Drees, A.

    2008-06-23

    State-of-the-art tracking tools were recently developed at CERN to study the cleaning efficiency of the Large Hadron Collider (LHC) collimation system. In order to estimate the prediction accuracy of these tools, benchmarking studies can be performed using actual beam loss measurements from a machine that already uses a similar multistage collimation system. This paper reviews the main results from benchmarking studies performed with specific data collected from operations at the Relativistic Heavy Ion Collider (RHIC).

  7. Correlational effect size benchmarks.

    PubMed

    Bosco, Frank A; Aguinis, Herman; Singh, Kulraj; Field, James G; Pierce, Charles A

    2015-03-01

    Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relationship to others and which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful and provide information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions. PMID:25314367
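
    The tertile construction is easy to reproduce on any pool of observed effect sizes; a sketch with simulated values standing in for the authors' database:

        import numpy as np

        # Hypothetical absolute correlations harvested from a literature corpus.
        observed_r = np.abs(np.random.default_rng(0).normal(0.18, 0.12, 5000))

        # Empirical small/medium/large cut points: 33.3rd and 66.7th percentiles.
        print(np.percentile(observed_r, [33.3, 66.7]))
        # roughly [0.11, 0.22] here -- well below Cohen's 0.30 and 0.50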

  8. Virtual machine performance benchmarking.

    PubMed

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever tightened reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising. PMID:21207096
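
    A toy version of one such metric, memory bandwidth measured by a large array copy, run identically on the physical host and inside the guest (pure Python with NumPy, so it understates raw hardware figures):

        import time
        import numpy as np

        def copy_bandwidth_gbs(n_bytes=256 * 1024 * 1024, trials=5):
            """Best observed GB/s for a large array copy; comparing host
            and guest results exposes virtualization overhead."""
            src = np.zeros(n_bytes, dtype=np.uint8)
            best = float('inf')
            for _ in range(trials):
                t0 = time.perf_counter()
                dst = src.copy()
                best = min(best, time.perf_counter() - t0)
            return 2 * n_bytes / best / 1e9   # read + write traffic

        print(f"{copy_bandwidth_gbs():.1f} GB/s")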

  9. Extraction of pure thermal neutron beam for the proposed PGNAA facility at the TRIGA research reactor of AERE, Savar, Bangladesh

    NASA Astrophysics Data System (ADS)

    Alam, Sabina; Zaman, M. A.; Islam, S. M. A.; Ahsan, M. H.

    1993-10-01

    A study on collimators and filters for the design of a spectrometer for prompt gamma neutron activation analysis (PGNAA) at one of the radial beamports of the TRIGA Mark II reactor at AERE, Savar, has been carried out. On the basis of this study a collimator and a filter have been designed for the proposed PGNAA facility. Calculations have been done for measuring neutron flux at various positions of the core of the reactor using the computer code TRIGAP. Gamma dose in the core of the reactor has also been measured experimentally using the TLD technique in the present work.

  10. Phase-covariant quantum benchmarks

    SciTech Connect

    Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.

    2009-05-15

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences from the standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  11. Issues in Benchmark Metric Selection

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
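
    The sensitivity at issue is easy to demonstrate: the geometric mean rewards a large improvement on the fastest query far more than the arithmetic mean does. A toy comparison with invented query times:

        from math import prod

        def arithmetic_mean(xs):
            return sum(xs) / len(xs)

        def geometric_mean(xs):
            return prod(xs) ** (1 / len(xs))

        times = [1.0, 10.0, 100.0]   # seconds for three queries
        tuned = [0.1, 10.0, 100.0]   # only the fastest query improved, 10x

        print(arithmetic_mean(times), arithmetic_mean(tuned))  # 37.0 -> ~36.7
        print(geometric_mean(times), geometric_mean(tuned))    # ~10.0 -> ~4.64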

  12. A Heterogeneous Medium Analytical Benchmark

    SciTech Connect

    Ganapol, B.D.

    1999-09-27

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).

  13. California commercial building energy benchmarking

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the identities of building owners might be revealed and
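
    Whole-building benchmarking of this kind typically reduces to placing a building's energy use intensity (EUI) within a peer distribution; a schematic sketch with invented figures:

        import numpy as np

        def eui_percentile(building_kwh, floor_area_m2, peer_euis):
            """EUI in kWh per m2 per year, plus the share of peers using
            less; a high percentile flags an audit or retrofit candidate."""
            eui = building_kwh / floor_area_m2
            pct = 100.0 * np.mean(np.asarray(peer_euis) < eui)
            return eui, pct

        peers = [120, 150, 180, 210, 260, 310]        # invented peer EUIs
        print(eui_percentile(500_000, 2_000, peers))  # (250.0, ~66.7)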

  14. Benchmarking for Bayesian Reinforcement Learning

    PubMed Central

    Ernst, Damien; Couëtoux, Adrien

    2016-01-01

    In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the rewards collected while interacting with their environment, using some prior knowledge that is accessed beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. The paper addresses this problem, and provides a new BRL comparison methodology along with the corresponding open source library. In this methodology, a comparison criterion that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions is defined. In order to enable the comparison of non-anytime algorithms, our methodology also includes a detailed analysis of the computation time requirement of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each of which has two different prior distributions, and seven state-of-the-art RL algorithms. Finally, our library is illustrated by comparing all the available algorithms and the results are discussed. PMID:27304891
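
    The comparison criterion described, average performance over MDPs drawn from a prior, can be caricatured in a few lines; the sampler and episode runner below are stand-ins, not the library's API:

        import random

        def benchmark(agent, sample_mdp, run_episode, n_draws=100, seed=0):
            """Score an agent by its mean return over MDPs drawn from a
            prior distribution, as the BRL methodology requires."""
            rng = random.Random(seed)
            returns = [run_episode(agent, sample_mdp(rng))
                       for _ in range(n_draws)]
            return sum(returns) / n_draws

        # Toy stand-ins: the 'prior' draws a bandit payout, the episode plays it.
        sample_mdp = lambda rng: rng.uniform(0.0, 1.0)
        run_episode = lambda agent, payout: agent * payout  # agent = bet fraction
        print(benchmark(1.0, sample_mdp, run_episode))      # ~0.5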

  15. Analysis of an OECD/NEA high-temperature reactor benchmark

    SciTech Connect

    Hosking, J. G.; Newton, T. D.; Koeberl, O.; Morris, P.; Goluoglu, S.; Tombakoglu, T.; Colak, U.; Sartori, E.

    2006-07-01

    This paper describes analyses of the OECD/NEA HTR benchmark organized by the 'Working Party on the Scientific Issues of Reactor Systems (WPRS)', formerly the 'Working Party on the Physics of Plutonium Fuels and Innovative Fuel Cycles'. The benchmark was specifically designed to provide inter-comparisons for plutonium and thorium fuels when used in HTR systems. Calculations considering uranium fuel have also been included in the benchmark, in order to identify any increased uncertainties when using plutonium or thorium fuels. The benchmark consists of five phases, which include cell and whole-core calculations. Analysis of the benchmark has been performed by a number of international participants, who have used a range of deterministic and Monte Carlo code schemes. For each of the benchmark phases, neutronics parameters have been evaluated. Comparisons are made between the results of the benchmark participants, as well as comparisons between the predictions of the deterministic calculations and those from detailed Monte Carlo calculations. (authors)

  16. NAS Grid Benchmarks: A Tool for Grid Space Exploration

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We present an approach for benchmarking services provided by computational Grids. It is based on the NAS Parallel Benchmarks (NPB) and is called the NAS Grid Benchmark (NGB) in this paper. We present NGB as a data flow graph encapsulating an instance of an NPB code in each graph node, which communicates with other nodes by sending/receiving initialization data. These nodes may be mapped to the same or different Grid machines. Like NPB, NGB will specify several different classes (problem sizes). NGB also specifies the generic Grid services sufficient for running the benchmark. The implementor has the freedom to choose any specific Grid environment. However, we describe a reference implementation in Java, and present some scenarios for using NGB.

  17. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.

  18. Data-Intensive Benchmarking Suite

    Energy Science and Technology Software Center (ESTSC)

    2008-11-26

    The Data-Intensive Benchmark Suite is a set of programs written for the study of data- or storage-intensive science and engineering problems. The benchmark sets cover: general graph searching (basic and Hadoop Map/Reduce breadth-first search), genome sequence searching, HTTP request classification (basic and Hadoop Map/Reduce), low-level data communication, and storage device micro-benchmarking.
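
    The basic graph-searching kernel in such suites is a breadth-first search; a minimal level-synchronous version over an adjacency list (a sketch, not the suite's code):

        from collections import deque

        def bfs_levels(adj, source):
            """Breadth-first search returning hop counts from the source.
            adj maps each vertex to an iterable of neighbor vertices."""
            dist = {source: 0}
            frontier = deque([source])
            while frontier:
                u = frontier.popleft()
                for v in adj.get(u, ()):
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        frontier.append(v)
            return dist

        g = {0: [1, 2], 1: [3], 2: [3], 3: []}
        print(bfs_levels(g, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}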

  19. Benchmarks for GADRAS performance validation.

    SciTech Connect

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L., Jr.

    2009-09-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  20. Building a knowledge base of severe adverse drug events based on AERS reporting data using semantic web technologies.

    PubMed

    Jiang, Guoqian; Wang, Liwei; Liu, Hongfang; Solbrig, Harold R; Chute, Christopher G

    2013-01-01

    A semantically coded knowledge base of adverse drug events (ADEs) with severity information is critical for clinical decision support systems and translational research applications. However, it remains challenging to measure and identify the severity information of ADEs. The objective of the study is to develop and evaluate a semantic web based approach for building a knowledge base of severe ADEs based on the FDA Adverse Event Reporting System (AERS) reporting data. We utilized a normalized AERS reporting dataset and extracted putative drug-ADE pairs and their associated outcome codes in the domain of cardiac disorders. We validated the drug-ADE associations using ADE datasets from the Side Effect Resource (SIDER) and the UMLS. We leveraged the Common Terminology Criteria for Adverse Events (CTCAE) grading system and classified the ADEs by CTCAE grade, encoded in the Web Ontology Language (OWL). We identified and validated 2,444 unique drug-ADE pairs in the domain of cardiac disorders, of which 760 pairs are in Grade 5, 775 pairs in Grade 4 and 2,196 pairs in Grade 3. PMID:23920604
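
    The grading step can be pictured as a mapping from AERS report outcome codes to CTCAE-style severity grades; the mapping below is a plausible reading of the Grade 3-5 buckets above, not the paper's published rule set:

        # Assumed mapping from AERS outcome codes to CTCAE-like grades:
        # DE = death, LT = life-threatening, HO = hospitalization.
        OUTCOME_TO_GRADE = {'DE': 5, 'LT': 4, 'HO': 3}

        def grade_pairs(reports):
            """reports: iterable of (drug, ade, outcome_code) tuples.
            Returns the highest grade observed per unique drug-ADE pair."""
            grades = {}
            for drug, ade, outcome in reports:
                g = OUTCOME_TO_GRADE.get(outcome)
                if g is not None:
                    key = (drug, ade)
                    grades[key] = max(grades.get(key, 0), g)
            return grades

        reports = [('drugX', 'QT prolongation', 'HO'),
                   ('drugX', 'QT prolongation', 'LT')]
        print(grade_pairs(reports))  # {('drugX', 'QT prolongation'): 4}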

  1. Benchmarking for the Learning and Skills Sector.

    ERIC Educational Resources Information Center

    Owen, Jane

    This document is designed to introduce practitioners in the United Kingdom's learning and skills sector to the principles and practice of benchmarking. The first section defines benchmarking and differentiates metric, diagnostic, and process benchmarking. The remainder of the booklet details the following steps of the benchmarking process: (1) get…

  2. NHT-1 I/O Benchmarks

    NASA Technical Reports Server (NTRS)

    Carter, Russell; Ciotti, Bob; Fineberg, Sam; Nitzberg, Bill

    1992-01-01

    The NHT-1 benchmarks are a set of three scalable I/O benchmarks suitable for evaluating the I/O subsystems of high performance distributed memory computer systems. The benchmarks test application I/O, maximum sustained disk I/O, and maximum sustained network I/O. Sample codes are available which implement the benchmarks.

  3. How to avoid 'death by benchmarking'.

    PubMed

    Wofford, Dave; Libby, Darin

    2015-08-01

    Hospitals and health systems should adopt four key principles and practices when applying benchmarks to determine physician compensation: Acknowledge that a lower percentile may be appropriate. Use the median as the all-in benchmark. Use peer benchmarks when available. Use alternative benchmarks. PMID:26548140

  4. Benchmarking: A Method for Continuous Quality Improvement in Health

    PubMed Central

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-01-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical–social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted. PMID:23634166

  5. Effective Communication and File-I/O Bandwidth Benchmarks

    SciTech Connect

    Koniges, A E; Rabenseifner, R

    2001-05-02

    We describe the design and MPI implementation of two benchmarks created to characterize the balanced system performance of high-performance clusters and supercomputers: b_eff, the communication-specific benchmark, examines the parallel message-passing performance of a system, and b_eff_io characterizes the effective I/O bandwidth. Both benchmarks have two goals: (a) to get a detailed insight into the performance strengths and weaknesses of different parallel communication and I/O patterns, and, based on this, (b) to obtain a single bandwidth number that characterizes the average performance of the system, namely communication and I/O bandwidth. Both benchmarks use a time-driven approach and loop over a variety of communication and access patterns to characterize a system in an automated fashion. Results of the two benchmarks are given for several systems including IBM SPs, Cray T3E, NEC SX-5, and Hitachi SR 8000. After a redesign of b_eff_io, I/O bandwidth results for several compute partition sizes are achieved in an appropriate time for rapid benchmarking.
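
    The time-driven measurement loop at the heart of such benchmarks can be sketched with mpi4py; this is a bare two-rank ping-pong bandwidth probe, not the b_eff code itself:

        # Run with, e.g.: mpirun -np 2 python pingpong.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        buf = np.zeros(1 << 20, dtype=np.uint8)   # 1 MiB messages
        reps = 50

        comm.Barrier()
        t0 = MPI.Wtime()
        for _ in range(reps):
            if rank == 0:
                comm.Send(buf, dest=1); comm.Recv(buf, source=1)
            elif rank == 1:
                comm.Recv(buf, source=0); comm.Send(buf, dest=0)
        elapsed = MPI.Wtime() - t0

        if rank == 0:
            # two messages per repetition; report MB/s
            print(2 * reps * buf.nbytes / elapsed / 1e6)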

  6. Benchmark Problems for Space Mission Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Folta, David C.; Burns, Richard

    2003-01-01

    To provide a high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested for space mission formation flying. The problems cover formation flying in low-altitude, near-circular Earth orbit; high-altitude, highly elliptical Earth orbits; and large-amplitude Lissajous trajectories about co-linear libration points of the Sun-Earth/Moon system. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions that are of interest to various agencies.

  7. FLOWTRAN-TF code benchmarking

    SciTech Connect

    Flach, G.P.

    1990-12-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss Of Coolant Accident (LOCA). A description of the code is given by Flach et al. (1990). This report provides benchmarking results for the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit (Smith et al., 1990a; 1990b). Individual constitutive relations are benchmarked in Sections 2 through 5 while in Sections 6 and 7 integral code benchmarking results are presented. An overall assessment of FLOWTRAN-TF for its intended use in computing the ECS power limit completes the document.

  8. Radiation Detection Computational Benchmark Scenarios

    SciTech Connect

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  9. H.B. Robinson-2 pressure vessel benchmark

    SciTech Connect

    Remec, I.; Kam, F.B.K.

    1998-02-01

    The H. B. Robinson Unit 2 Pressure Vessel Benchmark (HBR-2 benchmark) is described and analyzed in this report. Analysis of the HBR-2 benchmark can be used as partial fulfillment of the requirements for the qualification of the methodology for calculating neutron fluence in pressure vessels, as required by the U.S. Nuclear Regulatory Commission Regulatory Guide DG-1053, Calculational and Dosimetry Methods for Determining Pressure Vessel Neutron Fluence. Section 1 of this report describes the HBR-2 benchmark and provides all the dimensions, material compositions, and neutron source data necessary for the analysis. The measured quantities, to be compared with the calculated values, are the specific activities at the end of fuel cycle 9. The characteristic feature of the HBR-2 benchmark is that it provides measurements on both sides of the pressure vessel: in the surveillance capsule attached to the thermal shield and in the reactor cavity. In section 2, the analysis of the HBR-2 benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed with three multigroup libraries based on ENDF/B-VI: BUGLE-93, SAILOR-95 and BUGLE-96. The average ratio of the calculated-to-measured specific activities (C/M) for the six dosimeters in the surveillance capsule was 0.90 ± 0.04 for all three libraries. The average C/Ms for the cavity dosimeters (without neptunium dosimeter) were 0.89 ± 0.10, 0.91 ± 0.10, and 0.90 ± 0.09 for the BUGLE-93, SAILOR-95 and BUGLE-96 libraries, respectively. It is expected that the agreement of the calculations with the measurements, similar to the agreement obtained in this research, should typically be observed when the discrete-ordinates method and ENDF/B-VI libraries are used for the HBR-2 benchmark analysis.

  10. TPC-V: A Benchmark for Evaluating the Performance of Database Applications in Virtual Environments

    NASA Astrophysics Data System (ADS)

    Sethuraman, Priya; Reza Taheri, H.

    For two decades, TPC benchmarks have been the gold standards for evaluating the performance of database servers. An area that TPC benchmarks had not addressed until now was virtualization. Virtualization is now a major technology in use in data centers, and is the number one technology on Gartner Group's Top Technologies List. In 2009, the TPC formed a Working Group to develop a benchmark specifically intended for virtual environments that run database applications. We will describe the characteristics of this benchmark, and provide a status update on its development.

  11. Austin Community College Benchmarking Update.

    ERIC Educational Resources Information Center

    Austin Community Coll., TX. Office of Institutional Effectiveness.

    Austin Community College contracted with MGT of America, Inc. in spring 1999 to develop a peer and benchmark (best) practices analysis on key indicators. These indicators were updated in spring 2002 using data from eight Texas community colleges and four non-Texas institutions that represent large, comprehensive, urban community colleges, similar…

  12. Processor Emulator with Benchmark Applications

    SciTech Connect

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  13. Real-Time Benchmark Suite

    Energy Science and Technology Software Center (ESTSC)

    1992-01-17

    This software provides a portable benchmark suite for real time kernels. It tests the performance of many of the system calls, as well as the interrupt response time and task response time to interrupts. These numbers provide a baseline for comparing various real-time kernels and hardware platforms.
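
    For illustration, a minimal Python sketch of the kind of micro-measurement such a suite performs: timing a cheap system call in a tight loop to baseline per-call latency. The choice of os.getpid and the iteration count are assumptions, not taken from the suite.

      # Baseline the latency of one lightweight kernel round trip.
      import os
      import time

      N = 100_000
      t0 = time.perf_counter_ns()
      for _ in range(N):
          os.getpid()          # one cheap system call per iteration
      elapsed = time.perf_counter_ns() - t0
      print(f"mean call latency: {elapsed / N:.0f} ns")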

  14. Guideline for benchmarking thermal treatment systems for low-level mixed waste

    SciTech Connect

    Hoffman, D.P.; Gibson, L.V. Jr.; Hermes, W.H.; Bastian, R.E.; Davis, W.T.

    1994-01-01

    A process for benchmarking low-level mixed waste (LLMW) treatment technologies has been developed. When used in conjunction with the identification and preparation of surrogate waste mixtures, and with defined quality assurance and quality control procedures, the benchmarking process will effectively streamline the selection of treatment technologies being considered by the US Department of Energy (DOE) for LLMW cleanup and management. Following the quantitative template provided in the benchmarking process will greatly increase the technical information available for the decision-making process. The additional technical information will remove a large part of the uncertainty in the selection of treatment technologies. It is anticipated that the use of the benchmarking process will minimize technology development costs and overall treatment costs. In addition, the benchmarking process will enhance development of the most promising LLMW treatment processes and aid in transferring the technology to the private sector. To instill inherent quality, the benchmarking process is based on defined criteria and a structured evaluation format, which are independent of any specific conventional treatment or emerging process technology. Five categories of benchmarking criteria have been developed for the evaluation: operation/design; personnel health and safety; economics; product quality; and environmental quality. This benchmarking document gives specific guidance on what information should be included and how it should be presented. A standard format for reporting is included in Appendices A and B of this document. Special considerations for LLMW are presented and included in each of the benchmarking categories.

  15. Benchmarking New Designs for the Two-Year Institution of Higher Education.

    ERIC Educational Resources Information Center

    Copa, George H.; Ammentorp, William

    This report, which is intended for technical institutions planning to use benchmark processes to facilitate change, contains five benchmarking studies describing future-oriented practices at two-year technical and community colleges that meet the design specifications stated in the report "New Designs for the Two-Year Institution of Higher…

  16. Benchmarking short sequence mapping tools

    PubMed Central

    2013-01-01

    Background The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, the current proposed tools make different compromises between the accuracy and the speed of mapping. Moreover, many important aspects are overlooked while comparing the performance of a newly developed tool to the state of the art. Therefore, there is a need for an objective evaluation method that covers all the aspects. In this work, we introduce a benchmarking suite to extensively analyze sequencing tools with respect to various aspects and provide an objective comparison. Results We applied our benchmarking tests on 9 well known mapping tools, namely, Bowtie, Bowtie2, BWA, SOAP2, MAQ, RMAP, GSNAP, Novoalign, and mrsFAST (mrFAST) using synthetic data and real RNA-Seq data. MAQ and RMAP are based on building hash tables for the reads, whereas the remaining tools are based on indexing the reference genome. The benchmarking tests reveal the strengths and weaknesses of each tool. The results show that no single tool outperforms all others in all metrics. However, Bowtie maintained the best throughput for most of the tests while BWA performed better for longer read lengths. The benchmarking tests are not restricted to the mentioned tools and can be further applied to others. Conclusion The mapping process is still a hard problem that is affected by many factors. In this work, we provided a benchmarking suite that reveals and evaluates the different factors affecting the mapping process. Still, there is no tool that outperforms all of the others in all the tests. Therefore, the end user should clearly specify his needs in order to choose the tool that provides the best results. PMID:23758764
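
    As a sketch of the kind of accuracy test such a suite applies, the snippet below counts a simulated read as correctly mapped when the reported position falls within a small tolerance of the known truth; the read names, positions, and tolerance are illustrative.

      # Compare mapper output against simulated ground truth (invented data).
      truth = {"r1": ("chr1", 1000), "r2": ("chr1", 5000), "r3": ("chr2", 42)}
      mapped = {"r1": ("chr1", 1002), "r2": ("chr2", 5000), "r3": ("chr2", 40)}
      TOL = 5  # bases of slack for clipped or indel-shifted alignments

      correct = sum(
          1 for r, (chrom, pos) in mapped.items()
          if truth[r][0] == chrom and abs(truth[r][1] - pos) <= TOL
      )
      print(f"accuracy: {correct}/{len(truth)} = {correct / len(truth):.1%}")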

  17. CAVIAR: a 45k neuron, 5M synapse, 12G connects/s AER hardware sensory-processing- learning-actuating system for high-speed visual object recognition and tracking.

    PubMed

    Serrano-Gotarredona, Rafael; Oster, Matthias; Lichtsteiner, Patrick; Linares-Barranco, Alejandro; Paz-Vicente, Rafael; Gomez-Rodriguez, Francisco; Camunas-Mesa, Luis; Berner, Raphael; Rivas-Perez, Manuel; Delbruck, Tobi; Liu, Shih-Chii; Douglas, Rodney; Hafliger, Philipp; Jimenez-Moreno, Gabriel; Civit Ballcels, Anton; Serrano-Gotarredona, Teresa; Acosta-Jimenez, Antonio J; Linares-Barranco, Bernabé

    2009-09-01

    This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union funded project. It has four custom mixed-signal AER chips, five custom digital AER interface components, 45k neurons (spiking cells), up to 5M synapses, performs 12G synaptic operations per second, and achieves millisecond object recognition and tracking latencies. PMID:19635693

  18. The Growth of Benchmarking in Higher Education.

    ERIC Educational Resources Information Center

    Schofield, Allan

    2000-01-01

    Benchmarking is used in higher education to improve performance by comparison with other institutions. Types used include internal, external competitive, external collaborative, external transindustry, and implicit. Methods include ideal type (or gold) standard, activity-based benchmarking, vertical and horizontal benchmarking, and comparative…

  19. How Benchmarking and Higher Education Came Together

    ERIC Educational Resources Information Center

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  20. Sustainable value assessment of farms using frontier efficiency benchmarks.

    PubMed

    Van Passel, Steven; Van Huylenbroeck, Guido; Lauwers, Ludwig; Mathijs, Erik

    2009-07-01

    Appropriate assessment of firm sustainability facilitates actor-driven processes towards sustainable development. The methodology in this paper builds further on two proven methodologies for the assessment of sustainability performance: it combines the sustainable value approach with frontier efficiency benchmarks. The sustainable value methodology tries to relate firm performance to the use of different resources. This approach assesses contributions to corporate sustainability by comparing firm resource productivity with the resource productivity of a benchmark, for all resources considered. Efficiency is calculated by estimating the production frontier indicating the maximum feasible production possibilities. In this research, the sustainable value approach is combined with efficiency analysis methods to benchmark sustainability assessment. In this way, the production theoretical underpinnings of efficiency analysis enrich the sustainable value approach. The methodology is presented using two different functional forms: the Cobb-Douglas and the translog functional forms. The simplicity of the Cobb-Douglas functional form as benchmark is very attractive but it lacks flexibility. The translog functional form is more flexible but has the disadvantage that it requires considerable data to avoid estimation problems. Using frontier methods for deriving firm-specific benchmarks has the advantage that the particular situation of each company is taken into account when assessing sustainability. Finally, we showed that the methodology can be used as an integrative sustainability assessment tool for policy measures. PMID:19553001
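
    A simplified sketch of the sustainable value calculation is given below: each firm's output is compared with the output a benchmark level of resource productivity would have produced from the same resource use, averaged over resources. For brevity the benchmark here is the best observed productivity rather than an estimated Cobb-Douglas or translog frontier, and all figures are invented.

      # Sustainable value with a best-practice benchmark (illustrative data).
      firms = {
          "A": {"output": 120.0, "land_ha": 40.0, "energy_GJ": 300.0},
          "B": {"output": 150.0, "land_ha": 60.0, "energy_GJ": 250.0},
          "C": {"output": 90.0,  "land_ha": 30.0, "energy_GJ": 400.0},
      }
      resources = ["land_ha", "energy_GJ"]

      # benchmark productivity: best observed output per unit of each resource
      bench = {r: max(f["output"] / f[r] for f in firms.values()) for r in resources}

      for name, f in firms.items():
          # value created above (or below) the benchmark use of each resource,
          # averaged over the resources considered
          sv = sum(f["output"] - bench[r] * f[r] for r in resources) / len(resources)
          print(f"firm {name}: sustainable value = {sv:+.1f}")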

  1. IT-benchmarking of clinical workflows: concept, implementation, and evaluation.

    PubMed

    Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula

    2014-01-01

    Due to the emerging evidence of health IT as opportunity and risk for clinical workflows, health IT must undergo a continuous measurement of its efficacy and efficiency. IT-benchmarks are a proven means for providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. 199 chief information officers (CIOs) took part in the benchmarking. These hospitals were assigned to reference groups of a similar size and ownership from a total of 259 hospitals. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs for getting a general overview, advancing IT, preparing negotiations with board members, and arguing for a new IT project. PMID:24825693

  2. Simplified two and three dimensional HTTR benchmark problems

    SciTech Connect

    Zhan Zhang; Dingkang Zhang; Justin M. Pounders; Abderrafi M. Ougouag

    2011-05-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole core configurations. In this paper we have created two- and three-dimensional numerical benchmark problems typical of high temperature gas cooled prismatic cores. Additionally, single-cell and single-block benchmark problems are also included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding geometry and material specification of the original experiment have been simplified while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and pin fission density distribution for selected blocks. Also included are the solutions for the single-cell and single-block problems.

  3. Memory-intensive benchmarks: IRAM vs. cache-based machines

    SciTech Connect

    Gaeke, Brian G.; Husbands, Parry; Kim, Hyun Jin; Li, Xiaoye S.; Moon, Hyun Jin; Oliker, Leonid; Yelick, Katherine A.; Biswas, Rupak

    2001-09-29

    The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic structures, and the ratio of computation to memory operation.

  4. The Medical Library Association Benchmarking Network: results*

    PubMed Central

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C.; Smith, Bernie Todd

    2006-01-01

    Objective: This article presents some limited results from the Medical Library Association (MLA) Benchmarking Network survey conducted in 2002. Other uses of the data are also presented. Methods: After several years of development and testing, a Web-based survey opened for data input in December 2001. Three hundred eighty-five MLA members entered data on the size of their institutions and the activities of their libraries. The data from 344 hospital libraries were edited and selected for reporting in aggregate tables and on an interactive site in the Members-Only area of MLANET. The data represent a 16% to 23% return rate and have a 95% confidence level. Results: Specific questions can be answered using the reports. The data can be used to review internal processes, perform outcomes benchmarking, retest a hypothesis, refute previous survey findings, or develop library standards. The data can be used to compare to current surveys or look for trends by comparing the data to past surveys. Conclusions: The impact of this project on MLA will reach into areas of research and advocacy. The data will be useful in the everyday working of small health sciences libraries as well as provide concrete data on the current practices of health sciences libraries. PMID:16636703

  5. Geothermal Heat Pump Benchmarking Report

    SciTech Connect

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  6. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  7. MPI Multicore Torus Communication Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2008-02-05

    The MPI Multicore Torus Communications Benchmark (TorusTest) measures the aggregate bandwidth across all six links from/to any multicore node in a logical torus. It can run in two modes: using a static or a random mapping of tasks to torus locations. The former can be used to study optimal mappings, and the latter the aggregate bandwidths that can be achieved with varying node mappings.
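
    The sketch below illustrates the static-mapping mode in pure Python: ranks are laid out on an X*Y*Z logical torus and each node's six wrap-around neighbors (the endpoints of its six links) are enumerated. The dimensions and rank ordering are assumptions.

      # Map ranks onto a 3D torus and list each node's six link partners.
      DIMS = (4, 4, 4)  # X, Y, Z extent of the logical torus (illustrative)

      def coords(rank):
          x, y, z = DIMS
          return (rank % x, (rank // x) % y, rank // (x * y))

      def neighbors(rank):
          cx, cy, cz = coords(rank)
          x, y, z = DIMS
          steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
          out = []
          for dx, dy, dz in steps:  # modular arithmetic gives the torus wrap-around
              nx, ny, nz = (cx + dx) % x, (cy + dy) % y, (cz + dz) % z
              out.append(nx + ny * x + nz * x * y)
          return out

      print(neighbors(0))   # the six ranks that node 0 exchanges messages with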

  8. Restaurant Energy Use Benchmarking Guideline

    SciTech Connect

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
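
    As an illustration of the report's approach, the sketch below turns a portfolio's own utility data into a benchmark metric (annual kWh per square foot) and flags stores far above their peers; the data and the Q3 + 1.5*IQR cutoff are illustrative choices, not the report's prescription.

      # Energy-use-intensity benchmark from a chain's own utility data.
      import statistics

      stores = {  # store -> (annual kWh, floor area in sq ft); invented figures
          "s01": (310_000, 2_400), "s02": (295_000, 2_500),
          "s03": (405_000, 2_300), "s04": (322_000, 2_600),
      }
      eui = {s: kwh / area for s, (kwh, area) in stores.items()}

      q = statistics.quantiles(eui.values(), n=4)   # quartiles of the portfolio
      cutoff = q[2] + 1.5 * (q[2] - q[0])           # upper fence: Q3 + 1.5*IQR

      for s, v in sorted(eui.items(), key=lambda kv: -kv[1]):
          flag = "  <-- investigate" if v > cutoff else ""
          print(f"{s}: {v:6.1f} kWh/sqft{flag}")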

  9. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    SciTech Connect

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted, and the future of the two projects is outlined and discussed.

  10. RISKIND verification and benchmark comparisons

    SciTech Connect

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were also compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  11. Nuclear Data Performance Testing Using Sensitive, but Less Frequently Used ICSBEP Benchmarks

    SciTech Connect

    J. Blair Briggs; John D. Bess

    2011-08-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) has published the International Handbook of Evaluated Criticality Safety Benchmark Experiments annually since 1995. The Handbook now spans over 51,000 pages with benchmark specifications for 4,283 critical, near critical, or subcritical configurations; 24 criticality alarm placement/shielding configurations with multiple dose points for each; and 200 configurations that have been categorized as fundamental physics measurements relevant to criticality safety applications. Benchmark data in the ICSBEP Handbook were originally intended for validation of criticality safety methods and data; however, the benchmark specifications are now used extensively for nuclear data testing. There are several, less frequently used benchmarks within the Handbook that are very sensitive to thorium and certain key structural and moderating materials. Calculated results for many of those benchmarks using modern nuclear data libraries suggest there is still room for improvement. These and other highly sensitive, but rarely quoted benchmarks are highlighted and data testing results provided using the Monte Carlo N-Particle Version 5 (MCNP5) code and continuous energy ENDF/B-V, VI.8, and VII.0, JEFF-3.1, and JENDL-3.3 nuclear data libraries.

  12. APPLICATION OF BENCHMARK DOSE METHODOLOGY TO DATA FROM PRENATAL DEVELOPMENTAL TOXICITY STUDIES

    EPA Science Inventory

    The benchmark dose (BMD) concept was applied to 246 conventional developmental toxicity datasets from government, industry and commercial laboratories. Five modeling approaches were used, two generic and three specific to developmental toxicity (DT models). BMDs for both quantal ...

  13. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    SciTech Connect

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

    This document provides an overview of the benchmark, HPGMG, for ranking large scale general purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric HPL, some background of the Top500 list and the challenges of developing such a metric; we discuss our design philosophy and methodology, and give an overview of the specification of the benchmark. The primary documentation with maintained details on the specification can be found at hpgmg.org and the Wiki and benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  14. Gaia FGK benchmark stars: Metallicity

    NASA Astrophysics Data System (ADS)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolution and high signal-to-noise ratio. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133

  15. NASA Software Engineering Benchmarking Effort

    NASA Technical Reports Server (NTRS)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we saw potential solutions to some of our "top 10" issues, and (2) we gained an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes, and (2) many of the organizations were interested in future collaboration, including sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We also received feedback from some of our contractors/partners: (1) they expressed a desire to participate in our training and to provide feedback on procedures, and (2) they welcomed the opportunity to provide feedback on working with NASA.

  16. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  17. Benchmarking for Excellence and the Nursing Process

    NASA Technical Reports Server (NTRS)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  18. Pynamic: the Python Dynamic Benchmark

    SciTech Connect

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.
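
    A loose analogue of what Pynamic emulates, transposed from shared libraries to pure Python modules for brevity: generate many small modules and time how long the import machinery takes to load them all. Pynamic itself generates and links real DLLs; the counts and names below are assumptions.

      # Stress the import machinery with many generated modules.
      import importlib
      import os
      import sys
      import tempfile
      import time

      N = 200
      tmp = tempfile.mkdtemp()
      sys.path.insert(0, tmp)
      for i in range(N):
          with open(os.path.join(tmp, f"pyn_mod_{i}.py"), "w") as f:
              f.write(f"def entry():\n    return {i}\n")

      t0 = time.perf_counter()
      for i in range(N):
          importlib.import_module(f"pyn_mod_{i}")
      print(f"loaded {N} modules in {time.perf_counter() - t0:.3f} s")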

  19. Benchmarking image fusion algorithm performance

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2012-06-01

    Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regard to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theory-based image metrics are presented and compared to perception test results. The results show an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the performance benchmark established.
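
    For illustration, one information-theory image metric of the kind used in the classification experiment: mutual information between a fused image and a source band, computed from a joint histogram. The random test images and bin count are placeholders.

      # Mutual information between two images via a joint histogram.
      import numpy as np

      def mutual_information(a, b, bins=64):
          joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0                              # avoid log(0)
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

      rng = np.random.default_rng(1)
      band = rng.random((128, 128))
      fused = 0.7 * band + 0.3 * rng.random((128, 128))
      print(f"MI(fused, band) = {mutual_information(fused, band):.3f} nats")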

  20. TRENDS: Compendium of Benchmark Objects

    NASA Astrophysics Data System (ADS)

    Gonzales, Erica J.; Crepp, Justin R.; Bechter, Eric; Johnson, John A.; Montet, Benjamin T.; Howard, Andrew; Marcy, Geoffrey W.; Isaacson, Howard T.

    2016-01-01

    The physical properties of faint stellar and substellar objects are highly uncertain. For example, the masses of brown dwarfs are usually inferred using theoretical models, which are age dependent and have yet to be properly tested. With the goal of identifying new benchmark objects through observations with NIRC2 at Keck, we have carried out a comprehensive adaptive-optics survey as part of the TRENDS (TaRgetting bENchmark-objects with Doppler Spectroscopy) high-contrast imaging program. TRENDS targets nearby (d < 100 pc), Sun-like stars showing long-term radial velocity accelerations. We present the discovery of 28 confirmed, co-moving companions as well as 19 strong candidate companions to F-, G-, and K-stars with well-determined parallaxes and metallicities. Benchmark objects of this nature lend themselves to a three dimensional orbit determination that will ultimately yield a precise dynamical mass. Unambiguous mass measurements of very low mass companions, which straddle the hydrogen-burning boundary, will allow our compendium of objects to serve as excellent testbeds to substantiate theoretical evolutionary and atmospheric models in regimes where they currently breakdown (low temperature, low mass, and old age).

  1. Characterizing universal gate sets via dihedral benchmarking

    NASA Astrophysics Data System (ADS)

    Carignan-Dugas, Arnaud; Wallman, Joel J.; Emerson, Joseph

    2015-12-01

    We describe a practical experimental protocol for robustly characterizing the error rates of non-Clifford gates associated with dihedral groups, including small single-qubit rotations. Our dihedral benchmarking protocol is a generalization of randomized benchmarking that relaxes the usual unitary 2-design condition. Combining this protocol with existing randomized benchmarking schemes enables practical universal gate sets for quantum information processing to be characterized in a way that is robust against state-preparation and measurement errors. In particular, our protocol enables direct benchmarking of the π/8 gate even under the gate-dependent error model that is expected in leading approaches to fault-tolerant quantum computation.
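
    A hedged sketch of the decay-curve fit that randomized-benchmarking-style protocols, including this dihedral variant, rely on: the average sequence survival probability versus sequence length m is fit to A·p^m + B, and p yields an error rate insensitive to state-preparation and measurement errors. The survival data below are synthetic.

      # Fit the standard randomized-benchmarking decay model to synthetic data.
      import numpy as np
      from scipy.optimize import curve_fit

      def decay(m, A, p, B):
          return A * p ** m + B

      m = np.array([1, 2, 4, 8, 16, 32, 64, 128])
      rng = np.random.default_rng(0)
      survival = decay(m, 0.5, 0.995, 0.5) + rng.normal(0, 0.002, m.size)

      (A, p, B), _ = curve_fit(decay, m, survival, p0=(0.5, 0.99, 0.5))
      r = (1 - p) / 2                      # average error rate for a single qubit
      print(f"decay p = {p:.4f}, inferred error rate r = {r:.2e}")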

  2. Benchmarks for acute stroke care delivery

    PubMed Central

    Hall, Ruth E.; Khan, Ferhana; Bayley, Mark T.; Asllani, Eriola; Lindsay, Patrice; Hill, Michael D.; O'Callaghan, Christina; Silver, Frank L.; Kapral, Moira K.

    2013-01-01

    Objective Despite widespread interest in many jurisdictions in monitoring and improving the quality of stroke care delivery, benchmarks for most stroke performance indicators have not been established. The objective of this study was to develop data-derived benchmarks for acute stroke quality indicators. Design Nine key acute stroke quality indicators were selected from the Canadian Stroke Best Practice Performance Measures Manual. Participants A population-based retrospective sample of patients discharged from 142 hospitals in Ontario, Canada, between 1 April 2008 and 31 March 2009 (N = 3191) was used to calculate hospital rates of performance and benchmarks. Intervention The Achievable Benchmark of Care (ABC™) methodology was used to create benchmarks based on the performance of the upper 15% of patients in the top-performing hospitals. Main Outcome Measures Benchmarks were calculated for rates of neuroimaging, carotid imaging, stroke unit admission, dysphagia screening and administration of stroke-related medications. Results The following benchmarks were derived: neuroimaging within 24 h, 98%; admission to a stroke unit, 77%; thrombolysis among patients arriving within 2.5 h, 59%; carotid imaging, 93%; dysphagia screening, 88%; antithrombotic therapy, 98%; anticoagulation for atrial fibrillation, 94%; antihypertensive therapy, 92% and lipid-lowering therapy, 77%. ABC™ acute stroke care benchmarks achieve or exceed the consensus-based targets required by Accreditation Canada, with the exception of dysphagia screening. Conclusions Benchmarks for nine hospital-based acute stroke care quality indicators have been established. These can be used in the development of standards for quality improvement initiatives. PMID:24141011
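
    A simplified sketch of the ABC computation as described in the abstract: hospitals are ranked by their indicator rate, the best performers are pooled until they cover the top 15% of patients, and the pooled rate is the benchmark. The published ABC method also applies a small-denominator adjustment that is omitted here, and the counts are invented.

      # Achievable-Benchmark-of-Care-style pooling (illustrative counts).
      hospitals = [  # (patients meeting the indicator, eligible patients)
          (95, 100), (180, 200), (40, 50), (300, 400), (120, 200),
      ]
      total = sum(n for _, n in hospitals)

      pooled_num = pooled_den = 0
      for num, den in sorted(hospitals, key=lambda h: h[0] / h[1], reverse=True):
          pooled_num += num
          pooled_den += den
          if pooled_den >= 0.15 * total:   # top performers covering 15% of patients
              break

      print(f"ABC benchmark: {100 * pooled_num / pooled_den:.1f}%")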

  3. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
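
    An illustrative reading of the patented scheme: tasks from a scalable set are performed for a fixed benchmarking interval, and the rating reflects how far through the ever-finer-resolution set the machine progressed. The integration task below is a stand-in, not the patent's workload.

      # Fixed-interval, progress-rated benchmark loop (illustrative task).
      import time

      INTERVAL = 2.0                     # fixed benchmarking interval in seconds

      def task(resolution):              # one task: integrate x^2 on [0,1]
          h = 1.0 / resolution
          return sum((i * h) ** 2 for i in range(resolution)) * h

      completed = 0
      resolution = 1_000
      deadline = time.perf_counter() + INTERVAL
      while time.perf_counter() < deadline:
          task(resolution)
          completed += 1
          resolution += 1_000            # each task solves the problem more finely

      print(f"rating: {completed} tasks of increasing resolution in {INTERVAL:.0f} s")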

  4. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    ERIC Educational Resources Information Center

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  5. Present Status and Extensions of the Monte Carlo Performance Benchmark

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

    The NEA Monte Carlo Performance benchmark started in 2011 aiming to monitor over the years the abilities to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the contributed results thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common type computer nodes. However, using true supercomputers the speedup of parallel calculations is increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict if the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations and a need is felt for testing other issues than its computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities and to study the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.
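
    The quoted figure can be checked with back-of-envelope counting statistics, assuming a tally's relative error scales as 1/sqrt(n) for n histories scoring in a zone; the scoring fraction used below is an assumed illustrative value, not taken from the benchmark.

      # Counting-statistics estimate: relative error ~ 1/sqrt(n) per tally.
      target_rel_err = 0.01     # 1% statistical accuracy
      zone_fraction = 1e-7      # assumed fraction of histories scoring in one small fuel zone

      n_in_zone = 1.0 / target_rel_err ** 2        # ~1e4 scores needed in the zone
      n_total = n_in_zone / zone_fraction
      print(f"histories required: {n_total:.1e}")  # ~1e11, the order of the quoted figure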

  6. NAS Parallel Benchmark Results 11-96. 1.0

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Bailey, David; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The NAS Parallel Benchmarks have been developed at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion. In other words, the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. These results represent the best results that have been reported to us by the vendors for the specific systems listed. In this report, we present new NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, SGI Origin200, and SGI Origin2000. We also report High Performance Fortran (HPF) based NPB results for IBM SP2 Wide Nodes, HP/Convex Exemplar SPP2000, and SGI/CRAY T3D. These results have been submitted by Applied Parallel Research (APR) and Portland Group Inc. (PGI). We also present sustained performance per dollar for Class B LU, SP and BT benchmarks.

  7. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    SciTech Connect

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  8. Benchmarking ICRF simulations for ITER

    SciTech Connect

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase with bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  9. Benchmarking Multipacting Simulations in VORPAL

    SciTech Connect

    C. Nieter, C. Roark, P. Stoltz, K. Tian

    2009-05-01

    We will present the results of benchmarking simulations run to test the ability of VORPAL to model multipacting processes in Superconducting Radio Frequency structures. VORPAL is an electromagnetic (FDTD) particle-in-cell simulation code originally developed for applications in plasma and beam physics. The addition of conformal boundaries and algorithms for secondary electron emission allow VORPAL to be applied to multipacting processes. We start with simulations of multipacting between parallel plates where there are well understood theoretical predictions for the frequency bands where multipacting is expected to occur. We reproduce the predicted multipacting bands and demonstrate departures from the theoretical predictions when a more sophisticated model of secondary emission is used. Simulations of existing cavity structures developed at Jefferson National Laboratories will also be presented where we compare results from VORPAL to experimental data.

  10. COG validation: SINBAD Benchmark Problems

    SciTech Connect

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the Osaka Nickel and Aluminum sphere experiments conducted at the OKTAVIAN facility, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few percent of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section database versions are different, MCNP uses ENDF/B-VI 1.1 while COG uses ENDF/B-VI R7, (2) the code implementations are different, and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.

  11. Beyond Benchmarking: Value-Adding Metrics

    ERIC Educational Resources Information Center

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  12. Sequenced Benchmarks for Geography and History

    ERIC Educational Resources Information Center

    Kendall, John S.; Richardson, Amy T.; Ryan, Susan E.

    2005-01-01

    This report is one in a series of reference documents designed to assist those who are directly involved in the revision and improvement of content standards, as well as teachers who use standards and benchmarks to guide everyday instruction. Reports in the series provide information about how benchmarks might best appear in a sequence of…

  13. Benchmarking Learning and Teaching: Developing a Method

    ERIC Educational Resources Information Center

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  14. A performance benchmark test for geodynamo simulations

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Heien, E. M.

    2013-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. As new models and numerical methods continue to be developed, it is important to update and extend benchmarks for testing these models. The first dynamo benchmark of Christensen et al. (2001) was applied to models based on spherical harmonic expansion methods. However, only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty treating magnetic boundary conditions based on the local methods. On the other hand, spherical harmonics expansion methods perform poorly on massively parallel computers because global data communications are required for the spherical harmonics expansions to evaluate nonlinear terms. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of this benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare as many numerical methods as possible, we consider the model with the insulated magnetic boundary by Christensen et al. (2001) and with the pseudo vacuum magnetic boundary, because the pseudo vacuum boundaries are implemented easier by using the local method than the magnetic insulated boundaries. In the present study, we consider two kinds of benchmarks, the so-called accuracy benchmark and performance benchmark. In the accuracy benchmark, we compare the dynamo models by using modest Ekman and Rayleigh numbers proposed by Christensen et al. (2001). We investigate a required spatial resolution for each dynamo code to obtain less than 1% difference from the suggested solution of the benchmark test using the two magnetic boundary conditions. In the performance benchmark, we investigate computational performance under the same computational environment. We perform these

  15. A proposed benchmark problem for cargo nuclear threat monitoring

    NASA Astrophysics Data System (ADS)

    Wesley Holmes, Thomas; Calderon, Adan; Peeples, Cody R.; Gardner, Robin P.

    2011-10-01

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991, [1]). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. This benchmark consists of conceptual design and preliminary calculational results using gamma-ray interactions on a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials lead, aluminum, and plywood. The first two materials are in right circular cylindrical form while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration was arranged in such a manner that as a gamma ray moves from the source outward it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in.×4 in.×16 in. box-style NaI(Tl) detector was placed 1 m from the point source located in the center with the 4 in.×16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.

  16. Benchmarking--Measuring and Comparing for Continuous Improvement.

    ERIC Educational Resources Information Center

    Henczel, Sue

    2002-01-01

    Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…

  17. Cross Section Evaluation Working Group benchmark specifications. Volume 2. Supplement

    SciTech Connect

    Not Available

    1986-09-01

    Neutron and photon flux spectra have been measured and calculated for the case of neutrons produced by D-T reactions streaming through a cylindrical iron duct surrounded by concrete. Measurements and calculations have also been obtained when the iron duct is partially filled by a laminated stainless steel and borated polyethylene shadow bar. Schematic diagrams of the experimental apparatus are included.

  18. Increased Uptake of HCV Testing through a Community-Based Educational Intervention in Difficult-to-Reach People Who Inject Drugs: Results from the ANRS-AERLI Study

    PubMed Central

    Roux, Perrine; Rojas Castro, Daniela; Ndiaye, Khadim; Debrus, Marie; Protopopescu, Camélia; Le Gall, Jean-Marie; Haas, Aurélie; Mora, Marion; Spire, Bruno; Suzan-Monti, Marie; Carrieri, Patrizia

    2016-01-01

    Aims The community-based AERLI intervention provided training and education to people who inject drugs (PWID) about HIV and HCV transmission risk reduction, with a focus on drug injecting practices, other injection-related complications, and access to HIV and HCV testing and care. We hypothesized that in such a population, where HCV prevalence is very high and where few know their HCV serostatus, AERLI would lead to increased HCV testing. Methods The national multisite intervention study ANRS-AERLI consisted of assessing the impact of an injection-centered face-to-face educational session offered in volunteer harm reduction (HR) centers (“with intervention”) compared with standard HR centers (“without intervention”). The study included 271 PWID interviewed on three occasions: enrolment, 6 and 12 months. Participants in the intervention group received at least one face-to-face educational session during the first 6 months. Measurements The primary outcome of this analysis was self-reported HCV testing during the previous 6 months. Statistical analyses used a two-step Heckman approach to account for bias arising from the non-randomized clustering design. This approach identified factors associated with HCV testing during the previous 6 months. Findings Of the 271 participants, 127 and 144 were enrolled in the control and intervention groups, respectively. Of the latter, 113 received at least one educational session. For the present analysis, we selected 114 and 88 participants eligible for HCV testing in the control and intervention groups, respectively. In the intervention group, 44% of participants reported having been tested for HCV during the previous 6 months at enrolment, and 85% at 6 or 12 months. In the control group, these percentages were 51% at enrolment and 78% at 12 months. Multivariable analyses showed that participants who received at least one educational session during follow-up were more likely to report HCV testing

  19. Effective File I/O Bandwidth Benchmark

    SciTech Connect

    Rabenseifner, R.; Koniges, A.E.

    2000-02-15

    The effective I/O bandwidth benchmark (b_eff_io) covers two goals: (1) to achieve a characteristic average number for the I/O bandwidth achievable with parallel MPI-I/O applications, and (2) to get detailed information about several access patterns and buffer lengths. The benchmark examines 'first write', 'rewrite' and 'read' access, strided (individual and shared pointers) and segmented collective patterns on one file per application, and non-collective access to one file per process. The number of parallel accessing processes is also varied, and well-formed I/O is compared with non-well-formed I/O. On systems meeting the rule that the total memory can be written to disk in 10 minutes, the benchmark should not need more than 15 minutes for a first pass of all patterns. The benchmark is designed analogously to the effective bandwidth benchmark for message passing (b_eff), which characterizes the message-passing capabilities of a system in a few minutes. First results of the b_eff_io benchmark are given for IBM SP and Cray T3E systems and compared with existing benchmarks based on parallel POSIX-I/O.
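
    For readers unfamiliar with the access patterns named above, the following minimal Python sketch times the same three passes, 'first write', 'rewrite' and 'read', on a single file. It is only a serial illustration of the measurement idea; the actual b_eff_io benchmark drives parallel MPI-I/O patterns, and the file and buffer sizes here are assumptions.

    ```python
    import os
    import time

    # Serial sketch of the "first write" / "rewrite" / "read" measurement idea.
    # NOT the b_eff_io code: the real benchmark exercises parallel MPI-I/O.
    PATH = "bench.dat"
    BLOCK = b"\0" * (1024 * 1024)   # 1 MiB buffer (assumed)
    NBLOCKS = 64                    # 64 MiB test file (assumed)

    def bandwidth(label, mode, op, sync):
        start = time.perf_counter()
        with open(PATH, mode) as f:
            for _ in range(NBLOCKS):
                op(f)
            if sync:                # flush to disk so write timings are honest
                f.flush()
                os.fsync(f.fileno())
        mb_s = NBLOCKS * len(BLOCK) / (time.perf_counter() - start) / 1e6
        print(f"{label:12s}: {mb_s:8.1f} MB/s")

    bandwidth("first write", "wb",  lambda f: f.write(BLOCK), sync=True)
    bandwidth("rewrite",     "r+b", lambda f: f.write(BLOCK), sync=True)
    bandwidth("read",        "rb",  lambda f: f.read(len(BLOCK)), sync=False)
    os.remove(PATH)
    ```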

  20. Clinically meaningful performance benchmarks in MS

    PubMed Central

    Motl, Robert W.; Scagnelli, John; Pula, John H.; Sosnoff, Jacob J.; Cadavid, Diego

    2013-01-01

    Objective: Identify and validate clinically meaningful Timed 25-Foot Walk (T25FW) performance benchmarks in individuals living with multiple sclerosis (MS). Methods: Cross-sectional study of 159 MS patients first identified candidate T25FW benchmarks. To characterize the clinical meaningfulness of T25FW benchmarks, we ascertained their relationships to real-life anchors, functional independence, and physiologic measurements of gait and disease progression. Candidate T25FW benchmarks were then prospectively validated in 95 subjects using 13 measures of ambulation and cognition, patient-reported outcomes, and optical coherence tomography. Results: T25FW of 6 to 7.99 seconds was associated with a change in occupation due to MS, occupational disability, walking with a cane, and needing “some help” with instrumental activities of daily living; T25FW ≥8 seconds was associated with collecting Supplemental Security Income and government health care, walking with a walker, and inability to do instrumental activities of daily living. During prospective benchmark validation, we trichotomized data by T25FW benchmarks (<6 seconds, 6–7.99 seconds, and ≥8 seconds) and found group main effects on 12 of 13 objective and subjective measures (p < 0.05). Conclusions: Using a cross-sectional design, we identified 2 clinically meaningful T25FW benchmarks of ≥6 seconds (6–7.99) and ≥8 seconds. Longitudinal and larger studies are needed to confirm the clinical utility and relevance of these proposed T25FW benchmarks and to parse out whether there are additional benchmarks in the lower (<6 seconds) and higher (>10 seconds) ranges of performance. PMID:24174581

  1. Developing integrated benchmarks for DOE performance measurement

    SciTech Connect

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  2. ASIS healthcare security benchmarking study.

    PubMed

    2001-01-01

    Effective security has become integral to the everyday operations of a healthcare organization. This is evident in every regional market segment, regardless of size, location, and provider clinical expertise or organizational growth. This research addresses key security issues across settings, from acute care providers to freestanding facilities, and from rural and community hospitals to large urban teaching hospitals. Security issues and concerns are identified and addressed daily by senior and middle management. As provider campuses become larger and more diverse, the hospitals surveyed have identified critical changes and improvements that are proposed or pending. Mitigating liabilities and improving patient, visitor, and/or employee safety are consequential to the performance and viability of all healthcare providers. Healthcare organizations have identified the requirement to compete for patient volume and revenue. The facility that can deliver high-quality healthcare in a comfortable, safe, secure, and efficient atmosphere will have a significant competitive advantage over a facility where patient or visitor security and safety is deficient. Continuing changes in healthcare organizations' operating structure and healthcare geographic layout mean changes in leadership and direction. These changes have led to higher levels of corporate responsibility. As a result, each organization participating in this benchmark study has added value and will derive value for the overall benefit of healthcare providers throughout the nation. This study provides a better understanding of how the fundamental security needs of healthcare organizations are being addressed and how solutions are identified and implemented. PMID:11602980

  3. Benchmark problems in computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1994-01-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with a high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations as they apply to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of codes. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given the variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi-one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10^-6. The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for the sound wave to traverse the nozzle from one end to the other).
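
    The MAPLE step mentioned above computes eigenvalues of the Euler flux Jacobian, which for the one-dimensional equations are u - c, u, and u + c. The following SymPy sketch reproduces that symbolic computation; it illustrates the mathematics only and is not the code used in the study.

    ```python
    import sympy as sp

    # Symbolic eigenvalues of the 1-D Euler flux Jacobian (expected: u, u +/- c).
    rho, m, E, gamma = sp.symbols("rho m E gamma", positive=True)

    u = m / rho                                  # velocity
    p = (gamma - 1) * (E - m**2 / (2 * rho))     # ideal-gas pressure

    U = sp.Matrix([rho, m, E])                   # conservative variables
    F = sp.Matrix([m, m * u + p, u * (E + p)])   # flux vector

    A = F.jacobian(U)                            # flux Jacobian dF/dU
    for ev in A.eigenvals():
        print(sp.simplify(ev))                   # u and u +/- sqrt(gamma*p/rho)
    ```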

  4. Metrics and Benchmarks for Visualization

    NASA Technical Reports Server (NTRS)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    What is a "good" visualization? How can the quality of a visualization be measured? How can one tell whether one visualization is "better" than another? I claim that the true quality of a visualization can only be measured in the context of a particular purpose. The same image generated from the same data may be excellent for one purpose and abysmal for another. A good measure of visualization quality will correspond to the performance of users in accomplishing the intended purpose, so the "gold standard" is user testing. As a user of visualization software (or at least a consultant to such users) I don't expect visualization software to have been tested in this way for every possible use. In fact, scientific visualization (as distinct from more "production oriented" uses of visualization) will continually encounter new data, new questions and new purposes; user testing can never keep up. Users need software they can trust, and advice on appropriate visualizations for particular purposes. Considering the following four processes, and their impact on visualization trustworthiness, reveals important work needed to create worthwhile metrics and benchmarks for visualization. These four processes are (1) complete system testing (user-in-loop), (2) software testing, (3) software design and (4) information dissemination. Additional information is contained in the original extended abstract.

  5. Cause-specific long-term mortality in survivors of childhood cancer in Switzerland: A population-based study.

    PubMed

    Schindler, Matthias; Spycher, Ben D; Ammann, Roland A; Ansari, Marc; Michel, Gisela; Kuehni, Claudia E

    2016-07-15

    Survivors of childhood cancer have a higher mortality than the general population. We describe cause-specific long-term mortality in a population-based cohort of childhood cancer survivors. We included all children diagnosed with cancer in Switzerland (1976-2007) at age 0-14 years who survived ≥5 years after diagnosis, and followed survivors until December 31, 2012. We obtained causes of death (COD) from the Swiss mortality statistics and used data from the Swiss general population to calculate age-, calendar year-, and sex-standardized mortality ratios (SMR) and absolute excess risks (AER) for different COD, by Poisson regression. We included 3,965 survivors and 49,704 person-years at risk. Of these, 246 (6.2%) died, which was 11 times higher than expected (SMR 11.0). Mortality was particularly high for diseases of the respiratory (SMR 14.8) and circulatory (SMR 12.7) systems, and for second cancers (SMR 11.6). The pattern of cause-specific mortality differed by primary cancer diagnosis and changed with time since diagnosis. In the first 10 years after 5-year survival, 78.9% of excess deaths were caused by recurrence of the original cancer (AER 46.1). Twenty-five years after diagnosis, only 36.5% (AER 9.1) were caused by recurrence, 21.3% by second cancers (AER 5.3) and 33.3% by circulatory diseases (AER 8.3). Our study confirms an elevated mortality in survivors of childhood cancer for at least 30 years after diagnosis, with an increased proportion of deaths caused by late toxicities of the treatment. The results underline the importance of clinical follow-up continuing years after the end of treatment for childhood cancer. PMID:26950898
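
    For readers unfamiliar with the two measures, the sketch below shows how an SMR and an AER are computed. The observed deaths and person-years are taken from the abstract; the expected count is back-calculated from the reported SMR, and the per-10,000 person-years scaling of the AER is an assumption.

    ```python
    # SMR = observed / expected deaths; AER = excess deaths per unit person-time.
    observed = 246              # deaths among 5-year survivors (from the abstract)
    person_years = 49_704       # person-years at risk (from the abstract)
    expected = observed / 11.0  # back-calculated from the reported SMR of ~11

    smr = observed / expected
    aer = (observed - expected) / person_years * 10_000  # per 10,000 person-years (assumed scale)

    print(f"SMR = {smr:.1f}")
    print(f"AER = {aer:.1f} excess deaths per 10,000 person-years")
    ```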

  6. Statistical benchmark for BosonSampling

    NASA Astrophysics Data System (ADS)

    Walschaers, Mattia; Kuipers, Jack; Urbina, Juan-Diego; Mayer, Klaus; Tichy, Malte Christopher; Richter, Klaus; Buchleitner, Andreas

    2016-03-01

    Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church-Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows us to characterise the imparted dynamics through particle type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go much beyond mere bunching or anti-bunching effects.

  7. BENCHMARKING OF CT FOR PATIENT EXPOSURE OPTIMISATION.

    PubMed

    Racine, Damien; Ryckx, Nick; Ba, Alexandre; Ott, Julien G; Bochud, François O; Verdun, Francis R

    2016-06-01

    Patient dose optimisation in computed tomography (CT) should be done using clinically relevant tasks when dealing with image quality assessments. In the present work, low-contrast detectability for an average patient morphology was assessed on 56 CT units, using a model observer applied to images of an anthropomorphic phantom containing spheres, acquired with two specific protocols. Images were assessed using the channelised Hotelling observer (CHO) with dense difference-of-Gaussian channels. The results were computed by performing receiver operating characteristic (ROC) analysis and using the area under the ROC curve (AUC) as a figure of merit. The results showed a small disparity among CT units at a volume computed tomography dose index (CTDIvol) of 15 mGy for the chosen image quality criterion. For 8-mm targets, AUCs were 0.999 ± 0.018 at 20 Hounsfield units (HU) and 0.927 ± 0.054 at 10 HU. For 5-mm targets, AUCs were 0.947 ± 0.059 and 0.702 ± 0.068 at 20 and 10 HU, respectively. The robustness of the CHO opens the way for CT protocol benchmarking and optimisation processes. PMID:26940439
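
    As a minimal illustration of the figure of merit used here, the sketch below computes an AUC from model-observer decision scores via the Wilcoxon-Mann-Whitney identity. The scores are synthetic stand-ins, not CHO outputs from the study.

    ```python
    import numpy as np

    # AUC from decision scores: the probability that a signal-present score
    # exceeds a signal-absent score (Wilcoxon-Mann-Whitney identity).
    rng = np.random.default_rng(0)
    absent  = rng.normal(0.0, 1.0, 200)   # decision variable, sphere absent
    present = rng.normal(1.5, 1.0, 200)   # decision variable, sphere present

    auc = np.mean(present[:, None] > absent[None, :])
    print(f"AUC = {auc:.3f}")
    ```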

  8. Updates to the integrated protein-protein interaction benchmarks: Docking benchmark version 5 and affinity benchmark version 2

    PubMed Central

    Vreven, Thom; Moal, Iain H.; Vangone, Anna; Pierce, Brian G.; Kastritis, Panagiotis L.; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A.; Fernandez-Recio, Juan; Bonvin, Alexandre M.J.J.; Weng, Zhiping

    2015-01-01

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were added to the docking benchmark, 35 of which have experimentally-measured binding affinities. These updated docking and affinity benchmarks now contain 230 and 179 entries, respectively. In particular, the number of antibody-antigen complexes has increased significantly, by 67% and 74% in the docking and affinity benchmarks, respectively. We tested previously developed docking and affinity prediction algorithms on the new cases. Considering only the top ten docking predictions per benchmark case, a prediction accuracy of 38% is achieved on all 55 cases, and up to 50% for the 32 rigid-body cases only. Predicted affinity scores are found to correlate with experimental binding energies up to r=0.52 overall, and r=0.72 for the rigid complexes. PMID:26231283

  9. Resistance and uptake of cadmium by yeast, Pichia hampshirensis 4Aer, isolated from industrial effluent and its potential use in decontamination of wastewater.

    PubMed

    Khan, Zaman; Rehman, Abdul; Hussain, Syed Z

    2016-09-01

    Pichia hampshirensis 4Aer is the first yeast ever used for the bioremediation of environmental cadmium (Cd(+2)); it could maximally remove 22 mM/g and 28 mM/g Cd(+2) from aqueous medium at lab and large scales, respectively. The biosorption was found to be a function of temperature, solution pH, initial Cd(+2) concentration and biomass dosage. Competitive biosorption was investigated in binary and multi-metal systems, which indicated a decrease in Cd(+2) biosorption with increasing concentrations of competing metal ions, attributed to their higher electronegativity and larger radius. FTIR analysis revealed the active participation of amide and carbonyl moieties in Cd(+2) adsorption, confirmed by EDX analysis. Electron micrographs further indicated surface adsorption and an increased cell size due to intracellular Cd(+2) accumulation. Cd(+2) induced some metal-binding proteins as well as a prodigious increase in glutathione and other non-protein thiol levels, which is crucial for the yeast to survive the oxidative stress generated by Cd(+2). Our experimental data were consistent with the Langmuir as well as the Freundlich isotherm models. The biosorption obeyed a pseudo-second-order kinetic model, which makes the yeast an effective biosorbent for Cd(+2). Its high bioremediation potential and the spontaneity and feasibility of the process make P. hampshirensis 4Aer a promising basis for the green removal of environmental Cd(+2). PMID:27268792
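
    The Langmuir model mentioned above relates equilibrium uptake q to equilibrium concentration C as q = q_max·K·C/(1 + K·C). A minimal curve-fitting sketch follows; the data points are invented placeholders, not measurements from the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Fit of the Langmuir isotherm q = q_max * K * C / (1 + K * C).
    # Data points are invented placeholders, not the study's measurements.
    def langmuir(C, q_max, K):
        return q_max * K * C / (1.0 + K * C)

    C_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # equilibrium Cd (mM)
    q    = np.array([3.1, 5.8, 9.7, 15.2, 19.0, 21.5])  # uptake (mM/g)

    (q_max, K), _ = curve_fit(langmuir, C_eq, q, p0=(20.0, 0.1))
    print(f"q_max = {q_max:.1f} mM/g, K = {K:.3f} L/mM")
    ```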

  10. Gain-of-function mutations cluster in distinct regions associated with the signalling pathway in the PAS domain of the aerotaxis receptor, Aer.

    PubMed

    Campbell, Asharie J; Watts, Kylie J; Johnson, Mark S; Taylor, Barry L

    2010-08-01

    The Aer receptor monitors internal energy (redox) levels in Escherichia coli with an FAD-containing PAS domain. Here, we randomly mutagenized the region encoding residues 14-119 of the PAS domain and found 72 aerotaxis-defective mutants, 24 of which were gain-of-function, signal-on mutants. The mutations were mapped onto an Aer homology model based on the structure of the PAS-FAD domain in NifL from Azotobacter vinelandii. Signal-on lesions clustered in the FAD binding pocket, the beta-scaffolding and in the N-cap loop. We suggest that the signal-on lesions mimic the 'signal-on' state of the PAS domain, and therefore may be markers for the signal-in and signal-out regions of this domain. We propose that the reduction of FAD rearranges the FAD binding pocket in a way that repositions the beta-scaffolding and the N-cap loop. The resulting conformational changes are likely to be conveyed directly to the HAMP domain, and on to the kinase control module. In support of this hypothesis, we demonstrated disulphide bond formation between cysteines substituted at residues N98C or I114C in the PAS beta-scaffold and residue Q248C in the HAMP AS-2 helix. PMID:20545849

  11. Benchmark calculations from summarized data: an example

    SciTech Connect

    Crump, K. S.; Teeguarden, Justin G.

    2009-03-01

    Benchmark calculations often are made from data extracted from publications. Such data may not be in a form most appropriate for benchmark analysis, and, as a result, suboptimal and/or non-standard benchmark analyses are often applied. This problem can be mitigated in some cases using Monte Carlo computational methods that allow the likelihood of the published data to be calculated while still using an appropriate benchmark dose (BMD) definition. Such an approach is illustrated herein using data from a study of workers exposed to styrene, in which a hybrid BMD calculation is implemented from dose response data reported only as means and standard deviations of ratios of scores on neuropsychological tests from exposed subjects to corresponding scores from matched controls. The likelihood of the data is computed using a combination of analytic and Monte Carlo integration methods.
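
    The core idea, approximating an otherwise intractable likelihood by Monte Carlo integration over latent quantities, can be sketched generically as below. The model and numbers are illustrative only, not the styrene analysis from the paper.

    ```python
    import numpy as np

    # Marginal likelihood of a summary statistic y given parameters theta:
    # p(y | theta) = E_z[ p(y | z, theta) ], estimated by averaging the density
    # over random draws of the latent quantity z. Purely illustrative model.
    rng = np.random.default_rng(1)

    def marginal_likelihood(y, theta, n_draws=100_000):
        z = rng.normal(0.0, 1.0, n_draws)        # latent-variable draws
        mu = theta[0] + theta[1] * z             # model for y given z and theta
        dens = np.exp(-0.5 * (y - mu) ** 2) / np.sqrt(2 * np.pi)
        return dens.mean()                       # Monte Carlo estimate

    print(marginal_likelihood(y=0.8, theta=(0.5, 0.3)))
    ```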

  12. Toward Scalable Benchmarks for Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1996-01-01

    This paper presents guidelines for the design of a mass storage system benchmark suite, along with preliminary suggestions for programs to be included. The benchmarks will measure both peak and sustained performance of the system as well as predicting both short- and long-term behavior. These benchmarks should be both portable and scalable so they may be used on storage systems from tens of gigabytes to petabytes or more. By developing a standard set of benchmarks that reflect real user workload, we hope to encourage system designers and users to publish performance figures that can be compared with those of other systems. This will allow users to choose the system that best meets their needs and give designers a tool with which they can measure the performance effects of improvements to their systems.

  13. Criticality Benchmark Results Using Various MCNP Data Libraries

    SciTech Connect

    Stephanie C. Frankle

    1999-07-01

    A suite of 86 criticality benchmarks has been recently implemented in MCNP™ as part of the nuclear data validation effort. These benchmarks have been run using two sets of MCNP continuous-energy neutron data: ENDF/B-VI based data through Release 2 (ENDF60) and the ENDF/B-V based data. New evaluations were completed for ENDF/B-VI for a number of the important nuclides such as the isotopes of H, Be, C, N, O, Fe, Ni, 235,238U, 237Np, and 239,240Pu. When examining the results of these calculations for the five major categories of 233U, intermediate-enriched 235U (IEU), highly enriched 235U (HEU), 239Pu, and mixed-metal assemblies, we find the following: (1) The new evaluations for 9Be, 12C, and 14N show no net effect on keff; (2) There is a consistent decrease in keff for all of the solution assemblies for ENDF/B-VI due to 1H and 16O, moving keff further from the benchmark value for uranium solutions and closer to the benchmark value for plutonium solutions; (3) keff decreased for the ENDF/B-VI Fe isotopic data, moving the calculated keff further from the benchmark value; (4) keff decreased for the ENDF/B-VI Ni isotopic data, moving the calculated keff closer to the benchmark value; (5) The W data remained unchanged and tended to calculate slightly higher than the benchmark values; (6) For metal uranium systems, the ENDF/B-VI data for 235U tend to decrease keff while the 238U data tend to increase keff. The net result depends on the energy spectrum and material specifications for the particular assembly; (7) For more intermediate-energy systems, the changes in the 235,238U evaluations tend to increase keff. For the mixed graphite and normal uranium-reflected assembly, a large increase in keff due to changes in the 238U evaluation moved the calculated keff much closer to the benchmark value. (8

  14. The Activities of the International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    SciTech Connect

    Briggs, Joseph Blair

    2001-10-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) – Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Spain, and Israel are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled “International Handbook of Evaluated Criticality Safety Benchmark Experiments”. The 2001 Edition of the Handbook contains benchmark specifications for 2642 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data.

  15. BUGLE-93 (ENDF/B-VI) cross-section library data testing using shielding benchmarks

    SciTech Connect

    Hunter, H.T.; Slater, C.O.; White, J.E.

    1994-06-01

    Several integral shielding benchmarks were selected to perform data testing for new multigroup cross-section libraries compiled from the ENDF/B-VI data for light water reactor (LWR) shielding and dosimetry. The new multigroup libraries, BUGLE-93 and VITAMIN-B6, were studied to establish their reliability and response to the benchmark measurements by use of the radiation transport codes ANISN and DORT. Also, direct comparisons of BUGLE-93 and VITAMIN-B6 to BUGLE-80 (ENDF/B-IV) and VITAMIN-E (ENDF/B-V) were performed. Some benchmarks involved the nuclides used in LWR shielding and dosimetry applications, and some were sensitive to specific nuclear data, e.g., iron, because of its dominant use in nuclear reactor systems and its complex set of cross-section resonances. Five shielding benchmarks (four experimental and one calculational) are described and results are presented.

  16. Benchmarking of optical dimerizer systems.

    PubMed

    Pathak, Gopal P; Strickland, Devin; Vrana, Justin D; Tucker, Chandra L

    2014-11-21

    Optical dimerizers are a powerful new class of optogenetic tools that allow light-inducible control of protein-protein interactions. Such tools have been useful for regulating cellular pathways and processes with high spatiotemporal resolution in live cells, and a growing number of dimerizer systems are available. As these systems have been characterized by different groups using different methods, it has been difficult for users to compare their properties. Here, we set about to systematically benchmark the properties of four optical dimerizer systems, CRY2/CIB1, TULIPs, phyB/PIF3, and phyB/PIF6. Using a yeast transcriptional assay, we find significant differences in light sensitivity and fold-activation levels between the red light regulated systems but similar responses between the CRY2/CIB and TULIP systems. Further comparison of the ability of the CRY2/CIB1 and TULIP systems to regulate a yeast MAPK signaling pathway also showed similar responses, with slightly less background activity in the dark observed with CRY2/CIB. In the process of developing this work, we also generated an improved blue-light-regulated transcriptional system using CRY2/CIB in yeast. In addition, we demonstrate successful application of the CRY2/CIB dimerizers using a membrane-tethered CRY2, which may allow for better local control of protein interactions. Taken together, this work allows for a better understanding of the capacities of these different dimerization systems and demonstrates new uses of these dimerizers to control signaling and transcription in yeast. PMID:25350266

  17. A framework for benchmarking land models

    SciTech Connect

    Luo, Yiqi; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, Philippe; Dalmonech, D.; Fisher, J.B.; Fisher, R.; Friedlingstein, P.; Hibbard, Kathleen A.; Hoffman, F. M.; Huntzinger, Deborah; Jones, C.; Koven, C.; Lawrence, David M.; Li, D.J.; Mahecha, M.; Niu, S.L.; Norby, Richard J.; Piao, S.L.; Qi, X.; Peylin, P.; Prentice, I.C.; Riley, William; Reichstein, M.; Schwalm, C.; Wang, Y.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-10-09

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models
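
    As a toy illustration of points (2) and (3) of the framework, the sketch below scores each simulated variable against a reference using a normalized mismatch metric and combines the scores with weights. Variable names, weights, the scoring function, and the data are all invented placeholders, not part of the framework.

    ```python
    import numpy as np

    # Score each variable by a normalized mismatch against its reference,
    # then combine scores with weights into one benchmark number.
    rng = np.random.default_rng(2)

    def skill(model, reference):
        rmse = np.sqrt(np.mean((model - reference) ** 2))
        return np.exp(-rmse / np.std(reference))   # 1 = perfect, -> 0 with mismatch

    obs = {"gpp": rng.random(120), "et": rng.random(120)}   # monthly references
    sim = {k: v + 0.1 * rng.standard_normal(120) for k, v in obs.items()}
    weights = {"gpp": 0.6, "et": 0.4}

    total = sum(weights[k] * skill(sim[k], obs[k]) for k in obs)
    print(f"overall benchmark score: {total:.2f}")
    ```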

  18. Implementation of NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  19. Benchmarking in healthcare organizations: an introduction.

    PubMed

    Anderson-Miles, E

    1994-09-01

    Business survival is increasingly difficult in the contemporary world. In order to survive, organizations need a commitment to excellence and a means of measuring that commitment and its results. Benchmarking provides one method for doing this. As the author describes, benchmarking is a performance improvement method that has been used for centuries. Recently, it has begun to be used in the healthcare industry where it has the potential to improve significantly the efficiency, cost-effectiveness, and quality of healthcare services. PMID:10146064

  20. Big Data in AER

    NASA Astrophysics Data System (ADS)

    Kregenow, Julia M.

    2016-01-01

    Penn State University teaches Introductory Astronomy to more undergraduates than any other institution in the U.S. Using a standardized assessment instrument, we have pre-/post- tested over 20,000 students in the last 8 years in both resident and online instruction. This gives us a rare opportunity to look for long term trends in the performance of our students during a period in which online instruction has burgeoned.

  1. Machine characterization and benchmark performance prediction

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.

    1988-01-01

    From runs of standard benchmarks or benchmark suites, it is not possible to characterize the machine or to predict the run time of other benchmarks which have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone and Sieve of Eratosthenes.
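
    The prediction step described above reduces to a dot product: the machine analyzer supplies a time per source-language operation, the program analyzer supplies execution counts, and the predicted run time is their product summed over operations. A minimal sketch, with an invented operation set and numbers:

    ```python
    import numpy as np

    # Predicted run time = sum over operations of (time per op) * (op count).
    op_times  = np.array([2.0e-8, 5.0e-8, 1.2e-7])  # s/op: add, multiply, divide (assumed)
    op_counts = np.array([8.0e9,  3.0e9,  4.0e8])   # executions of each op (assumed)

    print(f"predicted run time: {op_times @ op_counts:.1f} s")
    ```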

  2. Action-Oriented Benchmarking: Concepts and Tools

    SciTech Connect

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article--Part 1 of a two-part series--we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool, EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  3. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
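
    The infrastructure computes its metrics with SPARQL queries over RDF annotations; as a plain-Python illustration of the kind of metric involved, the sketch below derives precision, recall, and F1 from gold-standard versus predicted mutation annotations. The annotation tuples are invented.

    ```python
    # Precision, recall, and F1 from gold vs. predicted mutation annotations.
    # (document, mutation) tuples are invented examples.
    gold      = {("doc1", "p.V600E"), ("doc1", "c.76A>T"), ("doc2", "p.G12D")}
    predicted = {("doc1", "p.V600E"), ("doc2", "p.G12D"), ("doc2", "p.G13D")}

    tp = len(gold & predicted)
    precision = tp / len(predicted)
    recall = tp / len(gold)
    f1 = 2 * precision * recall / (precision + recall)
    print(f"P = {precision:.2f}  R = {recall:.2f}  F1 = {f1:.2f}")
    ```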

  4. Benchmarking for Cost Improvement. Final report

    SciTech Connect

    Not Available

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  5. Benchmarking of Methods for Genomic Taxonomy

    PubMed Central

    Cosentino, Salvatore; Lukjancenko, Oksana; Saputra, Dhany; Rasmussen, Simon; Hasman, Henrik; Sicheritz-Pontén, Thomas; Aarestrup, Frank M.; Ussery, David W.; Lund, Ole

    2014-01-01

    One of the first issues that emerges when a prokaryotic organism of interest is encountered is the question of what it is—that is, which species it is. The 16S rRNA gene formed the basis of the first method for sequence-based taxonomy and has had a tremendous impact on the field of microbiology. Nevertheless, the method has been found to have a number of shortcomings. In the current study, we trained and benchmarked five methods for whole-genome sequence-based prokaryotic species identification on a common data set of complete genomes: (i) SpeciesFinder, which is based on the complete 16S rRNA gene; (ii) Reads2Type, which searches for species-specific 50-mers in either the 16S rRNA gene or the gyrB gene (for the Enterobacteriaceae family); (iii) the ribosomal multilocus sequence typing (rMLST) method, which samples up to 53 ribosomal genes; (iv) TaxonomyFinder, which is based on species-specific functional protein domain profiles; and finally (v) KmerFinder, which examines the number of co-occurring k-mers (substrings of k nucleotides in DNA sequence data). The performances of the methods were subsequently evaluated on three data sets of short sequence reads or draft genomes from public databases. In total, the evaluation sets constituted sequence data from more than 11,000 isolates covering 159 genera and 243 species. Our results indicate that methods that sample only chromosomal, core genes have difficulties distinguishing closely related species which only recently diverged. The KmerFinder method had the overall highest accuracy and correctly identified from 93% to 97% of the isolates in the evaluation sets. PMID:24574292
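
    As a toy illustration of the k-mer idea behind KmerFinder (not its actual implementation), the sketch below counts k-mers shared between a query sequence and candidate reference genomes; the reference with the largest overlap is the best match. Sequences are invented.

    ```python
    # Count k-mers shared between a query and candidate references; the
    # reference with the largest overlap is the best species match.
    def kmers(seq, k=16):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    query = "ATGGCGTACGTTAGCCGGATTACGATCGGCTAAGCT" * 3
    refs = {
        "species_A": query[:60] + "ACGT" * 12,   # shares a long prefix with the query
        "species_B": "TTGACCGTAGGCTTAACGGATCCGATGGCTTAGGCA" * 3,
    }

    query_kmers = kmers(query)
    for name, ref in refs.items():
        print(f"{name}: {len(query_kmers & kmers(ref))} shared 16-mers")
    ```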

  6. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  7. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  8. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  9. Experts discuss how benchmarking improves the healthcare industry. Roundtable discussion.

    PubMed

    Capozzalo, G L; Hlywak, J W; Kenny, B; Krivenko, C A

    1994-09-01

    Healthcare Financial Management engaged four benchmarking experts in a discussion about benchmarking and its role in the healthcare industry. The experts agree that benchmarking by itself does not create change unless it is part of a larger continuous quality improvement program; that benchmarking works best when senior management supports it enthusiastically and when the "appropriate" people are involved; and that benchmarking, when implemented correctly, is one of the best tools available to help healthcare organizations improve their internal processes. PMID:10146069

  10. Benchmark 2 - Springback of a draw / re-draw panel: Part C: Benchmark analysis

    NASA Astrophysics Data System (ADS)

    Carsley, John E.; Xia, Cedric; Yang, Lianxiang; Stoughton, Thomas B.; Xu, Siguang; Hartfield-Wünsch, Susan E.; Li, Jingjing

    2013-12-01

    Benchmark analysis is summarized for DP600 and AA 5182-O. Nine simulation results submitted for this benchmark study are compared to the physical measurement results. The details on the codes, friction parameters, mesh technology, CPU, and material models are also summarized at the end of this report with the participant information details.

  11. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    PubMed

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restriction. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control has taken some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons. PMID:23999329
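
    Indirect standardization, one of the risk-adjustment methods listed above, can be sketched as follows: benchmark (reference) rates are applied to the local case mix to obtain an expected count, and the observed-to-expected ratio gives a standardized infection ratio (SIR). All numbers below are invented.

    ```python
    # Standardized infection ratio via indirect standardization.
    strata = [          # (local device-days, benchmark rate per 1000 device-days)
        (1200, 2.1),    # e.g., medical ICU
        (800,  3.4),    # e.g., surgical ICU
        (400,  5.0),    # e.g., burn unit
    ]
    observed = 9
    expected = sum(days * rate / 1000 for days, rate in strata)

    sir = observed / expected
    print(f"expected = {expected:.1f}, SIR = {sir:.2f}")  # SIR > 1: worse than benchmark
    ```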

  12. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    ERIC Educational Resources Information Center

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decision regarding student performance. More information, however, is needed to understand if the nationally-derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  13. Toxicological benchmarks for wildlife: 1994 Revision

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal contact is not considered in this report.

  14. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6

    NASA Astrophysics Data System (ADS)

    van der Marck, Steven C.

    2012-12-01

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such

  15. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6

    SciTech Connect

    Marck, Steven C. van der

    2012-12-15

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series

  16. Shielding Integral Benchmark Archive and Database (SINBAD)

    SciTech Connect

    Kirk, Bernadette Lugue; Grove, Robert E; Kodeli, I.; Sartori, Enrico; Gulliford, J.

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  17. Benchmark field study of deep neutron penetration

    NASA Astrophysics Data System (ADS)

    Morgan, J. F.; Sale, K.; Gold, R.; Roberts, J. H.; Preston, C. C.

    1991-06-01

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of mono-energetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described with emphasis on neutron source characteristics and room return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) Pressure Vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry.

  18. Benchmark field study of deep neutron penetration

    SciTech Connect

    Morgan, J.F.; Sale, K. ); Gold, R.; Roberts, J.H.; Preston, C.C. )

    1991-06-10

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of mono-energetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described with emphasis on neutron source characteristics and room return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) Pressure Vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry. 18 refs.

  19. NAS Parallel Benchmarks, Multi-Zone Versions

    NASA Technical Reports Server (NTRS)

    vanderWijngaart, Rob F.; Haopiang, Jin

    2003-01-01

    We describe an extension of the NAS Parallel Benchmarks (NPB) suite that involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy, which is common among structured-mesh production flow solver codes in use at NASA Ames and elsewhere, provides relatively easily exploitable coarse-grain parallelism between meshes. Since the individual application benchmarks also allow fine-grain parallelism themselves, this NPB extension, named NPB Multi-Zone (NPB-MZ), is a good candidate for testing hybrid and multi-level parallelization tools and strategies.

  20. Analysis of ANS LWR physics benchmark problems.

    SciTech Connect

    Taiwo, T. A.

    1998-07-29

    Various Monte Carlo and deterministic solutions to the three PWR Lattice Benchmark Problems recently defined by the ANS Ad Hoc Committee on Reactor Physics Benchmarks are presented. These solutions were obtained using the VIM continuous-energy Monte Carlo code and the DIF3D/WIMS-D4M code package implemented at the Argonne National Laboratory. The code results for the K{sub eff} and relative pin power distribution are compared to measured values. Additionally, code results for the three benchmark-prescribed infinite lattice configurations are also intercompared. The results demonstrate that the codes produce very good estimates of both the K{sub eff} and power distribution for the critical core and the lattice parameters of the infinite lattice configuration.

  1. Energy benchmarking of South Australian WWTPs.

    PubMed

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment. PMID:23656950
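
    To illustrate how such a benchmark is applied in practice, the sketch below compares a plant's specific energy consumption against a target figure; the plant data and benchmark value are invented for the example and are not taken from the paper or the German guidelines:

        def specific_energy(annual_kwh, population_equivalents):
            """Specific energy consumption in kWh per population equivalent per year."""
            return annual_kwh / population_equivalents

        plant = specific_energy(annual_kwh=3_200_000, population_equivalents=80_000)
        target = 32.0  # hypothetical benchmark for this size class, kWh/(PE*a)
        verdict = "optimisation potential" if plant > target else "meets benchmark"
        print(f"{plant:.1f} kWh/(PE*a) vs target {target:.1f}: {verdict}")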

  2. New Test Set for Video Quality Benchmarking

    NASA Astrophysics Data System (ADS)

    Raventos, Joaquin

    A new test set design and benchmarking approach (US Patent pending) allows a "standard observer" to assess the end-to-end image quality characteristics of video imaging systems operating in daytime or low-light conditions. It uses randomized targets based on extensive application of photometry, geometrical optics, and digital media. The benchmarking takes into account the target's contrast sensitivity, its color characteristics, and several aspects of human vision such as visual acuity and dynamic response. The standard observer is part of the "extended video imaging system" (EVIS). The new test set allows image quality benchmarking by a panel of standard observers at the same time, and the approach ensures an unbiased assessment. Manufacturers, system integrators, and end users will assess end-to-end performance by simulating a choice of different colors, luminance levels, and dynamic conditions in the laboratory or in permanent video system installations.

  3. Standardized benchmarking in the quest for orthologs.

    PubMed

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  4. Benchmarks for the point kinetics equations

    SciTech Connect

    Ganapol, B.; Picca, P.; Previti, A.; Mostacci, D.

    2013-07-01

    A new numerical algorithm is presented for the solution of the point kinetics equations (PKEs), whose accurate solution has been sought for over 60 years. The method couples the simplest of finite difference methods, backward Euler, with Richardson's extrapolation, also called an acceleration. From this coupling, a series of benchmarks has emerged. These include cases from the literature as well as several new ones. The novelty of this presentation lies in the breadth of reactivity insertions considered, covering both prescribed and feedback reactivities, and the extreme 8- to 9-digit accuracy achievable. The benchmarks presented are to provide guidance to those who wish to develop further numerical improvements. (authors)
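
    A hedged sketch of the numerical idea named above: backward Euler applied to the point kinetics equations (reduced to one delayed-neutron group for brevity) with one level of Richardson extrapolation; the parameter values are illustrative, not the paper's benchmark cases:

        import numpy as np

        beta, lam, Lam, rho = 0.0065, 0.08, 1e-4, 0.003  # illustrative constants

        # One-group point kinetics: y = (n, c), y' = A y
        A = np.array([[(rho - beta) / Lam,  lam],
                      [beta / Lam,         -lam]])

        def backward_euler(y0, h, steps):
            """Advance y' = A y with the implicit step (I - h*A) y_new = y_old."""
            M = np.eye(2) - h * A
            y = y0.copy()
            for _ in range(steps):
                y = np.linalg.solve(M, y)
            return y

        y0 = np.array([1.0, beta / (lam * Lam)])  # equilibrium precursor level
        coarse = backward_euler(y0, 1e-3, 1000)   # step size h
        fine = backward_euler(y0, 5e-4, 2000)     # step size h/2
        extrapolated = 2.0 * fine - coarse        # cancels the O(h) error term

    Because backward Euler is first-order accurate, the combination 2*y(h/2) - y(h) removes the leading error term; repeated halving builds the extrapolation tableau behind the high-digit accuracy quoted above.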

  5. Benchmark testing of {sup 233}U evaluations

    SciTech Connect

    Wright, R.Q.; Leal, L.C.

    1997-07-01

    In this paper we investigate the adequacy of available {sup 233}U cross-section data (ENDF/B-VI and JENDL-3) for calculation of critical experiments. An ad hoc revised {sup 233}U evaluation is also tested and appears to give results which are improved relative to those obtained with either ENDF/B-VI or JENDL-3 cross sections. Calculations of k{sub eff} were performed for ten fast benchmarks and six thermal benchmarks using the three cross-section sets. Central reaction-rate-ratio calculations were also performed.

  6. Benchmarking: implementing the process in practice.

    PubMed

    Stark, Sheila; MacHale, Anita; Lennon, Eileen; Shaw, Lynne

    Government guidance and policy promotes the use of benchmarks as measures against which practice and care can be measured. This provides the motivation for practitioners to make changes to improve patient care. Adopting a systematic approach, practitioners can implement changes in practice quickly. The process requires motivation and communication between professionals of all disciplines. It provides a forum for sharing good practice and developing a support network. In this article the authors outline the initial steps taken by three PCGs in implementing the benchmarking process as they move towards primary care trust status. PMID:12212335

  7. Los Alamos National Laboratory computer benchmarking 1982

    SciTech Connect

    Martin, J.L.

    1983-06-01

    Evaluating the performance of computing machinery is a continual effort of the Computer Research and Applications Group of the Los Alamos National Laboratory. This report summarizes the results of the group's benchmarking activities performed between October 1981 and September 1982, presenting compilation and execution times as well as megaflop rates for a set of benchmark codes. Tests were performed on the following computers: Cray Research, Inc. (CRI) Cray-1S; Control Data Corporation (CDC) 7600, 6600, Cyber 73, Cyber 825, Cyber 835, Cyber 855, and Cyber 205; Digital Equipment Corporation (DEC) VAX 11/780 and VAX 11/782; and Apollo Computer, Inc., Apollo.

  8. Toxicological benchmarks for wildlife: 1996 Revision

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  9. Benchmark 4 - Wrinkling during cup drawing

    NASA Astrophysics Data System (ADS)

    Dick, Robert; Cardoso, Rui; Paulino, Mariana; Yoon, Jeong Whan

    2013-12-01

    Benchmark-4 is designed to predict wrinkling during cup drawing. Two different punch geometries have been selected in order to investigate the changes of wrinkling amplitude and wave. To study the effect of material on wrinkling, two distinct materials including AA 5042 and AKDQ steel are also considered in the benchmark. Problem description, material properties, and simulation reports with experimental data are summarized. At the request of the author, and Proceedings Editor, a corrected and updated version of this paper was published on January 2, 2014. The Corrigendum attached to the updated article PDF contains a list of the changes made to the original published version.

  10. Developing and Using Benchmarks for Eddy Current Simulation Codes Validation to Address Industrial Issues

    NASA Astrophysics Data System (ADS)

    Mayos, M.; Buvat, F.; Costan, V.; Moreau, O.; Gilles-Pascaud, C.; Reboud, C.; Foucher, F.

    2011-06-01

    To achieve performance demonstration, which is a legal requirement for the qualification of NDE processes applied on French nuclear power plants, the use of modeling tools is a valuable support, provided that the employed models have been previously validated. To this end, in particular for eddy current modeling, a validation methodology based on specific benchmarks close to the actual industrial issue has to be defined. Nonetheless, considering the high variability in code origin and complexity, feedback from experience on actual cases has shown that it is critical to define simpler, generic, public benchmarks in order to perform a preliminary selection. A dedicated Working Group was launched within COFREND, the French Association for NDE, resulting in the definition of several benchmark problems. This effort is now ready to be pooled with similar international approaches.

  11. DICE: Database for the International Criticality Safety Benchmark Evaluation Program Handbook

    SciTech Connect

    Nouri, Ali; Nagel, Pierre; Briggs, J. Blair; Ivanova, Tatiana

    2003-09-15

    The 2002 edition of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' (ICSBEP Handbook) spans more than 26 000 pages and contains 330 evaluations with benchmark specifications for 2881 critical or near-critical configurations. With such a large content, it became evident that users needed more than a broad and qualitative classification of experiments to make efficient use of the ICSBEP Handbook. This paper describes the features of DICE, the Database for the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The DICE program contains a relational database loaded with selected information from each configuration and a user interface that enables one to query the database and to extract specific parameters. Summary descriptions of each experimental configuration can also be obtained. In addition, plotting capabilities provide the means of comparing neutron spectra and sensitivity coefficients for a set of configurations.

  12. Overview of TPC Benchmark E: The Next Generation of OLTP Benchmarks

    NASA Astrophysics Data System (ADS)

    Hogan, Trish

    Set to replace the aging TPC-C, the TPC Benchmark E is the next generation OLTP benchmark, which more accurately models client database usage. TPC-E addresses the shortcomings of TPC-C. It has a much more complex workload, requires the use of RAID-protected storage, generates much less I/O, and is much cheaper and easier to set up, run, and audit. After a period of overlap, it is expected that TPC-E will become the de facto OLTP benchmark.

  13. VLSI Implementation of a 2.8 Gevent/s Packet-Based AER Interface with Routing and Event Sorting Functionality.

    PubMed

    Scholze, Stefan; Schiefer, Stefan; Partzsch, Johannes; Hartmann, Stephan; Mayr, Christian Georg; Höppner, Sebastian; Eisenreich, Holger; Henker, Stephan; Vogginger, Bernhard; Schüffny, Rene

    2011-01-01

    State-of-the-art large-scale neuromorphic systems require sophisticated spike event communication between units of the neural network. We present a high-speed communication infrastructure for a waferscale neuromorphic system, based on application-specific neuromorphic communication ICs in a field-programmable gate array (FPGA)-maintained environment. The ICs implement configurable axonal delays, as required for certain types of dynamic processing or for emulating spike-based learning among distant cortical areas. Measurements are presented which show the efficacy of these delays in influencing the behavior of neuromorphic benchmarks. The specialized, dedicated address-event-representation communication in most current systems requires separate, low-bandwidth configuration channels. In contrast, the configuration of the waferscale neuromorphic system is also handled by the digital packet-based pulse channel, which transmits configuration data at the full bandwidth otherwise used for pulse transmission. The overall so-called pulse communication subgroup (ICs and FPGA) delivers a 25-50 times higher event transmission rate than other current neuromorphic communication infrastructures. PMID:22016720

  14. VLSI Implementation of a 2.8 Gevent/s Packet-Based AER Interface with Routing and Event Sorting Functionality

    PubMed Central

    Scholze, Stefan; Schiefer, Stefan; Partzsch, Johannes; Hartmann, Stephan; Mayr, Christian Georg; Höppner, Sebastian; Eisenreich, Holger; Henker, Stephan; Vogginger, Bernhard; Schüffny, Rene

    2011-01-01

    State-of-the-art large-scale neuromorphic systems require sophisticated spike event communication between units of the neural network. We present a high-speed communication infrastructure for a waferscale neuromorphic system, based on application-specific neuromorphic communication ICs in a field-programmable gate array (FPGA)-maintained environment. The ICs implement configurable axonal delays, as required for certain types of dynamic processing or for emulating spike-based learning among distant cortical areas. Measurements are presented which show the efficacy of these delays in influencing the behavior of neuromorphic benchmarks. The specialized, dedicated address-event-representation communication in most current systems requires separate, low-bandwidth configuration channels. In contrast, the configuration of the waferscale neuromorphic system is also handled by the digital packet-based pulse channel, which transmits configuration data at the full bandwidth otherwise used for pulse transmission. The overall so-called pulse communication subgroup (ICs and FPGA) delivers a 25–50 times higher event transmission rate than other current neuromorphic communication infrastructures. PMID:22016720

  15. NAS Parallel Benchmarks Results 3-95

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Walter, Howard (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion, i.e., the complete details of the problem are given in a NAS technical document. Except for a few restrictions, benchmark implementors are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: CRAY C90, CRAY T90 and Fujitsu VPP500; (b) Highly Parallel Processors: CRAY T3D, IBM SP2-WN (Wide Nodes), and IBM SP2-TN2 (Thin Nodes 2); and (c) Symmetric Multiprocessors: Convex Exemplar SPP1000, CRAY J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL (75 MHz). We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks, and mention future NAS plans for the NPB.

  16. Benchmarking 2011: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  17. Benchmarking 2010: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  18. Benchmarking Year Five Students' Reading Abilities

    ERIC Educational Resources Information Center

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark Year Five students' reading abilities of fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension and a set of indicators to inform…

  19. Benchmarking: A New Approach to Space Planning.

    ERIC Educational Resources Information Center

    Fink, Ira

    1999-01-01

    Questions some fundamental assumptions of historical methods of space guidelines in college facility planning, and offers an alternative approach to space projections based on a new benchmarking method. The method, currently in use at several institutions, uses space per faculty member as the basis for prediction of need and space allocation. (MSE)

  20. Issues in Benchmarking and Assessing Institutional Engagement

    ERIC Educational Resources Information Center

    Furco, Andrew; Miller, William

    2009-01-01

    The process of assessing and benchmarking community engagement can take many forms. To date, more than two dozen assessment tools for measuring community engagement institutionalization have been published. These tools vary substantially in purpose, level of complexity, scope, process, structure, and focus. While some instruments are designed to…

  1. Benchmarking Peer Production Mechanisms, Processes & Practices

    ERIC Educational Resources Information Center

    Fischer, Thomas; Kretschmer, Thomas

    2008-01-01

    This deliverable identifies key approaches for quality management in peer production by benchmarking peer production practices and processes in other areas. (Contains 29 footnotes, 13 figures and 2 tables.)[This report has been authored with contributions of: Kaisa Honkonen-Ratinen, Matti Auvinen, David Riley, Jose Pinzon, Thomas Fischer, Thomas…

  2. Sequenced Benchmarks for K-8 Science.

    ERIC Educational Resources Information Center

    Kendall, John S.; DeFrees, Keri L.; Richardson, Amy

    This document describes science benchmarks for grades K-8 in Earth and Space Science, Life Science, and Physical Science. Each subject area is divided into topics followed by a short content description and grade level information. Source documents for this paper included science content guides from California, Ohio, South Carolina, and South…

  3. Cleanroom Energy Efficiency: Metrics and Benchmarks

    SciTech Connect

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.
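
    As a toy illustration of two of the system-level metrics named above (the numbers are made up, not drawn from the LBNL dataset):

        def air_change_rate(supply_cfm, room_volume_ft3):
            """Air changes per hour from supply airflow (cfm) and room volume (ft^3)."""
            return supply_cfm * 60.0 / room_volume_ft3

        def air_handling_w_per_cfm(fan_power_w, airflow_cfm):
            """Air-handling efficiency metric: fan power per unit airflow."""
            return fan_power_w / airflow_cfm

        print(air_change_rate(supply_cfm=50_000, room_volume_ft3=60_000))      # 50 ACH
        print(air_handling_w_per_cfm(fan_power_w=30_000, airflow_cfm=50_000))  # 0.6 W/cfm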

  4. Benchmark graphs for testing community detection algorithms

    NASA Astrophysics Data System (ADS)

    Lancichinetti, Andrea; Fortunato, Santo; Radicchi, Filippo

    2008-10-01

    Community structure is one of the most important features of real networks and reveals the internal organization of the nodes. Many algorithms have been proposed, but the crucial issue of testing, i.e., the question of how good an algorithm is with respect to others, is still open. Standard tests include the analysis of simple artificial graphs with a built-in community structure that the algorithm has to recover. However, the special graphs adopted in actual tests have a structure that does not reflect the real properties of nodes and communities found in real networks. Here we introduce a class of benchmark graphs that account for the heterogeneity in the distributions of node degrees and of community sizes. We use this benchmark to test two popular methods of community detection: modularity optimization and Potts model clustering. The results show that the benchmark poses a much more severe test to algorithms than standard benchmarks, revealing limits that may not be apparent at a first analysis.
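
    This family of graphs (the LFR benchmark) is readily available; NetworkX, for instance, ships an implementation. A usage sketch with small illustrative parameters:

        import networkx as nx

        # Generate an LFR benchmark graph: power-law degree (tau1) and
        # community-size (tau2) distributions, with mixing parameter mu giving
        # the fraction of each node's links that leave its community.
        G = nx.LFR_benchmark_graph(
            n=250, tau1=3.0, tau2=1.5, mu=0.1,
            average_degree=5, min_community=20, seed=10,
        )

        # The planted partition is stored per node as a 'community' attribute.
        communities = {frozenset(G.nodes[v]["community"]) for v in G}
        print(f"{len(communities)} planted communities")

    A detection algorithm can then be scored by comparing its output against the planted partition, e.g. with normalized mutual information.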

  5. A MULTIMODEL APPROACH FOR CALCULATING BENCHMARK DOSE

    EPA Science Inventory


    Garcia, Ramon I.; Setzer, R. Woodrow

    In the assessment of dose response, a number of plausible dose-response models may give fits that are consistent with the data. If no dose response formulation had been speci...

  6. MHEC Survey Establishes Midwest Property Insurance Benchmarks.

    ERIC Educational Resources Information Center

    Midwestern Higher Education Commission Risk Management Institute Research Bulletin, 1994

    1994-01-01

    This publication presents the results of a survey of over 200 midwestern colleges and universities on their property insurance programs and establishes benchmarks to help these institutions evaluate their insurance programs. Findings included the following: (1) 51 percent of respondents currently purchase their property insurance as part of a…

  7. What Is the Impact of Subject Benchmarking?

    ERIC Educational Resources Information Center

    Pidcock, Steve

    2006-01-01

    The introduction of subject benchmarking led to fears of increased external intervention in the activities of universities and a more restrictive view of institutional autonomy, accompanied by an undermining of the academic profession, particularly through the perceived threat of the introduction of a national curriculum for higher education. For…

  8. Environmental radiation: risk benchmarks or benchmarking risk assessment.

    PubMed

    Bates, Matthew E; Valverde, L James; Vogel, John T; Linkov, Igor

    2011-07-01

    In the wake of the compound March 2011 nuclear disaster at the Fukushima I nuclear power plant in Japan, international public dialogue has repeatedly turned to questions of the accuracy of current risk assessment processes to assess nuclear risks and the adequacy of existing regulatory risk thresholds to protect us from nuclear harm. We confront these issues with an emphasis on learning from the incident in Japan for future US policy discussions. Without delving into a broader philosophical discussion of the general social acceptance of the risk, the relative adequacy of existing US Nuclear Regulatory Commission (NRC) risk thresholds is assessed in comparison with the risk thresholds of federal agencies not currently under heightened public scrutiny. Existing NRC thresholds are found to be among the most conservative in the comparison, suggesting that the agency's current regulatory framework is consistent with larger societal ideals. In turning to risk assessment methodologies, the disaster in Japan does indicate room for growth. Emerging lessons seem to indicate an opportunity to enhance resilience through systemic levels of risk aggregation. Specifically, we believe bringing systemic reasoning to the risk management process requires a framework that (i) is able to represent risk-based knowledge and information about a panoply of threats; (ii) provides a systemic understanding (and representation) of the natural and built environments of interest and their dependencies; and (iii) allows for the rational and coherent valuation of a range of outcome variables of interest, both tangible and intangible. Rather than revisiting the thresholds themselves, we see the goal of future nuclear risk management in adopting and implementing risk assessment techniques that systemically evaluate large-scale socio-technical systems with a view toward enhancing resilience and minimizing the potential for surprise. PMID:21608107

  9. Using Benchmarking To Influence Tuition and Fee Decisions.

    ERIC Educational Resources Information Center

    Hubbell, Loren W. Loomis; Massa, Robert J.; Lapovsky, Lucie

    2002-01-01

    Discusses the use of benchmarking in managing enrollment. Using a case study, illustrates how benchmarking can help administrators develop strategies for planning and implementing admissions and pricing practices. (EV)

  10. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts a foundation and infrastructure have been built which allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  11. Two-dimensional benchmark calculations for PNL-30 through PNL-35

    SciTech Connect

    Mosteller, R.D.

    1997-09-01

    Interest in critical experiments with lattices of mixed-oxide (MOX) fuel pins has been revived by the possibility that light water reactors will be used for disposition of weapons-grade plutonium. A series of six experiments with MOX lattices, designated PNL-30 through PNL-35, was performed at Pacific Northwest Laboratories in 1975 and 1976, and a set of benchmark specifications for these experiments subsequently was adopted by the Cross Section Evaluation Working Group (CSEWG). Although there appear to be some problems with these experiments, they remain the only CSEWG benchmarks for MOX lattices. The number of fuel pins in these experiments is relatively low, corresponding to fewer than 4 typical pressurized-water-reactor fuel assemblies. Accordingly, they are more appropriate as benchmarks for lattice-physics codes than for reactor-core simulator codes. Unfortunately, the CSEWG specifications retain the full three-dimensional (3D) detail of the experiments, while lattice-physics codes almost universally are limited to two dimensions (2D). This paper proposes an extension of the benchmark specifications to include a 2D model, and it justifies that extension by comparing results from the MCNP Monte Carlo code for the 2D and 3D specifications.

  12. Developing of Indicators of an E-Learning Benchmarking Model for Higher Education Institutions

    ERIC Educational Resources Information Center

    Sae-Khow, Jirasak

    2014-01-01

    This study was the development of e-learning indicators used as an e-learning benchmarking model for higher education institutes. Specifically, it aimed to: 1) synthesize the e-learning indicators; 2) examine content validity by specialists; and 3) explore appropriateness of the e-learning indicators. Review of related literature included…

  13. Using Grid Benchmarks for Dynamic Scheduling of Grid Applications

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Hood, Robert

    2003-01-01

    Navigation, or dynamic scheduling, of applications on computational grids can be improved through the use of an application-specific characterization of grid resources. Current grid information systems provide a description of the resources but do not contain any application-specific information. We define a GridScape as the dynamic state of the grid resources. We measure the dynamic performance of these resources using the grid benchmarks. Then we use the GridScape for automatic assignment of the tasks of a grid application to grid resources. The scalability of the system is achieved by limiting the navigation overhead to a few percent of the application resource requirements. Our task submission and assignment protocol guarantees that the navigation system does not cause grid congestion. On a synthetic data mining application we demonstrate that GridScape-based task assignment reduces the application turnaround time.
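
    A toy sketch of the assignment step under stated assumptions: the GridScape is reduced to a dictionary of benchmark-derived scores, and each task is greedily placed on the resource whose queue would finish it earliest (names and numbers are hypothetical):

        def assign_tasks(tasks, gridscape):
            """tasks: {name: work units}; gridscape: {resource: benchmark score}."""
            load = {r: 0.0 for r in gridscape}  # work already queued per resource
            assignment = {}
            for name, work in sorted(tasks.items(), key=lambda t: -t[1]):
                # pick the resource that finishes this task's queue earliest
                best = min(load, key=lambda r: (load[r] + work) / gridscape[r])
                load[best] += work
                assignment[name] = best
            return assignment

        print(assign_tasks({"t1": 8, "t2": 4, "t3": 2},
                           {"nodeA": 2.0, "nodeB": 1.0}))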

  14. 2D and 3D turbulent reconnection as a benchmark within the SWIFF project

    NASA Astrophysics Data System (ADS)

    Lapenta, G.; Markidis, S.; Bettarini, L.

    2012-04-01

    The goals of SWIFF (swiff.eu/) are: * Zero in on the physics of all aspects of space weather and design mathematical models that can address them. * Develop specific computational models that are especially suited to handling the great complexity of space weather events, where the range of time evolutions and of spatial variations is so much more challenging than in regular meteorological models. * Develop the software needed to implement such computational models on the modern supercomputers now available in Europe. Within SWIFF a rigorous benchmarking activity is taking place, which will be reported here. A full description is available at: swiff.eu/wiki/index.php?title=Main_Page#Benchmark_Activities

  15. Two-dimensional benchmark calculations for PNL-30 through PNL-35

    SciTech Connect

    Mosteller, R.D.

    1997-12-01

    Interest in critical experiments with lattices of mixed-oxide (MOX) fuel pins has been revived by the possibility that light water reactors will be used for the disposition of weapons-grade plutonium. A series of six experiments with MOX lattices, designated PNL-30 through PNL-35, was performed at Pacific Northwest Laboratories in 1975 and 1976, and a set of benchmark specifications for these experiments subsequently was adopted by the Cross Section Evaluation Working Group (CSEWG). Although there appear to be some problems with these experiments, they remain the only CSEWG benchmarks for MOX lattices.

  16. Review of the GMD Benchmark Event in TPL-007-1

    SciTech Connect

    Backhaus, Scott N.; Rivera, Michael Kelly

    2015-07-21

    Los Alamos National Laboratory (LANL) examined the approaches suggested in NERC Standard TPL-007-1 for defining the geo-electric field for the Benchmark Geomagnetic Disturbance (GMD) Event. Specifically: (1) estimating the 100-year exceedance geo-electric field magnitude; (2) the scaling of the GMD Benchmark Event to geomagnetic latitudes below 60 degrees north; and (3) the effect of uncertainties in earth conductivity data on the conversion from geomagnetic field to geo-electric field. This document summarizes the review and presents recommendations for consideration.

  17. A Privacy-Preserving Platform for User-Centric Quantitative Benchmarking

    NASA Astrophysics Data System (ADS)

    Herrmann, Dominik; Scheuer, Florian; Feustel, Philipp; Nowey, Thomas; Federrath, Hannes

    We propose a centralised platform for quantitative benchmarking of key performance indicators (KPI) among mutually distrustful organisations. Our platform offers users the opportunity to request an ad-hoc benchmarking for a specific KPI within a peer group of their choice. The architecture and protocol are designed to provide anonymity to users and to hide the sensitive KPI values from other clients and the central server. To this end, we integrate user-centric peer group formation, exchangeable secure multi-party computation protocols, short-lived ephemeral key pairs as pseudonyms, and attribute certificates. We show by empirical evaluation of a prototype that the performance is acceptable for reasonably sized peer groups.
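
    One building block such a platform can compose is an additive secret-sharing sum, which lets a peer group learn the mean of a KPI without any single party, including the central server, seeing an individual value. A self-contained sketch with a hypothetical field size and message passing simulated by plain lists:

        import random

        P = 2**61 - 1  # prime modulus; KPI values are embedded in this field

        def share(value, n):
            """Split a KPI into n additive shares that sum to value mod P."""
            parts = [random.randrange(P) for _ in range(n - 1)]
            parts.append((value - sum(parts)) % P)
            return parts

        def benchmark_mean(kpis):
            n = len(kpis)
            # each participant i sends one share of its KPI to every participant j
            all_shares = [share(v, n) for v in kpis]
            # participant j publishes only the sum of the shares it received
            partials = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]
            return (sum(partials) % P) / n  # only the aggregate is revealed

        print(benchmark_mean([120, 95, 143]))  # mean KPI, inputs stay private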

  18. 42 CFR 414.1255 - Benchmarks for cost measures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    42 Public Health 3 (2013-10-01), Payment Modifier Under the Physician Fee Schedule, § 414.1255 Benchmarks for cost measures: The benchmark for each cost measure is the national mean of the performance rates calculated among all groups...

  19. 42 CFR 414.1255 - Benchmarks for cost measures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    42 Public Health 3 (2014-10-01), Payment Modifier Under the Physician Fee Schedule, § 414.1255 Benchmarks for cost measures: (a) For the CY 2015 payment adjustment period, the benchmark for each cost measure is the national mean of...

  20. Taking Stock of Corporate Benchmarking Practices: Panacea or Pandora's Box?

    ERIC Educational Resources Information Center

    Fleisher, Craig S.; Burton, Sara

    1995-01-01

    Discusses why corporate communications/public relations (cc/pr) should be benchmarked (an approach used by cc/pr managers to demonstrate the value of their activities to skeptical organizational executives). Discusses myths about cc/pr benchmarking; types, targets, and focus of cc/pr benchmarking; a process model; and critical decisions about…

  1. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    ERIC Educational Resources Information Center

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  2. Benchmark Calculations of Interaction Energies in Noncovalent Complexes and Their Applications.

    PubMed

    Řezáč, Jan; Hobza, Pavel

    2016-05-11

    Data sets of benchmark interaction energies in noncovalent complexes are an important tool for quantifying the accuracy of computational methods used in this field, as well as for the development of new computational approaches. This review is intended as a guide to conscious use of these data sets. We discuss their construction and accuracy, list the data sets available in the literature, and demonstrate their application to validation and parametrization of quantum-mechanical computational methods. In practical model systems, the benchmark interaction energies are usually obtained using composite CCSD(T)/CBS schemes. To use these results as a benchmark, their accuracy should be estimated first. We analyze the errors of this methodology with respect to both the approximations involved and the basis set size. We list the most prominent data sets covering various aspects of the field, from general ones to sets focusing on specific types of interactions or systems. The benchmark data are then used to validate more efficient computational approaches, including those based on explicitly correlated methods. Special attention is paid to the transition to large systems, where accurate benchmarking is difficult or impossible, and to the importance of nonequilibrium geometries in parametrization of more approximate methods. PMID:26943241
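
    For context, the composite CCSD(T)/CBS schemes discussed above typically extrapolate the correlation energy with the standard two-point X^-3 formula (Helgaker-style), where X is the basis-set cardinal number; a sketch with placeholder energies:

        def cbs_two_point(e_x, e_y, x, y):
            """Extrapolate correlation energies at cardinal numbers x < y to the
            complete-basis-set limit, assuming E(X) = E_CBS + A * X**-3."""
            return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

        # e.g. triple-zeta (X=3) and quadruple-zeta (X=4) correlation energies
        # in hartree; the values below are placeholders, not benchmark data.
        e_cbs = cbs_two_point(e_x=-0.31542, e_y=-0.32275, x=3, y=4)
        print(f"CBS-limit correlation energy: {e_cbs:.5f} Eh")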

  3. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    DOE PAGESBeta

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  4. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    SciTech Connect

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  5. CFD validation in OECD/NEA t-junction benchmark.

    SciTech Connect

    Obabko, A. V.; Fischer, P. F.; Tautges, T. J.; Karabasov, S.; Goloviznin, V. M.; Zaytsev, M. A.; Chudanov, V. V.; Pervichko, V. A.; Aksenova, A. E.

    2011-08-23

    When streams of rapidly moving flow merge in a T-junction, the potential arises for large oscillations at the scale of the diameter, D, with a period scaling as O(D/U), where U is the characteristic flow velocity. If the streams are of different temperatures, the oscillations result in temperature fluctuations (thermal striping) at the pipe wall in the outlet branch that can accelerate thermal-mechanical fatigue and ultimately cause pipe failure. The importance of this phenomenon has prompted the nuclear energy modeling and simulation community to establish a benchmark to test the ability of computational fluid dynamics (CFD) codes to predict thermal striping. The benchmark is based on thermal and velocity data measured in an experiment designed specifically for this purpose. Thermal striping is intrinsically unsteady and hence not accessible to steady-state simulation approaches such as steady-state Reynolds-averaged Navier-Stokes (RANS) models. Consequently, one must consider either unsteady RANS or large eddy simulation (LES). This report compares the results for three LES codes: Nek5000, developed at Argonne National Laboratory (USA), and Cabaret and Conv3D, developed at the Moscow Institute of Nuclear Energy Safety (IBRAE) in Russia. Nek5000 is based on the spectral element method (SEM), which is a high-order weighted residual technique that combines the geometric flexibility of the finite element method (FEM) with the tensor-product efficiencies of spectral methods. Cabaret is a 'compact accurately boundary-adjusting high-resolution technique' for fluid dynamics simulation. The method is second-order accurate on nonuniform grids in space and time, and has a small dispersion error and a computational stencil defined within one space-time cell. The scheme is equipped with a conservative nonlinear correction procedure based on the maximum principle. Conv3D is based on the immersed boundary method and is validated on a wide set of the experimental and

  6. Performance Comparison of HPF and MPI Based NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash

    1997-01-01

    Compilers supporting High Performance Fortran (HPF) features first appeared in late 1994 and early 1995 from Applied Parallel Research (APR), Digital Equipment Corporation, and The Portland Group (PGI). IBM introduced an HPF compiler for the IBM RS/6000 SP2 in April of 1996. Over the past two years, these implementations have shown steady improvement in terms of both features and performance. The performance of various hardware/programming model (HPF and MPI) combinations will be compared, based on the latest NAS Parallel Benchmark results, thus providing a cross-machine and cross-model comparison. Specifically, HPF based NPB results will be compared with MPI based NPB results to provide perspective on performance currently obtainable using HPF versus MPI or versus hand-tuned implementations such as those supplied by the hardware vendors. In addition, we present NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, and SGI Origin2000. We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks.

  7. The rotating movement of three immiscible fluids - A benchmark problem

    USGS Publications Warehouse

    Bakker, M.; Oude Essink, G.H.P.; Langevin, C.D.

    2004-01-01

    A benchmark problem involving the rotating movement of three immiscible fluids is proposed for verifying the density-dependent flow component of groundwater flow codes. The problem consists of a two-dimensional strip in the vertical plane filled with three fluids of different densities separated by interfaces. Initially, the interfaces between the fluids make a 45° angle with the horizontal. Over time, the fluids rotate to the stable position whereby the interfaces are horizontal; all flow is caused by density differences. Two cases of the problem are presented, one resulting in a symmetric flow field and one resulting in an asymmetric flow field. An exact analytical solution for the initial flow field is presented by application of the vortex theory and complex variables. Numerical results are obtained using three variable-density groundwater flow codes (SWI, MOCDENS3D, and SEAWAT). Initial horizontal velocities of the interfaces, as simulated by the three codes, compare well with the exact solution. The three codes are used to simulate the positions of the interfaces at two times; the three codes produce nearly identical results. The agreement between the results is evidence that the specific rotational behavior predicted by the models is correct. It also shows that the proposed problem may be used to benchmark variable-density codes. It is concluded that the three models can be used to model accurately the movement of interfaces between immiscible fluids, and have little or no numerical dispersion. © 2003 Elsevier B.V. All rights reserved.

  8. OCTALIS benchmarking: comparison of four watermarking techniques

    NASA Astrophysics Data System (ADS)

    Piron, Laurent; Arnold, Michael; Kutter, Martin; Funk, Wolfgang; Boucqueau, Jean M.; Craven, Fiona

    1999-04-01

    In this paper, benchmarking results of watermarking techniques are presented. The benchmark includes evaluation of watermark robustness and subjective visual image quality. Four different algorithms are compared and exhaustively tested. One goal of these tests is to evaluate the feasibility of a Common Functional Model (CFM) developed in the European project OCTALIS and to determine parameters of this model, such as the length of one watermark. This model solves the problem of image trading over an insecure network, such as the Internet, and employs hybrid watermarking. Another goal is to evaluate the resistance of the watermarking techniques when subjected to a set of attacks. Results show that the tested techniques do not have the same behavior and that none of the tested methods has optimal characteristics. A last conclusion is that, as for the evaluation of compression techniques, clear guidelines are necessary to evaluate and compare watermarking techniques.

  9. Benchmark West Texas Intermediate crude assayed

    SciTech Connect

    Rhodes, A.K.

    1994-08-15

    The paper gives an assay of West Texas Intermediate, one of the world's market crudes. The price of this crude, known as WTI, is followed by market analysts, investors, traders, and industry managers around the world. WTI price is used as a benchmark for pricing all other US crude oils. The 41° API, 0.34 wt % sulfur crude is gathered in West Texas and moved to Cushing, Okla., for distribution. The WTI posted price is the price paid for the crude at the wellhead in West Texas and is the true benchmark on which other US crudes are priced. The spot price is the negotiated price for short-term trades of the crude. The New York Mercantile Exchange, or Nymex, price is a futures price for barrels delivered at Cushing.

  10. Toxicological benchmarks for wildlife. Environmental Restoration Program

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  11. Benchmark On Sensitivity Calculation (Phase III)

    SciTech Connect

    Ivanova, Tatiana; Laville, Cedric; Dyrda, James; Mennerdahl, Dennis; Golovko, Yury; Raskach, Kirill; Tsiboulia, Anatoly; Lee, Gil Soo; Woo, Sweng-Woong; Bidaud, Adrien; Patel, Amrit; Bledsoe, Keith C; Rearden, Bradley T; Gulliford, J.

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.

  12. TsunaFLASH Benchmark and Its Verifications

    NASA Astrophysics Data System (ADS)

    Pranowo, Widodo; Behrens, Joern

    2010-05-01

    At the end of 2008, TsunAWI (a tsunami unstructured-mesh finite element model developed at the Alfred Wegener Institute) by Behrens et al. (2006-2008) [Behrens, 2008] was launched as an operational model in the German-Indonesian Tsunami Early Warning System (GITEWS) framework. This model has been benchmarked and verified against the 2004 Sumatra-Andaman mega-tsunami event [Harig et al., 2008]. A new development uses adaptive mesh refinement to improve computational efficiency and accuracy; this approach is called TsunaFLASH [Pranowo et al., 2008]. After the initial development and verification phase, with stabilization efforts and a study of refinement criteria, the code is now mature enough to be validated with data. This presentation will demonstrate results of TsunaFLASH for experiments with diverse mesh refinement criteria and benchmarks, in particular problem set 1 of the IWLRM and field data of the Sumatra-Andaman 2004 event.

  13. EXPERIMENTAL BENCHMARKING OF THE MAGNETIZED FRICTION FORCE.

    SciTech Connect

    FEDOTOV, A.V.; GALNANDER, B.; LITVINENKO, V.N.; LOFNES, T.; SIDORIN, A.O.; SMIRNOV, A.V.; ZIEMANN, V.

    2005-09-18

    High-energy electron cooling, presently considered an essential tool for several applications in high-energy and nuclear physics, requires an accurate description of the friction force. A series of measurements was performed at CELSIUS with the goal of providing accurate data needed for the benchmarking of theories and simulations. Some results of an accurate comparison of experimental data with the friction force formulas are presented.

  14. Reactor calculation benchmark PCA blind test results

    SciTech Connect

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables.

  15. Collection of Neutronic VVER Reactor Benchmarks.

    Energy Science and Technology Software Center (ESTSC)

    2002-01-30

    Version 00 A system of computational neutronic benchmarks has been developed. In this CD-ROM report, the data generated in the course of the project are reproduced in their entirety, with minor corrections. The editing performed on the various documents comprising this report was primarily meant to facilitate the production of the CD-ROM and to enable electronic retrieval of the information. The files are electronically navigable.

  16. Experimental Benchmarking of the Magnetized Friction Force

    SciTech Connect

    Fedotov, A. V.; Litvinenko, V. N.; Galnander, B.; Lofnes, T.; Ziemann, V.; Sidorin, A. O.; Smirnov, A. V.

    2006-03-20

    High-energy electron cooling, presently considered an essential tool for several applications in high-energy and nuclear physics, requires an accurate description of the friction force. A series of measurements was performed at CELSIUS with the goal of providing accurate data needed for the benchmarking of theories and simulations. Some results of an accurate comparison of experimental data with the friction force formulas are presented.

  17. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    NASA Technical Reports Server (NTRS)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency, and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  18. Measurements and ALE3D Simulations for Violence in a Scaled Thermal Explosion Experiment with LX-10 and AerMet 100 Steel

    SciTech Connect

    McClelland, M A; Maienschein, J L; Yoh, J J; deHaven, M R; Strand, O T

    2005-06-03

    We completed a Scaled Thermal Explosion Experiment (STEX) and performed ALE3D simulations for the HMX-based explosive, LX-10, confined in an AerMet 100 (iron-cobalt-nickel alloy) vessel. The explosive was heated at 1 °C/h until cookoff at 182 °C using a controlled temperature profile. During the explosion, the expansion of the tube and fragment velocities were measured with strain gauges, Photonic Doppler Velocimeters (PDVs), and micropower radar units. These results were combined to produce a single curve describing 15 cm of tube wall motion. A majority of the metal fragments were captured and cataloged. A fragment size distribution was constructed, and a typical fragment had a length scale of 2 cm. Based on these results, the explosion was considered to be a violent deflagration. ALE3D models for chemical, thermal, and mechanical behavior were developed for the heating and explosive processes. A four-step chemical kinetics model is employed for the HMX while a one-step model is used for the Viton. A pressure-dependent deflagration model is employed during the expansion. The mechanical behavior of the solid constituents is represented by a Steinberg-Guinan model while polynomial and gamma-law expressions are used for the equation of state of the solid and gas species, respectively. A gamma-law model is employed for the air in gaps, and a mixed material model is used for the interface between air and explosive. A Johnson-Cook model with an empirical rule for failure strain is used to describe fracture behavior. Parameters for the kinetics model were specified using measurements of the One-Dimensional-Time-to-Explosion (ODTX), while measurements of burn rate were employed to determine parameters in the burn front model. The ALE3D models provide good predictions for the thermal behavior and time to explosion, but the predicted wall expansion curve is higher than the measured curve. Possible contributions to this discrepancy include inaccuracies in the chemical models.
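
    As a minimal sketch of two of the ingredient models named above, the snippet below implements a gamma-law equation of state and a one-step Arrhenius rate. All parameter values are illustrative placeholders, not the calibrated LX-10 or Viton parameters used in ALE3D.

    ```python
    import numpy as np

    def gamma_law_pressure(rho, e, gamma=1.3):
        """Gamma-law equation of state for gaseous products: p = (gamma - 1) * rho * e."""
        return (gamma - 1.0) * rho * e

    def arrhenius_rate(T, A=1.0e12, Ea=2.0e5, R=8.314):
        """One-step Arrhenius reaction rate, k = A * exp(-Ea / (R*T)); A and Ea illustrative."""
        return A * np.exp(-Ea / (R * T))

    # Example: the rate rises steeply (relative to itself) as temperature nears cookoff (~455 K).
    for T in (400.0, 430.0, 455.0):
        print(f"T = {T:5.1f} K  k = {arrhenius_rate(T):.3e} 1/s")
    ```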

  19. Benchmarking and accounting for the (private) cloud

    NASA Astrophysics Data System (ADS)

    Belleman, J.; Schwickerath, U.

    2015-12-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible; the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable for both virtual and physical machines in the batch farm. With the new classification it is possible to estimate the performance of worker nodes even in a very dynamic farm with worker nodes coming and going at a high rate, without the need to benchmark each new node again. An extension to public cloud resources is possible if all conditions under which the benchmark numbers have been obtained are fulfilled.
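
    A minimal sketch of the idea, assuming hypothetical per-core scores: nodes are binned into performance classes so that a new virtual machine can inherit its class benchmark instead of being re-benchmarked individually. This is an illustration only, not CERN's actual classification scheme.

    ```python
    import statistics

    # Hypothetical per-core benchmark scores keyed by node name.
    scores = {"vm-001": 10.9, "vm-002": 11.2, "vm-003": 8.1, "vm-004": 8.3}

    def classify(score, bin_width=1.0):
        """Map a per-core score onto a discrete performance class."""
        return round(score / bin_width)

    classes = {}
    for node, s in scores.items():
        classes.setdefault(classify(s), []).append(node)

    for cls, nodes in sorted(classes.items()):
        members = [scores[n] for n in nodes]
        print(f"class {cls}: {nodes}, mean score = {statistics.mean(members):.2f}")
    ```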

  20. Benchmarking numerical freeze/thaw models

    NASA Astrophysics Data System (ADS)

    Rühaak, Wolfram; Anbergen, Hauke; Molson, John; Grenier, Christophe; Sass, Ingo

    2015-04-01

    The modeling of freezing and thawing of water in porous media is of increasing interest, with very different application areas. For instance, the modeling of permafrost regression with respect to climate change issues is one area, while others include geotechnical applications in tunneling and borehole heat exchangers which operate at temperatures below the freezing point. The modeling of these processes requires the solution of a coupled non-linear system of partial differential equations for flow and heat transport in space and time. Different code implementations have been developed in the past. Analytical solutions exist only for simple cases. Consequently, an interest has arisen in benchmarking different codes against analytical solutions, experiments and purely numerical results, similar to the long-standing DECOVALEX and the more recent "Geothermal Code Comparison" activities. The name of this freezing/thawing benchmark consortium is INTERFROST. In addition to the well-known so-called Lunardini solution for a 1D case (case T1), two different 2D problems will be presented, one which represents melting of a frozen inclusion (case TH2) and another which represents the growth or thaw of permafrost around a talik (case TH3). These talik regions are important for controlling groundwater movement within mainly frozen ground. First results of the different benchmark cases will be shown and discussed.
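
    The Lunardini case builds on classical Neumann/Stefan-type similarity solutions. As a sketch for intuition only, the simpler one-phase variant reduces to a transcendental condition for the front coefficient, solvable in a few lines; parameter values below are illustrative.

    ```python
    import math

    def stefan_lambda(Ste, lo=1e-6, hi=5.0, tol=1e-12):
        """Solve the one-phase Stefan condition lam*exp(lam^2)*erf(lam) = Ste/sqrt(pi)
        by bisection; lam sets the freezing-front position X(t) = 2*lam*sqrt(alpha*t)."""
        f = lambda lam: lam * math.exp(lam**2) * math.erf(lam) - Ste / math.sqrt(math.pi)
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
        return 0.5 * (lo + hi)

    Ste = 0.2          # Stefan number (sensible-to-latent heat ratio), illustrative
    alpha = 1.0e-6     # thermal diffusivity in m^2/s, illustrative
    lam = stefan_lambda(Ste)
    t = 86400.0        # one day in seconds
    print(f"lambda = {lam:.4f}, front at {2*lam*math.sqrt(alpha*t):.3f} m after 1 day")
    ```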

  1. Introduction to the HPC Challenge Benchmark Suite

    SciTech Connect

    Luszczek, Piotr; Dongarra, Jack J.; Koester, David; Rabenseifner,Rolf; Lucas, Bob; Kepner, Jeremy; McCalpin, John; Bailey, David; Takahashi, Daisuke

    2005-04-25

    The HPC Challenge benchmark suite has been released by the DARPA HPCS program to help define the performance boundaries of future Petascale computing systems. HPC Challenge is a suite of tests that examine the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance Linpack (HPL) benchmark used in the Top500 list. Thus, the suite is designed to augment the Top500 list, providing benchmarks that bound the performance of many real applications as a function of memory access characteristics, e.g., spatial and temporal locality, and providing a framework for including additional tests. In particular, the suite is composed of several well-known computational kernels (STREAM, HPL, matrix multiply--DGEMM, parallel matrix transpose--PTRANS, FFT, RandomAccess, and bandwidth/latency tests--b_eff) that attempt to span high and low spatial and temporal locality space. By design, the HPC Challenge tests are scalable, with the size of data sets being a function of the largest HPL matrix for the tested system.
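
    For example, the STREAM "triad" kernel that anchors the memory-bandwidth end of the suite can be approximated in a few lines. This is a NumPy sketch of the idea, not the official C benchmark.

    ```python
    import time
    import numpy as np

    def stream_triad(n=10_000_000, scalar=3.0):
        """STREAM 'triad' kernel a = b + scalar*c; returns effective bandwidth in GB/s."""
        b = np.random.rand(n)
        c = np.random.rand(n)
        t0 = time.perf_counter()
        a = b + scalar * c
        dt = time.perf_counter() - t0
        bytes_moved = 3 * n * 8   # read b, read c, write a (8-byte doubles)
        return bytes_moved / dt / 1e9

    print(f"triad bandwidth ~ {stream_triad():.2f} GB/s")
    ```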

  2. Middleware Evaluation and Benchmarking for Use in Mission Operations Centers

    NASA Technical Reports Server (NTRS)

    Antonucci, Rob; Waktola, Waka

    2005-01-01

    Middleware technologies have been promoted as timesaving, cost-cutting alternatives to the point-to-point communication used in traditional mission operations systems. However, missions have been slow to adopt the new technology. The lack of existing middleware-based missions has given rise to uncertainty about middleware's ability to perform in an operational setting. Most mission architects are also unfamiliar with the technology and do not know the benefits and detriments of architectural choices - or even what choices are available. We will present the findings of a study that evaluated several middleware options specifically for use in a mission operations system. We will address some common misconceptions regarding the applicability of middleware-based architectures, and we will identify the design decisions and tradeoffs that must be made when choosing a middleware solution. The Middleware Comparison and Benchmark Study was conducted at NASA Goddard Space Flight Center to comprehensively evaluate candidate middleware products, compare and contrast the performance of middleware solutions with the traditional point-to-point socket approach, and assess data delivery and reliability strategies. The study focused on requirements of the Global Precipitation Measurement (GPM) mission, validating the potential use of middleware in the GPM mission ground system. The study was jointly funded by GPM and the Goddard Mission Services Evolution Center (GMSEC), a virtual organization for providing mission enabling solutions and promoting the use of appropriate new technologies for mission support. The study was broken into two phases. To perform the generic middleware benchmarking and performance analysis, a network was created with data producers and consumers passing data between themselves. The benchmark monitored the delay, throughput, and reliability of the data as the characteristics were changed. Measurements were taken under a variety of topologies and data demands.
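
    As a sketch of the style of measurement involved, the snippet below times request/reply exchanges over a plain TCP socket pair, the point-to-point baseline against which middleware products were compared. It is illustrative only, not the study's actual harness.

    ```python
    import socket
    import time

    def recv_all(sock, n):
        """Read exactly n bytes (TCP recv may return short reads)."""
        buf = b""
        while len(buf) < n:
            buf += sock.recv(n - len(buf))
        return buf

    def roundtrip_latency(n=1000, payload=b"x" * 1024):
        """Estimate one-way latency from n request/reply exchanges on localhost."""
        srv = socket.socket()
        srv.bind(("127.0.0.1", 0))
        srv.listen(1)
        cli = socket.create_connection(srv.getsockname())
        conn, _ = srv.accept()
        t0 = time.perf_counter()
        for _ in range(n):
            cli.sendall(payload)
            conn.sendall(recv_all(conn, len(payload)))   # echo consumer
            recv_all(cli, len(payload))
        dt = time.perf_counter() - t0
        for s in (cli, conn, srv):
            s.close()
        return dt / n / 2 * 1e6   # approximate one-way latency in microseconds

    print(f"~{roundtrip_latency():.1f} us one-way for 1 KiB messages")
    ```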

  3. Toward real-time performance benchmarks for Ada

    NASA Technical Reports Server (NTRS)

    Clapp, Russell M.; Duchesneau, Louis; Volz, Richard A.; Mudge, Trevor N.; Schultze, Timothy

    1986-01-01

    The issue of real-time performance measurements for the Ada programming language through the use of benchmarks is addressed. First, the Ada notion of time is examined and a set of basic measurement techniques are developed. Then a set of Ada language features believed to be important for real-time performance are presented and specific measurement methods discussed. In addition, other important time-related features which are not explicitly part of the language but are part of the run-time system are also identified and measurement techniques developed. The measurement techniques are applied to the language and run-time system features and the results are presented.
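
    The basic dual-loop measurement technique translates to any language: time an empty loop, time a loop containing the feature under test, and subtract. The paper's measurements are in Ada, so the following Python sketch is an analogy only.

    ```python
    import time

    def measure(op, n=1_000_000):
        """Dual-loop technique: subtract empty-loop time from the timed loop
        to estimate the per-call cost of the operation alone."""
        t0 = time.perf_counter()
        for _ in range(n):
            pass
        empty = time.perf_counter() - t0
        t0 = time.perf_counter()
        for _ in range(n):
            op()
        full = time.perf_counter() - t0
        return (full - empty) / n

    print(f"cost per call: {measure(lambda: 1 + 1) * 1e9:.1f} ns")
    ```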

  4. How can bedside rationing be justified despite coexisting inefficiency? The need for 'benchmarks of efficiency'.

    PubMed

    Strech, Daniel; Danis, Marion

    2014-02-01

    Imperfect efficiency in healthcare delivery is sometimes given as a justification for refusing to ration or even discuss how to pursue fair rationing. This paper aims to clarify the relationship between inefficiency and rationing, and the conditions under which bedside rationing can be justified despite coexisting inefficiency. This paper first clarifies several assumptions that underlie the classification of a clinical practice as being inefficient. We then suggest that rationing is difficult to justify in circumstances where the rationing agent is or should be aware of and contributes to clinical inefficiency. We further explain the different ethical implications of this suggestion for rationing decisions made by clinicians. We argue that rationing is more legitimate when sufficient efforts are undertaken to decrease inefficiency in parallel with efforts to pursue unavoidable but fair rationing. While the qualifier 'sufficient' is crucial here, we explain why 'sufficient efforts' should be translated into 'benchmarks of efficiency' that address specific healthcare activities where clinical inefficiency can be decreased. Referring to recent consensus papers, we consider some examples of specific clinical situations where improving clinical inefficiency has been recommended and consider how benchmarks for efficiency might apply. These benchmarks should state explicitly how much inefficiency shall be reduced in a reasonable time range and why these efforts are 'sufficient'. Possible strategies for adherence to benchmarks are offered to address the possibility of non-compliance. PMID:23258082

  5. Benchmarking NSP Reactors with CORETRAN-01

    SciTech Connect

    Hines, Donald D.; Grow, Rodney L.; Agee, Lance J

    2004-10-15

    As part of an overall verification and validation effort, the Electric Power Research Institute's (EPRI's) CORETRAN-01 has been benchmarked against Northern States Power's Prairie Island and Monticello reactors through 12 cycles of operation. The two Prairie Island reactors are Westinghouse 2-loop units with 121 asymmetric 14 x 14 lattice assemblies utilizing up to 8 wt% gadolinium, while Monticello is a General Electric 484-bundle boiling water reactor. All reactor cases were executed in full core utilizing 24 axial nodes per assembly in the fuel with 1 additional reflector node above, below, and around the perimeter of the core. Cross-section sets used in this benchmark effort were generated by EPRI's CPM-3 as well as Studsvik's CASMO-3 and CASMO-4 to allow for separation of the lattice calculation effect from the nodal simulation method. These cases exercised the depletion-shuffle-depletion sequence through four cycles for each unit using plant data to follow actual operations. Flux map calculations were performed for comparison to corresponding measurement statepoints. Additionally, start-up physics testing cases were used to predict cycle physics parameters for comparison to existing plant methods and measurements. These benchmark results agreed well with both current analysis methods and plant measurements, indicating that CORETRAN-01 may be appropriate for steady-state physics calculations of both the Prairie Island and Monticello reactors. However, only the Prairie Island results are discussed in this paper, since the Monticello results were of similar quality and agreement. No attempt was made in this work to investigate CORETRAN-01 kinetics capability by analyzing plant transients, but these steady-state results form a good foundation for moving in that direction.

  6. Towards Systematic Benchmarking of Climate Model Performance

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research.

  7. A Uranium Bioremediation Reactive Transport Benchmark

    SciTech Connect

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10-day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50-day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
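
    A minimal sketch of a dual-Monod rate law of the kind used for the terminal electron accepting processes follows; the parameter values are illustrative placeholders, not the calibrated Rifle values.

    ```python
    def monod_rate(v_max, conc_donor, K_donor, conc_acceptor, K_acceptor):
        """Dual-Monod rate law: rate limited by both the electron donor (acetate)
        and the terminal electron acceptor (e.g., Fe(III), U(VI), sulfate)."""
        return (v_max
                * conc_donor / (K_donor + conc_donor)
                * conc_acceptor / (K_acceptor + conc_acceptor))

    # Illustrative concentrations and constants in mol/L.
    r = monod_rate(v_max=1e-8, conc_donor=3e-3, K_donor=1e-4,
                   conc_acceptor=1e-6, K_acceptor=5e-7)
    print(f"U(VI) reduction rate ~ {r:.3e} mol/L/s")
    ```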

  8. Strategy of DIN-PACS benchmark testing

    NASA Astrophysics Data System (ADS)

    Norton, Gary S.; Lyche, David K.; Richardson, Nancy E.; Thomas, Jerry A.; Romlein, John R.; Cawthon, Michael A.; Lawrence, David P.; Shelton, Philip D.; Parr, Laurence F.; Richardson, Ronald R., Jr.; Johnson, Steven L.

    1998-07-01

    The Digital Imaging Network -- Picture Archive and Communication System (DIN-PACS) procurement is the Department of Defense's (DoD) effort to bring military medical treatment facilities into the twenty-first century with nearly filmless digital radiology departments. The DIN-PACS procurement differs from most previous PACS acquisitions in that the Request for Proposals (RFP) required extensive benchmark testing prior to contract award. The strategy for benchmark testing was a reflection of the DoD's previous PACS and teleradiology experiences. The DIN-PACS Technical Evaluation Panel (TEP) consisted of DoD and civilian radiology professionals with unique clinical and technical PACS expertise. The TEP considered nine key functional requirements of the DIN-PACS acquisition: (1) DICOM Conformance, (2) System Storage and Archive, (3) Workstation Performance, (4) Network Performance, (5) Radiology Information System (RIS) functionality, (6) Hospital Information System (HIS)/RIS Interface, (7) Teleradiology, (8) Quality Control, and (9) System Reliability. The development of a benchmark test to properly evaluate these key requirements would require the TEP to make technical, operational, and functional decisions that had not been part of a previous PACS acquisition. Developing test procedures and scenarios that simulated inputs from radiology modalities and outputs to soft copy workstations, film processors, and film printers would be a major undertaking. The goals of the TEP were to fairly assess each vendor's proposed system and to provide an accurate evaluation of each system's capabilities to the source selection authority, so the DoD could purchase a PACS that met the requirements in the RFP.

  9. Benchmark simulations of ICRF antenna coupling

    NASA Astrophysics Data System (ADS)

    Louche, F.; Lamalle, P. U.; Messiaen, A. M.; Van Compernolle, B.; Milanesio, D.; Maggiora, R.

    2007-09-01

    The paper reports on ongoing benchmark numerical simulations of antenna input impedance parameters in the ion cyclotron range of frequencies with different coupling codes: CST Microwave Studio, TOPICA and ANTITER 2. In particular we study the validity of the approximation of a magnetized plasma slab by a dielectric medium of suitably chosen permittivity. Different antenna models are considered: a single-strap antenna, a 4-strap antenna and the 24-strap ITER antenna array. Whilst the diagonal impedances are mostly in good agreement, some differences between the mutual terms predicted by Microwave Studio and TOPICA have yet to be resolved.

  10. Benchmark simulations of ICRF antenna coupling

    SciTech Connect

    Louche, F.; Lamalle, P. U.; Messiaen, A. M.; Compernolle, B. van; Milanesio, D.; Maggiora, R.

    2007-09-28

    The paper reports on ongoing benchmark numerical simulations of antenna input impedance parameters in the ion cyclotron range of frequencies with different coupling codes: CST Microwave Studio, TOPICA and ANTITER 2. In particular we study the validity of the approximation of a magnetized plasma slab by a dielectric medium of suitably chosen permittivity. Different antenna models are considered: a single-strap antenna, a 4-strap antenna and the 24-strap ITER antenna array. Whilst the diagonal impedances are mostly in good agreement, some differences between the mutual terms predicted by Microwave Studio and TOPICA have yet to be resolved.

  11. NAS Parallel Benchmarks. 2.4

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We describe a new problem size, called Class D, for the NAS Parallel Benchmarks (NPB), whose MPI source code implementation is being released as NPB 2.4. A brief rationale is given for how the new class is derived. We also describe the modifications made to the MPI (Message Passing Interface) implementation to allow the new class to be run on systems with 32-bit integers, and with moderate amounts of memory. Finally, we give the verification values for the new problem size.

  12. Benchmarking East Tennessee's economic capacity

    SciTech Connect

    1995-04-20

    This presentation comprises viewgraphs delineating major economic factors operating in 15 counties in East Tennessee. The purpose of the information presented is to provide a benchmark analysis of economic conditions for use in guiding economic growth in the region. The emphasis of the presentation is economic infrastructure, which is classified into six categories: human resources, technology, financial resources, physical infrastructure, quality of life, and tax and regulation. Data for analysis of key indicators in each of the categories are presented. Preliminary analyses, in the form of strengths and weaknesses and comparisons to reference groups, are given.

  13. Development of a California commercial building benchmarking database

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.
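
    At its core, a benchmarking database of this kind reduces to percentile lookups against a peer distribution. A minimal sketch under hypothetical data follows; it is not Cal-Arch's actual method or survey values.

    ```python
    import numpy as np

    def benchmark_eui(building_eui, peer_euis):
        """Percentile rank of a building's energy use intensity (EUI, kBtu/ft^2/yr)
        against a peer distribution, as a regional benchmarking tool might report."""
        peers = np.sort(np.asarray(peer_euis))
        return 100.0 * np.searchsorted(peers, building_eui) / len(peers)

    # Hypothetical peer data standing in for a survey such as CEUS.
    peers = np.random.lognormal(mean=4.0, sigma=0.4, size=500)
    print(f"building at the {benchmark_eui(60.0, peers):.0f}th percentile of peers")
    ```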

  14. Learning Through Benchmarking: Developing a Relational, Prospective Approach to Benchmarking ICT in Learning and Teaching

    ERIC Educational Resources Information Center

    Ellis, Robert A.; Moore, Roger R.

    2006-01-01

    This study discusses benchmarking the use of information and communication technologies (ICT) in teaching and learning between two universities with different missions: one an Australian campus-based metropolitan university and the other a British distance-education provider. It argues that the differences notwithstanding, it is possible to…

  15. Using the Canadian Language Benchmarks (CLB) to Benchmark College Programs/Courses and Language Proficiency Tests.

    ERIC Educational Resources Information Center

    Epp, Lucy; Stawychny, Mary

    2001-01-01

    Describes a process developed by the Language Training Centre at Red River College (RRC) to use the Canadian language benchmarks in analyzing the language levels used in programs and courses at RRC to identify appropriate entry-level language proficiency and the levels that second language students need in order to meet college or university…

  16. The NAS Parallel Benchmarks 2.1 Results

    NASA Technical Reports Server (NTRS)

    Saphir, William; Woo, Alex; Yarrow, Maurice

    1996-01-01

    We present performance results for version 2.1 of the NAS Parallel Benchmarks (NPB) on the following architectures: IBM SP2/66 MHz; SGI Power Challenge Array/90 MHz; Cray Research T3D; and Intel Paragon. The NAS Parallel Benchmarks are a widely-recognized suite of benchmarks originally designed to compare the performance of highly parallel computers with that of traditional supercomputers.

  17. Experimentally Relevant Benchmarks for Gyrokinetic Codes

    NASA Astrophysics Data System (ADS)

    Bravenec, Ronald

    2010-11-01

    Although benchmarking of gyrokinetic codes has been performed in the past, e.g., The Numerical Tokamak, The Cyclone Project, The Plasma Microturbulence Project, and various informal activities, these efforts have typically employed simple plasma models. For example, the Cyclone ``base case'' assumed shifted-circle flux surfaces, no magnetic transport, adiabatic electrons, no collisions nor impurities, ρi << a (ρi the ion gyroradius and a the minor radius), and no ExB flow shear. This work presents comparisons of linear frequencies and nonlinear fluxes from GYRO and GS2 with none of the above approximations except ρi << a and no ExB flow shear. The comparisons are performed at two radii of a DIII-D plasma, one in the confinement region (r/a = 0.5) and the other closer to the edge (r/a = 0.7). Many of the plasma parameters differ by a factor of two between these two locations. Good agreement between GYRO and GS2 is found when neglecting collisions. However, differences are found when including e-i collisions (Lorentz model). The sources of the discrepancy are unknown as of yet. Nevertheless, two collisionless benchmarks have been formulated with considerably different plasma parameters. Acknowledgements to J. Candy, E. Belli, and M. Barnes.

  18. REVISED STREAM CODE AND WASP5 BENCHMARK

    SciTech Connect

    Chen, K

    2005-05-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one-dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked with the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls.
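
    For context, the underlying equation is the 1-D advection equation, dC/dt + u dC/dx = 0. The sketch below shows a standard first-order upwind discretization of that equation; it illustrates the family of solution approaches involved and is not the STREAM or WASP5 implementation.

    ```python
    import numpy as np

    def advect_upwind(c0, u, dx, dt, steps):
        """First-order upwind scheme for dC/dt + u dC/dx = 0. Stable and
        oscillation-free when u*dt/dx <= 1, at the cost of numerical
        diffusion that smears sharp release fronts."""
        c = c0.copy()
        cfl = u * dt / dx
        assert 0 < cfl <= 1, "CFL condition violated"
        for _ in range(steps):
            c[1:] = c[1:] - cfl * (c[1:] - c[:-1])
        return c

    c0 = np.zeros(200)
    c0[:20] = 1.0   # square-pulse release at the upstream boundary
    print(advect_upwind(c0, u=0.5, dx=100.0, dt=100.0, steps=100).max())
    ```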

  19. Direct data access protocols benchmarking on DPM

    NASA Astrophysics Data System (ADS)

    Furano, Fabrizio; Devresse, Adrien; Keeble, Oliver; Mancinelli, Valentina

    2015-12-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring information about any data access protocol to the same monitoring infrastructure that is used to monitor the Xrootd deployments. Our goal is to evaluate under which circumstances the HTTP-based protocols can be good enough for batch or interactive data access. In this contribution we show and discuss the results that our test systems have collected under circumstances that include ROOT analyses using TTreeCache and stress tests on the metadata performance.

  20. Parallel Ada benchmarks for the SVMS

    NASA Technical Reports Server (NTRS)

    Collard, Philippe E.

    1990-01-01

    The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through the tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently the Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R&D effort was to provide the SVMS project team with a version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed to measure Ada tasking efficiency on parallel architectures and to determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools for the development of the SVMS architecture.

  1. Simple mathematical law benchmarks human confrontations.

    PubMed

    Johnson, Neil F; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-01-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds. PMID:24322528
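
    The law in question is a "progress curve": the interval before the n-th event scales as τ_n = τ_1 n^(-b). A minimal sketch of how such an escalation exponent can be estimated from event timing data follows, using synthetic data for illustration only.

    ```python
    import numpy as np

    def fit_progress_curve(intervals):
        """Fit tau_n = tau_1 * n**(-b) by regressing log(interval) on
        log(event index); b > 0 indicates escalating attacks."""
        n = np.arange(1, len(intervals) + 1)
        slope, intercept = np.polyfit(np.log(n), np.log(intervals), 1)
        return np.exp(intercept), -slope   # (tau_1, b)

    # Synthetic escalating event series with multiplicative noise.
    rng = np.random.default_rng(0)
    true_tau1, true_b = 100.0, 0.7
    intervals = true_tau1 * np.arange(1, 31)**(-true_b) * rng.lognormal(0, 0.1, 30)
    tau1, b = fit_progress_curve(intervals)
    print(f"tau_1 ~ {tau1:.1f} days, escalation exponent b ~ {b:.2f}")
    ```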

  2. Simple mathematical law benchmarks human confrontations

    NASA Astrophysics Data System (ADS)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.

  3. EVA Health and Human Performance Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness-for-duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  4. Revised STREAM code and WASP5 benchmark

    SciTech Connect

    Chen, K.F.

    1995-05-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one-dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked with the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls.

  5. Benchmarking longwave multiple scattering in cirrus environments

    NASA Astrophysics Data System (ADS)

    Kuo, C.; Feldman, D.; Yang, P.; Flanner, M.; Huang, X.

    2015-12-01

    Many global climate models currently assume that longwave photons are non-scattering in clouds, and also have overly simplistic treatments of surface emissivity. Multiple scattering of longwave radiation and non-unit emissivity could lead to substantial discrepancies between the actual Earth's radiation budget and its parameterized representation in the infrared, especially at wavelengths longer than 15 µm. The evaluation of the parameterization of longwave spectral multiple scattering in radiative transfer codes for global climate models is critical and will require benchmarking across a wide range of atmospheric conditions with more accurate, though computationally more expensive, multiple scattering models. We therefore present a line-by-line radiative transfer solver that includes scattering, run on a supercomputer at the National Energy Research Scientific Computing Center, which exploits the embarrassingly parallel nature of 1-D radiative transfer solutions with high effective throughput. When paired with an advanced ice-particle optical property database with spectral values ranging from 0.2 to 100 μm, a particle size and habit distribution derived from MODIS Collection 6, and a database for surface emissivity which extends to 100 μm, this benchmarking effort can densely sample the thermodynamic and condensate parameter space, and therefore accelerate the development of an advanced infrared radiative parameterization for climate models, which could help disentangle forcings and feedbacks in CMIP6.

  6. Benchmarking database performance for genomic data.

    PubMed

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations such as identifying overlapping/non-overlapping regions or nearest gene annotations are common research needs. The data can be saved in a database system for easy management; however, there is at present no comprehensive built-in database algorithm to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm pair-wise, overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported and it was found that HNF4G significantly co-locates with cohesin subunit STAG1 (SA1). PMID:25560631
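
    The core operation such an algorithm must express in SQL is the interval-overlap join. A minimal self-contained sketch with SQLite follows; the table and column names are hypothetical, and this is not the published RegMap code.

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE tfbs  (chrom TEXT, start INT, stop INT, name TEXT);
        CREATE TABLE marks (chrom TEXT, start INT, stop INT, name TEXT);
        INSERT INTO tfbs  VALUES ('chr1', 100, 200, 'HNF4G_site');
        INSERT INTO marks VALUES ('chr1', 150, 300, 'STAG1_site');
    """)
    # Two regions overlap iff a.start < b.stop AND b.start < a.stop on the same chromosome.
    rows = con.execute("""
        SELECT a.name, b.name
        FROM tfbs a JOIN marks b
          ON a.chrom = b.chrom AND a.start < b.stop AND b.start < a.stop
    """).fetchall()
    print(rows)   # [('HNF4G_site', 'STAG1_site')]
    ```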

  7. Recommendations for Benchmarking Preclinical Studies of Nanomedicines.

    PubMed

    Dawidczyk, Charlene M; Russell, Luisa M; Searson, Peter C

    2015-10-01

    Nanoparticle-based delivery systems provide new opportunities to overcome the limitations associated with traditional small-molecule drug therapy for cancer and to achieve both therapeutic and diagnostic functions in the same platform. Preclinical trials are generally designed to assess therapeutic potential and not to optimize the design of the delivery platform. Consequently, progress in developing design rules for cancer nanomedicines has been slow, hindering progress in the field. Despite the large number of preclinical trials, several factors restrict comparison and benchmarking of different platforms, including variability in experimental design, reporting of results, and the lack of quantitative data. To solve this problem, we review the variables involved in the design of preclinical trials and propose a protocol for benchmarking that we recommend be included in in vivo preclinical studies of drug-delivery platforms for cancer therapy. This strategy will contribute to building the scientific knowledge base that enables development of design rules and accelerates the translation of new technologies. PMID:26249177

  8. Simple mathematical law benchmarks human confrontations

    PubMed Central

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-01-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another – from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds. PMID:24322528

  9. Computational Thermochemistry and Benchmarking of Reliable Methods

    SciTech Connect

    Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

    2006-06-20

    During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Møller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats of formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.
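
    For reference, extrapolation to the complete basis set (CBS) limit from correlation-consistent basis sets is commonly done with a two-point inverse-cube formula of the kind below. This is a standard illustration; the project's exact extrapolation scheme may differ.

    ```latex
    E_{\mathrm{corr}}(n) = E_{\mathrm{corr}}^{\mathrm{CBS}} + \frac{A}{n^{3}}
    \quad\Longrightarrow\quad
    E_{\mathrm{corr}}^{\mathrm{CBS}} =
        \frac{n^{3}E(n) - (n-1)^{3}E(n-1)}{n^{3} - (n-1)^{3}}
    ```

    where n is the cardinal number of the basis set (e.g., n = 4 for cc-pVQZ).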

  10. Improving Mass Balance Modeling of Benchmark Glaciers

    NASA Astrophysics Data System (ADS)

    van Beusekom, A. E.; March, R. S.; O'Neel, S.

    2009-12-01

    The USGS monitors long-term glacier mass balance at three benchmark glaciers in different climate regimes. The coastal and continental glaciers are represented by Wolverine and Gulkana Glaciers in Alaska, respectively. Field measurements began in 1966 and continue. We have reanalyzed the published balance time series with more modern methods and recomputed reference surface and conventional balances. Addition of the most recent data shows a continuing trend of mass loss. We compare the updated balances to the previously accepted balances and discuss differences. Not all balance quantities can be determined from the field measurements. For surface processes, we model missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernize the traditional degree-day model as well as derive new degree-day factors in an effort to more closely match the balance time series and thus better predict the future state of the benchmark glaciers. For subsurface processes, we model the refreezing of meltwater for internal accumulation. We examine the sensitivity of the balance time series to the subsurface process of internal accumulation, with the goal of determining the best way to include internal accumulation in balance estimates.
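
    A minimal sketch of the classical degree-day ablation model being modernized follows; the degree-day factor here is an illustrative placeholder, not a calibrated benchmark-glacier value.

    ```python
    def degree_day_ablation(daily_mean_temps_c, ddf=5.0):
        """Classical degree-day model: melt (mm water equivalent) equals the
        positive degree-day sum times an empirical degree-day factor
        (mm w.e. per degree C per day)."""
        pdd = sum(t for t in daily_mean_temps_c if t > 0)
        return ddf * pdd

    # One illustrative week of summer temperatures at a glacier stake.
    temps = [2.1, 4.5, -0.3, 6.0, 3.2, 1.8, 0.0]
    print(f"modelled melt: {degree_day_ablation(temps):.1f} mm w.e.")
    ```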

  11. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; Fatoohi, Rod

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.

  12. Hospital Energy Benchmarking Guidance - Version 1.0

    SciTech Connect

    Singer, Brett C.

    2009-09-08

    This document describes an energy benchmarking framework for hospitals. The document is organized as follows. The introduction provides a brief primer on benchmarking and its application to hospitals. The next two sections discuss special considerations including the identification of normalizing factors. The presentation of metrics is preceded by a description of the overall framework and the rationale for the grouping of metrics. Following the presentation of metrics, a high-level protocol is provided. The next section presents draft benchmarks for some metrics; benchmarks are not available for many metrics owing to a lack of data. This document ends with a list of research needs for further development.

  13. Using benchmarks for radiation testing of microprocessors and FPGAs

    SciTech Connect

    Quinn, Heather; Robinson, William H.; Rech, Paolo; Aguirre, Miguel; Barnard, Arno; Desogus, Marco; Entrena, Luis; Garcia-Valderas, Mario; Guertin, Steven M.; Kaeli, David; Kastensmidt, Fernanda Lima; Kiddie, Bradley T.; Sanchez-Clemente, Antonio; Reorda, Matteo Sonza; Sterpone, Luca; Wirthlin, Michael

    2015-12-17

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. We describe the development process and report neutron test data for the hardware and software benchmarks.
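
    A standard reduction for such neutron test data is the upset cross section; the sketch below shows the calculation with illustrative numbers only.

    ```python
    def seu_cross_section(upsets, fluence, device_bits=None):
        """Single-event-upset cross section from a beam test:
        sigma = upsets / fluence (cm^2 per device, with fluence in
        particles/cm^2), optionally normalized per bit."""
        sigma = upsets / fluence
        return sigma / device_bits if device_bits else sigma

    print(f"sigma = {seu_cross_section(upsets=42, fluence=1e11):.2e} cm^2/device")
    ```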

  14. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    PubMed

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

    Face recognition with still face images has been widely studied, while research on video-based face recognition is relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively, taking video or still image as query or target. To the best of our knowledge, few datasets and evaluation protocols have been benchmarked for all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more efforts, and our COX Face DB is a good benchmark database for evaluation. PMID:26513790

  15. Benchmarking novel approaches for modelling species range dynamics

    PubMed Central

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.

    2016-01-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reaffirm the clear merit of using dynamic approaches for modelling species' response to climate change but also emphasise several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through the combination of multiple models.

  16. Benchmarking and improving microbial-explicit soil biogeochemistry models

    NASA Astrophysics Data System (ADS)

    Wieder, W. R.; Bonan, G. B.; Hartman, M. D.; Sulman, B. N.; Wang, Y.

    2015-12-01

    Earth system models that are designed to project future carbon (C) cycle - climate feedbacks exhibit notably poor representation of soil biogeochemical processes and generate highly uncertain projections about the fate of the largest terrestrial C pool on Earth. Given these shortcomings there has been intense interest in soil biogeochemical model development, but parallel efforts to create the analytical tools to characterize, improve and benchmark these models have thus far lagged behind. A long-term goal of this work is to develop a framework to compare, evaluate and improve the process-level representation of soil biogeochemical models that could be applied in global land surface models. Here, we present a newly developed global model test bed that is built on the Carnegie Ames Stanford Approach model (CASA-CNP) that can rapidly integrate different soil biogeochemical models that are forced with consistent driver datasets. We focus on evaluation of two microbial explicit soil biogeochemical models that function at global scales: the MIcrobial-MIneral Carbon Stabilization model (MIMICS) and Carbon, Organisms, Rhizosphere, and Protection in the Soil Environment (CORPSE) model. Using the global model test bed coupled to MIMICS and CORPSE we quantify the uncertainty in potential C cycle - climate feedbacks that may be expected with these microbial explicit models, compared with a conventional first-order, linear model. By removing confounding variation of climate and vegetation drivers, our model test bed allows us to isolate key differences among different soil model structure and parameterizations that can be evaluated with further study. Specifically, the global test bed also identifies key parameters that can be estimated using cross-site observations. In global simulations model results are evaluated with steady state litter, microbial biomass, and soil C pools and benchmarked against independent globally gridded data products.

  17. Synthetic neuronal datasets for benchmarking directed functional connectivity metrics

    PubMed Central

    Andrade, Alexandre

    2015-01-01

    Background. Datasets consisting of synthetic neural data generated with quantifiable and controlled parameters are a valuable asset in the process of testing and validating directed functional connectivity metrics. Considering the recent debate in the neuroimaging community concerning the use of these metrics for fMRI data, synthetic datasets that emulate the BOLD signal dynamics have played a central role by supporting claims that argue in favor or against certain choices. Generative models often used in studies that simulate neuronal activity, with the aim of gaining insight into specific brain regions and functions, have different requirements from the generative models for benchmarking datasets. Even though the latter must be realistic, there is a tradeoff between realism and computational demand that needs to be contemplated, and simulations that efficiently mimic the real behavior of single neurons or neuronal populations are preferred over more cumbersome and marginally precise ones. Methods. This work explores how simple generative models are able to produce neuronal datasets, for benchmarking purposes, that reflect the simulated effective connectivity and how these can be used to obtain synthetic recordings of EEG and fMRI BOLD signals. The generative models covered here are AR processes, neural mass models consisting of linear and nonlinear stochastic differential equations, and populations with thousands of spiking units. Forward models for EEG consist of the simple three-shell head model, while the fMRI BOLD signal is modeled with the Balloon-Windkessel model or by convolution with a hemodynamic response function. Results. The simulated datasets are tested for causality with the original spectral formulation for Granger causality. Modeled effective connectivity can be detected in the generated data for varying connection strengths and interaction delays. Discussion. All generative models produce synthetic neuronal data with detectable causal interactions.
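
    A minimal sketch of the simplest such generative model follows: a bivariate autoregressive process with a one-way coupling that a Granger-causality metric should recover. The coefficients are illustrative, not those used in the paper.

    ```python
    import numpy as np

    def coupled_ar(n=5000, c=0.5, noise=1.0, seed=0):
        """Bivariate AR-style generative model with directed coupling x -> y,
        the simplest class of benchmark dataset for directed connectivity metrics."""
        rng = np.random.default_rng(seed)
        x = np.zeros(n)
        y = np.zeros(n)
        for t in range(2, n):
            x[t] = 0.55 * x[t-1] - 0.8 * x[t-2] + noise * rng.standard_normal()
            y[t] = 0.35 * y[t-1] + c * x[t-1] + noise * rng.standard_normal()
        return x, y

    x, y = coupled_ar()
    # A Granger test should detect x -> y but not y -> x in these series.
    print(f"lagged corr(x[t-1], y[t]) = {np.corrcoef(x[:-1], y[1:])[0, 1]:.2f}")
    ```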

  18. Benchmarking novel approaches for modelling species range dynamics.

    PubMed

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most pressing challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to modelling range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development; indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range models of varying complexity, including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have a great impact on model accuracy, but prior system knowledge of important processes can reduce these uncertainties considerably. Our results affirm the clear merit of using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches

  19. Benchmarking farmer performance as an incentive for sustainable farming: environmental impacts of pesticides.

    PubMed

    Kragten, S; De Snoo, G R

    2003-01-01

    Pesticide use in The Netherlands is very high, and pesticides are found across all environmental compartments. Among individual farmers, though, there is wide variation in both pesticide use and the potential environmental impact of that use, providing policy leverage for environmental protection. This paper reports on a benchmarking tool with which farmers can compare their environmental and economic performance with that of other farmers, thereby serving as an incentive for them to adopt more sustainable methods of food production. The tool is also designed to provide farmers with a more detailed picture of the environmental impacts of their methods of pest management. It is interactive and available on the internet: www.agriwijzer.nl. The present version has been developed specifically for arable farmers, but it is to be extended to encompass other agricultural sectors, in particular horticulture (bulb flowers, stem fruits), as well as various other aspects of sustainability (nutrient inputs, 'on-farm' biodiversity, etc.). The benchmarking methodology was tested on a pilot group of 20 arable farmers, whose general response was positive. They proved to be more interested in comparative performance in terms of economic rather than environmental indicators. In their judgment the benchmarking tool can serve a useful purpose in steering them towards more sustainable forms of agricultural production. The benchmarking results can also be used by other actors in the agroproduction chain, such as food retailers and the food industry. PMID:15151309

  20. The OECD/NEA/NSC PBMR coupled neutronics/thermal hydraulics transient benchmark: The PBMR-400 core design

    SciTech Connect

    Reitsma, F.; Ivanov, K.; Downar, T.; De Haas, H.; Gougar, H. D.

    2006-07-01

    The Pebble Bed Modular Reactor (PBMR) is a High-Temperature Gas-cooled Reactor (HTGR) concept to be built in South Africa. As part of the verification and validation program, the definition and execution of code-to-code benchmark exercises are important. The Nuclear Energy Agency (NEA) of the Organisation for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor (PBMR) coupled neutronics/thermal hydraulics transient benchmark problem in its program. The OECD benchmark defines steady-state and transient cases, including reactivity insertion transients. It makes use of a common set of cross sections (to eliminate uncertainties between different codes) and includes specific simplifications to the design to limit the need for participants to introduce approximations in their models. In this paper the detailed specification is explained, including the test cases to be calculated and the results required from participants. (authors)

  1. Benchmarking Soft Costs for PV Systems in the United States (Presentation)

    SciTech Connect

    Ardani, K.

    2012-06-01

    This paper presents results from the first U.S. based data collection effort to quantify non-hardware, business process costs for PV systems at the residential and commercial scales, using a bottom-up approach. Annual expenditure and labor hour productivity data are analyzed to benchmark business process costs in the specific areas of: (1) customer acquisition; (2) permitting, inspection, and interconnection; (3) labor costs of third party financing; and (4) installation labor.

  2. Future of benchmarking: more data, more sharing, and better patient care.

    PubMed

    2001-05-01

    Automated systems that provide whatever regulatory information is needed when it is needed; sharing of data to improve quality; data mined for specific groups of patients: Those are just a few of the trends predicted by health care experts asked to comment on the future of benchmarking and data strategies. Such improvements are needed; many hospitals continually run into problems when it comes to finding the right data sets for targeted patient groups. PMID:11372493

  3. REVIEW OF RESULTS FOR THE OECD/NEA PHASE VII BENCHMARK: STUDY OF SPENT FUEL COMPOSITIONS FOR LONG TERM DISPOSAL

    SciTech Connect

    Radulescu, Georgeta; Wagner, John C

    2011-01-01

    This paper summarizes the problem specification and compares participants' results for the OECD/NEA/WPNCS Expert Group on Burn-up Credit Criticality Safety Phase VII Benchmark Study of Spent Fuel Compositions for Long-Term Disposal. The Phase VII benchmark was developed to study the ability of relevant computer codes and associated nuclear data to predict spent fuel isotopic compositions and corresponding keff values in a cask configuration over the time duration relevant to spent nuclear fuel (SNF) disposal. The benchmark was divided into two sets of calculations: (1) decay calculations out to 1,000,000 years for provided pressurized-water-reactor (PWR) UO2 discharged fuel compositions and (2) burnup credit criticality calculations for a representative cask model at selected time steps. Contributions from 15 organizations and companies in 10 countries were submitted to the Phase VII benchmark exercise. This paper provides a description of the Phase VII benchmark and detailed comparisons of the participants' isotopic compositions and keff values that were calculated with a diversity of computer codes and nuclear data sets. Differences observed in the calculated time-dependent nuclide densities are attributed to different decay data or code-specific numerical approximations. The variability of the keff results is consistent with the evaluated uncertainty associated with cross-section data.
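
    The first set of calculations, decaying a fixed inventory over very long times, reduces to solving a linear system of decay equations. Below is a minimal sketch using a matrix exponential for a hypothetical three-member chain A -> B -> C (stable); the half-lives and initial inventory are placeholders, not the Phase VII compositions.

    ```python
    # Minimal sketch: propagate a nuclide inventory through a decay chain
    # A -> B -> C(stable) with a matrix exponential. Values are assumed.
    import numpy as np
    from scipy.linalg import expm

    half_lives = np.array([2.411e4, 6.561e3])      # years, assumed (A, B)
    lam = np.log(2.0) / half_lives

    # Decay matrix: dN/dt = M N
    M = np.array([[-lam[0],     0.0, 0.0],
                  [ lam[0], -lam[1], 0.0],
                  [    0.0,  lam[1], 0.0]])

    N0 = np.array([1.0e3, 0.0, 0.0])               # initial atoms (arbitrary)
    for t in (1.0e3, 1.0e4, 1.0e6):                # years after discharge
        print(t, expm(M * t) @ N0)
    ```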

  4. Shielding integral benchmark archive and database (SINBAD)

    SciTech Connect

    Kirk, B.L.; Grove, R.E.; Kodeli, I.; Gulliford, J.; Sartori, E.

    2011-07-01

    The shielding integral benchmark archive and database (SINBAD) collection of experiment descriptions was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD was designed to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD can serve as a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories - fission, fusion, and accelerator experiments. Many experiments are described and analyzed using deterministic or stochastic (Monte Carlo) radiation transport software. The nuclear cross sections also play an important role, as they are necessary in performing computational analysis. (authors)

  5. NASA Indexing Benchmarks: Evaluating Text Search Engines

    NASA Technical Reports Server (NTRS)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing indexing and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the search engines studied. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.
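
    A toolkit of this kind is essentially a timing harness around a common engine interface. The sketch below illustrates the idea with a toy inverted index standing in for a real engine; the NaiveEngine class and its add/search interface are hypothetical constructs for illustration, not the toolkit's actual API.

    ```python
    # Minimal sketch: time index construction and a batch of queries for a
    # candidate engine, then report throughput. The engine is a toy stand-in.
    import time

    class NaiveEngine:
        """Toy inverted index standing in for a real search engine."""
        def __init__(self):
            self.index = {}
        def add(self, doc_id, text):
            for word in text.lower().split():
                self.index.setdefault(word, set()).add(doc_id)
        def search(self, word):
            return self.index.get(word.lower(), set())

    def benchmark(engine, docs, queries):
        t0 = time.perf_counter()
        for doc_id, text in docs:
            engine.add(doc_id, text)
        index_s = time.perf_counter() - t0
        t0 = time.perf_counter()
        for q in queries:
            engine.search(q)
        search_s = time.perf_counter() - t0
        return index_s, len(queries) / search_s   # build time, queries/sec

    docs = [(i, f"document {i} about indexing and retrieval") for i in range(10000)]
    print(benchmark(NaiveEngine(), docs, ["indexing", "retrieval"] * 500))
    ```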

  6. Gatemon Benchmarking and Two-Qubit Operations

    NASA Astrophysics Data System (ADS)

    Casparis, L.; Larsen, T. W.; Olsen, M. S.; Kuemmeth, F.; Krogstrup, P.; Nygârd, J.; Petersson, K. D.; Marcus, C. M.

    2016-04-01

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize field effect tunability characteristic of semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors below 0.7% for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent swap operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of 91%, demonstrating the potential of gatemon qubits for building scalable quantum processors.
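
    The gate-error figure quoted above comes from fitting the decay of sequence fidelity with sequence length. A minimal sketch of that extraction step: fit F(m) = A p^m + B and convert the depolarizing parameter p to an average error per gate r = (1 - p)(d - 1)/d, with d = 2 for a single qubit. The data below are synthetic stand-ins, not the experiment's measurements.

    ```python
    # Minimal sketch: extract an average gate error from randomized
    # benchmarking data via the standard exponential-decay fit.
    import numpy as np
    from scipy.optimize import curve_fit

    def model(m, A, B, p):
        return A * p**m + B

    m = np.arange(1, 200, 10)                     # sequence lengths
    rng = np.random.default_rng(1)
    F = model(m, 0.5, 0.5, 0.986) + 0.005 * rng.standard_normal(m.size)

    (A, B, p), _ = curve_fit(model, m, F, p0=(0.5, 0.5, 0.99))
    r = (1 - p) * (2 - 1) / 2                     # error per gate, d = 2
    print(f"p = {p:.4f}, gate error r = {r:.4%}")
    ```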

  7. LHC benchmarks from flavored gauge mediation

    NASA Astrophysics Data System (ADS)

    Ierushalmi, N.; Iwamoto, S.; Lee, G.; Nepomnyashy, V.; Shadmi, Y.

    2016-07-01

    We present benchmark points for LHC searches from flavored gauge mediation models, in which messenger-matter couplings give flavor-dependent squark masses. Our examples include spectra in which a single squark — stop, scharm, or sup — is much lighter than all other colored superpartners, motivating improved quark flavor tagging at the LHC. Many examples feature flavor mixing; in particular, large stop-scharm mixing is possible. The correct Higgs mass is obtained in some examples by virtue of the large stop A-term. We also revisit the general flavor and CP structure of the models. Even though the A-terms can be substantial, their contributions to EDMs are very suppressed, because of the particular dependence of the A-terms on the messenger coupling. This holds regardless of the messenger-coupling texture. More generally, the special structure of the soft terms often leads to stronger suppression of flavor- and CP-violating processes, compared to naive estimates.

  8. Benchmark cyclic plastic notch strain measurements

    NASA Technical Reports Server (NTRS)

    Sharpe, W. N., Jr.; Ward, M.

    1983-01-01

    Plastic strains at the roots of notched specimens of Inconel 718 subjected to tension-compression cycling at 650 C are reported. These strains were measured with a laser-based technique over a gage length of 0.1 mm and are intended to serve as 'benchmark' data for further development of experimental, analytical, and computational approaches. The specimens were 250 mm by 2.5 mm in the test section with double notches of 4.9 mm radius subjected to axial loading sufficient to cause yielding at the notch root on the tensile portion of the first cycle. The tests were run for 1000 cycles at 10 cpm or until cracks initiated at the notch root. The experimental techniques are described, and then representative data for the various load spectra are presented. All the data for each cycle of every test are available on floppy disks from NASA.

  9. Gatemon Benchmarking and Two-Qubit Operation

    NASA Astrophysics Data System (ADS)

    Casparis, Lucas; Larsen, Thorvald; Olsen, Michael; Petersson, Karl; Kuemmeth, Ferdinand; Krogstrup, Peter; Nygard, Jesper; Marcus, Charles

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize field effect tunability unique to semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors of ~0.5 % for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent SWAP operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of ~91 %, demonstrating the potential of gatemon qubits for building scalable quantum processors. We acknowledge financial support from Microsoft Project Q and the Danish National Research Foundation.

  10. Gatemon Benchmarking and Two-Qubit Operations.

    PubMed

    Casparis, L; Larsen, T W; Olsen, M S; Kuemmeth, F; Krogstrup, P; Nygård, J; Petersson, K D; Marcus, C M

    2016-04-15

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize field effect tunability characteristic of semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors below 0.7% for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent swap operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of 91%, demonstrating the potential of gatemon qubits for building scalable quantum processors. PMID:27127949

  11. Clinical Benchmark for Gastric Stapling Procedures.

    PubMed

    Graves

    1994-08-01

    To help answer the call to cut costs of surgical care, hospitals and physicians have joined to compare methods of care for the more common Diagnosis Related Group (DRG) diagnoses to form a Benchmark. Since many bariatric surgeons are the only ones performing this surgery in their primary hospitals, they do not have two or more surgical routines for comparison. This presentation compares data for the preoperative work-up, operating-room, and methods of postoperative care used by 29 members of the American Society for Bariatric Surgery (ASBS). There was representation of both academic and private surgeons and hospitals. To target areas for possible savings, the hospital bills of 16 patients without complication were compared. The synthesis of this information revealed significant differences in the extent and cost of preoperative work-up, antibiotic coverage, other postoperative care, and length of stay. These differences are examined under the assumption that patient outcome was the same. PMID:10742779

  12. Benchmark physics experiments for SP-100

    NASA Astrophysics Data System (ADS)

    Olsen, David N.; Carpenter, Stuart G.; Grasseschi, Gary L.; Smith, Dale M.

    A space nuclear power system (SNPS) benchmark reactor physics program was performed at Argonne's Zero Power Physics Reactor (ZPPR). Two uranium-fuelled, BeO-reflected reactors were assembled to test 300 kWe conceptual designs considered for the SP-100. The major difference between configurations was the reactivity control concept. Program goals were to aid designers in evaluating SP-100 designs and to provide guidance in defining a series of engineering mockup criticals to be performed in support of the ground engineering test. ZPPR-16 was a short program aimed at providing basic physics data for cores representing early SP-100 designs. All measurement results from the experimental program are available. Initial analysis, using standard deterministic methods, shows significant errors when compared against the measurements. Calculational difficulties are compounded by the need to model a natural B4C/graphite room-return shield used in the ZPPR experiments.

  13. Information-Theoretic Benchmarking of Land Surface Models

    NASA Astrophysics Data System (ADS)

    Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong

    2016-04-01

    Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al. [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. Here we extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and boundary conditions (the time-dependent details of each prediction scenario). The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed
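
    The information-theoretic ingredient of such a benchmark can be illustrated compactly. The sketch below estimates the mutual information between an observed series and a model prediction from a 2-D histogram; information lost by the model can then be framed as H(obs) - I(obs; model). The binning choice and the synthetic series are illustrative assumptions only.

    ```python
    # Minimal sketch: histogram estimate of mutual information between
    # observations and model output, in bits.
    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def mutual_information(x, y, bins=20):
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        return entropy(px) + entropy(py) - entropy(pxy.ravel())

    rng = np.random.default_rng(2)
    obs = rng.standard_normal(5000)                      # stand-in observations
    pred = 0.8 * obs + 0.6 * rng.standard_normal(5000)   # stand-in model output
    print(mutual_information(obs, pred))                 # bits shared with obs
    ```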

  14. Benchmarking Competitiveness: Is America's Technological Hegemony Waning?

    NASA Astrophysics Data System (ADS)

    Lubell, Michael S.

    2006-03-01

    For more than half a century, by almost every standard, the United States has been the world's leader in scientific discovery, innovation and technological competitiveness. To a large degree, that dominant position stemmed from the circumstances our nation inherited at the conclusion of World War Two: we were, in effect, the only major nation left standing that did not have to repair serious war damage. And we found ourselves with an extraordinary science and technology base that we had developed for military purposes. We had the laboratories -- industrial, academic and government -- as well as the scientific and engineering personnel -- many of them immigrants who had escaped from war-time Europe. What remained was to convert the wartime machinery into peacetime uses. We adopted private and public policies that accomplished the transition remarkably well, and we have prospered ever since. Our higher education system, our protection of intellectual property rights, our venture capital system, our entrepreneurial culture and our willingness to commit government funds for the support of science and engineering have been key components to our success. But recent competitiveness benchmarks suggest that our dominance is waning rapidly, in part because other nations have begun to emulate our successful model, in part because globalization has "flattened" the world and in part because we have been reluctant to pursue the public policies that are necessary to ensure our leadership. We will examine these benchmarks and explore the policy changes that are needed to keep our nation's science and technology enterprise vibrant and our economic growth on an upward trajectory.

  19. Benchmarking and testing the "Sea Level Equation"

    NASA Astrophysics Data System (ADS)

    Spada, G.; Barletta, V. R.; Klemann, V.; van der Wal, W.; James, T. S.; Simon, K.; Riva, R. E. M.; Martinec, Z.; Gasperini, P.; Lund, B.; Wolf, D.; Vermeersen, L. L. A.; King, M. A.

    2012-04-01

    The study of the process of Glacial Isostatic Adjustment (GIA) and of the consequent sea level variations is gaining an increasingly important role within the geophysical community. Understanding the response of the Earth to the waxing and waning ice sheets is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements to the projections of future sea level trends in response to climate change. All the processes accompanying GIA can be described solving the so-called Sea Level Equation (SLE), an integral equation that accounts for the interactions between the ice sheets, the solid Earth, and the oceans. Modern approaches to the SLE are based on various techniques that range from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, we do not have a suitably large set of agreed numerical results through which the methods may be validated. Following the example of the mantle convection community and our recent successful Benchmark for Post Glacial Rebound codes (Spada et al., 2011, doi: 10.1111/j.1365-246X.2011.04952.x), here we present the results of a benchmark study of independently developed codes designed to solve the SLE. This study has taken place within a collaboration facilitated through the European Cooperation in Science and Technology (COST) Action ES0701. The tests involve predictions of past and current sea level variations, and 3D deformations of the Earth surface. In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found, which can often be attributed to the different numerical algorithms employed within the community, help to constrain the intrinsic errors in model predictions. These are of fundamental importance for a correct interpretation of the geodetic variations observed today, and

  16. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    Madhavan, Raj; Messina, Elena; Tunstel, Edward

    2009-09-01

    To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; and furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work, which could avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. The book presents

  17. A Benchmark Study on Casting Residual Stress

    SciTech Connect

    Johnson, Eric M.; Watkins, Thomas R; Schmidlin, Joshua E; Dutler, S. A.

    2012-01-01

    Stringent regulatory requirements, such as Tier IV norms, have pushed the cast iron for automotive applications to its limit. The castings need to be designed with closer tolerances by incorporating hitherto unknowns, such as residual stresses arising due to thermal gradients, phase and microstructural changes during solidification. Residual stresses were earlier neglected in casting designs by incorporating large factors of safety. Experimental measurement of residual stress in a casting through neutron or X-ray diffraction, sectioning or hole drilling, magnetic, electric or photoelastic measurements is a very difficult and time-consuming exercise. A detailed multi-physics model, incorporating thermo-mechanical and phase transformation phenomena, provides an attractive alternative to assess the residual stresses generated during casting. However, before relying on the simulation methodology, it is important to rigorously validate the prediction capability by comparing it to experimental measurements. In the present work, a benchmark study was undertaken for casting residual stress measurements through neutron diffraction, which was subsequently used to validate the accuracy of simulation prediction. The stress lattice specimen geometry was designed such that subsequent castings would generate adequate residual stresses during solidification and cooling, without any cracks. The residual stresses in the cast specimen were measured using neutron diffraction. Considering the difficulty in accessing the neutron diffraction facility, these measurements can be considered as benchmark for casting simulation validations. Simulations were performed using the identical specimen geometry and casting conditions for predictions of residual stresses. The simulation predictions were found to agree well with the experimentally measured residual stresses. The experimentally validated model can be subsequently used to predict residual stresses in different cast

  18. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    SciTech Connect

    Bess, John D.; Montierth, Leland; Köberl, Oliver; Snoj, Luka

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  19. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    DOE PAGESBeta

    Bess, John D.; Montierth, Leland; Köberl, Oliver; Snoj, Luka

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  20. Nomenclatural Benchmarking: The roles of digital typification and telemicroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The process of nomenclatural benchmarking is the examination of type specimens of all available names to ascertain which currently accepted species the specimen bearing the name falls within. We propose a strategy for addressing four challenges for nomenclatural benchmarking. First, there is the mat...

  1. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance... interoffice transmission using the telephone company's DS1 special access rates. (b) Initial transport...

  2. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance... interoffice transmission using the telephone company's DS1 special access rates. (b) Initial transport...

  3. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance... interoffice transmission using the telephone company's DS1 special access rates. (b) Initial transport...

  4. Higher Education Ranking and Leagues Tables: Lessons Learned from Benchmarking

    ERIC Educational Resources Information Center

    Proulx, Roland

    2007-01-01

    The paper intends to contribute to the debate on ranking and league tables by adopting a critical approach to ranking methodologies from the point of view of a university benchmarking exercise. The absence of a strict benchmarking exercise in the ranking process has been, in the opinion of the author, one of the major problems encountered in the…

  5. Practical Considerations when Using Benchmarking for Accountability in Higher Education

    ERIC Educational Resources Information Center

    Achtemeier, Sue D.; Simpson, Ronald D.

    2005-01-01

    The qualitative study on which this article is based examined key individuals' perceptions, both within a research university community and beyond in its external governing board, of how to improve benchmarking as an accountability method in higher education. Differing understanding of benchmarking revealed practical implications for using it as…

  6. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... computing an ACO's fixed historical benchmark that is adjusted for historical growth and beneficiary... making up the historical benchmark, determines national growth rates and trends expenditures for each... amount of growth in national per capita expenditures for Parts A and B services under the...

  7. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... computing an ACO's fixed historical benchmark that is adjusted for historical growth and beneficiary... making up the historical benchmark, determines national growth rates and trends expenditures for each... amount of growth in national per capita expenditures for Parts A and B services under the...

  8. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 3 2012-10-01 2012-10-01 false Establishing the benchmark. 425.602 Section 425.602 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICARE PROGRAM (CONTINUED) MEDICARE SHARED SAVINGS PROGRAM Shared Savings and Losses § 425.602 Establishing the benchmark. (a)...

  9. Teaching Benchmark Strategy for Fifth-Graders in Taiwan

    ERIC Educational Resources Information Center

    Yang, Der-Ching; Lai, M. L.

    2013-01-01

    The key purpose of this study was to examine how we taught the use of the benchmark strategy when comparing fractions to fifth-graders in Taiwan. Twenty-six fifth-graders from a public elementary school in south Taiwan were selected to join this study. Results of this case study showed that students made much progress in the use of the benchmark strategy when comparing fraction…

  10. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance... interoffice transmission using the telephone company's DS1 special access rates. (b) Initial transport...

  11. What Are the ACT College Readiness Benchmarks? Information Brief

    ERIC Educational Resources Information Center

    ACT, Inc., 2013

    2013-01-01

    The ACT College Readiness Benchmarks are the minimum ACT® college readiness assessment scores required for students to have a high probability of success in credit-bearing college courses--English Composition, social sciences courses, College Algebra, or Biology. This report identifies the College Readiness Benchmarks on the ACT Compass scale…

  12. International Benchmarking: State and National Education Performance Standards

    ERIC Educational Resources Information Center

    Phillips, Gary W.

    2014-01-01

    This report uses international benchmarking as a common metric to examine and compare what students are expected to learn in some states with what students are expected to learn in other states. The performance standards in each state were compared with the international benchmarks used in two international assessments, and it was assumed that…

  13. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... benchmark plans described in 45 CFR 156.100. (1) States wishing to elect Secretary-approved coverage should... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND...

  14. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... benefits available under base benchmark plans described in 45 CFR 156.100. (1) States wishing to elect... 42 Public Health 4 2014-10-01 2014-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND...

  15. Benchmarking with the BLASST Sessional Staff Standards Framework

    ERIC Educational Resources Information Center

    Luzia, Karina; Harvey, Marina; Parker, Nicola; McCormack, Coralie; Brown, Natalie R.

    2013-01-01

    Benchmarking as a type of knowledge-sharing around good practice within and between institutions is increasingly common in the higher education sector. More recently, benchmarking as a process that can contribute to quality enhancement has been deployed across numerous institutions with a view to systematising frameworks to assure and enhance the…

  16. The Use of Educational Standards and Benchmarks in Indicator Publications

    ERIC Educational Resources Information Center

    Thomas, Sally; Peng, Wen-Jung

    2004-01-01

    This paper examines the use of educational standards and benchmarks in international indicator and other relevant policy publications, particularly those originating in the UK. The authors first examine what is meant by educational standards and benchmarks and how these concepts are defined. Then, they address the use of standards and benchmarks…

  17. Benchmarks for Psychotherapy Efficacy in Adult Major Depression

    ERIC Educational Resources Information Center

    Minami, Takuya; Wampold, Bruce E.; Serlin, Ronald C.; Kircher, John C.; Brown, George S.

    2007-01-01

    This study estimates pretreatment-posttreatment effect size benchmarks for the treatment of major depression in adults that may be useful in evaluating psychotherapy effectiveness in clinical practice. Treatment efficacy benchmarks for major depression were derived for 3 different types of outcome measures: the Hamilton Rating Scale for Depression…

  18. Enhancing knowledge of rangeland ecological processes with benchmark ecological sites

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A benchmark ecological site is one that has the greatest potential to yield data and information about ecological functions, processes, and the effects of management or climate changes on a broad area or critical ecological zone. A benchmark ecological site represents other similar sites in a major ...

  19. Developing Benchmarks to Measure Teacher Candidates' Performance

    ERIC Educational Resources Information Center

    Frazier, Laura Corbin; Brown-Hobbs, Stacy; Palmer, Barbara Martin

    2013-01-01

    This paper traces the development of teacher candidate benchmarks at one liberal arts institution. Begun as a classroom assessment activity over ten years ago, the benchmarks, through collaboration with professional development school partners, now serve as a primary measure of teacher candidates' performance in the final phases of the…

  20. A Benchmarking Initiative for Reactive Transport Modeling Applied to Subsurface Environmental Applications

    NASA Astrophysics Data System (ADS)

    Steefel, C. I.

    2015-12-01

    Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed-form solutions are necessary and useful, but limited to situations that are far simpler than typical applications that combine many physical and chemical processes, in many cases in coupled form. In the absence of closed-form and yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable to qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code -- the reactive transport codes play a supporting role in this regard -- but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally-relevant benchmark problem that tests the conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include 1) microbially-mediated reactions, 2) isotopes, 3) multi-component diffusion, 4) uranium fate and transport, 5) metal mobility in mining affected systems, and 6) waste repositories and related aspects.
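
    The transport core underlying these benchmark problems is worth making concrete. Below is a minimal sketch of 1-D advection-dispersion of a solute with first-order decay, solved with an explicit upwind scheme; grid, velocity, and rate constants are illustrative, and the published benchmarks layer multicomponent chemistry and process coupling on top of this kind of solver.

    ```python
    # Minimal sketch: 1-D advection-dispersion-reaction with first-order
    # decay, explicit upwind scheme. Parameters are assumed.
    import numpy as np

    nx, L = 200, 1.0                  # cells, domain length (m)
    dx = L / nx
    v, D, k = 1.0e-5, 1.0e-9, 1.0e-6  # velocity (m/s), dispersion (m^2/s), decay (1/s)
    dt = 0.4 * min(dx / v, dx**2 / (2 * D))   # stable step for upwind + diffusion

    c = np.zeros(nx)
    for _ in range(20000):
        c[0] = 1.0                                   # fixed inlet concentration
        adv = -v * np.diff(c, prepend=c[0]) / dx     # first-order upwind
        dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
        dif[0] = dif[-1] = 0.0                       # crude boundary handling
        c += dt * (adv + dif - k * c)
    print(c[::40])                                   # concentration profile snapshot
    ```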

  1. The challenge of benchmarking health systems: is ICT innovation capacity more systemic than organizational dependent?

    PubMed

    Lapão, Luís Velez

    2015-01-01

    The article by Catan et al. presents a benchmarking exercise comparing Israel and Portugal on the implementation of Information and Communication Technologies in the healthcare sector. Special attention was given to e-Health and m-Health. The authors collected information via a set of interviews with key stakeholders. They compared two different cultures and societies, which have reached slightly different implementation outcomes. Although the comparison is very enlightening, it is also challenging. Benchmarking exercises present a set of challenges, such as the choice of methodologies and the assessment of the impact on organizational strategy. Precise benchmarking methodology is a valid tool for eliciting information about alternatives for improving health systems. However, many beneficial interventions, which benchmark as effective, fail to translate into meaningful healthcare outcomes across contexts. There is a relationship between results and the innovation and competitive environments. Differences in healthcare governance and financing models are well known; but little is known about their impact on Information and Communication Technology implementation. The article by Catan et al. provides interesting clues about this issue. Public systems (such as those of Portugal, UK, Sweden, Spain, etc.) present specific advantages and disadvantages concerning Information and Communication Technology development and implementation. Meanwhile, private systems based fundamentally on insurance packages (such as those of Israel, Germany, the Netherlands or the USA) present a different set of advantages and disadvantages - especially a more open context for innovation. Challenging issues from both the Portuguese and Israeli cases will be addressed. Clearly, more research is needed on both benchmarking methodologies and on ICT implementation strategies. PMID:26301085

  2. A new numerical benchmark of a freshwater lens

    NASA Astrophysics Data System (ADS)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation of a freshwater lens over time, such as can be found under real-world islands. An error analysis gave the appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and includes realistic features of coastal aquifers or freshwater lenses had been lacking; this new benchmark was thus developed and is demonstrated to be suitable for testing variable-density groundwater models applied to saltwater intrusion investigations.
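
    As a first-order check on simulated lens geometry (not a substitute for the benchmark's full variable-density solution), the interface depth is often compared against the Ghyben-Herzberg relation; the density values below are the conventional assumptions for fresh and sea water.

    ```latex
    % Ghyben-Herzberg relation, a common first-order check on lens geometry:
    \[
      z \;=\; \frac{\rho_f}{\rho_s - \rho_f}\, h \;\approx\; 40\, h,
      \qquad \rho_f = 1000~\mathrm{kg\,m^{-3}},\quad \rho_s = 1025~\mathrm{kg\,m^{-3}},
    \]
    ```

    where h is the water-table elevation above sea level and z is the depth of the freshwater-saltwater interface below sea level.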

  3. Quantum benchmarks for pure single-mode Gaussian states.

    PubMed

    Chiribella, Giulio; Adesso, Gerardo

    2014-01-10

    Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large-scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian single-mode states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments. PMID:24483875

  4. Development of a Benchmark Hydroclimate Data Library for N. America

    NASA Astrophysics Data System (ADS)

    Lall, U.; Cook, E.

    2001-12-01

    This poster presents the recommendations of an international workshop held May 24-25, 2001, at the Lamont-Doherty Earth Observatory, Palisades, New York. The purpose of the workshop was to: (1) Identify the needs for a continental and eventually global benchmark hydroclimatic dataset; (2) Evaluate how they are currently being met in the 3 countries of N. America; and (3) Identify the main scientific and institutional challenges in improving access, and associated implementation strategies to improve the data elements and access. An initial focus on N. American streamflow was suggested. The estimation of streamflow (or its specific statistics) at ungaged, poorly gaged locations or locations with a substantial modification of the hydrologic regime was identified as a priority. The potential for the use of extended (to 1856) climate records and of tree rings and other proxies (that may go back multiple centuries) for the reconstruction of a comprehensive data set of concurrent hydrologic and climate fields was considered. Specific recommendations for the implementation of a research program to support the development and enhance availability of the products in conjunction with the major federal and state agencies in the three countries of continental N. America were made. The implications of these recommendations for the Hydrologic Information Systems initiative of the Consortium of Universities for the Advancement of Hydrologic Science are discussed.

  5. Benchmark 2 - Springback of a draw / re-draw panel: Part A: Benchmark description

    NASA Astrophysics Data System (ADS)

    Carsley, John E.; Xia, Cedric; Yang, Lianxiang; Stoughton, Thomas B.; Xu, Siguang; Hartfield-Wünsch, Susan E.; Li, Jingjing; Chen, Zhong

    2013-12-01

    Numerical methods have been effectively implemented to predict springback behavior of complex stampings to reduce die tryout through compensation and produce dimensionally accurate products after forming and trimming. However, accurate prediction of the sprung shape of a panel formed with an initial draw followed by a restrike forming step remains a difficult challenge. The objective of this benchmark was to predict the sprung shape after stamping, restriking and trimming a sheet metal panel. A simple, rectangular draw die was used to draw sheet metal to a set depth with a "larger" tooling radius, followed by additional drawing to a greater depth with a "smaller" tooling radius. Panels were sectioned along a centerline and released to allow measurement of thickness strain and position of the trim line in the sprung condition. Smaller radii were used in the restrike step in order to significantly alter the deformation and the sprung shape. These measurements were used to evaluate numerical analysis predictions submitted by benchmark participants. Additional panels were drawn to "failure" during both the first draw and the re-draw in order to set the parameters for the springback trials and to demonstrate that sheet metal going through a re-strike operation can exceed the conventional forming limits of a simple draw operation. Two sheet metals were used for this benchmark study: DP600 steel sheet and aluminum alloy 5182-O.

  6. Evaluation of 3D surface scanners for skin documentation in forensic medicine: comparison of benchmark surfaces

    PubMed Central

    Schweitzer, Wolf; Häusler, Martin; Bär, Walter; Schaepman, Michael

    2007-01-01

    Background Two 3D surface scanners using collimated light patterns were evaluated in a new application domain: to document details of surfaces similar to the ones encountered in forensic skin pathology. Since these scanners have not been specifically designed for forensic skin pathology, we tested their performance under practical constraints in an application domain that is to be considered new. Methods Two solid benchmark objects containing relevant features were used to compare two 3D surface scanners: the ATOS-II (GOM, Germany) and the QTSculptor (Polygon Technology, Germany). Both scanners were used to capture and process data within a limited amount of time, while point-and-click editing was not allowed. We conducted (a) a qualitative appreciation of setup, handling and resulting 3D data, (b) an experimental subjective evaluation of matching 3D data versus photos of benchmark object regions by a panel of 12 judges who were forced to state their preference for one of the two scanners, and (c) a quantitative characterization of both 3D data sets comparing 220 single surface areas with the real benchmark objects in order to determine the recognition rate's possible dependency on feature size and geometry. Results The QTSculptor generated significantly better 3D data in both qualitative tests (a, b) that we had conducted, possibly because of a higher lateral point resolution; statistical evaluation (c) showed that the QTSculptor-generated data allowed the discrimination of features as small as 0.3 mm, whereas ATOS-II-generated data allowed for discrimination of features no smaller than 1.2 mm. Conclusion It is particularly important to conduct specific benchmark tests if devices are brought into new application domains they were not specifically designed for; using a realistic test featuring forensic skin pathology features, QTSculptor-generated data quantitatively exceeded manufacturer's specifications, whereas ATOS-II-generated data was within

  7. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Delivery of benchmark and benchmark-equivalent coverage through managed care entities. 440.385 Section 440.385 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS...

  8. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Delivery of benchmark and benchmark-equivalent coverage through managed care entities. 440.385 Section 440.385 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS...

  9. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Delivery of benchmark and benchmark-equivalent coverage through managed care entities. 440.385 Section 440.385 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS...

  10. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Delivery of benchmark and benchmark-equivalent coverage through managed care entities. 440.385 Section 440.385 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS...

  11. The role of the hospital registry in achieving outcome benchmarks in cancer care.

    PubMed

    Greene, Frederick L; Gilkerson, Sharon; Tedder, Paige; Smith, Kathy

    2009-06-15

    The hospital registry is a valuable tool for evaluating quality benchmarks in cancer care. As pay-for-performance standards are adopted, the registry will assume a more dynamic and economically important role in the hospital setting. At Carolinas Medical Center, the registry has been a key instrument in the comparison of state and national benchmarks and for program improvement in meeting standards in the care of breast and colon cancer. One of the significant successes of the American College of Surgeons Commission on Cancer (CoC) Hospital Approvals Program is the support of hospital registries, especially in small and midsized community hospitals throughout the United States. To become a member of the Hospital Approvals Program, a registry must be staffed appropriately and include analytic data for patients who have their primary diagnosis or treatment at the facility [1]. The current challenge for most hospitals is to prove that the registry has specific worth when many facets of care are not compensated. Unfortunately, a small number of hospitals have disbanded their registries because of the short-sighted decision that the registry and its personnel are a drain on the hospital system and do not generate revenue. In the present era of meeting benchmarks for care as a prelude to being paid by third-party and governmental agencies [2,3], a primary argument is that the registry can be revenue-enhancing by quantifying specific outcomes in cancer care. Without appropriate registry and abstracting capability, hospital leadership cannot measure the specific outcome benchmarks required in the era of "pay for performance" or "pay for participation". PMID:19466739

  12. Consistency and Magnitude of Differences in Reading Curriculum-Based Measurement Slopes in Benchmark versus Strategic Monitoring

    ERIC Educational Resources Information Center

    Mercer, Sterett H.; Keller-Margulis, Milena A.

    2015-01-01

    Differences in oral reading curriculum-based measurement (R-CBM) slopes based on two commonly used progress monitoring practices in field-based data were compared in this study. Semester-specific R-CBM slopes were calculated for 150 Grade 1 and 2 students who completed benchmark (i.e., 3 R-CBM probes collected 3 times per year) and strategic…

  13. Guidebook for Using the Tool BEST Cement: Benchmarking and Energy Savings Tool for the Cement Industry

    SciTech Connect

    Galitsky, Christina; Price, Lynn; Zhou, Nan; Fuqiu , Zhou; Huawen, Xiong; Xuemin, Zeng; Lan, Wang

    2008-07-30

    The Benchmarking and Energy Savings Tool (BEST) Cement is a process-based tool built on commercially available efficiency technologies used anywhere in the world that are applicable to the cement industry. This version has been designed for use in China. No actual cement facility is likely to include every single efficiency measure in the benchmark; nevertheless, the benchmark sets a reasonable standard of comparison for plants striving to be the best. The energy consumption of the benchmark facility varies with the processing configuration of the cement facility being studied. The tool accounts for most of these variables and allows the user to adapt the model to operational variables specific to his/her cement facility. Figure 1 shows the boundaries included in a plant modeled by BEST Cement. In order to model the benchmark, i.e., the most energy-efficient cement facility, so that it represents a facility similar to the user's cement facility, the user is first required to input production variables in the input sheet (see Section 6 for more information on how to input variables). These variables allow the tool to estimate a benchmark facility that is similar to the user's cement plant, giving a better picture of the potential for that particular facility rather than benchmarking against a generic one. The input variables required include the following: (1) the amount of raw materials used in tonnes per year (limestone, gypsum, clay minerals, iron ore, blast furnace slag, fly ash, slag from other industries, natural pozzolans, limestone powder (used post-clinker stage), municipal wastes and others), and the amount of raw materials that are preblended (prehomogenized and proportioned) and crushed (in tonnes per year); (2) the amount of additives that are dried and ground (in tonnes per year); (3) the production of clinker (in tonnes per year) from each kiln by kiln type; (4) the amount of raw materials, coal and clinker that is ground by mill type (in tonnes per year); (5
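
    The benchmark computation described above reduces to weighting each production step's tonnage by a best-practice energy intensity. A minimal sketch of that idea follows; the step names and intensity values are hypothetical placeholders, not the values embedded in BEST Cement.

```python
# Hypothetical illustration of a production-weighted benchmark energy
# calculation; intensities below are placeholders, not BEST Cement data.
BENCHMARK_INTENSITY_GJ_PER_T = {   # best-practice energy per tonne processed
    "raw_material_crushing": 0.02,
    "additive_drying_grinding": 0.10,
    "clinker_production": 2.90,
    "finish_grinding": 0.12,
}

def benchmark_energy_gj(production_t):
    """Total benchmark energy (GJ/yr) for a plant's production profile."""
    return sum(BENCHMARK_INTENSITY_GJ_PER_T[step] * tonnes
               for step, tonnes in production_t.items())

plant = {  # user inputs, tonnes per year
    "raw_material_crushing": 1_500_000,
    "additive_drying_grinding": 90_000,
    "clinker_production": 950_000,
    "finish_grinding": 1_000_000,
}
actual_energy_gj = 3_600_000  # the plant's measured consumption (invented)
target = benchmark_energy_gj(plant)
print(f"benchmark: {target:,.0f} GJ/yr, potential savings: "
      f"{actual_energy_gj - target:,.0f} GJ/yr")
```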

  14. Benchmark Evaluation of Plutonium Nitrate Solution Arrays

    SciTech Connect

    M. A. Marshall; J. D. Bess

    2011-09-01

    In October and November of 1981, thirteen approach-to-critical experiments were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and critical array spacing with a tight-fitting Plexiglas® reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free acid molarity H+ of 5.1. The plutonium was of low 240Pu (2.9 wt.%) content. These experiments were performed to fill a gap in experimental data regarding criticality limits for storing and handling arrays of Pu solution in reprocessing facilities. Of the thirteen approach-to-critical experiments, eleven resulted in extrapolations to critical configurations. Four of the approaches were extrapolated to the critical number of bottles; these were not evaluated further due to the large uncertainty associated with the modeling of a fraction of a bottle. The remaining seven approaches were extrapolated to the critical array spacing of 3x4 and 4x4 arrays; these seven critical configurations were evaluated for inclusion as acceptable benchmark experiments in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Detailed and simple models of these configurations were created, and the associated bias of these simplifications was determined to range from 0.00116 to 0.00162 ± 0.00006 Δkeff. Monte Carlo analysis of all models was completed using MCNP5 with ENDF/B-VII.0 neutron cross section libraries. A thorough uncertainty analysis of all critical, geometric, and material parameters was performed using parameter

  15. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    SciTech Connect

    Gerhard Strydom; Javier Ortensi; Sonat Sen; Hans Hammer

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results

  16. Benchmark and gap analysis of current mask carriers vs future requirements: example of the carrier contamination

    NASA Astrophysics Data System (ADS)

    Fontaine, H.; Davenet, M.; Cheung, D.; Hoellein, I.; Richsteiger, P.; Dejaune, P.; Torsy, A.

    2007-02-01

    In the frame of the European Medea+ 2T302 MUSCLE project, an extensive mask carrier benchmark was carried out in order to evaluate whether existing containers meet the needs of the 65nm technology. Ten different containers, currently used or expected in the future all along the mask supply chain (blank, maskhouse and fab carriers), were selected at different steps of their life cycle (new, aged, aged & cleaned). The parameters identified as most critical for analysis against future technologies were: automation, particle contamination, chemical contamination (organic outgassing, ionic contamination), cleanability, ESD, airtightness and purgeability. Experimental protocols corresponding to suitable methods were then developed and implemented to test each criterion. The benchmark results are presented, giving a "state of the art" of currently available mask carriers and enabling a gap analysis of the tested parameters against future needs. This approach is detailed through the particular case of carrier contamination measurements. Finally, this benchmark/gap analysis leads to proposed mask carrier specifications (and the associated test protocols) for various key parameters, which can also be taken as guidelines for a standardization perspective for the 65nm technology. The analysis also indicates that none of the tested carriers fulfills all the proposed specifications.

  17. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1996 revision

    SciTech Connect

    Suter, G.W. II; Tsao, C.L.

    1996-06-01

    This report presents potential screening benchmarks for the protection of aquatic life from contaminants in water. Because there is no official guidance specifying screening benchmarks, a set of alternative benchmarks is presented herein for chemicals that have been detected on the Oak Ridge Reservation. The report also presents the data used to calculate the benchmarks and the sources of the data, and it compares the benchmarks and discusses their relative conservatism and utility. This revision updates benchmark values where appropriate, adds new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.

  18. Analysis of the pool critical assembly pressure vessel benchmark using pentran

    SciTech Connect

    Edgar, C. A.; Sjoden, G. E.

    2012-07-01

    The internationally circulated Pool Critical Assembly (PCA) Pressure Vessel Benchmark was analyzed using the PENTRAN Parallel Sn code system for the geometry, material, and source specifications described in the PCA Benchmark documentation. This research focused on utilizing the BUGLE-96 cross section library and accompanying reaction rates, while examining both adaptive differencing on a coarse-mesh basis and Directional Theta-Weighted Sn differencing, in order to compare the calculated PENTRAN results to measured data. The results show good agreement with the measured data as well as with the calculated results provided from TORT for the BUGLE-96 cross sections and reaction rates, which suggests PENTRAN is a viable and reliable code system for light water reactor neutron shielding and dosimetry calculations. (authors)

  19. IAEA CRP on HTGR Uncertainty Analysis: Benchmark Definition and Test Cases

    SciTech Connect

    Gerhard Strydom; Frederik Reitsma; Hans Gougar; Bismark Tyobeka; Kostadin Ivanov

    2012-11-01

    Uncertainty and sensitivity studies are essential elements of the reactor simulation code verification and validation process. Although several international uncertainty quantification activities have been launched in recent years in the LWR, BWR and VVER domains (e.g. the OECD/NEA BEMUSE program [1], from which the current OECD/NEA LWR Uncertainty Analysis in Modelling (UAM) benchmark [2] effort was derived), the systematic propagation of uncertainties in cross-section, manufacturing and model parameters for High Temperature Reactor (HTGR) designs has not been attempted yet. This paper summarises the scope, objectives and exercise definitions of the IAEA Coordinated Research Project (CRP) on HTGR UAM [3]. Note that no results will be included here, as the HTGR UAM benchmark was only launched formally in April 2012, and the specification is currently still under development.

  20. Three-index Model for Westenberger-Kallrath Benchmark Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Vooradi, Ramsagar; Shaik, Munawar A.; Gupta, Nikhil M.

    2010-10-01

    Short-term scheduling of batch operations has become an important research area in the last two decades. Recently, Shaik and Floudas (2009) proposed a novel unified model for short-term scheduling using a unit-specific event-based continuous-time representation employing three-index binary and continuous variables. In this work, we extend this three-index model to solve a challenging benchmark problem from the scheduling literature that covers most of the features contributing to the complexity of batch process scheduling in industry. In order to implement the problem, new sets of constraints and modifications are incorporated into the three-index model. The different demand instances of the benchmark problem have been solved using the developed model, and the results are compared with the literature to demonstrate the effectiveness of the proposed three-index model.
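
    To make the three-index variable structure concrete, the toy model below assigns tasks to units at event points with binary variables w[i, j, n]. It is only a hedged sketch of the indexing scheme: the data are invented and the constraints are far simpler than the Shaik-Floudas unified model.

```python
# Toy illustration of three-index binaries w[i, j, n] (task i on unit j at
# event point n); hypothetical data, not the unified scheduling model.
import pulp

tasks, units, events = ["T1", "T2", "T3"], ["U1", "U2"], [0, 1]
cost = {("T1", "U1"): 3, ("T1", "U2"): 4,   # hypothetical processing costs
        ("T2", "U1"): 2, ("T2", "U2"): 5,
        ("T3", "U1"): 6, ("T3", "U2"): 1}

idx = [(i, j, n) for i in tasks for j in units for n in events]
prob = pulp.LpProblem("three_index_toy", pulp.LpMinimize)
w = pulp.LpVariable.dicts("w", idx, cat="Binary")

prob += pulp.lpSum(cost[i, j] * w[i, j, n] for (i, j, n) in idx)
for j in units:                      # at most one task per unit per event
    for n in events:
        prob += pulp.lpSum(w[i, j, n] for i in tasks) <= 1
for i in tasks:                      # every task is scheduled exactly once
    prob += pulp.lpSum(w[i, j, n] for j in units for n in events) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (i, j, n) in idx:
    if w[i, j, n].value() > 0.5:
        print(f"{i} -> {j} at event {n}")
```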

  1. Some benchmark problems for computational aeroacoustics

    NASA Astrophysics Data System (ADS)

    Chapman, C. J.

    2004-02-01

    This paper presents analytical results for high-speed leading-edge noise which may be useful for benchmark testing of computational aeroacoustics codes. The source of the noise is a convected gust striking the leading edge of a wing or fan blade at arbitrary subsonic Mach number; the streamwise shape of the gust is top-hat, Gaussian, or sinusoidal, and the cross-stream shape is top-hat, Gaussian, or uniform. Detailed results are given for all nine combinations of shapes; six combinations give three-dimensional sound fields, and three give two-dimensional fields. The gust shapes depend on numerical parameters, such as frequency, rise time, and width, which may be varied arbitrarily in relation to aeroacoustic code parameters, such as time-step, grid size, and artificial viscosity. Hence it is possible to determine values of code parameters suitable for accurate calculation of a given acoustic feature, e.g., the impulsive sound field produced by a gust with sharp edges, or a full three-dimensional acoustic directivity pattern, or a complicated multi-lobed directivity. Another possibility is to check how accurately a code can determine the far acoustic field from nearfield data; a parameter here would be the distance from the leading edge at which the data are taken.
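
    The gust shapes above are simple enough to reproduce directly. The sketch below defines the three streamwise profiles with the numerical parameters the paper mentions (width, rise time, frequency); the parameter names and values are illustrative, not taken from the paper.

```python
# Illustrative streamwise gust profiles: top-hat, Gaussian, sinusoidal.
import numpy as np

def top_hat(x, width, rise_time=0.0):
    """Top-hat gust of given width; rise_time > 0 smooths the sharp edges."""
    if rise_time == 0.0:
        return np.where(np.abs(x) <= width / 2, 1.0, 0.0)
    # linear ramps of length rise_time at each edge
    return np.clip((width / 2 + rise_time - np.abs(x)) / rise_time, 0.0, 1.0)

def gaussian(x, width):
    """Gaussian gust; 'width' plays the role of the standard deviation."""
    return np.exp(-0.5 * (x / width) ** 2)

def sinusoidal(x, frequency):
    """Sinusoidal gust of the given spatial frequency."""
    return np.sin(2 * np.pi * frequency * x)

x = np.linspace(-2.0, 2.0, 401)   # streamwise coordinate, arbitrary units
profiles = {"top-hat": top_hat(x, 1.0, 0.1),
            "gaussian": gaussian(x, 0.4),
            "sinusoidal": sinusoidal(x, 1.5)}
```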

  2. European Lean Gasoline Direct Injection Vehicle Benchmark

    SciTech Connect

    Chambon, Paul H; Huff, Shean P; Edwards, Kevin Dean; Norman, Kevin M; Prikhodko, Vitaly Y; Thomas, John F

    2011-01-01

    Lean Gasoline Direct Injection (LGDI) combustion is a promising technical path for achieving significant improvements in fuel efficiency while meeting future emissions requirements. Though Stoichiometric Gasoline Direct Injection (SGDI) technology is commercially available in a few vehicles on the American market, LGDI vehicles are not, but they can be found in Europe. Oak Ridge National Laboratory (ORNL) obtained a European BMW 1-series fitted with a 2.0l LGDI engine. The vehicle was instrumented and commissioned on a chassis dynamometer. The engine and after-treatment performance and emissions were characterized over US drive cycles (the Federal Test Procedure (FTP), the Highway Fuel Economy Test (HFET), and the US06 Supplemental Federal Test Procedure (US06)) and steady-state mappings. The vehicle's micro hybrid features (engine stop-start and intelligent alternator) were benchmarked as well during the course of the study. The data were analyzed to quantify the benefits and drawbacks of the lean gasoline direct injection and micro hybrid technologies from fuel economy and emissions perspectives with respect to the US market. Additionally, the data will be formatted to develop, substantiate, and exercise vehicle simulations with conventional and advanced powertrains.

  3. Geant4 Computing Performance Benchmarking and Monitoring

    NASA Astrophysics Data System (ADS)

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-01

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. The scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.
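
    The scalability measurement described at the end of the abstract boils down to simple ratios over (thread count, throughput, memory) triples. A hedged sketch with hypothetical numbers, not Geant4 profiling output:

```python
# Multi-threaded scaling metrics: event throughput and memory gain versus
# thread count (all data below are invented for illustration).
threads = [1, 2, 4, 8, 16]
events_per_sec = [2.0, 3.9, 7.6, 14.8, 27.5]   # measured throughput
rss_mb = [950, 1100, 1400, 2000, 3200]          # total memory footprint

for t, ev, mem in zip(threads, events_per_sec, rss_mb):
    speedup = ev / events_per_sec[0]
    efficiency = speedup / t                     # ideal scaling gives 1.0
    mem_gain = t * rss_mb[0] / mem               # vs. t independent processes
    print(f"{t:2d} threads: speedup {speedup:4.1f}, "
          f"efficiency {efficiency:4.2f}, memory gain {mem_gain:4.2f}x")
```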

  4. Hydrologic information server for benchmark precipitation dataset

    NASA Astrophysics Data System (ADS)

    McEnery, John A.; McKee, Paul W.; Shelton, Gregory P.; Ramsey, Ryan W.

    2013-01-01

    This paper will present the methodology and overall system development by which a benchmark dataset of precipitation information has been made available. Rainfall is the primary driver of the hydrologic cycle. High-quality precipitation data are vital for hydrologic models, hydrometeorologic studies and climate analysis, and hydrologic time series observations are important to many water resources applications. Over the past two decades, with the advent of NEXRAD radar, the science of measuring and recording rainfall has improved dramatically. However, much existing data has not been readily available for public access or transferable among the agricultural, engineering and scientific communities. This project takes advantage of the existing CUAHSI Hydrologic Information System ODM model and tools to bridge the gap between data storage and data access, providing an accepted standard interface for internet access to the largest time-series dataset of NEXRAD precipitation data ever assembled. This research effort has produced an operational data system to ingest, transform, load and then serve one of the most important hydrologic variable sets.

  5. Geant4 Computing Performance Benchmarking and Monitoring

    SciTech Connect

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  6. Benchmarking Calculations of Excitonic Couplings between Bacteriochlorophylls.

    PubMed

    Kenny, Elise P; Kassal, Ivan

    2016-01-14

    Excitonic couplings between (bacterio)chlorophyll molecules are necessary for simulating energy transport in photosynthetic complexes. Many techniques for calculating the couplings are in use, from the simple (but inaccurate) point-dipole approximation to fully quantum-chemical methods. We compared several approximations to determine their range of applicability, noting that the propagation of experimental uncertainties poses a fundamental limit on the achievable accuracy. In particular, the uncertainty in crystallographic coordinates yields an uncertainty of about 20% in the calculated couplings. Because quantum-chemical corrections are smaller than 20% in most biologically relevant cases, their considerable computational cost is rarely justified. We therefore recommend the electrostatic TrEsp method across the entire range of molecular separations and orientations because its cost is minimal and it generally agrees with quantum-chemical calculations to better than the geometric uncertainty. Understanding these uncertainties can guard against striving for unrealistic precision; at the same time, detailed benchmarks can allow important qualitative questions, which do not depend on the precise values of the simulation parameters, to be addressed with greater confidence about the conclusions. PMID:26651217
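
    The point-dipole approximation mentioned above can be stated in a few lines. The sketch below evaluates it for two transition dipoles; the geometry and dipole magnitudes are hypothetical, and the unit conversion factor is the commonly quoted value, so treat the numbers as illustrative.

```python
# Point-dipole estimate of the excitonic coupling between two transition
# dipoles (the crude approximation the paper benchmarks against methods
# such as TrEsp). mu in Debye, positions in Angstrom, result in cm^-1.
import numpy as np

FACTOR = 5034.0  # cm^-1 * Angstrom^3 / Debye^2, commonly quoted prefactor

def point_dipole_coupling(mu1, mu2, r1, r2):
    r = r2 - r1
    dist = np.linalg.norm(r)
    rhat = r / dist
    m1, m2 = np.linalg.norm(mu1), np.linalg.norm(mu2)
    u1, u2 = mu1 / m1, mu2 / m2
    kappa = u1 @ u2 - 3.0 * (u1 @ rhat) * (u2 @ rhat)   # orientation factor
    return FACTOR * kappa * m1 * m2 / dist ** 3

# Two BChl-like ~6 Debye dipoles, 10 Angstrom apart, head-to-tail
mu1 = np.array([6.0, 0.0, 0.0])
mu2 = np.array([6.0, 0.0, 0.0])
r1 = np.array([0.0, 0.0, 0.0])
r2 = np.array([10.0, 0.0, 0.0])
print(point_dipole_coupling(mu1, mu2, r1, r2), "cm^-1")   # about -360 cm^-1
```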

  7. Geant4 Computing Performance Benchmarking and Monitoring

    DOE PAGESBeta

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  8. Benchmark notch test for life prediction

    NASA Technical Reports Server (NTRS)

    Domas, P. A.; Sharpe, W. N.; Ward, M.; Yau, J. F.

    1982-01-01

    The laser Interferometric Strain Displacement Gage (ISDG) was used to measure local strains in notched Inconel 718 test bars subjected to six different load histories at 649 C (1200 F), including effects of tensile and compressive hold periods. The measurements were compared to simplified Neuber notch analysis predictions of notch root stress and strain. The actual strains incurred at the root of a discontinuity in cyclically loaded test samples subjected to inelastic deformation at high temperature, where creep deformations readily occur, were determined. The steady-state cyclic stress-strain response at the root of the discontinuity was analyzed. Flat, double-notched, uniaxially loaded fatigue specimens manufactured from the nickel-base superalloy Inconel 718 were used. The ISDG was used to obtain cycle-by-cycle recordings of notch root strain during continuous and hold-time cycling at 649 C. Comparisons to Neuber and finite element model analyses were made. The results obtained provide a benchmark data set for high-technology design applications in which notch fatigue life is the predominant limitation on component service life.

  9. Benchmarking of Planning Models Using Recorded Dynamics

    SciTech Connect

    Huang, Zhenyu; Yang, Bo; Kosterev, Dmitry

    2009-03-15

    Power system planning extensively uses model simulation to understand the dynamic behaviors and determine the operating limits of a power system. Model quality is key to the safety and reliability of electricity delivery. Planning model benchmarking, or model validation, has been one of the central topics in power engineering studies for years. As model validation aims at obtaining reasonable models to represent the dynamic behavior of power system components, it has been essential to validate models against actual measurements. The development of phasor technology provides such measurements and represents a new opportunity for model validation, as phasor measurements can capture power system dynamics with high-speed, time-synchronized data. Previously, methods for rigorous comparison of model simulation and recorded dynamics have been developed and applied to quantify the model quality of power plants in the Western Electricity Coordinating Council (WECC). These methods can locate model components which need improvement. Recent work continues this effort and focuses on how model parameters may be calibrated to match recorded dynamics after the problematic model components are identified. A calibration method based on the Extended Kalman Filter technique is being developed. This paper provides an overview of prior work on model validation and presents new developments on the calibration method and initial results of model parameter calibration.
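
    The calibration idea, estimating model parameters so that simulated output tracks recorded dynamics, can be illustrated with an augmented-state Extended Kalman Filter on a toy first-order system. Everything below (dynamics, noise levels, data) is invented for illustration; it is not the WECC plant model or the authors' implementation.

```python
# Augmented-state EKF parameter calibration on a toy system x' = a*x + u:
# the unknown parameter a is appended to the state and estimated from
# synthetic "recorded" measurements (all values hypothetical).
import numpy as np

rng = np.random.default_rng(0)
dt, a_true = 0.1, -0.8
u = np.ones(200)                          # known input signal

# synthesize recorded dynamics: x_{k+1} = x_k + dt*(a*x_k + u_k) + noise
x, z = 0.0, []
for k in range(200):
    x = x + dt * (a_true * x + u[k])
    z.append(x + 0.02 * rng.standard_normal())

s = np.array([0.0, -0.3])                 # augmented state [x, a], a guessed
P = np.diag([1.0, 1.0])
Q = np.diag([1e-4, 1e-5])                 # process noise (tuning choice)
R = 0.02 ** 2                             # measurement noise variance
H = np.array([[1.0, 0.0]])                # only x is measured

for k in range(200):
    # predict: f(s) = [x + dt*(a*x + u), a]; F is its Jacobian at current s
    F = np.array([[1.0 + dt * s[1], dt * s[0]],
                  [0.0, 1.0]])
    s = np.array([s[0] + dt * (s[1] * s[0] + u[k]), s[1]])
    P = F @ P @ F.T + Q
    # update against the recorded measurement z[k]
    y = z[k] - s[0]
    S = (H @ P @ H.T)[0, 0] + R
    K = (P @ H.T).ravel() / S
    s = s + K * y
    P = P - np.outer(K, H @ P)

print(f"calibrated a = {s[1]:.3f} (true value {a_true})")
```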

  10. International land Model Benchmarking (ILAMB) Package v002.00

    DOE Data Explorer

    Collier, Nathaniel [Oak Ridge National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory; Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory

    2016-05-09

    As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  11. International land Model Benchmarking (ILAMB) Package v001.00

    DOE Data Explorer

    Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory

    2016-05-02

    As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  12. Simulating diffusion processes in discontinuous media: Benchmark tests

    NASA Astrophysics Data System (ADS)

    Lejay, Antoine; Pichot, Géraldine

    2016-06-01

    We present several benchmark tests for Monte Carlo methods simulating diffusion in one-dimensional discontinuous media. These benchmark tests aim at studying the potential bias of the schemes and their impact on the estimation of micro- or macroscopic quantities (repartition of masses, fluxes, mean residence time, …). These benchmark tests are backed by a statistical analysis to filter out the bias from the unavoidable Monte Carlo error. We apply them on four different algorithms. The results of the numerical tests give a valuable insight into the fine behavior of these schemes, as well as rules to choose between them.
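
    A minimal example of the kind of test these benchmarks formalize: simulate many particles with a naive Euler scheme that ignores the interface, then compare an estimated macroscopic quantity (here the repartition of mass) with the exact value for the divergence-form model. The coefficients and step sizes below are arbitrary choices, not the paper's test cases.

```python
# Benchmark-style test of a naive Euler scheme for 1-D diffusion across a
# discontinuity at x = 0 (coefficients and step sizes are arbitrary).
import numpy as np

rng = np.random.default_rng(1)
D_left, D_right = 0.1, 1.0

def D(x):
    return np.where(x < 0.0, D_left, D_right)

n_paths, n_steps, dt = 50_000, 500, 2e-4
x = np.zeros(n_paths)                 # all particles start at the interface
for _ in range(n_steps):
    # naive Euler-Maruyama: no special treatment of the discontinuity
    x += np.sqrt(2.0 * D(x) * dt) * rng.standard_normal(n_paths)

# For the divergence-form (Fickian) model, the exact repartition of mass is
# P(X > 0) = sqrt(D_right) / (sqrt(D_left) + sqrt(D_right)) ~ 0.76; the
# naive scheme lands far from this value, which is precisely the kind of
# bias such benchmark tests are designed to expose and quantify.
print("naive scheme estimate of P(X > 0):", (x > 0).mean())
print("divergence-form theory           :",
      np.sqrt(D_right) / (np.sqrt(D_left) + np.sqrt(D_right)))
```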

  13. Structural Benchmark Testing for Stirling Converter Heater Heads

    NASA Astrophysics Data System (ADS)

    Krause, David L.; Kalluri, Sreeramesh; Bowman, Randy R.

    2007-01-01

    The National Aeronautics and Space Administration (NASA) has identified high-efficiency Stirling technology for potential use on long-duration Space Science missions such as Mars rovers, deep space missions, and lunar applications. For the long lifetimes required, a structurally significant design limit for the Stirling convertor heater head is creep deformation, induced even at relatively low stress levels by high material temperatures. Conventional investigations of creep behavior adequately rely on experimental results from uniaxial creep specimens, and much creep data is available for the proposed Inconel-718 (IN-718) and MarM-247 nickel-based superalloy materials of construction. However, very little experimental creep information is available that directly applies to the atypical thin walls, the specific microstructures, and the low stress levels. In addition, the geometry and loading conditions apply multiaxial stress states on the heater head components, far from the conditions of uniaxial testing. For these reasons, experimental benchmark testing is underway to aid in accurately assessing the durability of Stirling heater heads. The investigation supplements uniaxial creep testing with pneumatic testing of heater head test articles at elevated temperatures and with stress levels ranging from one to seven times design stresses. This paper presents experimental methods, results, post-test microstructural analyses, and conclusions for both accelerated and non-accelerated tests. The Stirling projects use the results to calibrate deterministic and probabilistic analytical creep models of the heater heads to predict their lifetimes.

  14. Community-based benchmarking of the CMIP DECK experiments

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2015-12-01

    A diversity of community-based efforts are independently developing "diagnostic packages" with little or no coordination between them. A short list of examples includes NCAR's Climate Variability Diagnostics Package (CVDP), ORNL's International Land Model Benchmarking (ILAMB), LBNL's Toolkit for Extreme Climate Analysis (TECA), PCMDI's Metrics Package (PMP), the EU EMBRACE ESMValTool, the WGNE MJO diagnostics package, and CFMIP diagnostics. The full value of these efforts cannot be realized without some coordination. As a first step, a WCRP effort has initiated a catalog to document candidate packages that could potentially be applied in a "repeat-use" fashion to all simulations contributed to the CMIP DECK (Diagnostic, Evaluation and Characterization of Klima) experiments. Some coordination of community-based diagnostics has the additional potential to improve how CMIP modeling groups analyze their simulations during model development. The fact that most modeling groups now maintain a "CMIP compliant" data stream means that, in principle, they could readily adopt a set of well-organized diagnostic capabilities specifically designed to operate on CMIP DECK experiments without much effort. Ultimately, a detailed listing of, and access to, analysis codes that are demonstrated to work "out of the box" with CMIP data could enable model developers (and others) to select the codes they wish to implement in-house, potentially enabling more systematic evaluation during the model development process.

  15. Benchmarking wastewater treatment plants under an eco-efficiency perspective.

    PubMed

    Lorenzo-Toja, Yago; Vázquez-Rowe, Ian; Amores, María José; Termes-Rifé, Montserrat; Marín-Navarro, Desirée; Moreira, María Teresa; Feijoo, Gumersindo

    2016-10-01

    The new ISO 14045 framework is expected to slowly start shifting the definition of eco-efficiency toward a life-cycle perspective, using Life Cycle Assessment (LCA) as the environmental impact assessment method together with a system value assessment method for the economic analysis. In the present study, a set of 22 wastewater treatment plants (WWTPs) in Spain were analyzed on the basis of eco-efficiency criteria, using LCA and Life Cycle Costing (LCC) as the system value assessment method. The study is intended to be useful to decision-makers in the wastewater treatment sector, since the combined method provides an alternative scheme for analyzing the relationship between environmental impacts and costs. Two midpoint impact categories, global warming and eutrophication potential, as well as an endpoint single-score indicator were used for the environmental assessment, while LCC was used for the value assessment. Results demonstrated that substantial differences can be observed between WWTPs depending on a wide range of factors such as plant configuration, plant size or even legal discharge limits. Based on these results, a benchmarking of wastewater treatment facilities was performed by creating a specific classification and certification scheme. The proposed eco-label for rating WWTPs is based on the integration of the three environmental indicators and an economic indicator calculated within the study under the new eco-efficiency framework. PMID:27235897
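
    Under ISO 14045, eco-efficiency is a ratio of product system value to environmental impact, so a plant-level comparison reduces to computing and ranking such ratios. A hedged sketch with invented plant data (not the study's 22 WWTPs):

```python
# ISO 14045-style eco-efficiency ratios: system value per unit of
# environmental impact (all plant data below are hypothetical).
plants = {
    # plant: (treated volume m3/yr, life-cycle cost EUR/yr, GWP kg CO2-eq/yr)
    "WWTP-A": (1_200_000, 480_000, 690_000),
    "WWTP-B": (3_500_000, 1_100_000, 1_500_000),
    "WWTP-C": (800_000, 410_000, 610_000),
}

def eco_efficiency(volume, cost, gwp):
    """Value per impact: here, m3 treated per EUR and per kg CO2-eq."""
    return volume / cost, volume / gwp

for name, (vol, cost, gwp) in plants.items():
    per_eur, per_kg = eco_efficiency(vol, cost, gwp)
    print(f"{name}: {per_eur:5.2f} m3/EUR, {per_kg:5.2f} m3/kgCO2eq")
```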

  16. A comparison and benchmark of two electron cloud packages

    SciTech Connect

    Lebrun, Paul L.G.; Amundson, James F; Spentzouris, Panagiotis G; Veitzer, Seth A

    2012-01-01

    We present results from precision simulations of the electron cloud (EC) problem in the Fermilab Main Injector using two distinct codes: (i) POSINST, an F90 2D+ code, and (ii) VORPAL, a 2D/3D electrostatic and electromagnetic code used for self-consistent simulations of plasma and particle beam problems. A specific benchmark has been designed to demonstrate the strengths of both codes that are relevant to the EC problem in the Main Injector. As the differences between results obtained from these two codes were larger than the anticipated model uncertainties, a set of changes to the POSINST code was implemented. These changes are documented in this note. This new version of POSINST now gives EC densities that agree with those predicted by VORPAL, within ~20%, in the beam region. The remaining differences are most likely due to differences in the electrostatic Poisson solvers. From a software engineering perspective, these two codes are very different. We comment on the pros and cons of both approaches. The design(s) for a new EC package are briefly discussed.

  17. Structural Benchmark Testing for Stirling Convertor Heater Heads

    NASA Technical Reports Server (NTRS)

    Krause, David L.; Kalluri, Sreeramesh; Bowman, Randy R.

    2007-01-01

    The National Aeronautics and Space Administration (NASA) has identified high-efficiency Stirling technology for potential use on long-duration Space Science missions such as Mars rovers, deep space missions, and lunar applications. For the long lifetimes required, a structurally significant design limit for the Stirling convertor heater head is creep deformation, induced even at relatively low stress levels by high material temperatures. Conventional investigations of creep behavior adequately rely on experimental results from uniaxial creep specimens, and much creep data is available for the proposed Inconel-718 (IN-718) and MarM-247 nickel-based superalloy materials of construction. However, very little experimental creep information is available that directly applies to the atypical thin walls, the specific microstructures, and the low stress levels. In addition, the geometry and loading conditions apply multiaxial stress states on the heater head components, far from the conditions of uniaxial testing. For these reasons, experimental benchmark testing is underway to aid in accurately assessing the durability of Stirling heater heads. The investigation supplements uniaxial creep testing with pneumatic testing of heater head test articles at elevated temperatures and with stress levels ranging from one to seven times design stresses. This paper presents experimental methods, results, post-test microstructural analyses, and conclusions for both accelerated and non-accelerated tests. The Stirling projects use the results to calibrate deterministic and probabilistic analytical creep models of the heater heads to predict their lifetimes.

  18. Benchmarking Evaluation Results for Prototype Extravehicular Activity Gloves

    NASA Technical Reports Server (NTRS)

    Aitchison, Lindsay; McFarland, Shane

    2012-01-01

    The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, accepting the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit, leading to standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of four prototype gloves developed by Flagsuit LLC, Final Frontier Designs, ILC Dover, and David Clark Company as compared to the Phase VI. All of the companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and to focus their efforts on improving mobility in the metacarpal phalangeal and carpometacarpal joints. Four test

  19. Nomenclatural benchmarking: the roles of digital typification and telemicroscopy

    PubMed Central

    Wheeler, Quentin; Bourgoin, Thierry; Coddington, Jonathan; Gostony, Timothy; Hamilton, Andrew; Larimer, Roy; Polaszek, Andrew; Schauff, Michael; Solis, M. Alma

    2012-01-01

    Nomenclatural benchmarking is the periodic realignment of species names with species theories and is necessary for the accurate and uniform use of Linnaean binominals in the face of changing species limits. Gaining access to types, often for little more than a cursory examination by an expert, is a major bottleneck in the advance and availability of biodiversity informatics. For the nearly two million described species it has been estimated that five to six million name-bearing type specimens exist, including those for synonymized binominals. Recognizing that examination of types in person will remain necessary in special cases, we propose a four-part strategy for opening access to types that relies heavily on digitization and that would eliminate much of the bottleneck: (1) modify codes of nomenclature to create registries of nomenclatural acts, such as the proposed ZooBank, that include a requirement for digital representations (e-types) for all newly described species to avoid adding to backlog; (2) an “r” strategy that would engineer and deploy a network of automated instruments capable of rapidly creating 3-D images of type specimens not requiring participation of taxon experts; (3) a “K” strategy using remotely operable microscopes to engage taxon experts in targeting and annotating informative characters of types to supplement and extend information content of rapidly acquired e-types, a process that can be done on an as-needed basis as in the normal course of revisionary taxonomy; and (4) creation of a global e-type archive associated with the commissions on nomenclature and species registries providing one-stop-shopping for e-types. We describe a first generation implementation of the “K” strategy that adapts current technology to create a network of Remotely Operable Benchmarkers Of Types (ROBOT) specifically engineered to handle the largest backlog of types, pinned insect specimens. The three initial instruments will be in the

  20. Benchmark Solutions for Computational Aeroacoustics (CAA) Code Validation

    NASA Technical Reports Server (NTRS)

    Scott, James R.

    2004-01-01

    NASA has conducted a series of Computational Aeroacoustics (CAA) Workshops on Benchmark Problems to develop a set of realistic CAA problems that can be used for code validation. In the Third (1999) and Fourth (2003) Workshops, the single airfoil gust response problem, with real geometry effects, was included as one of the benchmark problems. Respondents were asked to calculate the airfoil RMS pressure and far-field acoustic intensity for different airfoil geometries and a wide range of gust frequencies. This paper presents the validated solutions that have been obtained for the benchmark problem and, in addition, compares them with classical flat plate results. It is seen that airfoil geometry has a strong effect on the airfoil unsteady pressure, and a significant effect on the far-field acoustic intensity. Those parts of the benchmark problem that have not yet been adequately solved are identified and presented as a challenge to the CAA research community.

  1. The State of Energy and Performance Benchmarking for Enterprise Servers

    NASA Astrophysics Data System (ADS)

    Fanara, Andrew; Haines, Evan; Howard, Arthur

    To address the server industry’s marketing focus on performance, benchmarking organizations have played a pivotal role in developing techniques to determine the maximum achievable performance level of a system. Generally missing has been an assessment of energy use to achieve that performance. The connection between performance and energy consumption is becoming necessary information for designers and operators as they grapple with power constraints in the data center. While industry and policy makers continue to strategize about a universal metric to holistically measure IT equipment efficiency, existing server benchmarks for various workloads could provide an interim proxy to assess the relative energy efficiency of general servers. This paper discusses ideal characteristics a future energy-performance benchmark might contain, suggests ways in which current benchmarks might be adapted to provide a transitional step to this end, and notes the need for multiple workloads to provide a holistic proxy for a universal metric.
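
    The interim proxy suggested above amounts to dividing an existing benchmark's performance score by the power drawn while achieving it. A hedged sketch with invented scores and wattages (no particular benchmark or server implied):

```python
# Performance-per-watt as an interim energy-efficiency proxy; all scores
# and power draws below are hypothetical.
servers = {
    # server: (benchmark score in ops/s, average active power in watts)
    "server-A": (120_000, 310.0),
    "server-B": (150_000, 450.0),
    "server-C": (90_000, 210.0),
}

ranked = sorted(((score / watts, name)
                 for name, (score, watts) in servers.items()), reverse=True)
for ops_per_watt, name in ranked:
    print(f"{name}: {ops_per_watt:,.0f} ops/s per watt")
```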

  2. A proposed benchmark for simulation in radiographic testing

    SciTech Connect

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Guerin, P.

    2014-02-18

    The purpose of this benchmark study is to compare simulation results predicted by various models of radiographic testing, in particular those that are capable of separately predicting primary and scatter radiation for specimens of arbitrary geometry.

  3. Directivity benchmarks using an automated three-dimensional scanning system

    NASA Astrophysics Data System (ADS)

    Burns, Thomas

    2005-09-01

    In clinical studies relating hearing aid performance to a patient's perception in noise, it is notable that the hearing industry has used the Directivity Index (DI) exclusively as an objective performance benchmark for the hearing aid. Considering, for example, that a dipole directional pattern has the same DI as a cardioid pattern, it is reasonable to require that additional directional performance benchmarks be reported in these clinical studies, along with the room acoustics parameters related to noise/source positions and the relationship between direct and reverberant fields. The purpose of this study is to describe an automated 3-D scanning system for benchmarking directional performance, and to review the traditional repertoire of directional benchmarks used in the broader engineering acoustics community; namely, the null angle, maximum response angle, random energy efficiency, front-to-total random energy ratio, distance factor, and omni-to-directional array gain. Lastly, visualization of 3-D polar responses will be explored.
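
    For reference, the Directivity Index itself is a short spherical integral over the measured 3-D polar response. The sketch below evaluates it for an analytic cardioid pattern (illustrative, not scanner data); a cardioid should come out near 4.8 dB, the same DI a dipole gives.

```python
# Directivity Index from a sampled 3-D polar response:
# DI = 10*log10(on-axis intensity / spherical average of intensity).
import numpy as np

theta = np.linspace(0.0, np.pi, 181)          # polar angle grid
phi = np.linspace(0.0, 2.0 * np.pi, 361)      # azimuth grid
TH, PH = np.meshgrid(theta, phi, indexing="ij")

# cardioid pattern with its maximum on the theta = 0 axis
p = 0.5 * (1.0 + np.cos(TH))

# spherical average of |p|^2 using sin(theta) quadrature weights
num = np.trapz(np.trapz(np.abs(p) ** 2 * np.sin(TH), phi, axis=1), theta)
mean_sq = num / (4.0 * np.pi)

di = 10.0 * np.log10(np.abs(p[0, 0]) ** 2 / mean_sq)
print(f"DI = {di:.2f} dB")                     # ~4.77 dB for a cardioid
```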

  4. Issues in benchmarking human reliability analysis methods : a literature review.

    SciTech Connect

    Lois, Erasmia; Forester, John Alan; Tran, Tuan Q.; Hendrickson, Stacey M. Langfitt; Boring, Ronald L.

    2008-04-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  5. A Benchmark Profile of Economics Departments in 15 Private Universities.

    ERIC Educational Resources Information Center

    Dearden, James; Taylor, Larry; Thornton, Robert

    2001-01-01

    Describes a 1999 benchmarking survey of 15 economics departments in private universities. Reports information gleaned from the survey concerning departmental resources, teaching load, class sizes, and the weight given to research, teaching, and service in salary determination and promotion. (RLH)

  6. Draft Mercury Aquatic Wildlife Benchmarks for Great Salt Lake Assessment

    EPA Science Inventory

    This document describes EPA Region 8's rationale for selecting aquatic wildlife dietary and tissue mercury benchmarks for use in interpreting available data collected from the Great Salt Lake and surrounding wetlands.

  7. Seismo-acoustic ray model benchmarking against experimental tank data.

    PubMed

    Camargo Rodríguez, Orlando; Collis, Jon M; Simpson, Harry J; Ey, Emanuel; Schneiderwind, Joseph; Felisberto, Paulo

    2012-08-01

    Acoustic predictions of the recently developed traceo ray model, which accounts for bottom shear properties, are benchmarked against tank data from the EPEE-1 and EPEE-2 (Elastic Parabolic Equation Experiment) experiments. Both experiments are representative of signal propagation in a Pekeris-like shallow-water waveguide over a non-flat isotropic elastic bottom, where significant interaction of the signal with the bottom can be expected. The benchmarks show, in particular, that the ray model can be as accurate as a parabolic approximation model benchmarked under similar conditions. The benchmarking results are important, on one hand, as a preliminary experimental validation of the model and, on the other hand, as a demonstration of the reliability of the ray approach for seismo-acoustic applications. PMID:22894193

  8. Developing scheduling benchmark tests for the Space Network

    NASA Technical Reports Server (NTRS)

    Moe, Karen L.; Happell, Nadine; Brady, Sean

    1993-01-01

    A set of benchmark tests was developed to analyze and measure Space Network scheduling characteristics and to assess the potential benefits of a proposed flexible scheduling concept. This paper discusses the role of the benchmark tests in evaluating alternative flexible scheduling approaches and defines a set of performance measurements. The paper describes the rationale for the benchmark tests as well as the benchmark components, which include models of the Tracking and Data Relay Satellite (TDRS), mission spacecraft, their orbital data, and flexible requests for communication services. Parameters varied in the tests address the degree of request flexibility, the request resource load, and the number of events to schedule. Test results are evaluated based on time to process and schedule quality. Preliminary results and lessons learned are addressed.

  9. Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review

    SciTech Connect

    Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester; Tuan Q. Tran; Erasmia Lois

    2010-06-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  10. Using benchmarks for radiation testing of microprocessors and FPGAs

    DOE PAGESBeta

    Quinn, Heather; Robinson, William H.; Rech, Paolo; Aguirre, Miguel; Barnard, Arno; Desogus, Marco; Entrena, Luis; Garcia-Valderas, Mario; Guertin, Steven M.; Kaeli, David; et al

    2015-12-17

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. We describe the development process and report neutron test data for the hardware and software benchmarks.

  11. GPUs benchmarking in subpixel image registration algorithm

    NASA Astrophysics Data System (ADS)

    Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier

    2015-05-01

    Image registration techniques are used across different scientific fields, such as medical imaging and optical metrology. The most direct way to calculate the shift between two images is cross correlation, taking the location of the highest value in the correlation image. The shift resolution is then one whole pixel, which may not be enough for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique, but the memory needed by the system is significantly higher. To avoid this memory consumption we are implementing a subpixel shifting method based on the FFT. Working with the original images, a subpixel shift can be applied by multiplying the discrete Fourier transform of an image by a linear phase with different slopes. This method is time consuming because each candidate shift requires new calculations; we consider it a 'brute force' method. The algorithm, however, is highly parallelizable and very suitable for high performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs provide hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images by first obtaining a pixel-resolution estimate from an FFT-based correlation and then refining it to subpixel resolution using the technique described above, decreasing the shift step in every loop to achieve high resolution in few steps. We will present a benchmark of this algorithm executed on three different computers, reporting results for different CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of using GPUs.
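
    The core of the method, shifting via a linear phase in Fourier space and scoring candidate shifts with a coarse-to-fine step, fits in a short NumPy sketch. This is CPU-only and the step schedule and test data are illustrative, not the benchmarked GPU implementation:

```python
# FFT-based "brute force" subpixel registration: integer estimate from the
# cross-correlation peak, then candidate shifts applied as Fourier phase
# ramps with a step that halves at every refinement level.
import numpy as np

def shift_image(img, dy, dx):
    """Shift an image by (dy, dx) pixels via a linear phase in Fourier space."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    phase = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.fft.ifft2(np.fft.fft2(img) * phase).real

def register(ref, moving, levels=4, step0=0.5):
    # integer-pixel estimate from the circular cross-correlation peak
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))).real
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    dy = float(py - corr.shape[0] * (py > corr.shape[0] // 2))
    dx = float(px - corr.shape[1] * (px > corr.shape[1] // 2))
    step = step0
    for _ in range(levels):
        candidates = [(dy + iy * step, dx + ix * step)
                      for iy in (-1, 0, 1) for ix in (-1, 0, 1)]
        # "brute force": every candidate shift is applied and scored
        score = lambda s: np.sum(ref * shift_image(moving, *s))
        dy, dx = max(candidates, key=score)
        step /= 2.0                     # halve the step: finer resolution
    return dy, dx

# usage: recover a known synthetic shift
rng = np.random.default_rng(0)
ref = rng.standard_normal((64, 64))
moving = shift_image(ref, -2.3, 1.7)   # moving is ref shifted by (-2.3, 1.7)
print(register(ref, moving))           # expect approximately (2.3, -1.7)
```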

  12. A Benchmark for Cloud Tracking Wind Measurements

    NASA Astrophysics Data System (ADS)

    Sayanagi, K. M.; Mitchell, J.; Ingersoll, A. P.; Ewald, S. P.; Marcus, P. S.; de Pater, I.; Wong, M. H.; Choi, D. S.; Sussman, M.; Ogohara, K.; Imamura, T.; Kouyama, T.; Takagi, M.; Satoh, N.; Del Genio, A. D.; Barbara, J.; Sanchez-Lavega, A.; Hueso, R.; García-Melendo, E.; Simon-Miller, A. A.

    2010-12-01

    Cloud tracking has been the primary method of measuring wind speeds in planetary atmospheres through Earth- and space-based remote sensing. The latest automated feature-tracking software can harvest thousands of wind vectors from a sequence of high-resolution images acquired with an appropriate temporal separation. However, unlike satellite-based cloud-tracking measurements of Earth, these planetary measurements cannot easily be validated against in-situ data, which makes interpretation difficult when different cloud-tracking schemes do not agree on their results. To address the issue of data validation, we run multiple automated cloud-tracking packages, independently developed at multiple institutions, on synthetic wind data generated using a General Circulation Model. Our simulations calculate the advection of tracer distributions to represent cloud motions, as done by Sayanagi and Showman (2007, Icarus 187, p520-539). The motions of tracers are measured using cloud-tracking software to derive wind vector fields, which are compared against the model "truth." We test the performance of cloud-tracking software for different wind scenarios. Our first test wind field contains a simple zonal jet. The second test scenario is a large vortex like Jupiter’s Great Red Spot. The third test case has waves propagating alongside a zonal jet. We compare the results returned by the different cloud-tracking schemes and discuss which approaches work better at measuring winds. In addition to verifying the wind vector field measurements, we also address the accuracy and validity of eddy momentum flux measurements obtained by tracking clouds. The difficulties of such measurements are discussed by Salyk et al. (2006, Icarus 185, p430-442), and we re-examine the issue using our synthetic wind data. From our experiments, we aim to establish a standard benchmark for cloud tracking measurements for planetary mission applications.
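
    The last step of any such pipeline, turning a tracked displacement into a wind vector, is simple spherical geometry. A hedged sketch with hypothetical values for a Jupiter-like planet (spherical approximation; real pipelines also account for oblateness and map projection):

```python
# Converting a tracked feature displacement into a wind vector
# (values hypothetical, for a Jupiter-like planet).
import math

R_PLANET = 71_492e3         # equatorial radius in meters (Jupiter-like)
dt = 3600.0                 # time separation between images, seconds

lat = math.radians(-22.0)   # feature latitude
dlon = math.radians(0.5)    # tracked eastward displacement in longitude
dlat = math.radians(0.05)   # tracked northward displacement in latitude

u = R_PLANET * math.cos(lat) * dlon / dt   # zonal wind, m/s
v = R_PLANET * dlat / dt                   # meridional wind, m/s
print(f"u = {u:.1f} m/s, v = {v:.1f} m/s")
```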

  13. Benchmarking the QUAD4/TRIA3 element

    NASA Astrophysics Data System (ADS)

    Pitrof, Stephen M.; Venkayya, Vipperla B.

    1993-09-01

    The QUAD4 and TRIA3 elements are the primary plate/shell elements in NASTRAN. These elements enable the user to analyze thin plate/shell structures for membrane, bending, and shear phenomena. They are also among the newest elements in the NASTRAN library. These elements are extremely versatile and constitute a substantially enhanced analysis capability in NASTRAN. However, with the versatility comes the burden of understanding a myriad of modeling implications and their effects on accuracy and analysis quality. The validity of many aspects of these elements was established through a series of benchmark problem results and comparisons with those available in the literature and obtained from other programs such as MSC/NASTRAN and CSAR/NASTRAN. Nevertheless, such a comparison is never complete because of the new and creative use of these elements in complex modeling situations. One of the important features of the QUAD4 and TRIA3 elements is the offset capability, which allows the midsurface of the plate to be noncoincident with the surface defined by the grid points. None of the previous elements, with the exception of the bar (beam), has this capability. The offset capability played a crucial role in the design of the QUAD4 and TRIA3 elements. It allows the modeling of layered composites, laminated plates, and sandwich plates with metal and composite face sheets. Even though the basic implementation of the offset capability was found to be sound in previous applications, there is some uncertainty in relatively simple applications. The main purpose of this paper is to test the integrity of the offset capability and provide guidelines for its effective use. For simplicity, references in this paper to the QUAD4 element also include the TRIA3 element.

  14. ISPRS Benchmark for Multi-Platform Photogrammetry

    NASA Astrophysics Data System (ADS)

    Nex, F.; Gerke, M.; Remondino, F.; Przybilla, H.-J.; Bäumker, M.; Zurhorst, A.

    2015-03-01

    Airborne high-resolution oblique imagery systems and RPAS/UAVs are very promising technologies that will keep influencing the development of geomatics in the coming years, closing the gap between terrestrial and classical aerial acquisitions. These two platforms are also a promising solution for National Mapping and Cartographic Agencies (NMCA), as they allow the derivation of complementary mapping information. Although interest in the registration and integration of aerial and terrestrial data is constantly increasing, only limited work has actually been performed on this topic. Several investigations still need to be undertaken concerning the ability of algorithms to perform automatic co-registration, accurate point cloud generation, and feature extraction from multi-platform image data. One of the biggest obstacles is the unavailability of reliable and free datasets on which to test and compare new algorithms and procedures. The Scientific Initiative "ISPRS benchmark for multi-platform photogrammetry", run in collaboration with EuroSDR, aims at collecting and sharing state-of-the-art multi-sensor data (oblique airborne, UAV-based and terrestrial images) over an urban area. These datasets are used to assess different algorithms and methodologies for image orientation and dense matching. As ground truth, Terrestrial Laser Scanning (TLS), Aerial Laser Scanning (ALS), as well as topographic networks and GNSS points were acquired to compare 3D coordinates on check points (CPs) and to evaluate cross sections and residuals on generated point cloud surfaces. In this paper, the acquired data, the pre-processing steps, and the evaluation procedures, as well as some preliminary results achieved with commercial software, are presented.

  15. Benchmarking the QUAD4/TRIA3 element

    NASA Technical Reports Server (NTRS)

    Pitrof, Stephen M.; Venkayya, Vipperla B.

    1993-01-01

    The QUAD4 and TRIA3 elements are the primary plate/shell elements in NASTRAN. These elements enable the user to analyze thin plate/shell structures for membrane, bending, and shear phenomena. They are also among the newest elements in the NASTRAN library. These elements are extremely versatile and constitute a substantially enhanced analysis capability in NASTRAN. However, with the versatility comes the burden of understanding a myriad of modeling implications and their effects on accuracy and analysis quality. The validity of many aspects of these elements was established through a series of benchmark problem results and comparisons with those available in the literature and obtained from other programs such as MSC/NASTRAN and CSAR/NASTRAN. Nevertheless, such a comparison is never complete because of the new and creative use of these elements in complex modeling situations. One of the important features of the QUAD4 and TRIA3 elements is the offset capability, which allows the midsurface of the plate to be noncoincident with the surface defined by the grid points. None of the previous elements, with the exception of the bar (beam), has this capability. The offset capability played a crucial role in the design of the QUAD4 and TRIA3 elements. It allows the modeling of layered composites, laminated plates, and sandwich plates with metal and composite face sheets. Even though the basic implementation of the offset capability was found to be sound in previous applications, there is some uncertainty in relatively simple applications. The main purpose of this paper is to test the integrity of the offset capability and provide guidelines for its effective use. For simplicity, references in this paper to the QUAD4 element also include the TRIA3 element.

  16. Benchmarking transport solvers for fracture flow problems

    NASA Astrophysics Data System (ADS)

    Olkiewicz, Piotr; Dabrowski, Marcin

    2015-04-01

    Fracture flow may dominate in rocks with low porosity, and it can accompany both industrial and natural processes. Typical examples of such processes are natural flows in crystalline rocks and industrial flows in geothermal systems or hydraulic fracturing. Fracture flow provides an important mechanism for transporting mass and energy. For example, geothermal energy is primarily transported by the flow of heated water or steam rather than by thermal diffusion. The geometry of the fracture network and the distribution of the mean apertures of individual fractures are the key parameters with regard to the fracture network transmissivity. Transport in fractures can occur through the combination of advection and diffusion processes, as in the case of dissolved chemical components. The local distribution of the fracture aperture may play an important role for both flow and transport processes. In this work, we benchmark various numerical solvers for flow and transport processes in a single fracture in 2D and 3D. Fracture aperture distributions are generated by a number of synthetic methods. We examine a single-phase flow of an incompressible viscous Newtonian fluid in the low Reynolds number limit. Periodic boundary conditions are used and a pressure difference is imposed in the background. The velocity field is primarily found using the Stokes equations. We systematically compare the obtained velocity field to the results obtained by solving the Reynolds equation. This allows us to examine the impact of the aperture distribution on the permeability of the medium and the local velocity distribution for two different mathematical descriptions of the fracture flow. Furthermore, we analyse the impact of the aperture distribution on front characteristics such as the standard deviation and the fractal dimension for systems in 2D and 3D.
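
    The Reynolds (lubrication) description reduces the flow in each point of the fracture to the local cubic law, with transmissivity T = a^3/(12 mu). As a minimal illustration of how the aperture distribution constrains the effective fracture transmissivity, the hedged sketch below computes the classical arithmetic/harmonic-mean bounds; the names and the synthetic lognormal field are our own choices, not the benchmark's specification.

    ```python
    import numpy as np

    def cubic_law_transmissivity(aperture, mu=1.0e-3):
        """Local cubic law: T = a^3 / (12 mu) for aperture a (m), viscosity mu (Pa s)."""
        return np.asarray(aperture, dtype=float) ** 3 / (12.0 * mu)

    def effective_bounds(aperture, mu=1.0e-3):
        """Classical bounds on the effective transmissivity of a heterogeneous
        aperture field: the harmonic mean (flow across layering) is a lower
        bound, the arithmetic mean (flow along layering) an upper bound."""
        T = cubic_law_transmissivity(aperture, mu)
        return 1.0 / np.mean(1.0 / T), np.mean(T)

    # Example: a lognormal synthetic aperture field of the kind used as input
    rng = np.random.default_rng(seed=0)
    a = rng.lognormal(mean=np.log(1e-4), sigma=0.5, size=(128, 128))
    print("T_eff between %.3e and %.3e" % effective_bounds(a))
    ```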

  17. Nanomagnet Logic: Architectures, design, and benchmarking

    NASA Astrophysics Data System (ADS)

    Kurtz, Steven J.

    Nanomagnet Logic (NML) is an emerging technology being studied as a possible replacement for, or supplement to, Complementary Metal-Oxide-Semiconductor (CMOS) Field-Effect Transistors (FETs) by the year 2020. NML devices offer numerous potential advantages including: low-energy operation, steady-state non-volatility, radiation hardness, and a clear path to fabrication and integration with CMOS. However, maintaining both low-energy operation and non-volatility while scaling from the device to the architectural level is non-trivial, as (i) nearest-neighbor interactions within NML circuits complicate the modeling of ensemble nanomagnet behavior and (ii) the energy-intensive clock structures required for re-evaluation, together with NML's relatively high latency, challenge its ability to offer system-level performance wins against other emerging nanotechnologies. Thus, further research efforts are required to model more complex circuits while also identifying circuit design techniques that balance low-energy operation with steady-state non-volatility. In addition, further work is needed to design and model low-power on-chip clocks while simultaneously identifying application spaces where NML systems (including clock overhead) offer sufficient energy savings to merit their inclusion in future processors. This dissertation presents research advancing the understanding and modeling of NML at all levels including devices, circuits, and line clock structures, while also benchmarking NML against both scaled CMOS and tunneling FET (TFET) devices. This is accomplished through the development of design tools and methodologies for (i) quantifying both energy and stability in NML circuits and (ii) evaluating line-clocked NML system performance. The application of these newly developed tools improves the understanding of ideal design criteria (i.e., magnet size, clock wire geometry, etc.) for NML architectures. Finally, the system-level performance evaluation tool offers the ability to

  18. Measurement, Standards, and Peer Benchmarking: One Hospital's Journey.

    PubMed

    Martin, Brian S

    2016-04-01

    Peer-to-peer benchmarking is an important component of rapid-cycle performance improvement in patient safety and quality-improvement efforts. Institutions should carefully examine critical success factors before engagement in peer-to-peer benchmarking in order to maximize growth and change opportunities. Solutions for Patient Safety has proven to be a high-yield engagement for Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center, with measurable improvement in both organizational process and culture. PMID:27017032

  19. Benchmarking Method for Estimation of Biogas Upgrading Schemes

    NASA Astrophysics Data System (ADS)

    Blumberga, D.; Kuplais, Ģ.; Veidenbergs, I.; Dāce, E.

    2009-01-01

    The paper describes a new benchmarking method proposed for the estimation of different biogas upgrading schemes. The method has been developed to compare the indicators of alternative biogas purification and upgrading solutions and their threshold values. The chosen indicators cover both economic and ecological aspects of these solutions, e.g. the prime cost of biogas purification and storage, and the cost efficiency of greenhouse gas emission reduction. The proposed benchmarking method has been tested at "Daibe", a landfill for solid municipal waste.

  20. Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements

    SciTech Connect

    J. D. Bess; T. L. Maddock; M. A. Marshall

    2011-09-01

    The Neutron Radiography (NRAD) reactor is a 250-kW TRIGA-(Training, Research, Isotope Production, General Atomics)-conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the 234U, 236U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 ± 0.0029 (1σ). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3 and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.

  1. Implementation of the NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for CFD applications.

  2. Performance and Scalability of the NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for scientific applications. In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for scientific applications.

  3. Benchmarking Ada tasking on tightly coupled multiprocessor architectures

    NASA Technical Reports Server (NTRS)

    Collard, Philippe; Goforth, Andre; Marquardt, Matthew

    1989-01-01

    The development of benchmarks and performance measures for parallel Ada tasking is reported with emphasis on the macroscopic behavior of the benchmark across a set of load parameters. The application chosen for the study was the NASREM model for telerobot control, relevant to many NASA missions. The results of the study demonstrate the potential of parallel Ada in accomplishing the task of developing a control system for a system such as the Flight Telerobotic Servicer using the NASREM framework.

  4. WIPP Benchmark calculations with the large strain SPECTROM codes

    SciTech Connect

    Callahan, G.D.; DeVries, K.L.

    1995-08-01

    This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-333 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) Problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers including ten clay seams, of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. The simplified heated room problem does, however, provide a calculational check case where the small-strain formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large strain codes compare favorably with results from other codes used to solve the problems.

  5. The design of a scalable, fixed-time computer benchmark

    SciTech Connect

    Gustafson, J.; Rover, D.; Elbert, S.; Carter, M.

    1990-10-01

    By using the principle of fixed-time benchmarking, it is possible to compare a very wide range of computers, from a small personal computer to the most powerful parallel supercomputer, on a single scale. Fixed-time benchmarks promise far greater longevity than those based on a particular problem size, and are more appropriate for 'grand challenge' capability comparison. We present the design of a benchmark, SLALOM™, that scales automatically to the computing power available, and corrects several deficiencies in various existing benchmarks: it is highly scalable, it solves a real problem, it includes input and output times, and it can be run on parallel machines of all kinds, using any convenient language. The benchmark provides a reasonable estimate of the size of problem solvable on scientific computers. Results are presented that span six orders of magnitude for contemporary computers of various architectures. The benchmark can also be used to demonstrate a new source of superlinear speedup in parallel computers. 15 refs., 14 figs., 3 tabs.
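
    The fixed-time idea can be sketched in a few lines: grow the problem until the time budget is exceeded, then bisect for the largest size that still fits; the score is the problem size solved, not the runtime of a fixed job. The driver below is our own minimal illustration, assuming a user-supplied work(n) routine; SLALOM's actual driver and scoring differ.

    ```python
    import time

    def fixed_time_size(work, budget=60.0):
        """Largest problem size n for which work(n) completes within the budget."""
        n = 1
        while True:                            # grow geometrically until over budget
            start = time.perf_counter()
            work(n)
            if time.perf_counter() - start > budget:
                break
            n *= 2
        lo, hi = n // 2, n                     # bisect between last success and failure
        while hi - lo > 1:
            mid = (lo + hi) // 2
            start = time.perf_counter()
            work(mid)
            if time.perf_counter() - start <= budget:
                lo = mid
            else:
                hi = mid
        return lo

    # Example: score a toy quadratic-cost "solver" with a one-second budget
    print(fixed_time_size(lambda n: [sum(range(n)) for _ in range(n)], budget=1.0))
    ```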

  6. Design and development of a community carbon cycle benchmarking system for CMIP5 models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Randerson, J. T.

    2013-12-01

    Benchmarking has been widely used to assess the ability of atmosphere, ocean, sea ice, and land surface models to capture the spatial and temporal variability of observations during the historical period. For the carbon cycle and terrestrial ecosystems, the design and development of an open-source community platform has been an important goal as part of the International Land Model Benchmarking (ILAMB) project. Here we designed and developed a software system that enables the user to specify the models, benchmarks, and scoring systems so that results can be tailored to specific model intercomparison projects. We used this system to evaluate the performance of CMIP5 Earth system models (ESMs). Our scoring system used information from four different aspects of climate, including the climatological mean spatial pattern of gridded surface variables, seasonal cycle dynamics, the amplitude of interannual variability, and long-term decadal trends. We used this system to evaluate burned area, global biomass stocks, net ecosystem exchange, gross primary production, and ecosystem respiration from CMIP5 historical simulations. Initial results indicated that the multi-model mean often performed better than many of the individual models for most of the observational constraints.
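
    A toy version of such a scoring system is sketched below: each aspect of the comparison is reduced to a dimensionless score near 1 for good agreement, and the aspect scores are combined by a weighted mean. The exponential error-to-score mapping and the function names are our own assumptions, not the system's exact metrics.

    ```python
    import numpy as np

    def skill_score(model, obs):
        """Map a normalized RMSE to (0, 1]: 1 is a perfect match; the score decays
        as the error grows relative to the observed variability."""
        rmse = np.sqrt(np.mean((np.asarray(model) - np.asarray(obs)) ** 2))
        return float(np.exp(-rmse / np.std(obs)))

    def overall_score(aspect_scores, weights=None):
        """Weighted mean over aspect scores (e.g. mean state, seasonal cycle,
        interannual variability, decadal trend)."""
        s = np.asarray(aspect_scores, dtype=float)
        w = np.ones_like(s) if weights is None else np.asarray(weights, dtype=float)
        return float(np.sum(w * s) / np.sum(w))
    ```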

  7. Key findings of the US Cystic Fibrosis Foundation's clinical practice benchmarking project.

    PubMed

    Boyle, Michael P; Sabadosa, Kathryn A; Quinton, Hebe B; Marshall, Bruce C; Schechter, Michael S

    2014-04-01

    Benchmarking is the process of using outcome data to identify high-performing centres and determine practices associated with their outstanding performance. The US Cystic Fibrosis Foundation (CFF) Patient Registry contains centre-specific outcomes data for all CFF-certified paediatric and adult cystic fibrosis (CF) care programmes in the USA. The CFF benchmarking project analysed these registry data, adjusting for differences in patient case mix known to influence outcomes, and identified the top-performing US paediatric and adult CF care programmes for pulmonary and nutritional outcomes. Separate multidisciplinary paediatric and adult benchmarking teams each visited 10 CF care programmes, five in the top quintile for pulmonary outcomes and five in the top quintile for nutritional outcomes. Key practice patterns and approaches present in both paediatric and adult programmes with outstanding clinical outcomes were identified and could be summarised as systems, attitudes, practices, patient/family empowerment and projects. These included: (1) the presence of strong leadership and a well-functioning care team working with a systematic approach to providing consistent care; (2) high expectations for outcomes among providers and families; (3) early and aggressive management of clinical declines, avoiding reliance on 'rescues'; and (4) patients/families that were engaged, empowered and well informed on disease management and its rationale. In summary, assessment of practice patterns at CF care centres with top-quintile pulmonary and nutritional outcomes provides insight into characteristic practices that may aid in optimising patient outcomes. PMID:24608546

  8. Certifying quantumness: Benchmarks for the optimal processing of generalized coherent and squeezed states

    NASA Astrophysics Data System (ADS)

    Yang, Yuxiang; Chiribella, Giulio; Adesso, Gerardo

    2014-10-01

    Quantum technology promises revolutionary advantages in information processing and transmission compared to classical technology; however, determining which specific resources are needed to surpass the capabilities of classical machines often remains a nontrivial problem. To address such a problem, one first needs to establish the best classical solutions, which set benchmarks that must be beaten by any implementation claiming to harness quantum features for an enhanced performance. Here we introduce and develop a self-contained formalism to obtain the ultimate, generally probabilistic benchmarks for quantum information protocols including teleportation and approximate cloning, with arbitrary ensembles of input states generated by a group action, so-called Gilmore-Perelomov coherent states. This allows us to construct explicit fidelity thresholds for the transmission of multimode Gaussian and non-Gaussian states of continuous-variable systems, as well as qubit and qudit pure states drawn according to nonuniform distributions on the Bloch hypersphere, which accurately model the current laboratory facilities. The performance of deterministic classical procedures such as square-root measurement strategies is further compared with the optimal probabilistic benchmarks, and the state-of-the-art performance of experimental quantum implementations against our newly derived thresholds is discussed. This work provides a comprehensive collection of directly useful criteria for the reliable certification of quantum communication technologies.

  9. Model-averaged benchmark concentration estimates for continuous response data arising from epidemiological studies

    SciTech Connect

    Noble, R.B.; Bailer, A.J.; Park, R.

    2009-04-15

    Worker populations often provide data on adverse responses associated with exposure to potential hazards. The relationship between hazard exposure levels and adverse response can be modeled and then inverted to estimate the exposure associated with some specified response level. One concern is that this endpoint may be sensitive to the concentration metric and other variables included in the model. Further, it may be that the models yielding different risk endpoints are all providing relatively similar fits. We focus on evaluating the impact of exposure on a continuous response by constructing a model-averaged benchmark concentration from a weighted average of model-specific benchmark concentrations. A method for combining the estimates based on different models is applied to lung function in a cohort of miners exposed to coal dust. In this analysis, we see that a small number of the thousands of models considered survive a filtering criterion for use in averaging. Even after filtering, the models considered yield benchmark concentrations that differ by a factor of 2 to 9 depending on the concentration metric and covariates. The model-averaged BMC captures this uncertainty and provides a useful strategy for addressing model uncertainty.
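
    One standard way to build such a weighted average (hedged here; the paper's exact weighting scheme may differ) is to weight each surviving model's benchmark concentration by its Akaike weight:

    ```python
    import numpy as np

    def akaike_weights(aic):
        """Akaike weights w_i = exp(-(AIC_i - AIC_min)/2), normalized to sum to 1."""
        delta = np.asarray(aic, dtype=float) - np.min(aic)
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    def model_averaged_bmc(bmcs, aic):
        """Weighted average of model-specific benchmark concentrations."""
        return float(np.dot(akaike_weights(aic), bmcs))

    # Three surviving dose-response fits with BMCs differing by a factor of ~2
    print(model_averaged_bmc([1.1, 1.8, 2.3], aic=[210.4, 211.0, 213.9]))
    ```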

  10. Strategies for energy benchmarking in cleanrooms and laboratory-type facilities

    SciTech Connect

    Sartor, Dale; Piette, Mary Ann; Tschudi, William; Fok, Stephen

    2000-06-01

    Buildings with cleanrooms and laboratories are growing in terms of total floor area and energy intensity. This building type is common in institutions such as universities and in many industries such as microelectronics and biotechnology. These buildings, with high ventilation rates and special environmental considerations, consume from 4 to 100 times more energy per square foot than conventional commercial buildings. Owners and operators of such facilities know they are expensive to operate, but have little way of knowing if their facilities are efficient or inefficient. A simple comparison of energy consumption per square foot is of little value. A growing interest in benchmarking is also fueled by: A new U.S. Executive Order removing the exemption of federal laboratories from energy efficiency goals, setting a 25% savings target, and calling for baseline guidance to measure progress; A new U.S. EPA and U.S. DOE initiative, Laboratories for the 21st Century, establishing voluntary performance goals and criteria for recognition; and A new PG&E market transformation program to improve energy efficiency in high tech facilities, including a cleanroom energy use benchmarking project. This paper identifies the unique issues associated with benchmarking energy use in high-tech facilities. Specific options discussed include statistical comparisons, point-based rating systems, model-based techniques, and hierarchical end-use and performance-metrics evaluations.

  11. Using benchmarking to minimize common DOE waste streams. Volume 1, Methodology and liquid photographic waste

    SciTech Connect

    Levin, V.

    1994-04-01

    Finding innovative ways to reduce waste streams generated at Department of Energy (DOE) sites by 50% by the year 2000 is a challenge for DOE's waste minimization efforts. This report examines the usefulness of benchmarking as a waste minimization tool, specifically regarding common waste streams at DOE sites. A team of process experts from a variety of sites, a project leader, and benchmarking consultants completed the project with management support provided by the Waste Minimization Division EM-352. Using a 12-step benchmarking process, the team examined current waste minimization processes for liquid photographic waste used at their sites and used telephone and written questionnaires to find 'best-in-class' industry partners willing to share information about their best waste minimization techniques and technologies through a site visit. Eastman Kodak Co. and Johnson Space Center/National Aeronautics and Space Administration (NASA) agreed to be partners. The site visits yielded strategies for source reduction, recycle/recovery of components, regeneration/reuse of solutions, and treatment of residuals, as well as best management practices. An additional benefit of the work was the opportunity for DOE process experts to network and exchange ideas with their peers at similar sites.

  12. Benchmarking in the Globalised World and Its Impact on South African Higher Education.

    ERIC Educational Resources Information Center

    Alt, H.

    2002-01-01

    Discusses what benchmarking is and reflects on the importance and development of benchmarking in universities on a national and international level. Offers examples of transnational benchmarking activities, including the International Benchmarking Club, in which South African higher education institutions participate. (EV)

  13. Design and Application of a Community Land Benchmarking System for Earth System Models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Koven, C. D.; Kluzek, E. B.; Mao, J.; Randerson, J. T.

    2015-12-01

    Benchmarking has been widely used to assess the ability of climate models to capture the spatial and temporal variability of observations during the historical era. For the carbon cycle and terrestrial ecosystems, the design and development of an open-source community platform has been an important goal as part of the International Land Model Benchmarking (ILAMB) project. Here we developed a new benchmarking software system that enables the user to specify the models, benchmarks, and scoring metrics, so that results can be tailored to specific model intercomparison projects. Evaluation data sets included soil and aboveground carbon stocks, fluxes of energy, carbon and water, burned area, leaf area, and climate forcing and response variables. We used this system to evaluate simulations from the 5th Phase of the Coupled Model Intercomparison Project (CMIP5) with prognostic atmospheric carbon dioxide levels over the period from 1850 to 2005 (i.e., esmHistorical simulations archived on the Earth System Grid Federation). We found that the multi-model ensemble had a high bias in incoming solar radiation across Asia, likely as a consequence of incomplete representation of aerosol effects in this region, and in South America, primarily as a consequence of a low bias in mean annual precipitation. The reduced precipitation in South America had a larger influence on gross primary production than the high bias in incoming light, and as a consequence gross primary production had a low bias relative to the observations. Although model to model variations were large, the multi-model mean had a positive bias in atmospheric carbon dioxide that has been attributed in past work to weak ocean uptake of fossil emissions. In mid latitudes of the northern hemisphere, most models overestimate latent heat fluxes in the early part of the growing season, and underestimate these fluxes in mid-summer and early fall, whereas sensible heat fluxes show the opposite trend.

  14. How can bedside rationing be justified despite coexisting inefficiency? The need for ‘benchmarks of efficiency’

    PubMed Central

    Strech, Daniel; Danis, Marion

    2016-01-01

    Imperfect efficiency in healthcare delivery is sometimes given as a justification for refusing to ration or even discuss how to pursue fair rationing. This paper aims to clarify the relationship between inefficiency and rationing, and the conditions under which bedside rationing can be justified despite coexisting inefficiency. This paper first clarifies several assumptions that underlie the classification of a clinical practice as being inefficient. We then suggest that rationing is difficult to justify in circumstances where the rationing agent is or should be aware of and contributes to clinical inefficiency. We further explain the different ethical implications of this suggestion for rationing decisions made by clinicians. We argue that rationing is more legitimate when sufficient efforts are undertaken to decrease inefficiency in parallel with efforts to pursue unavoidable but fair rationing. While the qualifier ‘sufficient’ is crucial here, we explain why ‘sufficient efforts’ should be translated into ‘benchmarks of efficiency’ that address specific healthcare activities where clinical inefficiency can be decreased. Referring to recent consensus papers, we consider some examples of specific clinical situations where improving clinical inefficiency has been recommended and consider how benchmarks for efficiency might apply. These benchmarks should state explicitly how much inefficiency shall be reduced in a reasonable time range and why these efforts are ‘sufficient’. Possible strategies for adherence to benchmarks are offered to address the possibility of non-compliance. PMID:23258082

  15. INTERFROST: a benchmark of Thermo-Hydraulic codes for cold regions hydrology

    NASA Astrophysics Data System (ADS)

    Grenier, C. F.; Roux, N.; Costard, F.; Pessel, M.

    2013-12-01

    Much attention has recently been focused on the impact of climate change in boreal regions, where large temperature amplitudes are expected. Large portions of these regions, corresponding to permafrost areas, are covered by water bodies (lakes, rivers) with very specific evolution and water budgets. These water bodies generate taliks (unfrozen zones below them) that may play a key role in the context of climate change. Recent studies and modeling exercises showed that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is a minimal requirement to model and understand the evolution of the river and lake - soil continuum in a changing climate (e.g. McKenzie et al. 2007; Bense et al. 2009; Rowland et al. 2011; Painter 2011; Grenier et al. 2012; Painter et al. 2012; and others from the 2012 special issue of Hydrogeology Journal: 'Hydrogeology of cold regions'). However, 3D studies are still scarce, and numerical approaches can only be validated against analytical solutions for the purely thermal equation with conduction and phase change (e.g. Neumann, Lunardini). When it comes to the coupled TH system (coupling two highly non-linear equations), the only possible approach is to compare different codes on provided test cases and/or to have controlled experiments for validation. We propose here to initiate a benchmark exercise, detail some of its planned test cases (phase I), and invite other research groups to join. This initial phase of the benchmark will consist of test cases inspired by the existing literature (e.g. McKenzie et al., 2007) as well as new ones. Experimental cases in a cold room will complement the validation approach. In view of a Phase II, the project is also open to other test cases reflecting a numerical or process-oriented interest or answering a more general concern of the cold-regions community. A further purpose of the benchmark exercise is to propel discussions for the optimization of codes and numerical approaches in order to develop validated and

  16. INTERFROST: a benchmark of Thermo-Hydraulic codes for cold regions hydrology

    NASA Astrophysics Data System (ADS)

    Grenier, Christophe; Roux, Nicolas; Costard, François; Pessel, Marc

    2014-05-01

    Much attention has recently been focused on the impact of climate change in boreal regions, where large temperature amplitudes are expected. Large portions of these regions, corresponding to permafrost areas, are covered by water bodies (lakes, rivers) with very specific evolution and water budgets. These water bodies generate taliks (unfrozen zones below them) that may play a key role in the context of climate change. Recent studies and modeling exercises showed that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is a minimal requirement to model and understand the evolution of the river and lake - soil continuum in a changing climate (e.g. McKenzie et al. 2007; Bense et al. 2009; Rowland et al. 2011; Painter 2011; Grenier et al. 2012; Painter et al. 2012; and others from the 2012 special issue of Hydrogeology Journal: "Hydrogeology of cold regions"). However, 3D studies are still scarce, and numerical approaches can only be validated against analytical solutions for the purely thermal equation with conduction and phase change (e.g. Neumann, Lunardini). When it comes to the coupled TH system (coupling two highly non-linear equations), the only possible approach is to compare different codes on provided test cases and/or to have controlled experiments for validation. We propose here to join the INTERFROST benchmark exercise addressing these issues. We give an overview of some of its test cases (phase I), report the present status of the exercise, and invite other research groups to join. This initial phase of the benchmark consists of test cases inspired by the existing literature (e.g. McKenzie et al., 2007) as well as new ones. Experimental cases in a cold room complement the validation approach. In view of a Phase II, the project is also open to other test cases reflecting a numerical or process-oriented interest or answering a more general concern of the cold-regions community. A further purpose of the benchmark exercise is to propel discussions for the
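
    The purely thermal validation problems mentioned in both records (Neumann- and Lunardini-type solutions) concern 1D conduction with phase change. A minimal sketch of the common apparent-heat-capacity treatment of such problems is given below; the parameter values and function names are illustrative, not part of the INTERFROST specification.

    ```python
    import numpy as np

    def freeze_front_1d(T0, dx, dt, steps, k=2.0, C=2.0e6, L=3.34e8, Tf=0.0, w=0.5):
        """Explicit 1D heat conduction with freezing, apparent-heat-capacity style:
        the volumetric latent heat L (J/m^3) is smeared over a small temperature
        window w (K) around the freezing point Tf. End values are held fixed;
        dt must satisfy the explicit stability limit dt <= C dx^2 / (2 k)."""
        T = np.asarray(T0, dtype=float).copy()
        for _ in range(steps):
            in_window = (T > Tf - w / 2) & (T < Tf + w / 2)
            C_app = C + (L / w) * in_window            # apparent heat capacity
            lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx ** 2
            T[1:-1] += dt * k * lap / C_app[1:-1]
        return T

    # Freezing front propagating into initially unfrozen ground (Neumann-type setup)
    x = np.linspace(0.0, 1.0, 101)
    T = np.full_like(x, 1.0)
    T[0] = -5.0                                        # cold surface boundary
    print(freeze_front_1d(T, dx=x[1] - x[0], dt=25.0, steps=5000)[:10])
    ```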

  17. Benchmarks and performance indicators: two tools for evaluating organizational results and continuous quality improvement efforts.

    PubMed

    McKeon, T

    1996-04-01

    Benchmarks are tools that can be compared across companies and industries to measure process output. The key to benchmarking is understanding the composition of the benchmark and whether the benchmarks consist of homogeneous groupings. Performance measures expand the concept of benchmarking and cross organizational boundaries to include factors that are strategically important to organizational success. Incorporating performance measures into a balanced scorecard will provide a comprehensive tool to evaluate organizational results. PMID:8634466

  18. A spherical shell numerical dynamo benchmark with pseudo-vacuum magnetic boundary conditions

    NASA Astrophysics Data System (ADS)

    Jackson, A.; Sheyko, A.; Marti, P.; Tilgner, A.; Cébron, D.; Vantieghem, S.; Simitev, R.; Busse, F.; Zhan, X.; Schubert, G.; Takehiro, S.; Sasaki, Y.; Hayashi, Y.-Y.; Ribeiro, A.; Nore, C.; Guermond, J.-L.

    2014-02-01

    It is frequently considered that many planetary magnetic fields originate as a result of convection within planetary cores. Buoyancy forces responsible for driving the convection generate a fluid flow that is able to induce magnetic fields; numerous sophisticated computer codes are able to simulate the dynamic behaviour of such systems. This paper reports the results of a community activity aimed at comparing numerical results of several different types of computer codes that are capable of solving the equations of momentum transfer, magnetic field generation and heat transfer in the setting of a spherical shell, namely a sphere containing an inner core. The electrically conducting fluid is incompressible and rapidly rotating and the forcing of the flow is thermal convection under the Boussinesq approximation. We follow the original specifications and results reported in Harder & Hansen to construct a specific benchmark in which the boundaries of the fluid are taken to be impenetrable, non-slip and isothermal, with the added boundary condition for the magnetic field B that the field must be entirely radial there; this type of boundary condition for B is frequently referred to as `pseudo-vacuum'. This latter condition should be compared with the more frequently used insulating boundary condition. This benchmark is so-defined in order that computer codes based on local methods, such as finite element, finite volume or finite differences, can handle the boundary condition with ease. The defined benchmark, governed by specific choices of the Roberts, magnetic Rossby, Rayleigh and Ekman numbers, possesses a simple solution that is steady in an azimuthally drifting frame of reference, thus allowing easy comparison among results. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume

  19. Toward Automated Benchmarking of Atomistic Force Fields: Neat Liquid Densities and Static Dielectric Constants from the ThermoML Data Archive.

    PubMed

    Beauchamp, Kyle A; Behr, Julie M; Rustenburg, Ariën S; Bayly, Christopher I; Kroenlein, Kenneth; Chodera, John D

    2015-10-01

    Atomistic molecular simulations are a powerful way to make quantitative predictions, but the accuracy of these predictions depends entirely on the quality of the force field employed. Although experimental measurements of fundamental physical properties offer a straightforward approach for evaluating force field quality, the bulk of this information has been tied up in formats that are not machine-readable. Compiling benchmark data sets of physical properties from non-machine-readable sources requires substantial human effort and is prone to the accumulation of human errors, hindering the development of reproducible benchmarks of force-field accuracy. Here, we examine the feasibility of benchmarking atomistic force fields against the NIST ThermoML data archive of physicochemical measurements, which aggregates thousands of experimental measurements in a portable, machine-readable, self-annotating IUPAC-standard format. As a proof of concept, we present a detailed benchmark of the generalized Amber small-molecule force field (GAFF) using the AM1-BCC charge model against experimental measurements (specifically, bulk liquid densities and static dielectric constants at ambient pressure) automatically extracted from the archive and discuss the extent of data available for use in larger scale (or continuously performed) benchmarks. The results of even this limited initial benchmark highlight a general problem with fixed-charge force fields in the representation of low-dielectric environments, such as those seen in binding cavities or biological membranes. PMID:26339862
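
    For context, the static dielectric constant of a simulated neat liquid is conventionally obtained from fluctuations of the total box dipole moment, eps_r = 1 + (<M·M> - <M>·<M>) / (3 eps0 V kB T), under conducting ('tin-foil') boundary conditions. A minimal sketch of that post-processing step (illustrative only, not the paper's code):

    ```python
    import numpy as np

    def static_dielectric(M, volume, temperature):
        """Static dielectric constant from dipole fluctuations of a trajectory.
        M: (n_frames, 3) total box dipole in C*m; volume in m^3; T in K."""
        kb, eps0 = 1.380649e-23, 8.8541878128e-12     # J/K, F/m
        fluct = np.mean(np.sum(M * M, axis=1)) - np.sum(np.mean(M, axis=0) ** 2)
        return 1.0 + fluct / (3.0 * eps0 * volume * kb * temperature)
    ```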

  20. Automatic generation of executable communication specifications from parallel applications

    SciTech Connect

    Pakin, Scott; Wu, Xing; Mueller, Frank

    2011-01-19

    Portable parallel benchmarks are widely used and highly effective for (a) the evaluation, analysis and procurement of high-performance computing (HPC) systems and (b) quantifying the potential benefits of porting applications to new hardware platforms. Yet, past techniques that synthetically parameterize hand-coded HPC benchmarks prove insufficient for today's rapidly evolving scientific codes, particularly when subject to multi-scale science modeling or when utilizing domain-specific libraries. To address these problems, this work contributes novel methods to automatically generate highly portable and customizable communication benchmarks from HPC applications. We utilize ScalaTrace, a lossless yet scalable parallel application tracing framework, to collect selected aspects of the run-time behavior of HPC applications, including communication operations and execution time, while abstracting away the details of the computation proper. We subsequently generate benchmarks with identical run-time behavior from the collected traces. A unique feature of our approach is that we generate benchmarks in CONCEPTUAL, a domain-specific language that enables the expression of sophisticated communication patterns using a rich and easily understandable grammar yet compiles to ordinary C + MPI. Experimental results demonstrate that the generated benchmarks are able to preserve the run-time behavior - including both the communication pattern and the execution time - of the original applications. Such automated benchmark generation is particularly valuable for proprietary, export-controlled, or classified application codes: when supplied to a third party, our auto-generated benchmarks ensure performance fidelity without the risks associated with releasing the original code. This ability to automatically generate performance-accurate benchmarks from parallel applications is novel and, to our knowledge, without precedent.

  1. Water quality criteria/toxicological benchmarks for nitroaromatic munitions compounds

    SciTech Connect

    Talmage, S.S.; Opresko, D.M.; Hovatter, P.S.; Daniel, F.B.

    1995-12-31

    There is a need to develop screening level and cleanup criteria for nitroaromatic compounds at US Army Superfund sites. Using available methodologies, Water Quality Criteria (WQC) for aquatic organisms and toxicological benchmarks for terrestrial plants and wildlife were developed for eight nitroaromatic munitions compounds and/or their degradation products: 2,4,6-trinitrotoluene, 1,3,5-trinitrobenzene, 1,3-dinitrobenzene, 3,5-dinitroaniline, 2-amino-4,6-dinitrotoluene, RDX, HMX, and tetryl. Depending on available data, acute and chronic WQC for aquatic species were developed based on US EPA Tier 1 or Tier 2 guidelines. Criteria for sediment-associated organisms were derived based on Equilibrium Partitioning. In the absence of criteria or guidance for effects on terrestrial wildlife, plants and soil processes, ecotoxicological benchmarks, i.e., NOAELs and LOECs for effects on these organisms, were identified. Benchmarks for terrestrial wildlife species were derived from experimental data identifying toxicological endpoints for wildlife or laboratory species. NOAELs were based on endpoints of population growth and survival following oral exposures. These values were used as the basis for calculation of NOAELs or screening benchmarks for food and water intake for seven selected mammalian wildlife species: the short-tailed shrew, white-footed mouse, meadow vole, cottontail rabbit, mink, red fox, and white-tailed deer. Equivalent NOAELs were calculated by scaling the test data on the basis of differences in body weight. Benchmarks for terrestrial plants and soil invertebrates and heterotrophic processes were based on LOECs.
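
    The body-weight scaling mentioned above is commonly done with a power-law correction of the test-species NOAEL, as in ORNL-style wildlife screening benchmarks. A one-function sketch, where the 1/4-power exponent is a common convention rather than a value stated in this abstract:

    ```python
    def wildlife_noael(test_noael, test_bw, wildlife_bw, exponent=0.25):
        """Scale a test-species NOAEL (mg/kg-day) to a wildlife species by body
        weight: NOAEL_w = NOAEL_t * (bw_t / bw_w) ** exponent, body weights in kg."""
        return test_noael * (test_bw / wildlife_bw) ** exponent

    # Example: a rat (0.35 kg) NOAEL of 5 mg/kg-day scaled to a 0.015 kg shrew
    print(wildlife_noael(5.0, test_bw=0.35, wildlife_bw=0.015))
    ```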

  2. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; VanderWijngaart, Rob F.

    2003-01-01

    We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.

  3. An automated protocol for performance benchmarking a widefield fluorescence microscope.

    PubMed

    Halter, Michael; Bier, Elianna; DeRose, Paul C; Cooksey, Gregory A; Choquette, Steven J; Plant, Anne L; Elliott, John T

    2014-11-01

    Widefield fluorescence microscopy is a highly used tool for visually assessing biological samples and for quantifying cell responses. Despite its widespread use in high content analysis and other imaging applications, few published methods exist for evaluating and benchmarking the analytical performance of a microscope. Easy-to-use benchmarking methods would facilitate the use of fluorescence imaging as a quantitative analytical tool in research applications, and would aid the determination of instrumental method validation for commercial product development applications. We describe and evaluate an automated method to characterize a fluorescence imaging system's performance by benchmarking the detection threshold, saturation, and linear dynamic range to a reference material. The benchmarking procedure is demonstrated using two different materials as the reference material, uranyl-ion-doped glass and Schott 475 GG filter glass. Both are suitable candidate reference materials that are homogeneously fluorescent and highly photostable, and the Schott 475 GG filter glass is currently commercially available. In addition to benchmarking the analytical performance, we also demonstrate that the reference materials provide for accurate day to day intensity calibration. Published 2014 Wiley Periodicals Inc. PMID:25132217
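
    A toy version of the kind of automated computation involved is sketched below: it estimates a detection threshold from dark-frame statistics, the onset of saturation from the departure of the response curve from a low-signal linear fit, and the linear dynamic range between the two. The names and heuristics are ours, not the published protocol.

    ```python
    import numpy as np

    def benchmark_detector(exposure, signal, dark_mean, dark_std, tol=0.05):
        """Toy estimates of detection threshold (3 sigma above dark), saturation
        onset, and linear dynamic range from a signal-vs-exposure series taken
        on a stable fluorescent reference material."""
        exposure = np.asarray(exposure, dtype=float)
        signal = np.asarray(signal, dtype=float)
        threshold = dark_mean + 3.0 * dark_std
        n = max(3, len(signal) // 4)                   # fit the lowest quartile
        slope, intercept = np.polyfit(exposure[:n], signal[:n], 1)
        fit = slope * exposure + intercept
        bad = np.where(np.abs(signal - fit) > tol * np.maximum(fit, 1e-12))[0]
        saturation = exposure[bad[0]] if bad.size else exposure[-1]
        linear = signal[(signal > threshold) & (exposure < saturation)]
        dynamic_range = linear.max() / linear.min() if linear.size else float("nan")
        return threshold, saturation, dynamic_range
    ```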

  4. How to utilize benchmarking in the clinical laboratory.

    PubMed

    Steiner, Jan W; Murphy, Kathleen A; Buck, Earl C; Rajkovich, Daniel E

    2006-01-01

    Benchmarking of clinical laboratory activities has become a tool used increasingly to enable administrators and managers to obtain an independent evaluation of the performance of the laboratory and identify opportunities for improvement. Benchmarking is particularly important because of the diversity and complexity of the various sections of the laboratory. The critical component of laboratory benchmarking is peer comparison, as solutions to shortcomings or problems can be titrated and planned through this process. The reliability of benchmarking must be supplemented and modified by the input of the manager's detailed understanding of local circumstances. At this critical moment, the changes in peer review strategies instituted by JCAHO, CAP, CLIA, and individual states create an urgent opportunity to assist medical directors and laboratory managers in maintaining an overview of the performance and quality of laboratory operations. Unannounced site visits will require prompt reports and alerts of undesirable changes in performance. The future goals of benchmarking must expand to include surveys of laboratory test utilization and patient outcomes as ultimate measures of test utility in the clinical process and important assessments of the quality of patient care. PMID:17132459

  5. Preliminary Benchmarking and MCNP Simulation Results for Homeland Security

    SciTech Connect

    Robert Hayes

    2008-03-01

    The purpose of this article is to create Monte Carlo N-Particle (MCNP) input stacks for benchmarked measurements sufficient for future perturbation studies and analysis. The approach was to utilize historical experimental measurements to recreate the empirical spectral results in MCNP, both qualitatively and quantitatively. Results demonstrate that perturbation analysis of benchmarked MCNP spectra can be used to obtain a better understanding of field measurement results which may be of national interest. If one or more spectral radiation measurements are made in the field and deemed of national interest, the potential source distribution, naturally occurring radioactive material shielding, and interstitial materials can only be estimated in many circumstances. The effects from these factors on the resultant spectral radiation measurements can be very confusing. If benchmarks exist which are sufficiently similar to the suspected configuration, these benchmarks can then be compared to the suspect measurements. Having these benchmarks with validated MCNP input stacks can substantially improve the predictive capability of experts supporting these efforts.

  6. Benchmarking Successional Progress in a Quantitative Food Web

    PubMed Central

    Boit, Alice; Gaedke, Ursula

    2014-01-01

    Central to ecology and ecosystem management, succession theory aims to mechanistically explain and predict the assembly and development of ecological communities. Yet processes at lower hierarchical levels, e.g. at the species and functional group level, are rarely mechanistically linked to the under-investigated system-level processes which drive changes in ecosystem properties and functioning and are comparable across ecosystems. As a model system for secondary succession, seasonal plankton succession during the growing season is readily observable and largely driven autogenically. We used a long-term dataset from large, deep Lake Constance comprising biomasses, auto- and heterotrophic production, food quality, functional diversity, and mass-balanced food webs of the energy and nutrient flows between functional guilds of plankton and partly fish. Extracting population- and system-level indices from this dataset, we tested current hypotheses about the directionality of successional progress which are rooted in ecosystem theory, the metabolic theory of ecology, quantitative food web theory, thermodynamics, and information theory. Our results indicate that successional progress in Lake Constance is quantifiable, passing through predictable stages. Mean body mass, functional diversity, predator-prey weight ratios, trophic positions, system residence times of carbon and nutrients, and the complexity of the energy flow patterns increased during succession. In contrast, both the mass-specific metabolic activity and the system export decreased, while the succession rate exhibited a bimodal pattern. The weighted connectance introduced here represents a suitable index for assessing the evenness and interconnectedness of energy flows during succession. Diverging from earlier predictions, ascendency and eco-exergy did not increase during succession. Linking aspects of functional diversity to metabolic theory and food web complexity, we reconcile previously disjoint bodies of

  7. Benchmarking successional progress in a quantitative food web.

    PubMed

    Boit, Alice; Gaedke, Ursula

    2014-01-01

    Central to ecology and ecosystem management, succession theory aims to mechanistically explain and predict the assembly and development of ecological communities. Yet processes at lower hierarchical levels, e.g. at the species and functional group level, are rarely mechanistically linked to the under-investigated system-level processes which drive changes in ecosystem properties and functioning and are comparable across ecosystems. As a model system for secondary succession, seasonal plankton succession during the growing season is readily observable and largely driven autogenically. We used a long-term dataset from large, deep Lake Constance comprising biomasses, auto- and heterotrophic production, food quality, functional diversity, and mass-balanced food webs of the energy and nutrient flows between functional guilds of plankton and partly fish. Extracting population- and system-level indices from this dataset, we tested current hypotheses about the directionality of successional progress which are rooted in ecosystem theory, the metabolic theory of ecology, quantitative food web theory, thermodynamics, and information theory. Our results indicate that successional progress in Lake Constance is quantifiable, passing through predictable stages. Mean body mass, functional diversity, predator-prey weight ratios, trophic positions, system residence times of carbon and nutrients, and the complexity of the energy flow patterns increased during succession. In contrast, both the mass-specific metabolic activity and the system export decreased, while the succession rate exhibited a bimodal pattern. The weighted connectance introduced here represents a suitable index for assessing the evenness and interconnectedness of energy flows during succession. Diverging from earlier predictions, ascendency and eco-exergy did not increase during succession. Linking aspects of functional diversity to metabolic theory and food web complexity, we reconcile previously disjoint bodies of

  8. The Gaia FGK benchmark stars. High resolution spectral library

    NASA Astrophysics Data System (ADS)

    Blanco-Cuaresma, S.; Soubiran, C.; Jofré, P.; Heiter, U.

    2014-06-01

    Context. An increasing number of high-resolution stellar spectra are available today thanks to many past and ongoing spectroscopic surveys. Consequently, numerous methods have been developed to perform an automatic spectral analysis on a massive amount of data. When reviewing published results, biases arise that need to be addressed and minimized. Aims: We are providing a homogeneous library with a common set of calibration stars (known as the Gaia FGK benchmark stars) that will allow us to assess stellar analysis methods and calibrate spectroscopic surveys. Methods: High-resolution and high signal-to-noise spectra were compiled from different instruments. We developed an automatic process to homogenize the observed data and assess the quality of the resulting library. Results: We built a high-quality library that will facilitate the assessment of spectral analyses and the calibration of present and future spectroscopic surveys. The automation of the process minimizes human subjectivity and ensures reproducibility. Additionally, it allows us to quickly adapt the library to specific needs that may arise from future spectroscopic analyses. Based on NARVAL and HARPS data obtained within the Gaia Data Processing and Analysis Consortium (DPAC) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group, and on data retrieved from the ESO-ADP database. The library of spectra is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/566/A98

  9. SIMULATE-3K Peach Bottom 2 Turbine Trip 2 Benchmark Calculations

    SciTech Connect

    Belblidia, Lotfi A.; Grandi, Gerardo M.; Joensson, Christian

    2004-10-15

    This paper discusses the model and results for the Peach Bottom 2 Turbine Trip Test 2 using Studsvik Scandpower's transient code SIMULATE-3K. All data pertaining to core, vessel, and scenario were taken from the NEA/OECD BWR benchmark specifications. Nuclear data were generated with Studsvik Scandpower's lattice code CASMO-4 and core analysis code SIMULATE-3. Comparisons to measured data, sensitivity to model options and data, as well as results from more limiting scenarios are presented. SIMULATE-3K captures well the pressure wave propagation, void collapse during the pressurization phase, and resulting power excursion.

  10. Benchmarking: More Aspects of High Performance Computing

    SciTech Connect

    Rahul Ravindrudu

    2004-12-19

    The original HPL algorithm assumes that all data fit entirely in main memory. This assumption obviously yields good performance because disk I/O is absent. However, not all applications can fit their entire data in memory. Applications that require a fair amount of I/O to move data between main memory and secondary storage are more indicative of how a Massively Parallel Processor (MPP) system is actually used. In this scenario, a well-designed I/O architecture plays a significant part in the performance of the MPP system on regular jobs, and this is not represented in the current benchmark. The modified HPL algorithm is intended as a step toward filling this void. The most important factor in the performance of out-of-core algorithms is the set of I/O operations performed and their efficiency in transferring data between main memory and disk. Various methods for performing I/O operations were introduced in the report. The I/O method to use depends on the design of the out-of-core algorithm; conversely, the performance of the out-of-core algorithm is affected by the choice of I/O operations. This implies that good performance is achieved when I/O efficiency is closely tied to the out-of-core algorithm, which must therefore be designed as out-of-core from the start. The timings in the various plots readily show that I/O accounts for a significant part of the overall execution time, which leads to an important conclusion: retrofitting an existing in-core code may not be the best choice. The right-looking algorithm selected for the LU factorization is recursive and performs well when the entire dataset is in memory. At each stage of the loop the entire trailing submatrix is read into memory panel by panel, giving a polynomial number of I/O reads and writes. If the left-looking algorithm were selected for the main loop, the number of I/O operations would be linear in the number of columns. This is due to the data access
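
    A toy counting model of the panel I/O pattern described above: in the right-looking out-of-core LU, each stage re-reads the entire trailing submatrix panel by panel, so the number of panel transfers grows quadratically with the panel count. The cost model is illustrative only, not the report's instrumentation.

    ```python
    def right_looking_panel_ios(n_panels: int) -> int:
        """Count panel transfers when stage k touches the trailing panels."""
        total = 0
        for k in range(n_panels):
            trailing = n_panels - k - 1
            total += 2 * trailing  # read + write each trailing panel per stage
        return total

    for p in (8, 16, 32):
        # Output grows ~quadratically with the number of panels: p * (p - 1).
        print(p, right_looking_panel_ios(p))
    ```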

  11. Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.

    PubMed

    Renner, F; Wulff, J; Kapsch, R-P; Zink, K

    2015-10-01

    There is a need to verify the accuracy of general-purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without normalization, which may cause some quantities to cancel. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study, uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code, and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement (GUM). Besides studying the general influence of changes in the transport options of the EGSnrc code, uncertainties are analyzed by first estimating the sensitivity coefficients of the various input quantities. Second, standard uncertainties are assigned to each quantity; those known from the experiment include, for example, uncertainties of geometric dimensions. Data for more fundamental quantities, such as photon cross sections and the I-value of electron stopping powers, are taken from the literature. The significant uncertainty contributions are identified as
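
    A minimal sketch of the GUM-style combination described here: sensitivity coefficients c_i are estimated first, then combined with the standard uncertainties u(x_i) of each input quantity as u_c = sqrt(sum_i (c_i * u_i)^2). The quantities and numbers below are placeholders, not values from the experiment.

    ```python
    import math

    inputs = {
        # quantity: (sensitivity coefficient c_i, standard uncertainty u_i)
        # All values are invented for illustration.
        "source_energy":   (0.012, 0.5),
        "chamber_radius":  (0.030, 0.1),
        "photon_xsection": (0.008, 1.0),
    }

    # Combined standard uncertainty per the GUM law of propagation,
    # assuming uncorrelated input quantities.
    u_combined = math.sqrt(sum((c * u) ** 2 for c, u in inputs.values()))
    print(f"combined standard uncertainty: {u_combined:.4f}")
    ```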

  12. Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark

    NASA Astrophysics Data System (ADS)

    Renner, F.; Wulff, J.; Kapsch, R.-P.; Zink, K.

    2015-10-01

    There is a need to verify the accuracy of general-purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without normalization, which may cause some quantities to cancel. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study, uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code, and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement (GUM). Besides studying the general influence of changes in the transport options of the EGSnrc code, uncertainties are analyzed by first estimating the sensitivity coefficients of the various input quantities. Second, standard uncertainties are assigned to each quantity; those known from the experiment include, for example, uncertainties of geometric dimensions. Data for more fundamental quantities, such as photon cross sections and the I-value of electron stopping powers, are taken from the literature. The significant uncertainty contributions are identified as

  13. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  14. Benchmarking an Unstructured-Grid Model for Tsunami Current Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Yinglong J.; Priest, George; Allan, Jonathan; Stimely, Laura

    2016-06-01

    We present model results derived from a tsunami current benchmarking workshop held by the NTHMP (National Tsunami Hazard Mitigation Program) in February 2015. Modeling was undertaken using our own 3D unstructured-grid model that has been previously certified by the NTHMP for tsunami inundation. Results for two benchmark tests are described here, including: (1) vortex structure in the wake of a submerged shoal and (2) impact of tsunami waves on Hilo Harbor in the 2011 Tohoku event. The modeled current velocities are compared with available lab and field data. We demonstrate that the model is able to accurately capture the velocity field in the two benchmark tests; in particular, the 3D model gives a much more accurate wake structure than the 2D model for the first test, with a root-mean-square error and mean bias of no more than 2 cm/s and 8 mm/s, respectively, for the modeled velocity.
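
    The two error metrics quoted above are computed as follows for a modeled versus observed velocity series (the arrays are placeholder values, not the benchmark data):

    ```python
    import numpy as np

    observed = np.array([0.31, 0.42, 0.55, 0.40])  # m/s, placeholder data
    modeled  = np.array([0.33, 0.40, 0.57, 0.38])  # m/s, placeholder data

    # Root-mean-square error and mean bias of the modeled velocity.
    rmse = np.sqrt(np.mean((modeled - observed) ** 2))
    bias = np.mean(modeled - observed)
    print(f"RMSE = {rmse:.3f} m/s, mean bias = {bias:.3f} m/s")
    ```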

  15. Scalable randomized benchmarking of non-Clifford gates

    NASA Astrophysics Data System (ADS)

    Cross, Andrew; Magesan, Easwar; Bishop, Lev; Smolin, John; Gambetta, Jay

    Randomized benchmarking is a widely used experimental technique to characterize the average error of quantum operations. Benchmarking procedures that scale to enable characterization of n-qubit circuits rely on efficient procedures for manipulating those circuits and, as such, have been limited to subgroups of the Clifford group. However, universal quantum computers require additional, non-Clifford gates to approximate arbitrary unitary transformations. We define a scalable randomized benchmarking procedure over n-qubit unitary matrices that correspond to protected non-Clifford gates for a class of stabilizer codes. We present efficient methods for representing and composing group elements, sampling them uniformly, and synthesizing corresponding poly(n)-sized circuits. The procedure provides experimental access to two independent parameters that together characterize the average gate fidelity of a group element. We acknowledge support from ARO under Contract W911NF-14-1-0124.
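
    For context, the standard randomized-benchmarking analysis (not the non-Clifford protocol of this abstract) fits the sequence fidelity to an exponential decay F(m) = A*p^m + B and converts the decay parameter to an average error rate r = (1 - p)(d - 1)/d; a sketch with placeholder data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def rb_decay(m, A, p, B):
        """Standard RB model: sequence fidelity vs. sequence length m."""
        return A * p**m + B

    seq_lengths = np.array([2, 4, 8, 16, 32, 64])
    fidelities  = np.array([0.95, 0.91, 0.84, 0.72, 0.57, 0.41])  # placeholder

    (A, p, B), _ = curve_fit(rb_decay, seq_lengths, fidelities,
                             p0=(0.5, 0.98, 0.5))
    d = 2  # Hilbert-space dimension for a single qubit
    r = (1 - p) * (d - 1) / d
    print(f"decay p = {p:.4f}, average error per gate r = {r:.2e}")
    ```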

  16. Development of a Benchmark Example for Delamination Fatigue Growth Prediction

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2010-01-01

    The development of a benchmark example for cyclic delamination growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of a Double Cantilever Beam (DCB) specimen, which is independent of the analysis software used and allows the assessment of the delamination growth prediction capabilities in commercial finite element codes. First, the benchmark result was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to grow under cyclic loading in a finite element model of a commercial code. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the analysis. In general, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.
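
    The cycle counting behind such growth predictions is often a Paris-type power law for the delamination growth rate; the sketch below accumulates cycles per growth increment under an assumed power law and an assumed G(a) curve, not the benchmark's calibrated values.

    ```python
    # Paris-type law: da/dN = C * G_max(a)**m (constants are illustrative).
    C, m = 1.0e-10, 3.0
    da = 0.1  # growth increment, mm

    def G_max(a):
        """Maximum energy release rate vs. delamination length (toy model)."""
        return 0.5 + 0.02 * a

    a, cycles = 30.0, 0.0
    for _ in range(20):               # 20 growth increments
        rate = C * G_max(a) ** m      # da/dN at the current length
        cycles += da / rate           # cycles consumed by this increment
        a += da
    print(f"total cycles for {a - 30.0:.1f} mm of growth: {cycles:.3e}")
    ```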

  17. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    SciTech Connect

    John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark

    2011-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  18. VENUS-F: A fast lead critical core for benchmarking

    SciTech Connect

    Kochetkov, A.; Wagemans, J.; Vittiglio, G.

    2011-07-01

    The zero-power thermal-neutron water-moderated facility VENUS at SCK-CEN has been used extensively for benchmarking in the past. In accordance with GEN-IV design tasks (fast reactor systems and accelerator-driven systems), the VENUS facility was modified in 2007-2010 into the fast-neutron facility VENUS-F with solid core components. This paper introduces the GUINEVERE and FREYA projects, which are being conducted at the VENUS-F facility, and presents the measurement results obtained at the first critical core. Throughout these projects, other fast lead benchmarks will also be investigated. The measurement results of the different configurations can all be used as fast-neutron benchmarks. (authors)

  19. Building America Research Benchmark Definition: Updated August 15, 2007

    SciTech Connect

    Hendron, R.

    2007-09-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a 'moving target'.
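
    The whole-house savings tracked against the Benchmark reduce to a simple ratio of annual energy use; the end-use values below are invented for illustration.

    ```python
    # Annual site energy by end use, kWh (placeholder values, not program data).
    benchmark_kwh = {"space_conditioning": 9000, "hot_water": 3500, "other": 6500}
    prototype_kwh = {"space_conditioning": 5200, "hot_water": 2100, "other": 5400}

    total_bench = sum(benchmark_kwh.values())
    total_proto = sum(prototype_kwh.values())
    savings = 1.0 - total_proto / total_bench  # fraction saved vs. Benchmark
    print(f"whole-house savings vs. Benchmark: {savings:.0%}")
    ```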

  20. Building America Research Benchmark Definition: Updated December 20, 2007

    SciTech Connect

    Hendron, R.

    2008-01-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a 'moving target'.

  1. Building America Research Benchmark Definition, Updated December 15, 2006

    SciTech Connect

    Hendron, R.

    2007-01-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a 'moving target'.

  2. Development of a Benchmark Example for Delamination Fatigue Growth Prediction

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2010-01-01

    The development of a benchmark example for cyclic delamination growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of a Double Cantilever Beam (DCB) specimen, which is independent of the analysis software used and allows the assessment of the delamination growth prediction capabilities in commercial finite element codes. First, the benchmark result was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to grow under cyclic loading in a finite element model of a commercial code. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the analysis. In general, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.

  3. Success by Eight Evaluation: Benchmarks for Implementation.

    ERIC Educational Resources Information Center

    Wolcott, Deborah; Sockwell, Recardo; Blum, Holly; Hansborough, Ann; Fowler, Dorothy; Farley, Muriel; Wald, Penelope

    Success by Eight is a pilot program of the Fairfax County (Virginia) Public Schools for students in kindergarten through grade 2. The initiative is designed to provide a group of six pilot schools with training and additional resources in implementing a specific set of components to prepare students for the third grade. These components are: (1)…

  4. ENDF/B-V and ENDF/B-VI results for UO-2 lattice benchmark problems using MCNP

    SciTech Connect

    Mosteller, R.D.

    1998-08-01

    Calculations for the ANS UO2 lattice benchmark have been performed with the MCNP Monte Carlo code and its ENDF/B-V and ENDF/B-VI continuous-energy libraries. Similar calculations were performed previously for the experiments upon which these benchmarks are based, using continuous-energy libraries derived from ENDF/B-V and from Release 2 of ENDF/B-VI (ENDF/B-VI.2). This study extends those calculations to the infinite-lattice configurations given in the benchmark specifications and also includes results from Release 3 of ENDF/B-VI (ENDF/B-VI.3) for both the core and infinite-lattice configurations. For this set of benchmarks, the only significant difference between the ENDF/B-VI.2 and ENDF/B-VI.3 libraries is the cross-section behavior of 235U. ENDF/B-VI.3 contains revised cross sections for 235U below 900 eV, although those changes principally affect the range below 110 eV. In particular, relative to ENDF/B-VI.2, ENDF/B-VI.3 increases the epithermal capture-to-fission ratio for 235U and slightly increases its thermal fission cross section.

  5. On a new benchmark for the simulation of saltwater intrusion

    NASA Astrophysics Data System (ADS)

    Stoeckl, Leonard; Graf, Thomas

    2015-04-01

    To date, many different benchmark problems for density-driven flow are available. Benchmarks are necessary to validate numerical models. The benchmark by Henry (1964) measures a saltwater wedge intruding into a freshwater aquifer in a rectangular model. The Henry (1964) problem of saltwater intrusion is one of the most widely applied benchmarks in hydrogeology. Modelling saltwater intrusion will be of major importance in the future because the impacts of groundwater overexploitation, climate change, and sea-level rise are of key concern. The worthiness of the Henry (1964) problem was questioned by Simpson and Clement (2003), who compared density-coupled and density-uncoupled simulations. Density-uncoupling was achieved by neglecting density effects in the governing equations and considering density effects only in the flow boundary conditions. As both of their simulations showed similar results, Simpson and Clement (2003) concluded that the flow patterns of the Henry (1964) problem are largely dictated by the applied flow boundary conditions and that density-dependent effects are not adequately represented. In the present study, we compare numerical simulations of the physical benchmark of a freshwater lens by Stoeckl and Houben (2012) to the Henry (1964) problem. In this new benchmark, the development of a freshwater lens under an island is simulated by applying freshwater recharge to the model top. Results indicate that density-uncoupling significantly alters the flow patterns of fresh- and saltwater. This leads to the conclusion that, in addition to the applied boundary conditions, density-dependent effects are important for correctly simulating the flow dynamics of a freshwater lens.
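
    The coupling at issue enters through the equation of state: density varies linearly with the relative salt concentration, and "density-uncoupled" runs keep the boundary conditions but freeze density in the governing equations. A minimal sketch, using standard Henry-problem densities as placeholder parameters:

    ```python
    RHO_F, RHO_S = 1000.0, 1025.0  # kg/m^3, fresh and saltwater densities

    def density(c, coupled=True):
        """Linear equation of state; c is relative salt concentration in [0, 1]."""
        return RHO_F + (RHO_S - RHO_F) * c if coupled else RHO_F

    print(density(0.5, coupled=True))   # 1012.5 -> buoyancy feeds back on flow
    print(density(0.5, coupled=False))  # 1000.0 -> density acts only at boundaries
    ```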

  6. Performance and accuracy benchmarks for a next generation geodynamo simulation

    NASA Astrophysics Data System (ADS)

    Matsui, H.

    2015-12-01

    A number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field over the last twenty years. However, the parameters in current dynamo models are far from realistic for the Earth's core. To approach realistic parameters for the Earth's core in geodynamo simulations, extremely large spatial resolutions are required to resolve convective turbulence and small-scale magnetic fields. To assess next-generation dynamo models on a massively parallel computer, we performed performance and accuracy benchmarks of 15 dynamo codes that employ a diverse range of discretization (spectral, finite difference, finite element, and hybrid methods) and parallelization methods. In the performance benchmark, we compare elapsed time and parallelization capability on the TACC Stampede platform, using up to 16384 processor cores. In the accuracy benchmark, we compare the resolutions required to obtain less than 1% error relative to the suggested solutions. The results of the performance benchmark show that codes using 2-D or 3-D parallelization models are capable of running on 16384 processor cores. The elapsed time for Calypso and Rayleigh, two parallelized codes that use the spectral method, scales with a smaller exponent than the ideal scaling. The elapsed time of SFEMaNS, which uses finite elements and Fourier transforms, exhibits the smallest growth with resolution and parallelization. However, the accuracy benchmark results show that SFEMaNS requires three times more degrees of freedom in each direction compared with a spherical harmonics expansion. Consequently, SFEMaNS needs more than 200 times the elapsed time of Calypso and Rayleigh with 10000 cores to obtain the same accuracy. These benchmark results indicate that the spectral method with 2-D or 3-D domain decomposition is the most promising methodology for advancing numerical dynamo simulations in the immediate future.
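
    A back-of-envelope reading of the accuracy result: three times the degrees of freedom in each of three directions is a 27-fold increase in unknowns, which, combined with an assumed per-DOF cost exponent, can plausibly reach the reported ~200x elapsed-time gap. The cost exponent below is an assumption, not a measured value.

    ```python
    dof_ratio_per_direction = 3
    total_dof_ratio = dof_ratio_per_direction ** 3  # 27x total unknowns
    cost_exponent = 1.6                             # assumed cost ~ DOF**1.6
    print(total_dof_ratio ** cost_exponent)         # ~195x elapsed-time ratio
    ```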

  7. Advanced Benchmarking for Complex Building Types: Laboratories as an Exemplar

    SciTech Connect

    Mathew, Paul A.; Clear, Robert; Kircher, Kevin; Webster, Tom; Lee, Kwang Ho; Hoyt, Tyler

    2010-08-01

    Complex buildings such as laboratories, data centers, and cleanrooms present particular challenges for energy benchmarking because it is difficult to normalize special requirements such as health and safety in laboratories and reliability (i.e., system redundancy to maintain uptime) in data centers, which significantly impact energy use. For example, air change requirements vary widely based on the type of work being performed in each laboratory space. We present methods and tools for energy benchmarking in laboratories, as an exemplar of a complex building type. First, we address whole-building energy metrics and normalization parameters. We present empirical methods based on simple data filtering as well as multivariate regression analysis on the Labs21 database. The regression analysis showed lab type, lab-area ratio, and occupancy hours to be significant variables. Yet the dataset did not allow analysis of factors such as plug loads and air change rates, both of which are critical to lab energy use. The simulation-based method uses an EnergyPlus model to generate a benchmark energy intensity normalized for a wider range of parameters. We suggest that these two methods have complementary strengths and limitations. Second, we present "action-oriented" benchmarking, which extends whole-building benchmarking by utilizing system-level features and metrics, such as airflow W/cfm, to quickly identify a list of potential efficiency actions which can then be used as the basis for a more detailed audit. While action-oriented benchmarking is not an "audit in a box" and is not intended to provide the same degree of accuracy afforded by an energy audit, we demonstrate how it can be used to focus and prioritize audit activity and track performance at the system level. We conclude with key principles that are more broadly applicable to other complex building types.
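
    A minimal sketch of the multivariate-regression flavor of benchmarking described above, regressing energy use intensity on the significant variables named (lab-area ratio, occupancy hours); the data and units are invented, and the Labs21 database is not reproduced here.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Rows are buildings; columns: lab_area_ratio, weekly_occupancy_hours.
    X = np.array([[0.4, 60], [0.6, 80], [0.5, 100], [0.7, 120], [0.3, 50]])
    eui = np.array([220, 310, 330, 420, 180])  # kBtu/sqft-yr, placeholder values

    model = LinearRegression().fit(X, eui)
    print(model.coef_, model.intercept_)
    print(model.predict([[0.55, 90]]))  # benchmark EUI for a candidate building
    ```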

  8. Benchmarking of Neutron Production of Heavy-Ion Transport Codes

    SciTech Connect

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    2012-01-01

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  9. Validation of NESTLE against static reactor benchmark problems

    SciTech Connect

    Mosteller, R.D.

    1996-02-01

    The NESTLE advanced nodal code was developed at North Carolina State University with support from Los Alamos National Laboratory and Idaho National Engineering Laboratory. It recently has been benchmarked successfully against measured data from pressurized water reactors (PWRs). However, NESTLE's geometric capabilities are very flexible, and it can be applied to a variety of other types of reactors. This study presents comparisons of NESTLE results with those from other codes for static benchmark problems for PWRs, boiling water reactors (BWRs), high-temperature gas-cooled reactors (HTGRs), and CANDU heavy-water reactors (HWRs).

  10. Piping benchmark problems for the Westinghouse AP600 Standardized Plant

    SciTech Connect

    Bezler, P.; DeGrassi, G.; Braverman, J.; Wang, Y.K.

    1997-01-01

    To satisfy the need for verification of the computer programs and modeling techniques that will be used to perform the final piping analyses for the Westinghouse AP600 Standardized Plant, three benchmark problems were developed. The problems are representative piping systems subjected to representative dynamic loads, with solutions developed using the methods proposed for analysis of the AP600 standard design. Combined license licensees will be required to demonstrate that their solutions to these problems agree with the benchmark problem set.

  11. Visualization of the air flow behind the automotive benchmark vent

    NASA Astrophysics Data System (ADS)

    Pech, Ondrej; Jedelsky, Jan; Caletka, Petr; Jicha, Miroslav

    2015-05-01

    Passenger comfort in cars depends on the proper function of the cabin HVAC system. Great attention is therefore paid to the effective function of automotive vents and the proper formation of the flow behind the ventilation outlet. This article deals with the visualization of the air flow from the automotive benchmark vent. The visualization was made for two different shapes of the inlet channel connected to the benchmark vent, using smoke visualization with a laser light sheet (laser knife). The influence of the shape of the inlet channel on the airflow direction, the spreading of the jet, and the position of the airflow axis was investigated.

  12. 2008 ULTRASONIC BENCHMARK STUDIES OF INTERFACE CURVATURE--A SUMMARY

    SciTech Connect

    Schmerr, L. W.; Huang, R.; Raillon, R.; Mahaut, S.; Leymarie, N.; Lonne, S.; Spies, M.; Lupien, V.

    2009-03-03

    In the 2008 QNDE ultrasonic benchmark session, researchers from five institutions around the world examined the influence that the curvature of a cylindrical fluid-solid interface has on the measured NDE immersion pulse-echo response of a flat-bottom hole (FBH) reflector. This repeated a study conducted in the 2007 benchmark, in an attempt to determine the sources of the differences seen in 2007 between model-based predictions and experiments. Here, we summarize the results obtained in 2008 and analyze the model-based results and the experiments.

  13. Numerical results for the WFNDEC 2012 eddy current benchmark problem

    NASA Astrophysics Data System (ADS)

    Theodoulidis, T. P.; Martinos, J.; Poulakis, N.

    2013-01-01

    We present numerical results for the World Federation of NDE Centers (WFNDEC) 2012 eddy current benchmark problem, obtained with a commercial FEM package (Comsol Multiphysics). The measurements of the benchmark problem consist of coil impedance values acquired as an inspection probe coil is moved inside an Inconel tube along an axial through-wall notch. The simulation runs smoothly with minimal user intervention (default settings were used for the mesh and solver), and the agreement between numerical and experimental results is excellent for all five inspection frequencies. We comment on the pros and cons of FEM and present some good-practice rules for using such numerical tools.

  14. Quantum benchmark via an uncertainty product of canonical variables.

    PubMed

    Namiki, Ryo; Azuma, Koji

    2015-04-10

    We present an uncertainty-relation-type quantum benchmark for continuous-variable (CV) quantum channels that works with an input ensemble of Gaussian-distributed coherent states and homodyne measurements. It determines an optimal trade-off relation between canonical quadrature noises that is unbeatable by entanglement breaking channels and refines the notion of two quantum duties introduced in the original papers of CV quantum teleportation. This benchmark can verify the quantum-domain performance for all one-mode Gaussian channels. We also address the case of stochastic channels and the effect of asymmetric gains. PMID:25910100

  15. Benchmark 1 - Nonlinear strain path forming limit of a reverse draw: Part C: Benchmark analysis

    NASA Astrophysics Data System (ADS)

    Wu, Xin

    2013-12-01

    This document summarizes the Benchmark 1 results and presents all submitted FEA results from 9 participants, together with experimental results, in 104 figures, including load-displacement curves, strain path evolutions at four specified blank positions, and deformation results under different forming settings: three sheet materials, three blank geometries (widths), and two shim heights that alter the strain paths and the two-stage strain ratio, with major strain path changes. The fourth specified point corresponds to the location of localized necking. The FEA models and software/hardware used by the participants are provided. The rich data presented from both simulation and experimental measurement provide valuable information on the current interest in material plasticity/formability and their prediction under the continuous nonlinear strain paths that exist in this reverse draw process. At the request of the author and the Proceedings Editor, a corrected and updated version of this paper was published on January 2, 2014. The Corrigendum attached to the updated article PDF contains a list of the changes made to the original published version.

  16. Establishing Instructional Technology Benchmarks for Teacher Preparation Programs.

    ERIC Educational Resources Information Center

    Northrup, Pamela Taylor; Little, Wesley

    1996-01-01

    Examines technology use in teacher preparation, emerging state and national standards for educators and technology, and benchmarks for teacher preparation programs (including faculty preparation), and notes the importance of creating school-business partnerships to help finance this costly venture. (SM)

  17. Benchmarking Attrition: What Can We Learn From Other Industries?

    ERIC Educational Resources Information Center

    Delta Cost Project at American Institutes for Research, 2012

    2012-01-01

    This brief summarizes Internet-based research into other industries that may offer useful analogies for thinking about student attrition in higher education, in particular for setting realistic benchmarks for reductions in attrition. Reducing attrition to zero or close to zero is not a realistic possibility in higher education. Students are…

  18. Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Tam, C. K. W. (Editor); Hardin, J. C. (Editor)

    1997-01-01

    The proceedings of the Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems held at Florida State University are the subject of this report. For this workshop, problems arising in typical industrial applications of CAA were chosen. Comparisons between numerical solutions and exact solutions are presented where possible.

  19. Collaborative Benchmarking: Discovering and Implementing Best Practices to Strengthen SEAs

    ERIC Educational Resources Information Center

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    To help state educational agencies (SEAs) learn about and adapt best practices that exist in other SEAs and other organizations, the Building State Capacity and Productivity Center (BSCP Center), working closely with the Regional Comprehensive Centers, will create multi-state groups through a "Collaborative Benchmarking Best Practices Process" that…

  20. Policy Analysis of the English Graduation Benchmark in Taiwan

    ERIC Educational Resources Information Center

    Shih, Chih-Min

    2012-01-01

    To nudge students to study English and to improve their English proficiency, many universities in Taiwan have imposed an English graduation benchmark on their students. This article reviews this policy, using the theoretic framework for education policy analysis proposed by Haddad and Demsky (1995). The author presents relevant research findings,…