Sample records for benchmark core appendix

  1. Taking the Lead in Science Education: Forging Next-Generation Science Standards. International Science Benchmarking Report. Appendix

    ERIC Educational Resources Information Center

    Achieve, Inc., 2010

    2010-01-01

    This appendix accompanies the report "Taking the Lead in Science Education: Forging Next-Generation Science Standards. International Science Benchmarking Report," a study conducted by Achieve to compare the science standards of 10 countries. This appendix includes the following: (1) PISA and TIMSS Assessment Rankings; (2) Courses and…

  2. Aquarius Project: Research in the System Architecture of Accelerators for the High Performance Execution of Logic Programs.

    DTIC Science & Technology

    1991-05-31

    benchmarks … 220; Appendix G: Source code of the Aquarius Prolog compiler … 224. Chapter I Introduction: "You're given…" notation, a tool that is used throughout the compiler's implementation. Appendix F lists the source code of the C and Prolog benchmarks. Appendix G lists the source code of the compiler. [Figure residue: translation pipeline from standard-form Prolog through a kernel Prolog transformation to symbolic execution.]

  3. 17 CFR Appendix B to Part 38 - Guidance on, and Acceptable Practices in, Compliance With Core Principles

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Practices in, Compliance With Core Principles B Appendix B to Part 38 Commodity and Securities Exchanges...—Guidance on, and Acceptable Practices in, Compliance With Core Principles 1. This appendix provides guidance on complying with the core principles, both initially and on an ongoing basis, to maintain...

  4. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    DOE PAGES

    Bess, John D.; Montierth, Leland; Köberl, Oliver; ...

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ), except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
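
    Statements like "within 1% and within 3σ" above reduce to a simple normalized deviation. A minimal Python sketch of that check follows; the numbers are placeholders for illustration, not the HTR-PROTEUS values:

      # Express a calculated eigenvalue's deviation from a benchmark value in
      # units of the benchmark's 1-sigma uncertainty (all inputs hypothetical).
      def deviation_in_sigma(k_calc, k_benchmark, sigma_benchmark):
          return (k_calc - k_benchmark) / sigma_benchmark

      dev = deviation_in_sigma(k_calc=1.0112, k_benchmark=1.0066, sigma_benchmark=0.0021)
      print(f"{dev:+.1f} sigma")  # "within 3 sigma" means abs(dev) <= 3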

  5. Sequoia Messaging Rate Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedley, Andrew

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
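
    The rank layout described above is simple arithmetic. The following Python sketch paraphrases that rule (the function name and structure are illustrative, not taken from the benchmark's source; only num_cores and num_nbors come from the record):

      # Rank-to-node layout: the first num_cores ranks share the 'core' node;
      # each core rank i then gets num_nbors neighbor ranks on other nodes.
      def layout(num_cores, num_nbors):
          total = num_cores + num_cores * num_nbors  # e.g., 8 + 8 * 4 = 40 ranks
          core_ranks = list(range(num_cores))
          neighbors = {}
          for i in range(num_cores):
              start = num_cores + i * num_nbors
              neighbors[i] = list(range(start, start + num_nbors))
          return total, core_ranks, neighbors

      total, core_ranks, neighbors = layout(num_cores=8, num_nbors=4)
      assert total == 40
      print(neighbors[0])  # [8, 9, 10, 11] communicate with core rank 0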

  6. Core-core and core-valence correlation energy atomic and molecular benchmarks for Li through Ar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ranasinghe, Duminda S.; Frisch, Michael J.; Petersson, George A., E-mail: gpetersson@wesleyan.edu

    2015-12-07

    We have established benchmark core-core, core-valence, and valence-valence absolute coupled-cluster single double (triple) [CCSD(T)] correlation energies (±0.1%) for 210 species covering the first and second rows of the periodic table. These species provide 194 energy differences (±0.03 mEₕ), including ionization potentials, electron affinities, and total atomization energies. These results can be used for calibration of less expensive methodologies for practical routine determination of core-core and core-valence correlation energies.
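
    The quantity being benchmarked is commonly obtained as a difference of two correlation energies: all-electron minus frozen-core. A tiny Python illustration of that arithmetic follows (the energy values are invented placeholders, not data from this record):

      # Core correlation contribution as the difference between an all-electron
      # and a frozen-core correlation energy (hypothetical numbers, in hartree).
      e_corr_all_electron = -0.254321
      e_corr_frozen_core = -0.198765
      e_core = e_corr_all_electron - e_corr_frozen_core
      print(f"core-core + core-valence correlation: {e_core:.6f} E_h")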

  7. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    NASA Technical Reports Server (NTRS)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single-core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  8. CLEAR: Cross-Layer Exploration for Architecting Resilience

    DTIC Science & Technology

    2017-03-01

    benchmark analysis, also provides cost-effective solutions (~1% additional energy cost for the same 50× improvement). This paper addresses the…core (OoO-core) [Wang 04], across 18 benchmarks. Such extensive exploration enables us to conclusively answer the above cross-layer resilience…analysis of the effects of soft errors on application benchmarks, provides a highly effective soft error resilience approach. 3. The above

  9. Integral Full Core Multi-Physics PWR Benchmark with Measured Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forget, Benoit; Smith, Kord; Kumar, Shikhar

    In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio, with concrete examples in nuclear engineering in the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. Like all analysis tools, verification and validation is essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g., critical experiments, flow loops, etc.), and there is a lack of relevant multiphysics benchmark measurements that are necessary to validate the high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.

  10. Shift Verification and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G

    2016-09-07

    This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and other simulated Monte Carlo radiation transport code results, and found very good agreement in a variety of comparison measures. These include prediction of critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation of Shift, we are confident in Shift to provide reference results for CASL benchmarking.

  11. 17 CFR Appendix A to Part 39 - Application Guidance and Compliance With Core Principles

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... carrying out the clearing organization's risk management program. In addressing Core Principle M... further the objectives of the clearing organization's risk management program and any of its surveillance... TRADING COMMISSION DERIVATIVES CLEARING ORGANIZATIONS Pt. 39, App. A Appendix A to Part 39—Application...

  12. Evaluation of Neutron Radiography Reactor LEU-Core Start-Up Measurements

    DOE PAGES

    Bess, John D.; Maddock, Thomas L.; Smolinski, Andrew T.; ...

    2014-11-04

    Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. Experiments include criticality, control-rod worth measurements, shutdown margin, and excess reactivity for four core loadings with 56, 60, 62, and 64 fuel elements. The worth of four graphite reflector block assemblies and an empty dry tube used for experiment irradiations were also measured and evaluated for the 60-fuel-element core configuration. Dominant uncertainties in the experimental keff come from uncertainties in the manganese content and impurities in the stainless steel fuel cladding as well as the ²³⁶U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 neutron nuclear data are approximately 1.4% (9σ) greater than the benchmark model eigenvalues, which is commonly seen in Monte Carlo simulations of other TRIGA reactors. Simulations of the worth measurements are within the 2σ uncertainty for most of the benchmark experiment worth values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  13. Evaluation of Neutron Radiography Reactor LEU-Core Start-Up Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Maddock, Thomas L.; Smolinski, Andrew T.

    Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. Experiments include criticality, control-rod worth measurements, shutdown margin, and excess reactivity for four core loadings with 56, 60, 62, and 64 fuel elements. The worth of four graphite reflector block assemblies and an empty dry tube used for experiment irradiations were also measured and evaluated for the 60-fuel-element core configuration. Dominant uncertainties in the experimental keff come from uncertainties in the manganese content and impurities in the stainless steel fuel cladding as well as the ²³⁶U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 neutron nuclear data are approximately 1.4% (9σ) greater than the benchmark model eigenvalues, which is commonly seen in Monte Carlo simulations of other TRIGA reactors. Simulations of the worth measurements are within the 2σ uncertainty for most of the benchmark experiment worth values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  14. 17 CFR Appendix B to Part 38 - Guidance on, and Acceptable Practices in, Compliance With Core Principles

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... decision-making and implementation of emergency intervention in the market. At a minimum, the DCM must have... COMMODITY FUTURES TRADING COMMISSION DESIGNATED CONTRACT MARKETS Pt. 38, App. B Appendix B to Part 38... the core principle is illustrative only of the types of matters a designated contract market may...

  15. ZPR-6 Assembly 7 High-²⁴⁰Pu Core: A Cylindrical Assembly with Mixed (Pu,U)-Oxide Fuel and a Central High-²⁴⁰Pu Zone.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lell, R. M.; Schaefer, R. W.; McKnight, R. D.

    Over a period of 30 years more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited to form the basis for criticality safety benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was ²³⁵U or ²³⁹Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. The term 'benchmark' in a ZPR program connotes a particularly simple loading aimed at gaining basic reactor physics insight, as opposed to studying a reactor design. In fact, the ZPR-6/7 Benchmark Assembly (Reference 1) had a very simple core unit cell assembled from plates of depleted uranium, sodium, iron oxide, U₃O₈, and plutonium. The ZPR-6/7 core cell-average composition is typical of the interior region of liquid-metal fast breeder reactors (LMFBRs) of the era. It was one part of the Demonstration Reactor Benchmark Program, which provided integral experiments characterizing the important features of demonstration-size LMFBRs. As a benchmark, ZPR-6/7 was devoid of many 'real' reactor features, such as simulated control rods and multiple enrichment zones, in its reference form. Those kinds of features were investigated experimentally in variants of the reference ZPR-6/7 or in other critical assemblies in the Demonstration Reactor Benchmark Program.

  16. Benchmark Evaluation of the HTR-PROTEUS Absorber Rod Worths (Core 4)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; Leland M. Montierth

    2014-06-01

    PROTEUS was a zero-power research reactor at the Paul Scherrer Institute (PSI) in Switzerland. The critical assembly was constructed from a large graphite annulus surrounding a central cylindrical cavity. Various experimental programs were investigated in PROTEUS; during the years 1992 through 1996, it was configured as a pebble-bed reactor and designated HTR-PROTEUS. Various critical configurations were assembled, each accompanied by an assortment of reactor physics experiments including differential and integral absorber rod measurements, kinetics, reaction rate distributions, water ingress effects, and small sample reactivity effects [1]. Four benchmark reports were previously prepared and included in the March 2013 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [2] evaluating eleven critical configurations. A summary of that effort was previously provided [3], and an analysis of absorber rod worth measurements for Cores 9 and 10 was performed prior to this analysis and included in PROTEUS-GCR-EXP-004 [4]. In the current benchmark effort, absorber rod worths measured for Core Configuration 4, which was the only core with a randomly-packed pebble loading, have been evaluated for inclusion as a revision to the HTR-PROTEUS benchmark report PROTEUS-GCR-EXP-002.

  17. Defining core elements and outstanding practice in Nutritional Science through collaborative benchmarking.

    PubMed

    Samman, Samir; McCarthur, Jennifer O; Peat, Mary

    2006-01-01

    Benchmarking has been adopted by educational institutions as a potentially sensitive tool for improving learning and teaching. To date there has been limited application of benchmarking methodology in the Discipline of Nutritional Science. The aim of this survey was to define core elements and outstanding practice in Nutritional Science through collaborative benchmarking. Questionnaires that aimed to establish proposed core elements for Nutritional Science, and inquired about definitions of "good" and "outstanding" practice, were posted to named representatives at eight Australian universities. Seven respondents identified core elements that included knowledge of nutrient metabolism and requirement, food production and processing, modern biomedical techniques that could be applied to understanding nutrition, and social and environmental issues as related to Nutritional Science. Four of the eight institutions that agreed to participate in the present survey identified the integration of teaching with research as an indicator of outstanding practice. Nutritional Science is a rapidly evolving discipline. Further and more comprehensive surveys are required to consolidate and update the definition of the discipline, and to identify the optimal way of teaching it. Global ideas and specific regional requirements also need to be considered.

  18. Present Status and Extensions of the Monte Carlo Performance Benchmark

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

    The NEA Monte Carlo Performance benchmark started in 2011 with the aim of monitoring, over the years, the ability to perform a full-size Monte Carlo reactor core calculation with detailed power production for each fuel pin with axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common-type compute nodes; on true supercomputers, however, the speedup of parallel calculations continues to increase up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems: for evaluating fission source convergence in a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and for studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition are discussed.
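
    The "100 billion histories for 1% accuracy" figure follows from the 1/√N scaling of Monte Carlo statistical error. A short Python sketch of that extrapolation is below; the pilot-run numbers are illustrative assumptions, not values from the benchmark submissions:

      import math

      # Monte Carlo relative error scales as 1/sqrt(N): from a pilot run's
      # observed error, estimate histories needed for a target error.
      def histories_needed(n_pilot, rel_err_pilot, rel_err_target):
          return n_pilot * (rel_err_pilot / rel_err_target) ** 2

      # Hypothetical pilot: 10% error in a small fuel zone after 1e9 histories.
      n = histories_needed(n_pilot=1e9, rel_err_pilot=0.10, rel_err_target=0.01)
      print(f"{n:.2e} histories")  # 1.00e+11, i.e. ~100 billion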

  19. An Architecture for Coexistence with Multiple Users in Frequency Hopping Cognitive Radio Networks

    DTIC Science & Technology

    2013-03-01

    the base WARP system, a custom IP core written in VHDL, and the Virtex IV's embedded PowerPC core with C code to implement the radio and hopset…shown in Appendix C as Figure C.2. All VHDL code necessary to implement this IP core is included in Appendix G. [Figure 3.19: FPGA bus structure]…subsystem functionality. A total of 1,430 lines of VHDL code were implemented for this research. [Code listing excerpt: library ieee; use ieee.std_logic_1164.all; use …]

  20. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as the operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  1. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    DOE PAGES

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  2. The Learning Organisation: Results of a Benchmarking Study.

    ERIC Educational Resources Information Center

    Zairi, Mohamed

    1999-01-01

    Learning in corporations was assessed using these benchmarks: core qualities of creative organizations, characteristics of organizational creativity, attributes of flexible organizations, use of diversity and conflict, creative human resource management systems, and effective and successful teams. These benchmarks are key elements of the learning…

  3. Supplemental Information for Appendix A of the Common Core State Standards for English Language Arts and Literacy: New Research on Text Complexity

    ERIC Educational Resources Information Center

    Council of Chief State School Officers, 2017

    2017-01-01

    Appendix A of the Common Core State Standards (hereafter CCSS) contains a review of the research stressing the importance of being able to read complex text for success in college and career. The research shows that while the complexity of reading demands for college, career, and citizenship have held steady or risen over the past half century,…

  4. EBR-II Reactor Physics Benchmark Evaluation Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, Chad L.; Lum, Edward S; Stewart, Ryan

    This report provides a reactor physics benchmark evaluation, with associated uncertainty quantification, of the critical configuration of the April 1986 Experimental Breeder Reactor II (EBR-II) Run 138B core.

  5. A benchmarking tool to evaluate computer tomography perfusion infarct core predictions against a DWI standard.

    PubMed

    Cereda, Carlo W; Christensen, Søren; Campbell, Bruce Cv; Mishra, Nishant K; Mlynash, Michael; Levi, Christopher; Straka, Matus; Wintermark, Max; Bammer, Roland; Albers, Gregory W; Parsons, Mark W; Lansberg, Maarten G

    2016-10-01

    Differences in research methodology have hampered the optimization of Computer Tomography Perfusion (CTP) for identification of the ischemic core. We aim to optimize CTP core identification using a novel benchmarking tool. The benchmarking tool consists of an imaging library and a statistical analysis algorithm to evaluate the performance of CTP. The tool was used to optimize and evaluate an in-house developed CTP-software algorithm. Imaging data of 103 acute stroke patients were included in the benchmarking tool. Median time from stroke onset to CT was 185 min (IQR 180-238), and the median time between completion of CT and start of MRI was 36 min (IQR 25-79). Volumetric accuracy of the CTP-ROIs was optimal at an rCBF threshold of <38%; at this threshold, the mean difference was 0.3 ml (SD 19.8 ml), the mean absolute difference was 14.3 (SD 13.7) ml, and CTP was 67% sensitive and 87% specific for identification of DWI positive tissue voxels.
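
    The voxel-level comparison the abstract reports (volume difference, sensitivity, specificity at an rCBF threshold) can be sketched in a few lines of NumPy. The masks below are synthetic and the voxel volume is an assumption; this is an illustration of the comparison measures, not the tool's actual implementation:

      import numpy as np

      # Synthetic stand-ins for a DWI infarct mask and an rCBF map (0..1).
      rng = np.random.default_rng(0)
      dwi_core = rng.random((64, 64, 24)) < 0.10   # "ground truth" infarct voxels
      rcbf = rng.random((64, 64, 24))
      ctp_core = rcbf < 0.38                       # rCBF < 38% threshold

      voxel_ml = 0.008                             # assumed voxel volume in ml
      vol_diff = (ctp_core.sum() - dwi_core.sum()) * voxel_ml

      tp = np.logical_and(ctp_core, dwi_core).sum()
      sensitivity = tp / dwi_core.sum()
      specificity = np.logical_and(~ctp_core, ~dwi_core).sum() / (~dwi_core).sum()
      print(f"volume difference {vol_diff:+.1f} ml, "
            f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")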

  6. 78 FR 47154 - Core Principles and Other Requirements for Swap Execution Facilities; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-05

    ... COMMODITY FUTURES TRADING COMMISSION 17 CFR Part 37 RIN 3038-AD18 Core Principles and Other... this chapter. Appendix B to Part 37--Guidance on, and Acceptable Practices in, Compliance With Core Principles [Corrected] 2. On page 33600, in the second column, under the heading Core Principle 3 of Section...

  7. Space Station Furnace Facility. Volume 2: Appendix 1: Contract End Item specification (CEI), part 1

    NASA Technical Reports Server (NTRS)

    Seabrook, Craig

    1992-01-01

    This specification establishes the performance, design, development, and verification requirements for the Space Station Furnace Facility (SSFF) Core. The SSFF Core and its interfaces are defined; requirements for SSFF Core performance, design, and construction are specified; and the verification requirements are established.

  8. HTR-PROTEUS pebble bed experimental program cores 9 & 10: columnar hexagonal point-on-point packing with a 1:1 moderator-to-fuel pebble ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.

    2014-03-01

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  9. HTR-PROTEUS PEBBLE BED EXPERIMENTAL PROGRAM CORES 5, 6, 7, & 8: COLUMNAR HEXAGONAL POINT-ON-POINT PACKING WITH A 1:2 MODERATOR-TO-FUEL PEBBLE RATIO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess

    2013-03-01

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 5, 6, 7, and 8 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 5, 6, 7, and 8 were evaluated and determined to be acceptable benchmark experiments.

  10. HTR-PROTEUS PEBBLE BED EXPERIMENTAL PROGRAM CORES 9 & 10: COLUMNAR HEXAGONAL POINT-ON-POINT PACKING WITH A 1:1 MODERATOR-TO-FUEL PEBBLE RATIO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess

    2013-03-01

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  11. 10 CFR Appendix A to Part 50 - General Design Criteria for Nuclear Power Plants

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Heat Removal 34 Emergency Core Cooling 35 Inspection of Emergency Core Cooling System 36 Testing of Emergency Core Cooling System 37 Containment Heat Removal 38 Inspection of Containment Heat Removal System 39 Testing of Containment Heat Removal System 40 Containment Atmosphere Cleanup 41 Inspection of...

  12. 10 CFR Appendix A to Part 50 - General Design Criteria for Nuclear Power Plants

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Heat Removal 34 Emergency Core Cooling 35 Inspection of Emergency Core Cooling System 36 Testing of Emergency Core Cooling System 37 Containment Heat Removal 38 Inspection of Containment Heat Removal System 39 Testing of Containment Heat Removal System 40 Containment Atmosphere Cleanup 41 Inspection of...

  13. Implementation and validation of a conceptual benchmarking framework for patient blood management.

    PubMed

    Kastner, Peter; Breznik, Nada; Gombotz, Hans; Hofmann, Axel; Schreier, Günter

    2015-01-01

    Public health authorities and healthcare professionals are obliged to ensure high-quality health services. Because of the high variability in the utilisation of blood and blood components, benchmarking is indicated in transfusion medicine. The objective was the implementation and validation of a benchmarking framework for Patient Blood Management (PBM) based on the report from the second Austrian Benchmark trial. Core modules for automatic report generation have been implemented with KNIME (Konstanz Information Miner) and validated by comparing the output with the results of the second Austrian benchmark trial. Delta analysis shows a deviation of <0.1% for 95% of the values (maximum 1.4%). The framework provides a reliable tool for PBM benchmarking. The next step is technical integration with hospital information systems.
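
    The "delta analysis" validation step amounts to comparing each automatically generated report value against the trial's reference value. A minimal Python sketch follows; the metric names and values are invented placeholders, not data from the Austrian trial:

      # Compare generated report values against reference values, as a percent
      # deviation per metric (all numbers hypothetical).
      reference = {"rbc_units_per_patient": 2.10, "transfusion_rate": 0.365}
      generated = {"rbc_units_per_patient": 2.11, "transfusion_rate": 0.365}

      for key, ref in reference.items():
          delta = abs(generated[key] - ref) / ref * 100.0
          print(f"{key}: deviation {delta:.2f}%")  # framework target: <0.1% for 95%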

  14. ED School Climate Surveys (EDSCLS) National Benchmark Study 2016. Appendix D. EDSCLS Pilot Test 2015 Report

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2015

    2015-01-01

    The ED School Climate Surveys (EDSCLS) are a suite of survey instruments being developed for schools, school districts, and states by the U.S. Department of Education's National Center for Education Statistics (NCES). Through the EDSCLS, schools nationwide will have access to survey instruments and a survey platform that will allow for the…

  15. ZPR-3 Assembly 11: A Cylindrical Assembly of Highly Enriched Uranium and Depleted Uranium with an Average ²³⁵U Enrichment of 12 Atom % and a Depleted Uranium Reflector.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lell, R. M.; McKnight, R. D.; Tsiboulia, A.

    2010-09-30

    Over a period of 30 years, more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited for nuclear data validation and to form the basis for criticality safety benchmarks. A number of the Argonne ZPR/ZPPR critical assemblies have been evaluated as ICSBEP and IRPhEP benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was ²³⁵U or ²³⁹Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. ZPR-3 Assembly 11 (ZPR-3/11) was designed as a fast reactor physics benchmark experiment with an average core ²³⁵U enrichment of approximately 12 at.% and a depleted uranium reflector. Approximately 79.7% of the total fissions in this assembly occur above 100 keV, approximately 20.3% occur below 100 keV, and essentially none below 0.625 eV; thus the classification as a 'fast' assembly. This assembly is Fast Reactor Benchmark No. 8 in the Cross Section Evaluation Working Group (CSEWG) Benchmark Specifications and has historically been used as a data validation benchmark assembly. Loading of ZPR-3 Assembly 11 began in early January 1958, and the Assembly 11 program ended in late January 1958. The core consisted of highly enriched uranium (HEU) plates and depleted uranium plates loaded into stainless steel drawers, which were inserted into the central square stainless steel tubes of a 31 x 31 matrix on a split table machine. The core unit cell consisted of two columns of 0.125 in.-wide (3.175 mm) HEU plates, six columns of 0.125 in.-wide (3.175 mm) depleted uranium plates and one column of 1.0 in.-wide (25.4 mm) depleted uranium plates. The length of each column was 10 in. (254.0 mm) in each half of the core. The axial blanket consisted of 12 in. (304.8 mm) of depleted uranium behind the core. The thickness of the depleted uranium radial blanket was approximately 14 in. (355.6 mm), and the length of the radial blanket in each half of the matrix was 22 in. (558.8 mm). The assembly geometry approximated a right circular cylinder as closely as the square matrix tubes allowed. According to the logbook and loading records for ZPR-3/11, the reference critical configuration was loading 10, which was critical on January 21, 1958. Subsequent loadings were very similar but less clean for criticality because there were modifications made to accommodate reactor physics measurements other than criticality. Accordingly, ZPR-3/11 loading 10 was selected as the only configuration for this benchmark. As documented below, it was determined to be acceptable as a criticality safety benchmark experiment. A very accurate transformation to a simplified model is needed to make any ZPR assembly a practical criticality-safety benchmark. There is simply too much geometric detail in an exact (as-built) model of a ZPR assembly, even a clean core such as ZPR-3/11 loading 10. The transformation must reduce the detail to a practical level without masking any of the important features of the critical experiment. And it must do this without increasing the total uncertainty far beyond that of the original experiment. Such a transformation is described in Section 3. It was obtained using a pair of continuous-energy Monte Carlo calculations. First, the critical configuration was modeled in full detail - every plate, drawer, matrix tube, and air gap was modeled explicitly. Then the regionwise compositions and volumes from the detailed as-built model were used to construct a homogeneous, two-dimensional (RZ) model of ZPR-3/11 that conserved the mass of each nuclide and the volume of each region. The simple cylindrical model is the criticality-safety benchmark model. The difference in the calculated keff values between the as-built three-dimensional model and the homogeneous two-dimensional benchmark model was used to adjust the measured excess reactivity of ZPR-3/11 loading 10 to obtain the keff for the benchmark model.
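
    The closing adjustment step is simple arithmetic: the bias between the detailed 3-D model and the homogenized 2-D model is applied to the measured result. A toy Python version follows; every number below is an invented placeholder, not ZPR-3/11 data:

      # Model-simplification adjustment: carry the measured result over to the
      # simplified benchmark model via the calculated bias between models.
      rho_excess_measured = 0.00021                    # measured excess reactivity
      k_measured = 1.0 / (1.0 - rho_excess_measured)   # from rho = (k - 1) / k
      k_calc_3d_detailed = 1.00132                     # hypothetical calculations
      k_calc_2d_benchmark = 1.00098
      bias = k_calc_2d_benchmark - k_calc_3d_detailed
      k_benchmark = k_measured + bias
      print(f"benchmark keff = {k_benchmark:.5f}")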

  16. 17 CFR Appendix B to Part 36 - Guidance on, and Acceptable Practices in, Compliance With Core Principles

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS Pt. 36, App. B Appendix B to Part 36—Guidance on, and... contracts to prevent market manipulation, price distortion, and disruptions of the delivery or cash-settlement process through market surveillance, compliance and disciplinary practices and procedures...

  17. 17 CFR Appendix B to Part 36 - Guidance on, and Acceptable Practices in, Compliance With Core Principles

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS Pt. 36, App. B Appendix B to Part 36—Guidance on, and... contracts to prevent market manipulation, price distortion, and disruptions of the delivery or cash-settlement process through market surveillance, compliance and disciplinary practices and procedures...

  18. 17 CFR Appendix B to Part 36 - Guidance on, and Acceptable Practices in, Compliance With Core Principles

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... have clear procedures and guidelines for decision-making regarding emergency intervention in the market... COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS Pt. 36, App. B Appendix B to Part 36—Guidance on, and... trading in significant price discovery contracts to prevent market manipulation, price distortion, and...

  19. TREAT Transient Analysis Benchmarking for the HEU Core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kontogeorgakos, D. C.; Connaway, H. M.; Wright, A. E.

    2014-05-01

    This work was performed to support the feasibility study on the potential conversion of the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory from the use of high enriched uranium (HEU) fuel to the use of low enriched uranium (LEU) fuel. The analyses were performed by the GTRI Reactor Conversion staff at Argonne National Laboratory (ANL). The objective of this study was to benchmark the transient calculations against temperature-limited transients performed in the final operating HEU TREAT core configuration. The MCNP code was used to evaluate steady-state neutronics behavior, and the point kinetics code TREKIN was used to determine core power and energy during transients. The first part of the benchmarking process was to calculate with MCNP all the neutronic parameters required by TREKIN to simulate the transients: the transient rod-bank worth, the prompt neutron generation lifetime, the temperature reactivity feedback as a function of total core energy, and the core-average temperature and peak temperature as functions of total core energy. The results of these calculations were compared against measurements or against reported values as documented in the available TREAT reports. The heating of the fuel was simulated as an adiabatic process. The reported values were extracted from ANL reports, intra-laboratory memos and experiment logsheets, and in some cases it was not clear if the values were based on measurements, on calculations, or a combination of both. Therefore, it was decided to use the term "reported" values when referring to such data. The methods and results from the HEU core transient analyses will be used for the potential LEU core configurations to predict the converted (LEU) core's performance.
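
    The calculation the abstract attributes to TREKIN is point kinetics driven by a rod-bank reactivity insertion with energy-dependent (adiabatic) feedback. A toy one-delayed-group Python version is sketched below; TREKIN itself is not reproduced here, and every constant is illustrative rather than a TREAT value:

      # One-delayed-group point kinetics with adiabatic energy feedback (toy).
      beta, lam, gen_time = 0.0071, 0.08, 9.0e-4  # delayed fraction, decay const (1/s), Lambda (s)
      alpha = -1.0e-4                              # reactivity per unit energy (toy feedback)
      rho_rod = 0.015                              # step insertion from the rod bank

      def step(n, c, energy, dt):
          rho = rho_rod + alpha * energy           # feedback from accumulated energy
          dn = ((rho - beta) / gen_time) * n + lam * c
          dc = (beta / gen_time) * n - lam * c
          return n + dn * dt, c + dc * dt, energy + n * dt

      n, c, energy, dt = 1.0, beta / (gen_time * lam), 0.0, 1.0e-5
      peak = 0.0
      for _ in range(int(2.0 / dt)):               # 2 s of transient, explicit Euler
          n, c, energy = step(n, c, energy, dt)
          peak = max(peak, n)
      print(f"peak relative power {peak:.3g}, energy {energy:.3g} (arbitrary units)")

    The feedback term caps the excursion: power rises on the prompt timescale until deposited energy drives net reactivity back down, which is the self-limiting behavior of a temperature-limited transient.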

  20. 17 CFR Appendix B to Part 37 - Guidance on Compliance With Core Principles

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... compliance with, or satisfaction of, the core principles is not self-explanatory from the face of the... collected should be suitable for the type of information collected and should occur in a timely fashion. A...

  1. 17 CFR Appendix B to Part 37 - Guidance on Compliance With Core Principles

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... compliance with, or satisfaction of, the core principles is not self-explanatory from the face of the... collected should be suitable for the type of information collected and should occur in a timely fashion. A...

  2. 17 CFR Appendix B to Part 38 - Guidance on, and Acceptable Practices in, Compliance With Core Principles

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    .... The designated contract market must demonstrate that it is making a good-faith effort to resolve... decision-making and implementation of emergency intervention in the market. At a minimum, the DCM must have... COMMODITY FUTURES TRADING COMMISSION DESIGNATED CONTRACT MARKETS Pt. 38, App. B Appendix B to Part 38...

  3. HTR-PROTEUS Pebble Bed Experimental Program Cores 9 & 10: Columnar Hexagonal Point-on-Point Packing with a 1:1 Moderator-to-Fuel Pebble Ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Sterbentz, James W.; Snoj, Luka

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  4. 76 FR 54209 - Corrosion-Resistant Carbon Steel Flat Products From the Republic of Korea: Preliminary Results of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-31

    ... description of the merchandise is dispositive. Subsidies Valuation Information A. Benchmarks for Short-Term Financing For those programs requiring the application of a won-denominated, short-term interest rate... Issues and Decision Memorandum (CORE from Korea 2006 Decision Memorandum) at ``Benchmarks for Short-Term...

  5. A Programming Model Performance Study Using the NAS Parallel Benchmarks

    DOE PAGES

    Shan, Hongzhang; Blagojević, Filip; Min, Seung-Jai; ...

    2010-01-01

    Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models, MPI, OpenMP and PGAS, to understand their performance and memory usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool and other ways to measure communication versus computation time, as well as the fraction of the run time spent in OpenMP. The benchmarks are run on two different Cray XT5 systems and an InfiniBand cluster. Our results show that in general the three programming models exhibit very similar performance characteristics. In a few cases, OpenMP is significantly faster because it explicitly avoids communication. For these particular cases, we were able to re-write the UPC versions and achieve equal performance to OpenMP. Using OpenMP was also the most advantageous in terms of memory usage. Also we compare performance differences between the two Cray systems, which have quad-core and hex-core processors. We show that at scale the performance is almost always slower on the hex-core system because of increased contention for network resources.

  6. Modal analysis and acoustic transmission through offset-core honeycomb sandwich panels

    NASA Astrophysics Data System (ADS)

    Mathias, Adam Dustin

    The work presented in this thesis is motivated by earlier research which showed that double, offset-core honeycomb sandwich panels increased thermal resistance and, hence, decreased heat transfer through the panels. This result led to the hypothesis that these panels could be used for acoustic insulation. Using commercial finite element modeling software, COMSOL Multiphysics, the acoustical properties, specifically the transmission loss across a variety of offset-core honeycomb sandwich panels, are studied for the case of a plane acoustic wave impacting the panel at normal incidence. The transmission loss results are compared with those of single-core honeycomb panels with the same cell sizes. The fundamental frequencies of the panels are also computed in an attempt to better understand the vibrational modes of these particular sandwich-structured panels. To ensure that the finite element analysis software is adequate for the task at hand, two relevant benchmark problems are solved and compared with theory. Results from these benchmark problems compared well with theory. Transmission loss results from the offset-core honeycomb sandwich panels show increased transmission loss, especially for large-cell honeycombs, when compared to single-core honeycomb panels.
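
    Transmission loss in such studies is conventionally reported in decibels from the ratio of incident to transmitted acoustic power. A minimal Python illustration of that definition follows (the power values are invented, not results from the thesis):

      import math

      # Transmission loss in dB: TL = 10 * log10(P_incident / P_transmitted).
      def transmission_loss_db(power_incident, power_transmitted):
          return 10.0 * math.log10(power_incident / power_transmitted)

      print(f"TL = {transmission_loss_db(1.0, 1.0e-3):.1f} dB")  # 30.0 dB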

  7. Service profiling and outcomes benchmarking using the CORE-OM: toward practice-based evidence in the psychological therapies. Clinical Outcomes in Routine Evaluation-Outcome Measures.

    PubMed

    Barkham, M; Margison, F; Leach, C; Lucock, M; Mellor-Clark, J; Evans, C; Benson, L; Connell, J; Audin, K; McGrath, G

    2001-04-01

    To complement the evidence-based practice paradigm, the authors argued for a core outcome measure to provide practice-based evidence for the psychological therapies. Utility requires instruments that are acceptable scientifically, as well as to service users, and a coordinated implementation of the measure at a national level. The development of the Clinical Outcomes in Routine Evaluation-Outcome Measure (CORE-OM) is summarized. Data are presented across 39 secondary-care services (n = 2,710) and within an intensively evaluated single service (n = 1,455). Results suggest that the CORE-OM is a valid and reliable measure for multiple settings and is acceptable to users and clinicians as well as policy makers. Baseline data levels of patient presenting problem severity, including risk, are reported in addition to outcome benchmarks that use the concept of reliable and clinically significant change. Basic quality improvement in outcomes for a single service is considered.

  8. Summary of comparison and analysis of results from exercises 1 and 2 of the OECD PBMR coupled neutronics/thermal hydraulics transient benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkhabela, P.; Han, J.; Tyobeka, B.

    2006-07-01

    The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of its official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events, through a set of multi-dimensional computational test problems. The benchmark includes three steady state exercises and six transient exercises. This paper describes the first two steady state exercises, their objectives, and the international participation in terms of organization, country and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help to focus development efforts on the most critical areas. The first two exercises also allow for the removal of user-related modeling errors and prepare the core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark.

  9. How to Advance TPC Benchmarks with Dependability Aspects

    NASA Astrophysics Data System (ADS)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact on both the proceeds and the reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring recovery performance. While TPC-E measures the recovery time for some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  10. Experimental Criticality Benchmarks for SNAP 10A/2 Reactor Cores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krass, A.W.

    2005-12-19

    This report describes computational benchmark models for nuclear criticality derived from descriptions of the Systems for Nuclear Auxiliary Power (SNAP) Critical Assembly (SCA)-4B experimental criticality program conducted by Atomics International during the early 1960s. The selected experimental configurations consist of fueled SNAP 10A/2-type reactor cores subject to varied conditions of water immersion and reflection under experimental control to measure neutron multiplication. SNAP 10A/2-type reactor cores are compact volumes fueled and moderated with the hydride of highly enriched uranium-zirconium alloy. Specifications for the materials and geometry needed to describe a given experimental configuration for a model using MCNP5 are provided. The material and geometry specifications are adequate to permit user development of input for alternative nuclear safety codes, such as KENO. A total of 73 distinct experimental configurations are described.

  11. 17 CFR Appendix B to Part 37 - Guidance on Compliance With Core Principles

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ..., resources and authority to detect and deter abuses by effectively and affirmatively enforcing its rules... privileges but having no, or only nominal equity, in the facility and non-member market participants or, in... transparent to the member or market participant. Core Principle 3 of section 5a(d) of the Act: MONITORING OF...

  12. Deterministic Modeling of the High Temperature Test Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortensi, J.; Cogliati, J. J.; Pope, M. A.

    2010-06-01

    Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn) methods. A fine-group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full core solver used in this study and is based on the Green's function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and nodal diffusion solver codes. The results from this study show a consistent bias of 2-3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and ²³⁵U cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model the rod positions were fixed. In addition, this work includes a brief study of a cross section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.
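
    For reference, the isothermal temperature coefficient compared above is conventionally the temperature derivative of reactivity; reconstructed from the standard definition (this formula is not quoted in the abstract), it can be estimated from two eigenvalue calculations at temperatures T1 and T2:

      \alpha_{\mathrm{iso}} \approx \frac{k(T_2) - k(T_1)}{k(T_1)\,k(T_2)\,(T_2 - T_1)}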

  13. Benchmark gas core critical experiment.

    NASA Technical Reports Server (NTRS)

    Kunze, J. F.; Lofthouse, J. H.; Cooper, C. G.; Hyland, R. E.

    1972-01-01

    A critical experiment with spherical symmetry has been conducted on the gas core nuclear reactor concept. The nonspherical perturbations in the experiment were evaluated experimentally and produce corrections to the observed eigenvalue of approximately 1% delta k. The reactor consisted of a low density, central uranium hexafluoride gaseous core, surrounded by an annulus of void or low density hydrocarbon, which in turn was surrounded with a 97-cm-thick heavy water reflector.

  14. User and Performance Impacts from Franklin Upgrades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Yun

    2009-05-10

    The NERSC flagship computer, the Cray XT4 system "Franklin", has gone through three major upgrades during the past year: a quad-core upgrade, a CLE 2.1 upgrade, and an I/O upgrade. In this paper, we discuss various aspects of the user impacts of these upgrades, such as user access, user environment, and user issues. The performance impacts on kernel benchmarks and selected application benchmarks are also presented.

  15. New Multi-group Transport Neutronics (PHISICS) Capabilities for RELAP5-3D and its Application to Phase I of the OECD/NEA MHTGR-350 MW Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Cristian Rabiti; Andrea Alfonsi

    2012-10-01

    PHISICS is a neutronics code system currently under development at the Idaho National Laboratory (INL). Its goal is to provide state-of-the-art simulation capability to reactor designers. The different modules of PHISICS currently under development are a nodal and semi-structured transport core solver (INSTANT), a depletion module (MRTAU), and a cross section interpolation module (MIXER). The INSTANT module is the most developed of the modules mentioned above: its basic functionalities are ready to use, but the code is still under continuous development to extend its capabilities. This paper reports on the effort of coupling the nodal kinetics code package PHISICS (INSTANT/MRTAU/MIXER) to the thermal hydraulics system code RELAP5-3D, to enable full core and system modeling. This makes it possible to model coupled (thermal-hydraulics and neutronics) problems with more options for 3D neutron kinetics than the existing diffusion theory neutron kinetics module in RELAP5-3D (NESTLE) provides. In the second part of the paper, an overview of the OECD/NEA MHTGR-350 MW benchmark is given. This benchmark has been approved by the OECD and is based on the General Atomics 350 MW Modular High Temperature Gas Reactor (MHTGR) design. The benchmark includes coupled neutronics thermal hydraulics exercises that require more capabilities than RELAP5-3D with NESTLE offers. Therefore, the MHTGR benchmark makes extensive use of the new PHISICS/RELAP5-3D coupling capabilities. The paper presents the preliminary results of the three steady state exercises specified in Phase I of the benchmark using PHISICS/RELAP5-3D.
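
    A coupled steady-state calculation of the kind described here is typically a fixed-point (Picard) iteration between the two solvers. The sketch below shows the idea only; the solver callables, tolerances, and names are placeholders, not the PHISICS/RELAP5-3D API.

      def coupled_steady_state(neutronics_solve, th_solve, t_fuel0,
                               tol=1e-5, max_iters=50):
          # Alternate neutronics and thermal-hydraulics until temperatures converge.
          t_fuel = list(t_fuel0)
          for _ in range(max_iters):
              power = neutronics_solve(t_fuel)   # power from cross sections at t_fuel
              t_new = th_solve(power)            # temperatures from that power shape
              if max(abs(a - b) for a, b in zip(t_new, t_fuel)) < tol:
                  return power, t_new
              t_fuel = t_new
          raise RuntimeError("coupled iteration did not converge")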

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marck, Steven C. van der, E-mail: vandermarck@nrg.eu

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM) to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for ⁶Li, ⁷Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene, and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA, Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France), and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such instances can often be related to nuclear data for specific non-fissile elements, such as C, Fe, or Gd. Indications are that the intermediate and mixed spectrum cases are less well described. The results for the shielding benchmarks are generally good, with very similar results for the three libraries in the majority of cases. Nevertheless there are, in certain cases, strong deviations between calculated and benchmark values, such as for Co and Mg. Also, the results show discrepancies at certain energies or angles for e.g. C, N, O, Mo, and W. The functionality of MCNP6 to calculate the effective delayed neutron fraction yields very good results for all three libraries.
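
    One common way a Monte Carlo code can obtain the effective delayed neutron fraction is the prompt k-ratio method, reconstructed here for context (the abstract does not state which method MCNP6 implements): run one eigenvalue calculation with all neutrons (k) and one with prompt neutrons only (k_p), then

      \beta_{\mathrm{eff}} \approx 1 - \frac{k_p}{k}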

  17. MC21 analysis of the MIT PWR benchmark: Hot zero power results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly III, D. J.; Aviles, B. N.; Herman, B. R.

    2013-07-01

    MC21 Monte Carlo results have been compared with hot zero power measurements from an operating pressurized water reactor (PWR), as specified in a new full core PWR performance benchmark from the MIT Computational Reactor Physics Group. Included in the comparisons are axially integrated full core detector measurements, axial detector profiles, control rod bank worths, and temperature coefficients. Power depressions from grid spacers are seen clearly in the MC21 results. Application of Coarse Mesh Finite Difference (CMFD) acceleration within MC21 has been accomplished, resulting in a significant reduction in the number of inactive batches necessary to converge the fission source. CMFD acceleration has also been shown to work seamlessly with the Uniform Fission Site (UFS) variance reduction method. (authors)
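
    In CMFD acceleration, coarse-mesh flux and current tallies from the Monte Carlo solution define nonlinear correction factors that force a low-order diffusion system to reproduce the Monte Carlo currents; solving that system re-shapes the fission source between batches. The 1-D sketch below follows one common sign convention and is not MC21's implementation; all tallied values are illustrative.

      import numpy as np

      def cmfd_correction_factors(phi, J, D_tilde, h):
          # Enforce J = -D_tilde*(phi[i+1]-phi[i])/h + D_hat*(phi[i+1]+phi[i])
          # on each interior surface, so the coarse-mesh diffusion system
          # reproduces the tallied Monte Carlo net currents exactly.
          dphi = np.diff(phi)                    # flux differences across surfaces
          return (J + D_tilde * dphi / h) / (phi[1:] + phi[:-1])

      phi = np.array([1.00, 1.08, 1.02, 0.95])   # coarse-mesh flux tallies
      J = np.array([-0.012, 0.009, 0.011])       # tallied net surface currents
      print(cmfd_correction_factors(phi, J, D_tilde=np.full(3, 0.3), h=10.0))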

  18. Benchmark Evaluation of Fuel Effect and Material Worth Measurements for a Beryllium-Reflected Space Reactor Mockup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, Margaret A.; Bess, John D.

    2015-02-01

    The critical configurations of the small, compact critical assembly (SCCA) experiments performed at the Oak Ridge Critical Experiments Facility (ORCEF) in 1962-1965 have been evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The initial intent of these experiments was to support the design of the Medium Power Reactor Experiment (MPRE) program, whose purpose was to study “power plants for the production of electrical power in space vehicles.” The third configuration in this series of experiments was a beryllium-reflected assembly of stainless-steel-clad, highly enriched uranium (HEU)-O2 fuel, a mockup of a potassium-cooled space power reactor. Reactivity measurements, cadmium ratio spectral measurements, and fission rate measurements were performed through the core and top reflector. Fuel effect worth measurements and neutron moderating and absorbing material worths were also measured in the assembly fuel region. The cadmium ratios, fission rate, and worth measurements were evaluated for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The fuel tube effect and neutron moderating and absorbing material worth measurements are the focus of this paper. Additionally, a measurement of the worth of potassium filling the core region was performed but has not yet been evaluated. Pellets of 93.15 wt.% enriched uranium dioxide (UO2) were stacked in 30.48-cm-tall stainless steel fuel tubes (0.3-cm-tall end caps). Each fuel tube had 26 pellets with a total mass of 295.8 g UO2 per tube. 253 tubes were arranged in a 1.506-cm triangular lattice. An additional 7-tube cluster critical configuration was also measured but not used for any physics measurements. The core was surrounded on all sides by a beryllium reflector. The fuel effect worths were measured by removing fuel tubes at various radii. An accident scenario was also simulated by moving twenty fuel rods outward from the periphery of the core so that they touched the core tank. The change in system reactivity when the fuel tube(s) were removed or moved, compared with the base configuration, gave the worth of the fuel tubes or of the accident scenario. The worth of neutron absorbing and moderating materials was measured by inserting material rods into the core at regular intervals or placing lids at the top of the core tank. Stainless steel 347, tungsten, niobium, polyethylene, graphite, boron carbide, aluminum, and cadmium rod and/or lid worths were all measured. The change in system reactivity when a material was inserted into the core is the worth of that material.
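
    The worths reported above are, in the standard convention (reconstructed here; not notation from the report), reactivity differences between the perturbed and reference configurations:

      \Delta\rho = \frac{k_{\mathrm{pert}} - 1}{k_{\mathrm{pert}}} - \frac{k_{\mathrm{ref}} - 1}{k_{\mathrm{ref}}} = \frac{k_{\mathrm{pert}} - k_{\mathrm{ref}}}{k_{\mathrm{pert}}\,k_{\mathrm{ref}}}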

  19. Benchmarking high performance computing architectures with CMS’ skeleton framework

    NASA Astrophysics Data System (ADS)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Thread Building Block library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  20. Benchmarking NWP Kernels on Multi- and Many-core Processors

    NASA Astrophysics Data System (ADS)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
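
    One standard framing for goal (1), characterizing kernels by computational intensity and memory bandwidth pressure, is the roofline model (the abstract does not name it; this sketch and its numbers are illustrative only).

      def roofline_attainable_gflops(ai, peak_gflops, mem_bw_gbs):
          # Attainable performance is capped by either the compute peak or the
          # memory ceiling (arithmetic intensity, flops/byte, times bandwidth).
          return min(peak_gflops, ai * mem_bw_gbs)

      # e.g., a stencil kernel at 0.5 flops/byte on a 100 GFLOP/s, 25 GB/s part
      print(roofline_attainable_gflops(0.5, 100.0, 25.0))  # -> 12.5, memory-bound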

  1. featsel: A framework for benchmarking of feature selection algorithms and cost functions

    NASA Astrophysics Data System (ADS)

    Reis, Marcelo S.; Estrela, Gustavo; Ferreira, Carlos Eduardo; Barrera, Junior

    In this paper, we introduce featsel, a framework for benchmarking feature selection algorithms and cost functions. This framework allows the user to treat the search space as a Boolean lattice and has its core coded in C++ for computational efficiency. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs, and organize results into tables. The framework also comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.
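
    Treating the search space as a Boolean lattice means every subset of features is a lattice node ordered by inclusion; exhaustive search walks all 2^n nodes. The generic sketch below illustrates the idea in Python and is not featsel's C++ API.

      from itertools import combinations

      def exhaustive_search(n_features, cost):
          # Visit every node of the Boolean lattice of feature subsets and
          # return the subset minimizing the user-supplied cost function.
          best_subset, best_cost = frozenset(), float("inf")
          for k in range(n_features + 1):
              for subset in combinations(range(n_features), k):
                  c = cost(frozenset(subset))
                  if c < best_cost:
                      best_subset, best_cost = frozenset(subset), c
          return best_subset, best_cost

      # Toy cost: distance from an "ideal" subset {0, 2}
      print(exhaustive_search(4, lambda s: len(s ^ {0, 2})))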

  2. Knowledge and Practices of Faculty at NASM Accredited Institutions in the Southeast Region Regarding Standards-Based Instruction

    ERIC Educational Resources Information Center

    Nelson, Jonathan Leon

    2017-01-01

    In 1993, Congress passed the mandate "Goals 2000: Educate America Act," which established standards for K-12 education that outlined the core benchmarks of student achievement for individuals who have mastered the core curricula required to earn a high school diploma (Mark, 1995). Unfortunately, these curricular requirements did not…

  3. Benchmarking and Accreditation Goals Support the Value of an Undergraduate Business Law Core Course

    ERIC Educational Resources Information Center

    O'Brien, Christine Neylon; Powers, Richard E.; Wesner, Thomas L.

    2018-01-01

    This article provides information about the value of a core course in business law and why it remains essential to business education. It goes on to identify highly ranked undergraduate business programs that require one or more business law courses. Using "Business Week" and "US News and World Report" to identify top…

  4. Research-Based Writing Practices and the Common Core: Meta-Analysis and Meta-Synthesis

    ERIC Educational Resources Information Center

    Graham, Steve; Harris, Karen R.; Santangelo, Tanya

    2015-01-01

    In order to meet writing objectives specified in the Common Core State Standards (CCSS), many teachers need to make significant changes in how writing is taught. While CCSS identified what students need to master, it did not provide guidance on how teachers are to meet these writing benchmarks. The current article presents research-supported…

  5. Elastic-Plastic J-Integral Solutions or Surface Cracks in Tension Using an Interpolation Methodology. Appendix C -- Finite Element Models Solution Database File, Appendix D -- Benchmark Finite Element Models Solution Database File

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    No closed form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1; depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/σys ≤ 1,000; hardening: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
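
    The interpolation methodology can be pictured as table lookup on a 4-D grid of the nondimensional parameters. A minimal sketch, assuming a regular grid and stand-in values (the actual 600-model database and its interpolation scheme are in the appendices):

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      # Illustrative grid points spanning the report's parameter ranges
      a_c  = np.array([0.2, 0.6, 1.0])          # crack shape a/c
      a_B  = np.array([0.2, 0.5, 0.8])          # crack depth a/B
      E_ys = np.array([100.0, 500.0, 1000.0])   # modulus-to-yield ratio
      n    = np.array([3.0, 10.0, 20.0])        # hardening exponent

      # Random stand-in for tabulated J-integral solutions on the 4-D grid
      J = np.random.rand(len(a_c), len(a_B), len(E_ys), len(n))

      interp = RegularGridInterpolator((a_c, a_B, E_ys, n), J)
      print(interp([[0.4, 0.3, 250.0, 7.0]]))   # J estimate between tabulated models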

  6. Kohn-Sham Band Structure Benchmark Including Spin-Orbit Coupling for 2D and 3D Solids

    NASA Astrophysics Data System (ADS)

    Huhn, William; Blum, Volker

    2015-03-01

    Accurate electronic band structures serve as a primary indicator of the suitability of a material for a given application, e.g., as electronic or catalytic materials. Computed band structures, however, are subject to a host of approximations, some of which are more obvious (e.g., the treatment of exchange-correlation or of the self-energy) and others less obvious (e.g., the treatment of core, semicore, or valence electrons, the handling of relativistic effects, or the accuracy of the underlying basis set used). We here provide a set of accurate Kohn-Sham band structure benchmarks, using the numeric atom-centered all-electron electronic structure code FHI-aims combined with the "traditional" PBE functional and the hybrid HSE functional, to calculate core, valence, and low-lying conduction bands of a set of 2D and 3D materials. Benchmarks are provided with and without effects of spin-orbit coupling, using quasi-degenerate perturbation theory to predict spin-orbit splittings. This work is funded by Fritz-Haber-Institut der Max-Planck-Gesellschaft.

  7. The X40×10 Halogen Bonding Benchmark Revisited: Surprising Importance of (n-1)d Subvalence Correlation.

    PubMed

    Kesharwani, Manoj K; Manna, Debashree; Sylvetsky, Nitai; Martin, Jan M L

    2018-03-01

    We have re-evaluated the X40×10 benchmark for halogen bonding using conventional and explicitly correlated coupled cluster methods. For the aromatic dimers at small separation, improved CCSD(T)-MP2 "high-level corrections" (HLCs) cause substantial reductions in the dissociation energy. For the bromine and iodine species, (n-1)d subvalence correlation increases dissociation energies and turns out to be more important for noncovalent interactions than is generally realized; (n-1)sp subvalence correlation is much less important. The (n-1)d subvalence term is dominated by core-valence correlation; with the smaller cc-pVDZ-F12-PP and cc-pVTZ-F12-PP basis sets, basis set convergence for the core-core contribution becomes sufficiently erratic that it may compromise results overall. The two factors conspire to generate discrepancies of up to 0.9 kcal/mol (0.16 kcal/mol RMS) between the original X40×10 data and the present revision.
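
    The (n-1)d subvalence correlation term discussed above is, in the usual convention (reconstructed; not notation from the paper), the difference between correlation contributions computed with the (n-1)d shell correlated and frozen:

      \Delta E_{(n-1)d} = E^{\,\mathrm{valence}+(n-1)d}_{\mathrm{corr}} - E^{\,\mathrm{valence\ only}}_{\mathrm{corr}}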

  8. HTR-PROTEUS Pebble Bed Experimental Program Cores 1, 1A, 2, and 3: Hexagonal Close Packing with a 1:2 Moderator-to-Fuel Pebble Ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; Barbara H. Dolphin; James W. Sterbentz

    2013-03-01

    In its deployment as a pebble bed reactor (PBR) critical facility from 1992 to 1996, the PROTEUS facility was designated as HTR-PROTEUS. This experimental program was performed as part of an International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) on the Validation of Safety Related Physics Calculations for Low Enriched HTGRs. Within this project, critical experiments were conducted for graphite moderated LEU systems to determine core reactivity, flux and power profiles, reaction-rate ratios, the worth of control rods, both in-core and reflector based, the worth of burnable poisons, kinetic parameters, and the effects of moisture ingress on these parameters. Four benchmark experiments were evaluated in this report: Cores 1, 1A, 2, and 3. These core configurations represent the hexagonal close packing (HCP) configurations of the HTR-PROTEUS experiment with a moderator-to-fuel pebble ratio of 1:2. Core 1 represents the only configuration utilizing ZEBRA control rods. Cores 1A, 2, and 3 use withdrawable, hollow, stainless steel control rods. Cores 1 and 1A are similar except for the use of different control rods; Core 1A also has one less layer of pebbles (21 layers instead of 22). Core 2 retains the first 16 layers of pebbles from Cores 1 and 1A and has 16 layers of moderator pebbles stacked above the fueled layers. Core 3 retains the first 17 layers of pebbles but has polyethylene rods inserted between pebbles to simulate water ingress. The additional partial pebble layer (layer 18) for Core 3 was not included as it was used for core operations and not the reported critical configuration. Cores 1, 1A, 2, and 3 were determined to be acceptable benchmark experiments.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the biota. While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, red-tailed hawk, and osprey) (scientific names for both the mammalian and avian species are presented in Appendix B). [In this document, NOAEL refers to both dose (mg contaminant per kg animal body weight per day) and concentration (mg contaminant per kg of food or L of drinking water).] The 20 wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at U.S. Department of Energy (DOE) waste sites. The NOAEL-based benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species; LOAEL-based benchmarks represent threshold levels at which adverse effects are likely to become evident. These benchmarks consider contaminant exposure through oral ingestion of contaminated media only; exposure through inhalation or direct dermal contact is not considered in this report.
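
    Benchmarks of this kind are commonly derived by scaling a laboratory test species' NOAEL to each wildlife species by body weight. The sketch below shows that general pattern only; the scaling exponent and the numbers are assumptions for illustration, not the report's documented derivation.

      def wildlife_noael(test_noael, bw_test_kg, bw_wildlife_kg, exponent=0.25):
          # Cross-species body-weight scaling of a NOAEL in mg/kg-bw/day.
          return test_noael * (bw_test_kg / bw_wildlife_kg) ** exponent

      # e.g., scale a 10 mg/kg/d rat NOAEL (0.35 kg) to a 0.017 kg shrew
      print(wildlife_noael(10.0, 0.35, 0.017))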

  11. Benchmarking high performance computing architectures with CMS’ skeleton framework

    DOE PAGES

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-11-23

    Here, in 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Thread Building Block library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  13. In-Situ Sampling and Characterization of Naturally Occurring Marine Methane Hydrate Using the D/V JOIDES Resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rack, Frank; Storms, Michael; Schroeder, Derryl

    The primary accomplishments of the JOI Cooperative Agreement with DOE/NETL in this quarter were (1) the preliminary postcruise evaluation of the tools and measurement systems that were used during ODP Leg 204 to study hydrate deposits on Hydrate Ridge, offshore Oregon, from July through September 2002; and (2) the preliminary study of the hydrate-bearing core samples preserved in pressure vessels and in liquid nitrogen cryofreezers, which are now stored at the ODP Gulf Coast Repository in College Station, TX. During ODP Leg 204, several newly modified downhole tools were deployed to better characterize the subsurface lithologies and environments hosting microbial populations and gas hydrates. A preliminary review of the use of these tools is provided herein. The DVTP, DVTP-P, APC-methane, and APC-Temperature tools (ODP memory tools) were used extensively and successfully during ODP Leg 204 aboard the D/V JOIDES Resolution. These systems provided a strong operational capability for characterizing the in situ properties of methane hydrates in subsurface environments on Hydrate Ridge during ODP Leg 204. Pressure was also measured during a trial run of the Fugro piezoprobe, which operates on similar principles as the DVTP-P. The final report describing the deployments of the Fugro piezoprobe is provided in Appendix A of this report. A preliminary analysis and comparison between the piezoprobe and DVTP-P tools is provided in Appendix B of this report. Finally, a series of additional holes were cored at the crest of Hydrate Ridge (Site 1249) specifically geared toward the rapid recovery and preservation of hydrate samples as part of a hydrate geriatric study partially funded by the Department of Energy (DOE). In addition, the preliminary results from gamma density non-invasive imaging of the cores preserved in pressure vessels are provided in Appendix C of this report. An initial visual inspection of the samples stored in liquid nitrogen is provided in Appendix D of this report.

  14. Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems

    DTIC Science & Technology

    2017-04-13

    Several applications were ported to OmpSs: a basic algorithm on image processing applications, a mini application representative of an ocean modelling code, a parallel benchmark, and a communication-avoiding version of the QR algorithm. Further, several improvements to the OmpSs model were made, including work related to data movement and a port of the dynamic load balancing library to OmpSs. Finally, several updates to the tools infrastructure were accomplished.

  15. Three-dimensional pin-to-pin analyses of VVER-440 cores by the MOBY-DICK code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehmann, M.; Mikolas, P.

    1994-12-31

    Nuclear design for the Dukovany (EDU) nuclear power plant's VVER-440 units is routinely performed with the MOBY-DICK system. Following its implementation on Hewlett Packard series 700 workstations, the system is able to routinely perform three-dimensional pin-to-pin core analyses. For purposes of code validation, a benchmark prepared from EDU operational data was solved.

  16. Let History Not Repeat Itself: Overcoming Obstacles to the Common Core's Success. ES Select

    ERIC Educational Resources Information Center

    Chubb, John

    2012-01-01

    The Common Core State Standards project is the latest in a series of efforts to improve the academic success of American students. Forty-five states and the District of Columbia have endorsed new academic benchmarks that substantially raise the bar for achievement in English and mathematics. Aiming at a deeper form of learning, the initiative is a…

  17. Performance implications from sizing a VM on multi-core systems: A data analytic application's view

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Horey, James L; Begoli, Edmon

    In this paper, we present a quantitative performance analysis of data analytics applications running on multi-core virtual machines. Such environments form the core of cloud computing. In addition, data analytics applications, such as Cassandra and Hadoop, are becoming increasingly popular on cloud computing platforms. This convergence necessitates a better understanding of the performance and cost implications of such hybrid systems. For example, the very first step in hosting applications in virtualized environments requires the user to configure the number of virtual processors and the size of memory. To understand the performance implications of this step, we benchmarked three Yahoo Cloud Serving Benchmark (YCSB) workloads in a virtualized multi-core environment. Our measurements indicate that the performance of Cassandra for YCSB workloads does not heavily depend on the processing capacity of a system, while the size of the data set is critical to performance relative to allocated memory. We also identified a strong relationship between the running time of workloads and various hardware events (last-level cache loads, misses, and CPU migrations). From this analysis, we provide several suggestions to improve the performance of data analytics applications running on cloud computing environments.
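
    The reported relationship between running time and hardware events can be quantified with a simple correlation across runs. The sketch below uses hypothetical measurements, not the paper's data.

      import numpy as np

      # Hypothetical per-run measurements: runtime (s) vs last-level-cache misses
      runtimes = np.array([612.0, 640.0, 705.0, 798.0, 845.0])
      llc_misses = np.array([1.1e9, 1.3e9, 1.8e9, 2.6e9, 3.0e9])

      # Pearson correlation as one simple measure of the relationship
      r = np.corrcoef(runtimes, llc_misses)[0, 1]
      print(f"runtime vs LLC misses: r = {r:.3f}")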

  18. Evaluation of concrete pavements with materials-related distress : appendix G.

    DOT National Transportation Integrated Search

    2010-03-02

    An evaluation of cores sampled from six concrete pavements was performed. Factors contributing to pavement distress observed in the field were determined, including expansive alkali-silica reactivity and freeze-thaw deterioration related to poor entr...

  19. 17 CFR Appendix B to Part 37 - Guidance on, and Acceptable Practices in, Compliance with Core Principles

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... should include full customer restitution where customer harm is demonstrated, except where the amount of... or external audit findings, self-reported errors, or through validated complaints. (C) Requirements...

  20. Evaluation of concrete pavements with materials-related distress : appendix F.

    DOT National Transportation Integrated Search

    2010-03-02

    An evaluation of cores sampled from six concrete pavements was performed. Factors contributing to pavement distress observed in the field were determined, including expansive alkali-silica reactivity and freeze-thaw deterioration related to poor entr...

  1. Evaluation of concrete pavements with materials-related distress : appendix E.

    DOT National Transportation Integrated Search

    2010-03-02

    An evaluation of cores sampled from six concrete pavements was performed. Factors contributing to pavement distress observed in the field were determined, including expansive alkali-silica reactivity and freeze-thaw deterioration related to poor entr...

  2. Evaluation of concrete pavements with materials-related distress : appendix D.

    DOT National Transportation Integrated Search

    2010-03-02

    An evaluation of cores sampled from six concrete pavements was performed. Factors contributing to pavement distress observed in the field were determined, including expansive alkali-silica reactivity and freeze-thaw deterioration related to poor entr...

  3. Evaluation of concrete pavements with materials-related distress : appendix B.

    DOT National Transportation Integrated Search

    2010-02-02

    An evaluation of cores sampled from six concrete pavements was performed. Factors contributing to pavement distress observed in the field were determined, including expansive alkali-silica reactivity and freeze-thaw deterioration related to poor entr...

  4. Evaluation of concrete pavements with materials-related distress : appendix C.

    DOT National Transportation Integrated Search

    2010-03-02

    An evaluation of cores sampled from six concrete pavements was performed. Factors contributing to pavement distress observed in the field were determined, including expansive alkali-silica reactivity and freeze-thaw deterioration related to poor entr...

  5. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part I: Benchmark comparisons of WIMS-D5 and DRAGON cell and control rod parameters with MCNP5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mollerach, R.; Leszczynski, F.; Fink, J.

    2006-07-01

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the preceding ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. For the reactor physics area, a revision and update of calculation methods and models was recently carried out, covering cell, supercell (control rod), and core calculations. As a validation of the new models, some benchmark comparisons were made against Monte Carlo calculations with MCNP5. This paper presents comparisons of cell and supercell benchmark problems, based on a slightly idealized model of the Atucha-I core, obtained with the WIMS-D5 and DRAGON codes against MCNP5 results. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II. Cell parameters compared include cell k-infinity, relative power levels of the different rings of fuel rods, and some two-group macroscopic cross sections. Supercell comparisons include supercell k-infinity changes due to the control rods (tubes) of steel and hafnium. (authors)
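
    Code-to-code eigenvalue comparisons of this kind are often expressed as a reactivity difference in pcm; a minimal sketch using the standard rho = (k - 1)/k convention (the specific numbers are illustrative, not results from the paper):

      def reactivity_diff_pcm(k_ref, k_test):
          # rho_test - rho_ref = 1/k_ref - 1/k_test, expressed in pcm
          return 1.0e5 * (1.0 / k_ref - 1.0 / k_test)

      print(reactivity_diff_pcm(1.10500, 1.10231))  # ~ -221 pcm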

  6. It's Not Education by Zip Code Anymore--But What is It? Conceptions of Equity under the Common Core

    ERIC Educational Resources Information Center

    Kornhaber, Mindy L.; Griffith, Kelly; Tyler, Alison

    2014-01-01

    The Common Core State Standards Initiative is a standards-based reform in which 45 U.S. states and the District of Columbia have agreed to participate. The reform seeks to anchor primary and secondary education across these states in one set of demanding, internationally benchmarked standards. Thereby, all students will be prepared for further…

  7. Identity Activities

    DTIC Science & Technology

    2016-08-03

    Garbled excerpts from an appendix on identity attributes; recoverable content: what an individual or organization knows or says about another individual; core personal data (addresses, employment, education, military service); behavioral characteristics (rhythm, handwriting, type/keyboard pattern, posture/bearing, gait/limp, gestures); and financial transactions.

  8. Evaluation of concrete pavements with materials-related distress : appendix A, part 1.

    DOT National Transportation Integrated Search

    2010-03-02

    An evaluation of cores sampled from six concrete pavements was performed. Factors contributing to pavement distress observed in the field were determined, including expansive alkali-silica reactivity and freeze-thaw deterioration related to poor entr...

  9. Evaluation of concrete pavements with materials-related distress : appendix A, part 3.

    DOT National Transportation Integrated Search

    2010-03-02

    An evaluation of cores sampled from six concrete pavements was performed. Factors contributing to pavement distress observed in the field were determined, including expansive alkali-silica reactivity and freeze-thaw deterioration related to poor entr...

  10. Evaluation of concrete pavements with materials-related distress : appendix A, part 2.

    DOT National Transportation Integrated Search

    2010-03-02

    An evaluation of cores sampled from six concrete pavements was performed. Factors contributing to pavement distress observed in the field were determined, including expansive alkali-silica reactivity and freeze-thaw deterioration related to poor entr...

  11. 17 CFR Appendix A to Part 39 - Application Guidance and Compliance With Core Principles

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... management tools such as stress testing and value at risk calculations; and c. What contingency plans the... informal, which the clearing organization views as appropriate and applicable to its operations. 2. How...

  12. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background: Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results: We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. Conclusion: We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
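
    As an illustration of the "no programming needed" idea, metrics can be computed directly over RDF annotations with SPARQL. The sketch below builds a toy graph with rdflib and counts matches; the schema (namespace, class, and property names) is hypothetical, not the project's published ontology.

      from rdflib import Graph, Literal, Namespace, RDF

      EX = Namespace("http://example.org/anno#")  # hypothetical schema
      g = Graph()
      g.add((EX.a1, RDF.type, EX.MutationMention))
      g.add((EX.a1, EX.matchesGold, Literal(True)))
      g.add((EX.a2, RDF.type, EX.MutationMention))
      g.add((EX.a2, EX.matchesGold, Literal(False)))

      q = """PREFIX ex: <http://example.org/anno#>
             SELECT (COUNT(?m) AS ?n) WHERE { ?m a ex:MutationMention %s }"""
      total = int(next(iter(g.query(q % "")))[0])
      tp = int(next(iter(g.query(q % "; ex:matchesGold true")))[0])
      print("precision =", tp / total)  # 0.5 for this toy graph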

  13. Benchmarking infrastructure for mutation text mining.

    PubMed

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.

  14. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Javier Ortensi; Sonat Sen

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn, and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solutions for Exercise 3 are reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of simulation problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results of all other international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.

  15. Initial Performance Results on IBM POWER6

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Talcott, Dale; Jespersen, Dennis; Djomehri, Jahed; Jin, Haoqiang; Mehrotra, Piyush

    2008-01-01

    The POWER5+ processor has a faster memory bus than that of the previous-generation POWER5 processor (533 MHz vs. 400 MHz), but the measured per-core memory bandwidth of the latter is better than that of the former (5.7 GB/s vs. 4.3 GB/s). The reason for this is that in the POWER5+, the two cores on the chip share the L2 cache, L3 cache, and memory bus. The memory controller is also on the chip and is shared by the two cores. This serializes the path to memory. For consistently good performance on a wide range of applications, the performance of the processor, the memory subsystem, and the interconnects (both latency and bandwidth) should be balanced. Recognizing this, IBM designed the POWER6 processor to avoid the bottlenecks due to the L2 cache, memory controller, and buffer chips of the POWER5+. Unlike the POWER5+, each core in the POWER6 has its own L2 cache (4 MB, double that of the POWER5+), memory controller, and buffer chips. Each core in the POWER6 runs at 4.7 GHz instead of 1.9 GHz in the POWER5+. In this paper, we evaluate the performance of a dual-core POWER6-based IBM p6-570 system, and we compare its performance with that of a dual-core POWER5+-based IBM p575+ system. In this evaluation, we have used the High-Performance Computing Challenge (HPCC) benchmarks, NAS Parallel Benchmarks (NPB), and four real-world applications: three from computational fluid dynamics and one from climate modeling.

  16. Teaching Core Courses with a Focus on Rural Health. An Instructor Resource Guide. Appendix to a Final Report on the Paraprofessional Rurally Oriented Family Home Health Training Program.

    ERIC Educational Resources Information Center

    Myer, Donna Foster, Ed.

    This instructor's resource guide, one in a series of products from a project to develop an associate degree program for paraprofessional rural family health promoters, deals with teaching courses that focus on rural health. Discussed in the first section of the guide are the role of core courses in rural health promotional training and the…

  17. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse, and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10³ cores. Profiling of the benchmark problems indicates that the most substantial computational time is spent on particle-particle force calculations, drag force calculations, and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
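
    Weak scaling holds the per-core problem size fixed as cores are added, so ideal runtime stays flat; efficiency at N cores is the baseline runtime divided by the runtime at N. A minimal sketch with hypothetical timings (not MFiX's measured data):

      def weak_scaling_efficiency(t_base, timings):
          # Efficiency relative to the baseline run at fixed per-core load
          return {n: t_base / t for n, t in timings.items()}

      print(weak_scaling_efficiency(100.0, {8: 104.0, 64: 113.0, 512: 131.0, 1024: 166.0}))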

  18. DE-NE0008277_PROTEUS final technical report 2018

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enqvist, Andreas

    This project details re-evaluations of experiments on gas-cooled fast reactor (GCFR) core designs performed in the 1970s at the PROTEUS reactor and creates a series of International Reactor Physics Experiment Evaluation Project (IRPhEP) benchmarks. Currently there are no gas-cooled fast reactor experiments available in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). These experiments are excellent candidates for reanalysis and the development of multiple benchmarks because they provide high-quality integral nuclear data relevant to the validation and refinement of thorium, neptunium, uranium, plutonium, iron, and graphite cross sections. It would be cost prohibitive to reproduce such a comprehensive suite of experimental data to support any future GCFR endeavors.

  19. Theoretical Background and Prognostic Modeling for Benchmarking SHM Sensors for Composite Structures

    DTIC Science & Technology

    2010-10-01

    Whether it be hat-stiffened, corrugated sandwich, honeycomb sandwich, or foam-filled sandwich, all composite structures have one basic handicap in ... The study examines what minimum flaw size can be detected by existing SHM-based monitoring methods. Sandwich panels with foam, WebCore, and honeycomb structures were considered for use in this study. Eigenmode frequency ...

  20. Description and Evaluation of the Cultural Resources within Mathews Canyon and Pine Canyon, Lincoln County, Nevada. Cultural Resources Report. Appendix,

    DTIC Science & Technology

    1977-09-30

    U cm. (Fire cracked rock, charcoal). 28. Burials 29. Artifacts White chert scraper, obsidian biface; broken tool blanks. Flakes: obsidian, core...mostly obsidian; 1 red chert. 30. Remarks Deer tracks & trail; horse manure; rabbit. 31. Published references 32. Accession No. __________ 33. Sketch map...Burials 29. Artifacts Dozens of flakes: chert, obsidian, chalcedony, basalt chert in various colors; obsidian core, red chert biface, obsidian drill

  1. Precursors to potential severe core damage accidents: 1994, a status report. Volume 22: Appendix I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belles, R.J.; Cletcher, J.W.; Copinger, D.A.

    Nine operational events that affected eleven commercial light-water reactors (LWRs) during 1994 and that are considered to be precursors to potential severe core damage are described. All these events had conditional probabilities of subsequent severe core damage greater than or equal to 1.0 × 10⁻⁶. These events were identified by computer-screening the 1994 licensee event reports from commercial LWRs to identify those that could be potential precursors. Candidate precursors were then selected and evaluated in a process similar to that used in previous assessments. Selected events underwent engineering evaluation that identified, analyzed, and documented the precursors. Other events designated by the Nuclear Regulatory Commission (NRC) also underwent a similar evaluation. Finally, documented precursors were submitted for review by licensees and NRC headquarters and regional offices to ensure that the plant design and its response to the precursor were correctly characterized. This study is a continuation of earlier work, which evaluated 1969–1981 and 1984–1993 events. The report discusses the general rationale for this study, the selection and documentation of events as precursors, and the estimation of conditional probabilities of subsequent severe core damage for events. This document is bound in two volumes: Vol. 21 contains the main report and Appendices A–H; Vol. 22 contains Appendix I.
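
    The screening step described above lends itself to a one-line filter. The sketch below is purely illustrative of the 1.0 × 10⁻⁶ conditional-probability cut; the event records and field names are invented, not the NRC's actual data format.

    ```python
    # Illustrative sketch (not the actual screening code): keep candidate
    # precursors whose conditional core damage probability (CCDP) meets
    # the 1.0e-6 threshold described in the report.

    events = [
        {"plant": "Plant A", "ccdp": 3.2e-6},   # hypothetical event records
        {"plant": "Plant B", "ccdp": 4.0e-7},
        {"plant": "Plant C", "ccdp": 1.0e-6},
    ]

    precursors = [e for e in events if e["ccdp"] >= 1.0e-6]
    print(precursors)  # Plant A and Plant C qualify
    ```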

  2. VHSIC Hardware Description Language (VHDL) Benchmark Suite

    DTIC Science & Technology

    1990-10-01

    [Garbled OCR fragment from Appendix B, "Test Descriptions, Shell Code": test labels for signal, architecture, block, port, and variable constructs, access operations (secs. 3.3 and 7.3.6), and file I/O read/write tests.]

  3. Benchmark tests of JENDL-3.2 for thermal and fast reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takano, Hideki; Akie, Hiroshi; Kikuchi, Yasuyuki

    1994-12-31

    Benchmark calculations for a variety of thermal and fast reactors have been performed by using the newly evaluated JENDL-3 Version-2 (JENDL-3.2) file. In the thermal reactor calculations for the uranium and plutonium fueled cores of TRX and TCA, the keff and lattice parameters were well predicted. The fast reactor calculations for ZPPR-9 and FCA assemblies showed that the keff, the reactivity worths of Doppler, sodium void, and control rod, and the reaction rate distribution were in very good agreement with the experiments.

  4. Effectiveness of Social Marketing Interventions to Promote Physical Activity Among Adults: A Systematic Review.

    PubMed

    Xia, Yuan; Deshpande, Sameer; Bonates, Tiberius

    2016-11-01

    Social marketing managers promote desired behaviors to an audience by making them tangible in the form of environmental opportunities to enhance benefits and reduce barriers. This study proposed "benchmarks," modified from those found in the past literature, that would match important concepts of the social marketing framework and the inclusion of which would ensure behavior change effectiveness. In addition, we analyzed behavior change interventions on a "social marketing continuum" to assess whether the number of benchmarks and the role of specific benchmarks influence the effectiveness of physical activity promotion efforts. A systematic review of social marketing interventions available in academic studies published between 1997 and 2013 revealed 173 conditions in 92 interventions. Findings based on χ², Mallows' Cp, and Logical Analysis of Data tests revealed that the presence of more benchmarks in interventions increased the likelihood of success in promoting physical activity. The presence of more than 3 benchmarks improved the success of the interventions; specifically, all interventions were successful when more than 7.5 benchmarks were present. Further, primary formative research, core product, actual product, augmented product, promotion, and behavioral competition all had a significant influence on the effectiveness of interventions. Social marketing is an effective approach in promoting physical activity among adults when a substantial number of benchmarks are used and when managers understand the audience, make the desired behavior tangible, and promote the desired behavior persuasively.

  5. HTR-PROTEUS PEBBLE BED EXPERIMENTAL PROGRAM CORE 4: RANDOM PACKING WITH A 1:1 MODERATOR-TO-FUEL PEBBLE RATIO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; Leland M. Montierth

    2013-03-01

    In its deployment as a pebble bed reactor (PBR) critical facility from 1992 to 1996, the PROTEUS facility was designated as HTR-PROTEUS. This experimental program was performed as part of an International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) on the Validation of Safety Related Physics Calculations for Low Enriched HTGRs. Within this project, critical experiments were conducted for graphite moderated LEU systems to determine core reactivity, flux and power profiles, reaction-rate ratios, the worth of control rods, both in-core and reflector based, the worth of burnable poisons, kinetic parameters, and the effects of moisture ingress on these parameters. One benchmark experiment was evaluated in this report: Core 4. Core 4 represents the only configuration with random pebble packing in the HTR-PROTEUS series of experiments, and has a moderator-to-fuel pebble ratio of 1:1. Three random configurations were performed. The initial configuration, Core 4.1, was rejected by the experimenters because the method for pebble loading, separate delivery tubes for the moderator and fuel pebbles, may not have been completely random. Cores 4.2 and 4.3 were loaded using a single delivery tube, eliminating the possibility for systematic ordering effects. The second and third cores differed slightly in the quantity of pebbles loaded (40 each of moderator and fuel pebbles), stacked height of the pebbles in the core cavity (0.02 m), withdrawn distance of the stainless steel control rods (20 mm), and withdrawn distance of the autorod (30 mm). The 34 coolant channels in the upper axial reflector and the 33 coolant channels in the lower axial reflector were open. Additionally, the axial graphite fillers used in all other HTR-PROTEUS configurations to create a 12-sided core cavity were not used in the randomly packed cores. Instead, graphite fillers were placed on the cavity floor, creating a funnel-like base, to discourage ordering effects during pebble loading. Core 4 was determined to be an acceptable benchmark experiment.

  6. HTR-proteus pebble bed experimental program core 4: random packing with a 1:1 moderator-to-fuel pebble ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Montierth, Leland M.; Sterbentz, James W.

    2014-03-01

    In its deployment as a pebble bed reactor (PBR) critical facility from 1992 to 1996, the PROTEUS facility was designated as HTR-PROTEUS. This experimental program was performed as part of an International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) on the Validation of Safety Related Physics Calculations for Low Enriched HTGRs. Within this project, critical experiments were conducted for graphite moderated LEU systems to determine core reactivity, flux and power profiles, reaction-rate ratios, the worth of control rods, both in-core and reflector based, the worth of burnable poisons, kinetic parameters, and the effects of moisture ingress on these parameters. One benchmark experiment was evaluated in this report: Core 4. Core 4 represents the only configuration with random pebble packing in the HTR-PROTEUS series of experiments, and has a moderator-to-fuel pebble ratio of 1:1. Three random configurations were performed. The initial configuration, Core 4.1, was rejected by the experimenters because the method for pebble loading, separate delivery tubes for the moderator and fuel pebbles, may not have been completely random. Cores 4.2 and 4.3 were loaded using a single delivery tube, eliminating the possibility for systematic ordering effects. The second and third cores differed slightly in the quantity of pebbles loaded (40 each of moderator and fuel pebbles), stacked height of the pebbles in the core cavity (0.02 m), withdrawn distance of the stainless steel control rods (20 mm), and withdrawn distance of the autorod (30 mm). The 34 coolant channels in the upper axial reflector and the 33 coolant channels in the lower axial reflector were open. Additionally, the axial graphite fillers used in all other HTR-PROTEUS configurations to create a 12-sided core cavity were not used in the randomly packed cores. Instead, graphite fillers were placed on the cavity floor, creating a funnel-like base, to discourage ordering effects during pebble loading. Core 4 was determined to be an acceptable benchmark experiment.

  7. Next Generation School Districts: What Capacities Do Districts Need to Create and Sustain Schools That Are Ready to Deliver on Common Core?

    ERIC Educational Resources Information Center

    Lake, Robin; Hill, Paul T.; Maas, Tricia

    2015-01-01

    Every sector of the U.S. economy is working on ways to deliver services in a more customized manner. If all goes well, education is headed in the same direction. Personalized learning and globally benchmarked academic standards (a.k.a. Common Core) are the focus of most major school districts and charter school networks. Educators and parents know…

  8. Mean velocity and turbulence measurements in a 90 deg curved duct with thin inlet boundary layer

    NASA Technical Reports Server (NTRS)

    Crawford, R. A.; Peters, C. E.; Steinhoff, J.; Hornkohl, J. O.; Nourinejad, J.; Ramachandran, K.

    1985-01-01

    The experimental database established by this investigation of the flow in a large rectangular turning duct is of benchmark quality. The experimental Reynolds numbers, Dean numbers and boundary layer characteristics are significantly different from previous benchmark curved-duct experimental parameters. This investigation extends the experimental database to higher Reynolds number and thinner entrance boundary layers. The 5% to 10% thick boundary layers, based on duct half-width, result in a large region of near-potential flow in the duct core surrounded by developing boundary layers with large crossflows. The turbulent entrance boundary layer case at Re_d = 328,000 provides an incompressible flowfield which approaches real turbine blade cascade characteristics. The results of this investigation provide a challenging benchmark database for computational fluid dynamics code development.
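
    For readers unfamiliar with the parameters named above, the sketch below computes a Dean number from a Reynolds number under the common definition De = Re * sqrt(D_h / (2 R_c)); the duct dimensions used are illustrative assumptions, not the geometry of this experiment.

    ```python
    import math

    # Hedged sketch of the usual Dean number definition for curved-duct flow.
    # D_h: hydraulic diameter (m), R_c: radius of curvature of the bend (m).

    def dean_number(reynolds, hydraulic_diameter, curvature_radius):
        return reynolds * math.sqrt(hydraulic_diameter / (2.0 * curvature_radius))

    # Re taken from the turbulent entrance case; dimensions are invented.
    print(dean_number(328_000, 0.2, 1.0))
    ```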

  9. Simulation of Benchmark Cases with the Terminal Area Simulation System (TASS)

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.; Proctor, Fred H.

    2011-01-01

    The hydrodynamic core of the Terminal Area Simulation System (TASS) is evaluated against different benchmark cases. In the absence of closed form solutions for the equations governing atmospheric flows, the models are usually evaluated against idealized test cases. Over the years, various authors have suggested a suite of these idealized cases which have become standards for testing and evaluating the dynamics and thermodynamics of atmospheric flow models. In this paper, simulations of three such cases are described. In addition, the TASS model is evaluated against a test case that uses an exact solution of the Navier-Stokes equations. The TASS results are compared against previously reported simulations of these benchmark cases in the literature. It is demonstrated that the TASS model is highly accurate, stable and robust.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, J; Dossa, D; Gokhale, M

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40 GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80 GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with an NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order of magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.

  11. Time and frequency structure of causal correlation networks in the China bond market

    NASA Astrophysics Data System (ADS)

    Wang, Zhongxing; Yan, Yan; Chen, Xiaosong

    2017-07-01

    There are more than eight hundred interest rates published in the China bond market every day. Identifying the benchmark interest rates that have broad influences on most other interest rates is a major concern for economists. In this paper, a multi-variable Granger causality test is developed and applied to construct a directed network of interest rates, whose important nodes, regarded as key interest rates, are evaluated with CheiRank scores. The results indicate that repo rates are the benchmark of short-term rates, the central bank bill rates are in the core position of mid-term interest rates network, and treasury bond rates lead the long-term bond rates. The evolution of benchmark interest rates from 2008 to 2014 is also studied, and it is found that SHIBOR has generally become the benchmark interest rate in China. In the frequency domain we identify the properties of information flows between interest rates, and the result confirms the existence of market segmentation in the China bond market.
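
    A minimal sketch of this kind of pipeline is shown below, assuming pairwise Granger tests (statsmodels' grangercausalitytests) and PageRank on the edge-reversed graph as a simple stand-in for CheiRank; the paper's multi-variable test is more elaborate, and the rate series here are synthetic random walks.

    ```python
    import numpy as np
    import networkx as nx
    from statsmodels.tsa.stattools import grangercausalitytests

    # Synthetic stand-in for daily interest-rate series: a (T x N) array.
    rng = np.random.default_rng(0)
    rates = rng.standard_normal((200, 4)).cumsum(axis=0)

    G = nx.DiGraph()
    n_series = rates.shape[1]
    for i in range(n_series):
        for j in range(n_series):
            if i == j:
                continue
            # Pairwise test: does series j Granger-cause series i at lag 1?
            # (Differencing/stationarity checks omitted for brevity.)
            res = grangercausalitytests(rates[:, [i, j]], maxlag=1, verbose=False)
            p_value = res[1][0]["ssr_ftest"][1]
            if p_value < 0.05:
                G.add_edge(j, i)  # edge points from cause to effect

    # CheiRank is PageRank computed on the graph with all links reversed.
    chei_scores = nx.pagerank(G.reverse()) if G.number_of_nodes() else {}
    print(chei_scores)
    ```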

  12. Graph 500 on OpenSHMEM: Using a Practical Survey of Past Work to Motivate Novel Algorithmic Developments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grossman, Max; Pritchard Jr., Howard Porter; Budimlic, Zoran

    2016-12-22

    Graph500 [14] is an effort to offer a standardized benchmark across large-scale distributed platforms which captures the behavior of common communication-bound graph algorithms. Graph500 differs from other large-scale benchmarking efforts (such as HPL [6] or HPGMG [7]) primarily in the irregularity of its computation and data access patterns. The core computational kernel of Graph500 is a breadth-first search (BFS) implemented on an undirected graph. The output of Graph500 is a spanning tree of the input graph, usually represented by a predecessor mapping for every node in the graph. The Graph500 benchmark defines several pre-defined input sizes for implementers to test against. This report summarizes investigation into implementing the Graph500 benchmark on OpenSHMEM, and focuses on first building a strong and practical understanding of the strengths and limitations of past work before proposing and developing novel extensions.
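
    The kernel itself is compact. Below is a plain-Python sketch of a BFS that returns the predecessor map Graph500 validates; it illustrates the output convention only and says nothing about the distributed OpenSHMEM implementation.

    ```python
    from collections import deque

    # Minimal sketch of the Graph500 computational kernel: breadth-first
    # search over an undirected graph, returning the predecessor map
    # (the spanning-tree representation the benchmark checks).

    def bfs_predecessors(adjacency, root):
        parent = {root: root}       # convention: the root is its own parent
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        return parent

    graph = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
    print(bfs_predecessors(graph, 0))  # {0: 0, 1: 0, 2: 0, 3: 1}
    ```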

  13. The change of radial power factor distribution due to RCCA insertion at the first cycle core of AP1000

    NASA Astrophysics Data System (ADS)

    Susilo, J.; Suparlina, L.; Deswandri; Sunaryo, G. R.

    2018-02-01

    The use of computer programs for the analysis of PWR-type core neutronic design parameters has been demonstrated in several previous studies, which included validation of the codes against neutronic parameter values obtained from measurements and benchmark calculations. In this study, validation and analysis of the AP1000 first cycle core radial power peaking factor were performed using the CITATION module of the SRAC2006 computer code. The code had also been validated, with good results, against the criticality values of the VERA benchmark core. The AP1000 core power distribution calculation was done in two-dimensional X-Y geometry through quarter-section modeling. The purpose of this research is to determine the accuracy of the SRAC2006 code and the safety performance of the AP1000 core during its first operating cycle. The core calculations were carried out for several conditions: without Rod Cluster Control Assembly (RCCA) insertion, with insertion of a single RCCA (AO, M1, M2, MA, MB, MC, MD), and with multiple RCCA insertion (MA + MB, MA + MB + MC, MA + MB + MC + MD, and MA + MB + MC + MD + M1). The maximum power factor of the fuel rods in a fuel assembly was assumed to be approximately 1.406. The analysis of the calculation results showed that the two-dimensional CITATION module of the SRAC2006 code is accurate for AP1000 power distribution calculations without RCCA and with MA + MB RCCA insertion. The power peaking factors for the first operating cycle of the AP1000 core without RCCA, as well as with single and multiple RCCA insertion, remain below the safety limit value (less than about 1.798). In terms of the thermal power generated by the fuel assemblies, the AP1000 core in its first operating cycle can therefore be considered safe.
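
    As a point of reference for the 1.798 limit discussed above, a power peaking factor is simply a maximum-to-average power ratio; the sketch below uses invented relative powers.

    ```python
    import numpy as np

    # Illustrative sketch: a radial power peaking factor is the maximum
    # assembly (or rod) power divided by the core-average power. The
    # values below are made up; the report's safety limit is about 1.798.

    powers = np.array([0.92, 1.05, 1.31, 0.88, 1.12])  # relative powers
    peaking_factor = powers.max() / powers.mean()
    print(peaking_factor, peaking_factor < 1.798)
    ```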

  14. Benchmarking of calculation schemes in APOLLO2 and COBAYA3 for VVER lattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheleva, N.; Ivanov, P.; Todorova, G.

    This paper presents solutions of the NURISP VVER lattice benchmark using APOLLO2, TRIPOLI4 and COBAYA3 pin-by-pin. The main objective is to validate MOC-based calculation schemes for pin-by-pin cross-section generation with APOLLO2 against TRIPOLI4 reference results. A specific objective is to test the APOLLO2-generated cross-sections and interface discontinuity factors in COBAYA3 pin-by-pin calculations with unstructured mesh. The VVER-1000 core consists of large hexagonal assemblies with 2 mm inter-assembly water gaps which require the use of unstructured meshes in the pin-by-pin core simulators. The considered 2D benchmark problems include 19-pin clusters, fuel assemblies and 7-assembly clusters. APOLLO2 calculation schemes with the step characteristic method (MOC) and the higher-order Linear Surface MOC have been tested. The comparison of APOLLO2 vs. TRIPOLI4 results shows a very close agreement. The 3D lattice solver in COBAYA3 uses transport-corrected multi-group diffusion approximation with interface discontinuity factors of Generalized Equivalence Theory (GET) or Black Box Homogenization (BBH) type. The COBAYA3 pin-by-pin results in 2, 4 and 8 energy groups are close to the reference solutions when using side-dependent interface discontinuity factors. (authors)

  15. Benchmarking road safety performance: Identifying a meaningful reference (best-in-class).

    PubMed

    Chen, Faan; Wu, Jiaorong; Chen, Xiaohong; Wang, Jianjun; Wang, Di

    2016-01-01

    For road safety improvement, comparing and benchmarking performance are widely advocated as the emerging and preferred approaches. However, there is currently no universally agreed upon approach for the process of road safety benchmarking, and performing the practice successfully is by no means easy. This is especially true for its two core activities: (1) developing a set of road safety performance indicators (SPIs) and combining them into a composite index; and (2) identifying a meaningful reference (best-in-class), one which has already achieved outstanding road safety practices. To this end, a scientific technique that can combine the multi-dimensional safety performance indicators (SPIs) into an overall index, and subsequently identify the 'best-in-class', is urgently required. In this paper, the Entropy-embedded RSR (Rank-sum ratio), an innovative, scientific and systematic methodology, is investigated with the aim of conducting the above two core tasks in an integrative and concise procedure, more specifically in a 'one-stop' way. Using a combination of results from other methods (e.g. the SUNflower approach) and other measures (e.g. the Human Development Index) as a relevant reference, a given set of European countries are robustly ranked and grouped into several classes based on the composite Road Safety Index. Within each class the 'best-in-class' is then identified. By benchmarking road safety performance, the results serve to promote best practice, encourage the adoption of successful road safety strategies and measures and, more importantly, inspire the kind of political leadership needed to create a road transport system that maximizes safety. Copyright © 2015 Elsevier Ltd. All rights reserved.
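
    A minimal sketch of an entropy-weighted rank-sum ratio is given below, assuming higher indicator values mean better performance; the authors' exact formulation may differ in normalization and rank direction.

    ```python
    import numpy as np

    # Hedged sketch of an entropy-weighted rank-sum ratio (RSR).
    # Rows = countries, columns = safety performance indicators (SPIs);
    # all values assumed positive, higher = better.

    def entropy_weights(x):
        p = x / x.sum(axis=0)                        # column proportions
        e = -(p * np.log(p)).sum(axis=0) / np.log(len(x))
        return (1 - e) / (1 - e).sum()               # normalized diversity

    def rank_sum_ratio(x):
        n, _ = x.shape
        ranks = x.argsort(axis=0).argsort(axis=0) + 1  # 1 = worst, n = best
        w = entropy_weights(x)
        return (ranks * w).sum(axis=1) / n           # weighted RSR per row

    data = np.array([[0.8, 0.6, 0.9],                # invented SPI values
                     [0.5, 0.7, 0.4],
                     [0.9, 0.9, 0.8],
                     [0.3, 0.2, 0.5]])
    print(rank_sum_ratio(data))  # highest RSR = candidate 'best-in-class'
    ```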

  16. Properties of 5052 Aluminum For Use as Honeycomb Core in Manned Spaceflight

    NASA Technical Reports Server (NTRS)

    Lerch, Bradley A.

    2018-01-01

    This work explains that the properties of the Al 5052 material commonly used for honeycomb cores in sandwich panels are highly dependent on the tempering condition. It has not been common to specify the temper when ordering honeycomb material, nor is it common for the supplier to state what the temper is. For aerospace uses, a temper of H38 or H39 is probably recommended. This temper should be stated in the bill of material and should be verified upon receipt of the core. To this end, the properties provided herein can serve as benchmark values.

  17. User's Manual for RESRAD-OFFSITE Version 2.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, C.; Gnanapragasam, E.; Biwer, B. M.

    2007-09-05

    The RESRAD-OFFSITE code is an extension of the RESRAD (onsite) code, which has been widely used for calculating doses and risks from exposure to radioactively contaminated soils. The development of RESRAD-OFFSITE started more than 10 years ago, but new models and methodologies have been developed, tested, and incorporated since then. Some of the new models have been benchmarked against other independently developed (international) models. The databases used have also expanded to include all the radionuclides (more than 830) contained in the International Commission on Radiological Protection (ICRP) 38 database. This manual provides detailed information on the design and application of the RESRAD-OFFSITE code. It describes in detail the new models used in the code, such as the three-dimensional dispersion groundwater flow and radionuclide transport model, the Gaussian plume model for atmospheric dispersion, and the deposition model used to estimate the accumulation of radionuclides in offsite locations and in foods. Potential exposure pathways and exposure scenarios that can be modeled by the RESRAD-OFFSITE code are also discussed. A user's guide is included in Appendix A of this manual. The default parameter values and parameter distributions are presented in Appendix B, along with a discussion on the statistical distributions for probabilistic analysis. A detailed discussion on how to reduce run time, especially when conducting probabilistic (uncertainty) analysis, is presented in Appendix C of this manual.

  18. Investigation of Abnormal Heat Transfer and Flow in a VHTR Reactor Core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawaji, Masahiro; Valentin, Francisco I.; Artoun, Narbeh

    2015-12-21

    The main objective of this project was to identify and characterize the conditions under which abnormal heat transfer phenomena would occur in a Very High Temperature Reactor (VHTR) with a prismatic core. High pressure/high temperature experiments have been conducted to obtain data that could be used for validation of VHTR design and safety analysis codes. The focus of these experiments was on the generation of benchmark data for design and off-design heat transfer for forced, mixed and natural circulation in a VHTR core. In particular, a flow laminarization phenomenon was intensely investigated since it could give rise to hot spots in the VHTR core.

  19. Proposed biopsy performance benchmarks for MRI based on an audit of a large academic center.

    PubMed

    Sedora Román, Neda I; Mehta, Tejas S; Sharpe, Richard E; Slanetz, Priscilla J; Venkataraman, Shambhavi; Fein-Zachary, Valerie; Dialani, Vandana

    2018-05-01

    Performance benchmarks exist for mammography (MG); however, performance benchmarks for magnetic resonance imaging (MRI) are not yet fully developed. The purpose of our study was to perform an MRI audit based on established MG and screening MRI benchmarks and to review whether these benchmarks can be applied to an MRI practice. An IRB-approved retrospective review of breast MRIs was performed at our center from 1/1/2011 through 12/31/13. For patients with a biopsy recommendation, core biopsy and surgical pathology results were reviewed. The data were used to derive mean performance parameter values, including abnormal interpretation rate (AIR), positive predictive value (PPV), cancer detection rate (CDR), and percentage of minimal cancers and axillary node negative cancers, and compared with MG and screening MRI benchmarks. MRIs were also divided by screening and diagnostic indications to assess for differences in performance benchmarks between these two groups. Of the 2455 MRIs performed over 3 years, 1563 were performed for screening indications and 892 for diagnostic indications. With the exception of PPV2 for screening breast MRIs from 2011 to 2013, PPVs were met for our screening and diagnostic populations when compared to the MRI screening benchmarks established by the Breast Imaging Reporting and Data System (BI-RADS) 5 Atlas®. AIR and CDR were lower for screening indications as compared to diagnostic indications. New MRI screening benchmarks can be used for screening MRI audits while the American College of Radiology (ACR) desirable goals for diagnostic MG can be used for diagnostic MRI audits. Our study corroborates established findings regarding differences in AIR and CDR amongst screening versus diagnostic indications. © 2017 Wiley Periodicals, Inc.
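
    The audit arithmetic behind AIR, PPV, and CDR is straightforward; the sketch below uses invented counts purely to show the definitions.

    ```python
    # Illustrative sketch of the audit metrics (all counts are invented):
    #   AIR  = abnormal interpretations / total exams
    #   PPV2 = cancers / biopsies recommended
    #   CDR  = cancers detected per 1000 exams

    exams = 1563                 # e.g. screening MRIs
    abnormal = 180               # exams given an abnormal interpretation
    biopsies_recommended = 150
    cancers = 35

    air = abnormal / exams
    ppv2 = cancers / biopsies_recommended
    cdr_per_1000 = 1000 * cancers / exams
    print(f"AIR={air:.1%}  PPV2={ppv2:.1%}  CDR={cdr_per_1000:.1f}/1000")
    ```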

  20. Under Construction: Benchmark Assessments and Common Core Math Implementation in Grades K-8. Formative Evaluation Cycle Report for the Math in Common Initiative, Volume 1

    ERIC Educational Resources Information Center

    Flaherty, John, Jr.; Sobolew-Shubin, Alexandria; Heredia, Alberto; Chen-Gaddini, Min; Klarin, Becca; Finkelstein, Neal D.

    2014-01-01

    Math in Common® (MiC) is a five-year initiative that supports a formal network of 10 California school districts as they implement the Common Core State Standards in mathematics (CCSS-M) across grades K-8. As the MiC initiative moves into its second year, one of the central activities that each of the districts is undergoing to support CCSS…

  1. Benchmark Comparison of Dual- and Quad-Core Processor Linux Clusters with Two Global Climate Modeling Workloads

    NASA Technical Reports Server (NTRS)

    McGalliard, James

    2008-01-01

    This viewgraph presentation details the science and systems environments that NASA High End computing program serves. Included is a discussion of the workload that is involved in the processing for the Global Climate Modeling. The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models integrated using the Earth System Modeling Framework (ESMF). The GEOS-5 system was used for the Benchmark tests, and the results of the tests are shown and discussed. Tests were also run for the Cubed Sphere system, results for these test are also shown.

  2. 10 CFR Appendix K to Part 50 - ECCS Evaluation Models

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... hypothetical accident. The modified Baroczy correlation (Baroczy, C. J., “A Systematic Correlation for Two... distribution shapes and peaking factors representing power distributions that may occur over the core lifetime must be studied. The selected combination of power distribution shape and peaking factor should be the...

  3. 10 CFR Appendix K to Part 50 - ECCS Evaluation Models

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... hypothetical accident. The modified Baroczy correlation (Baroczy, C. J., “A Systematic Correlation for Two... distribution shapes and peaking factors representing power distributions that may occur over the core lifetime must be studied. The selected combination of power distribution shape and peaking factor should be the...

  4. 49 CFR 194.113 - Information summary.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 3 2013-10-01 2013-10-01 false Information summary. 194.113 Section 194.113... Response Plans § 194.113 Information summary. (a) The information summary for the core plan, required by... state(s). (b) The information summary for the response zone appendix, required in § 194.107, must...

  5. Handbook of Reference Sources. Third Edition.

    ERIC Educational Resources Information Center

    Nichols, Margaret Irby

    This third edition of a popular and useful guide to reference sources, which emphasizes the needs of small libraries, contains 975 annotated entries and lists 201 additional titles (most with bibliographic and order information) in the annotations, representing an expansion of 30 percent over the second edition. The appendix lists 116 basic or core reference…

  6. Global Positioning System (GPS) survey of Augustine Volcano, Alaska, August 3-8, 2000: data processing, geodetic coordinates and comparison with prior geodetic surveys

    USGS Publications Warehouse

    Pauk, Benjamin A.; Power, John A.; Lisowski, Mike; Dzurisin, Daniel; Iwatsubo, Eugene Y.; Melbourne, Tim

    2001-01-01

    Between August 3 and 8, 2000, the Alaska Volcano Observatory completed a Global Positioning System (GPS) survey at Augustine Volcano, Alaska. Augustine is a frequently active calc-alkaline volcano located in the lower portion of Cook Inlet (fig. 1), with reported eruptions in 1812, 1882, 1909?, 1935, 1964, 1976, and 1986 (Miller et al., 1998). Geodetic measurements using electronic and optical surveying techniques (EDM and theodolite) were begun at Augustine Volcano in 1986. In 1988 and 1989, an island-wide trilateration network comprising 19 benchmarks was completed and measured in its entirety (Power and Iwatsubo, 1998). Partial GPS surveys of the Augustine Island geodetic network were completed in 1992 and 1995; however, neither of these surveys included all marks on the island. Additional GPS measurements of benchmarks A5 and A15 (fig. 2) were made during the summers of 1992, 1993, 1994, and 1996. The goals of the 2000 GPS survey were to: 1) re-measure all existing benchmarks on Augustine Island using a homogeneous set of GPS equipment operated in a consistent manner, 2) add measurements at benchmarks on the western shore of Cook Inlet at distances of 15 to 25 km, 3) add measurements at an existing benchmark (BURR) on Augustine Island that was not previously surveyed, and 4) add additional marks in areas of the island thought to be actively deforming. The entire survey resulted in collection of GPS data at a total of 24 sites (figs. 1 and 2). In this report we describe the methods of GPS data collection and processing used at Augustine during the 2000 survey. We use these data to calculate coordinates and elevations for all 24 sites surveyed. Data from the 2000 survey are then compared to electronic and optical measurements made in 1988 and 1989. This report also contains a general description of all marks surveyed in 2000 and photographs of all new marks established during the 2000 survey (Appendix A).

  7. Excore Modeling with VERAShift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.

    It is important to be able to accurately predict the neutron flux outside the immediate reactor core for a variety of safety and material analyses. Monte Carlo radiation transport calculations are required to produce the high fidelity excore responses. Under this milestone VERA (specifically the VERAShift package) has been extended to perform excore calculations by running radiation transport calculations with Shift. This package couples VERA-CS with Shift to perform excore tallies for multiple state points concurrently, with each component capable of parallel execution on independent domains. Specifically, this package performs fluence calculations in the core barrel and vessel, or performs the requested tallies in any user-defined excore regions. VERAShift takes advantage of the general geometry package in Shift. This gives VERAShift the flexibility to explicitly model features outside the core barrel, including detailed vessel models, detectors, and power plant details. A very limited set of experimental and numerical benchmarks is available for excore simulation comparison. The Consortium for Advanced Simulation of Light Water Reactors (CASL) has developed a set of excore benchmark problems to include as part of the VERA-CS verification and validation (V&V) problems. The excore capability in VERAShift has been tested on small representative assembly problems, multiassembly problems, and quarter-core problems. VERAView has also been extended to visualize the vessel fluence results from VERAShift. Preliminary vessel fluence results for quarter-core multistate calculations look very promising. Further development is needed to determine the details relevant to excore simulations. Validation of VERA for fluence and excore detectors still needs to be performed against experimental and numerical results.

  8. Qualification of CASMO5 / SIMULATE-3K against the SPERT-III E-core cold start-up experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grandi, G.; Moberg, L.

    SIMULATE-3K is a three-dimensional kinetics code applicable to LWR Reactivity Initiated Accidents. S3K has been used to calculate several internationally recognized benchmarks. However, the feedback models in the benchmark exercises are different from the feedback models that SIMULATE-3K uses for LWR reactors. For this reason, it is worth comparing the SIMULATE-3K capabilities for Reactivity Initiated Accidents against kinetics experiments. The Special Power Excursion Reactor Test III was a pressurized-water, nuclear-research facility constructed to analyze reactor kinetic behavior under initial conditions similar to those of commercial LWRs. The SPERT III E-core resembles a PWR in terms of fuel type, moderator, coolant flow rate, and system pressure. The initial test conditions (power, core flow, system pressure, core inlet temperature) are representative of cold start-up, hot start-up, hot standby, and hot full power. The qualification of S3K against the SPERT III E-core measurements is an ongoing work at Studsvik. In this paper, the results for the 30 cold start-up tests are presented. The results show good agreement with the experiments for the main reactivity-initiated-accident parameters: peak power, energy release and compensated reactivity. Predicted and measured peak powers differ at most by 13%. Measured and predicted reactivity compensations at the time of the peak power differ by less than 0.01 $. Predicted and measured energy releases differ at most by 13%. All differences are within the experimental uncertainty. (authors)

  9. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas

    2009-01-01

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three-million- and five-million-atom biological systems scale well up to 30k cores, producing 30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.

  10. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer.

    PubMed

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas; Smith, Jeremy C

    2009-10-13

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to ∼30k cores, producing ∼30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.
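
    For context, the reaction-field method replaces explicit long-range electrostatics with a pair potential of the standard form below (the Tironi et al. expression used by common MD codes); this is shown as background, not necessarily the exact variant used in the study.

    ```latex
    % Standard reaction-field pair potential; r_c is the cutoff radius,
    % \varepsilon_1 the medium permittivity, \varepsilon_{\mathrm{rf}} the
    % reaction-field permittivity. Notation assumed, not taken from the paper.
    V_{\mathrm{RF}}(r_{ij}) = \frac{q_i q_j}{4\pi\varepsilon_0\varepsilon_1}
    \left( \frac{1}{r_{ij}} + k_{\mathrm{rf}}\, r_{ij}^2 - c_{\mathrm{rf}} \right),
    \qquad
    k_{\mathrm{rf}} = \frac{1}{r_c^3}\,
    \frac{\varepsilon_{\mathrm{rf}} - \varepsilon_1}{2\varepsilon_{\mathrm{rf}} + \varepsilon_1},
    \qquad
    c_{\mathrm{rf}} = \frac{1}{r_c} + k_{\mathrm{rf}}\, r_c^2
    ```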

  11. Measurement and validation of benchmark-quality thick-target tungsten X-ray spectra below 150 kVp.

    PubMed

    Mercier, J R; Kopp, D T; McDavid, W D; Dove, S B; Lancaster, J L; Tucker, D M

    2000-11-01

    Pulse-height distributions of two constant potential X-ray tubes with fixed anode tungsten targets were measured and unfolded. The measurements employed quantitative alignment of the beam, the use of two different semiconductor detectors (high-purity germanium and cadmium-zinc-telluride), two different ion chamber systems with beam-specific calibration factors, and various filter and tube potential combinations. Monte Carlo response matrices were generated for each detector for unfolding the pulse-height distributions into spectra incident on the detectors. These response matrices were validated for the low error bars assigned to the data. A significant aspect of the validation of spectra, and a detailed characterization of the X-ray tubes, involved measuring filtered and unfiltered beams at multiple tube potentials (30-150 kVp). Full corrections to ion chamber readings were employed to convert normalized fluence spectra into absolute fluence spectra. The characterization of fixed anode pitting and its dominance over exit window plating and/or detector dead layer was determined. An Appendix of tabulated benchmark spectra with assigned error ranges was developed for future reference.
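
    Unfolding a pulse-height distribution amounts to inverting the response relation m = Rφ under a non-negativity constraint. The sketch below does this with non-negative least squares on a toy response matrix; the actual work used Monte Carlo-generated detector response matrices and careful uncertainty assignment.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Hedged sketch of spectrum unfolding: recover a non-negative fluence
    # spectrum phi from measured counts m = R @ phi. R here is a toy
    # smearing matrix, not a real detector response.

    n = 50
    energies = np.linspace(10, 150, n)                # keV bins
    true_phi = np.exp(-((energies - 60) / 20) ** 2)   # toy incident spectrum

    # Toy response: full-energy peak on the diagonal plus a crude
    # low-energy tail (stand-in for partial-energy deposition).
    R = np.tril(np.full((n, n), 0.01)) + np.eye(n) * 0.9

    measured = R @ true_phi
    unfolded, residual = nnls(R, measured)
    print(float(np.abs(unfolded - true_phi).max()))   # near-exact recovery
    ```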

  12. Construction diagrams, geophysical logs, and lithologic descriptions for boreholes USGS 126a, 126b, 127, 128, 129, 130, 131, 132, 133, and 134, Idaho National Laboratory, Idaho

    USGS Publications Warehouse

    Twining, Brian V.; Hodges, Mary K.V.; Orr, Stephanie

    2008-01-01

    This report summarizes construction, geophysical, and lithologic data collected from ten U.S. Geological Survey (USGS) boreholes completed between 1999 and 2006 at the Idaho National Laboratory (INL): USGS 126a, 126b, 127, 128, 129, 130, 131, 132, 133, and 134. Nine boreholes were continuously cored; USGS 126b had 5 ft of core. Completion depths range from 472 to 1,238 ft. Geophysical data were collected for each borehole, and those data are summarized in this report. Cores were photographed and digitally logged using commercially available software. Digital core logs are in appendixes A through J. Borehole descriptions summarize location, completion date, and amount and type of core recovered. This report was prepared by the USGS in cooperation with the U.S. Department of Energy (DOE).

  13. 17 CFR Appendix B to Part 36 - Guidance on, and Acceptable Practices in, Compliance With Core Principles

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... spot-month positions. Spot-month limits should be adopted for significant price discovery contracts to... market or derivatives transaction execution facility should set the spot-month limit for its significant... designated contract market or derivatives transaction execution facility. In this case, the spot-month...

  14. Environmental Effects of Hydraulic Dredging for Clam Shells in Lake Pontchartrain, Louisiana,

    DTIC Science & Technology

    1981-06-01

    promotes the release of nutrients. Zicker et al. (1956) buried labelled phosphorus at various depths in sediment cores and measured the release of...433. Zicker, E., K. Berger, and A. Hasler. 1956. Phosphate release from Bog Lake muds. Limnol. Oceanogr. 1:296-303. ... APPENDIX A BENTHIC

  15. 17 CFR Appendix B to Part 36 - Guidance on, and Acceptable Practices in, Compliance With Core Principles

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... decision-making process and the reasons for using its emergency action authority. Information on steps... have clear procedures and guidelines for decision-making regarding emergency intervention in the market, including procedures and guidelines to avoid conflicts of interest while carrying out such decision-making...

  16. 17 CFR Appendix B to Part 38 - Guidance on, and Acceptable Practices in, Compliance With Core Principles

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... markets, it is more appropriate to pay attention to the availability and pricing of the commodity making... decision-making regarding emergency intervention in the market, including procedures and guidelines to avoid conflicts of interest while carrying out such decision-making. A contract market should also have...

  17. 17 CFR Appendix B to Part 38 - Guidance on, and Acceptable Practices in, Compliance With Core Principles

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... markets, it is more appropriate to pay attention to the availability and pricing of the commodity making... decision-making regarding emergency intervention in the market, including procedures and guidelines to avoid conflicts of interest while carrying out such decision-making. A contract market should also have...

  18. 75 FR 76051 - Northern States Power Company-Minnesota, Prairie Island Nuclear Generating Plant, Units 1 and 2...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-07

    ..., 2010 (Agencywide Documents Access and Management System Accession Nos. ML093280883 and ML101480083... systems for light-water nuclear power reactors,'' and appendix K to 10 CFR part 50, ``ECCS Evaluation... core cooling system (ECCS) for reactors fueled with zircaloy or ZIRLO™ cladding. In addition...

  19. 15 CFR Appendix B to Subpart R of... - Minor Projects for Purposes of § 922.193(a)(2)(iii)

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...), the Michigan Department of Environmental Quality (Department) issues permits for projects that are of... values or interests, including navigation and water quality. (h) Fish or wildlife habitat structures..., water monitoring devices, water quality testing devices, survey devices, and core sampling devices, if...

  20. Benchmarking health IT among OECD countries: better data for better policy

    PubMed Central

    Adler-Milstein, Julia; Ronchi, Elettra; Cohen, Genna R; Winn, Laura A Pannella; Jha, Ashish K

    2014-01-01

    Objective To develop benchmark measures of health information and communication technology (ICT) use to facilitate cross-country comparisons and learning. Materials and methods The effort is led by the Organisation for Economic Co-operation and Development (OECD). Approaches to definition and measurement within four ICT domains were compared across seven OECD countries in order to identify functionalities in each domain. These informed a set of functionality-based benchmark measures, which were refined in collaboration with representatives from more than 20 OECD and non-OECD countries. We report on progress to date and remaining work to enable countries to begin to collect benchmark data. Results The four benchmarking domains include provider-centric electronic record, patient-centric electronic record, health information exchange, and tele-health. There was broad agreement on functionalities in the provider-centric electronic record domain (eg, entry of core patient data, decision support), and less agreement in the other three domains in which country representatives worked to select benchmark functionalities. Discussion Many countries are working to implement ICTs to improve healthcare system performance. Although many countries are looking to others as potential models, the lack of consistent terminology and approach has made cross-national comparisons and learning difficult. Conclusions As countries develop and implement strategies to increase the use of ICTs to promote health goals, there is a historic opportunity to enable cross-country learning. To facilitate this learning and reduce the chances that individual countries flounder, a common understanding of health ICT adoption and use is needed. The OECD-led benchmarking process is a crucial step towards achieving this. PMID:23721983

  1. Benchmarking health IT among OECD countries: better data for better policy.

    PubMed

    Adler-Milstein, Julia; Ronchi, Elettra; Cohen, Genna R; Winn, Laura A Pannella; Jha, Ashish K

    2014-01-01

    To develop benchmark measures of health information and communication technology (ICT) use to facilitate cross-country comparisons and learning. The effort is led by the Organisation for Economic Co-operation and Development (OECD). Approaches to definition and measurement within four ICT domains were compared across seven OECD countries in order to identify functionalities in each domain. These informed a set of functionality-based benchmark measures, which were refined in collaboration with representatives from more than 20 OECD and non-OECD countries. We report on progress to date and remaining work to enable countries to begin to collect benchmark data. The four benchmarking domains include provider-centric electronic record, patient-centric electronic record, health information exchange, and tele-health. There was broad agreement on functionalities in the provider-centric electronic record domain (eg, entry of core patient data, decision support), and less agreement in the other three domains in which country representatives worked to select benchmark functionalities. Many countries are working to implement ICTs to improve healthcare system performance. Although many countries are looking to others as potential models, the lack of consistent terminology and approach has made cross-national comparisons and learning difficult. As countries develop and implement strategies to increase the use of ICTs to promote health goals, there is a historic opportunity to enable cross-country learning. To facilitate this learning and reduce the chances that individual countries flounder, a common understanding of health ICT adoption and use is needed. The OECD-led benchmarking process is a crucial step towards achieving this.

  2. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeHart, Mark D.; Mausolff, Zander; Weems, Zach

    2016-08-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data has shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validation of the transient solution of Rattlesnake using other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considered both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.

  3. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courau, T.; Plagne, L.; Ponicot, A.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, the need for three dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the keff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)
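
    The 25 pcm figure quoted above follows from the usual reactivity-difference convention; a minimal sketch:

    ```python
    # Standard conversion used when quoting k-eff discrepancies in pcm:
    # delta-rho = (k1 - k2) / (k1 * k2), expressed in units of 1e-5.

    def delta_rho_pcm(k1, k2):
        return (k1 - k2) / (k1 * k2) * 1.0e5

    print(delta_rho_pcm(1.00025, 1.00000))  # -> ~25 pcm
    ```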

  4. A computationally simple model for determining the time dependent spectral neutron flux in a nuclear reactor core

    NASA Astrophysics Data System (ADS)

    Schneider, E. A.; Deinert, M. R.; Cady, K. B.

    2006-10-01

    The balance of isotopes in a nuclear reactor core is key to understanding the overall performance of a given fuel cycle. This balance is in turn most strongly affected by the time- and energy-dependent neutron flux. While many large and involved computer packages exist for determining this spectrum, a simplified approach amenable to rapid computation is missing from the literature. We present such a model, which accepts as inputs the fuel element/moderator geometry and composition, reactor geometry, fuel residence time, and target burnup, and we compare it to OECD/NEA benchmarks for homogeneous MOX and UOX LWR cores. Collision probability approximations to the neutron transport equation are used to decouple the spatial and energy variables. The lethargy dependent neutron flux, governed by coupled integral equations for the fuel and moderator/coolant regions, is treated by multigroup thermalization methods, and the transport of neutrons through space is modeled by fuel to moderator transport and escape probabilities. Reactivity control is achieved through use of a burnable poison or adjustable control medium. The model calculates the buildup of 24 actinides, as well as fission products, along with the lethargy dependent neutron flux, and the results of several simulations are compared with benchmarked standards.
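
    As a hedged illustration of the space-energy decoupling described above (notation assumed, not taken from the paper), a two-region collision-probability balance couples the fuel and moderator fluxes through first-flight transfer probabilities:

    ```latex
    % Two-region collision-probability balance in standard form.
    % Q_F, Q_M are scattering/fission emission densities in fuel (F) and
    % moderator (M); P_{ij}(u) are first-flight transfer probabilities,
    % which obey the reciprocity relation on the right.
    V_F\,\Sigma_F(u)\,\phi_F(u)
      = P_{FF}(u)\,Q_F(u)\,V_F + P_{MF}(u)\,Q_M(u)\,V_M,
    \qquad
    \Sigma_F(u)\,V_F\,P_{FM}(u) = \Sigma_M(u)\,V_M\,P_{MF}(u)
    ```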

  5. Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly

    NASA Astrophysics Data System (ADS)

    Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.

    2014-04-01

    We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. Material composition was defined for each assembly ring separately allowing us to decompose the sensitivities not only for isotopes and reactions but also for spatial regions. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. Similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified main contributors to the calculation bias.

  6. Xenon-induced power oscillations in a generic small modular reactor

    NASA Astrophysics Data System (ADS)

    Kitcher, Evans Damenortey

    As world demand for energy continues to grow at unprecedented rates, the world energy portfolio of the future will inevitably include a nuclear energy contribution. It has been suggested that the Small Modular Reactor (SMR) could play a significant role in the spread of civilian nuclear technology to nations previously without nuclear energy. As part of the design process, an SMR design must be assessed for the threat to operations posed by xenon-induced power oscillations. In this research, a generic SMR design was analyzed with respect to just such a threat. To do so, a multi-physics coupling routine was developed with MCNP/MCNPX as the neutronics solver. Thermal hydraulic assessments were performed using a single channel analysis tool developed in Python. Fuel and coolant temperature profiles were implemented in the form of temperature-dependent fuel cross sections generated using the SIGACE code and reactor core coolant densities. The Power Axial Offset (PAO) and Xenon Axial Offset (XAO) parameters were chosen to quantify any oscillatory behavior observed. The methodology was benchmarked against published startup-test results from a four-loop PWR in Korea. The benchmark model replicated the pertinent features of the reactor to within ten percent of the literature values, demonstrating that the developed methodology captured the desired phenomena accurately. Subsequently, a high fidelity SMR core model was developed and assessed. Results of the analysis revealed an inherently stable SMR design at beginning of core life and end of core life under full-power and half-power conditions. The effects of axial discretization, stochastic noise, and convergence of the Monte Carlo tallies on the calculated PAO and XAO parameters were investigated; all were found to be quite small, and the inherently stable nature of the core design with respect to xenon-induced power oscillations was confirmed. Finally, a preliminary investigation into excess reactivity control options for the SMR design was conducted, confirming the generally held notion that existing PWR control mechanisms can be used in iPWR SMRs with similar effectiveness. Since the design is intended to operate with boron-free coolant, erbium oxide integral burnable absorber rods were identified as a possible replacement for the dispersed absorber effect of soluble boron in the reactor coolant.
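    The axial-offset parameters used to quantify the oscillations follow the standard PWR form; a minimal sketch (the dissertation's exact definitions may differ, and the numbers are toy values):

      # Axial offset AO = (P_top - P_bottom) / (P_top + P_bottom); the PAO uses
      # axially binned power and the XAO the corresponding xenon distribution.
      def axial_offset(top, bottom):
          return (top - bottom) / (top + bottom)

      p_top, p_bot = 51.3, 48.7          # percent of power in each core half
      print(axial_offset(p_top, p_bot))  # 0.026 -> slightly top-peaked power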

  7. The Management Development Program: A Competency-Based Model for Preparing Hospitality Leaders.

    ERIC Educational Resources Information Center

    Brownell, Judi; Chung, Beth G.

    2001-01-01

    The master of management program at Cornell University focused on competency-based development of skills for the hospitality industry through core courses, minicourses, skill benchmarking, and continuous improvement. Benefits include a shift in the teacher role to advocate/coach, increased information sharing, student satisfaction, and clear…

  8. ICT Proficiency and Gender: A Validation on Training and Development

    ERIC Educational Resources Information Center

    Lin, Shinyi; Shih, Tse-Hua; Lu, Ruiling

    2013-01-01

    Use of an innovative learning/instruction mode, embedded in the Certification Pathway System (CPS) developed by Certiport™, is geared toward the Internet and Computing Benchmark & Mentor specifically for IC³ certification. The Internet and Computing Core Certification (IC³), as an industry-based credentialing program,…

  9. The Cognitive Science behind the Common Core

    ERIC Educational Resources Information Center

    Marchitello, Max; Wilhelm, Megan

    2014-01-01

    Raising academic standards has been part of the education policy discourse for decades. As early as the 1990s, states and school districts attempted to raise student achievement by developing higher standards and measuring student progress according to more rigorous benchmarks. However, the caliber of the standards--and their assessments--varied…

  10. The Effect of a High School Financial Literacy Course on Student Financial Knowledge

    ERIC Educational Resources Information Center

    McCann, Karen L.

    2010-01-01

    New Jersey school districts establish curricula to meet the proficiencies found in the New Jersey Core Curriculum Content Standards (NJCCCS). The research focuses on the effectiveness of the Washington Township High School Career and Technology Education Department's curriculum in addressing the NJCCCS Financial Literacy benchmarks. The…

  11. A Dosimetry Assessment for the Core Restraint of an Advanced Gas Cooled Reactor

    NASA Astrophysics Data System (ADS)

    Thornton, D. A.; Allen, D. A.; Tyrrell, R. J.; Meese, T. C.; Huggon, A. P.; Whiley, G. S.; Mossop, J. R.

    2009-08-01

    This paper describes calculations of neutron damage rates within the core restraint structures of Advanced Gas Cooled Reactors (AGRs). Using advanced features of the Monte Carlo radiation transport code MCBEND, and neutron source data from core follow calculations performed with the reactor physics code PANTHER, a detailed model of the reactor cores of two of British Energy's AGR power plants has been developed for this purpose. Because there are no relevant neutron fluence measurements directly supporting this assessment, results of benchmark comparisons and successful validation of MCBEND for Magnox reactors have been used to estimate systematic and random uncertainties on the predictions. In particular, it has been necessary to address the known under-prediction of lower energy fast neutron responses associated with the penetration of large thicknesses of graphite.

  12. Deterministic Modeling of the High Temperature Test Reactor with DRAGON-HEXPEDITE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Ortensi; M.A. Pope; R.M. Ferrer

    2010-10-01

    The Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability of the Next Generation Nuclear Power (NGNP) project. In order to examine the INL’s current prismatic reactor analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19 fuel column thin annular core, and the fully loaded core critical condition with 30 fuel columns. Special emphasis is devoted to physical phenomena and artifacts in HTTR that are similar to phenomena and artifacts in the NGNP base design. The DRAGON code is used in this study since it offers significant ease and versatility in modeling prismatic designs. DRAGON can generate transport solutions via Collision Probability (CP), Method of Characteristics (MOC) and Discrete Ordinates (Sn). A fine group cross-section library based on the SHEM 281 energy structure is used in the DRAGON calculations. The results from this study show reasonable agreement in the calculation of the core multiplication factor with the MC methods, but a consistent bias of 2–3% with the experimental values is obtained. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and ²³⁵U cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement partially stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model, the rod positions were fixed. In addition, this work includes a brief study of a cross section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.

  13. 10 CFR Appendix E to Part 50 - Emergency Planning and Preparedness for Production and Utilization Facilities

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... could communicate with a safety system. In this case, appropriate isolation devices would be required at..., feedwater flow, and reactor power; (2) Safety injection: Reactor core isolation cooling flow, high-pressure... data points identified in the ERDS Data Point Library 9 (site specific data base residing on the ERDS...

  14. 10 CFR Appendix E to Part 50 - Emergency Planning and Preparedness for Production and Utilization Facilities

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... could communicate with a safety system. In this case, appropriate isolation devices would be required at..., feedwater flow, and reactor power; (2) Safety injection: Reactor core isolation cooling flow, high-pressure... data points identified in the ERDS Data Point Library 9 (site specific data base residing on the ERDS...

  15. 10 CFR Appendix A to Part 110 - Illustrative List of Nuclear Reactor Equipment Under NRC Export Licensing Authority

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Illustrative List of Nuclear Reactor Equipment Under NRC... List of Nuclear Reactor Equipment Under NRC Export Licensing Authority Note: A nuclear reactor... core of a nuclear reactor and capable of withstanding the operating pressure of the primary coolant. (2...

  16. Methodology of full-core Monte Carlo calculations with leakage parameter evaluations for benchmark critical experiment analysis

    NASA Astrophysics Data System (ADS)

    Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.

    1997-02-01

    The method of buckling evaluation realized in the Monte Carlo code MCS is described. This method was applied to the calculational analysis of the well-known light water experiments TRX-1 and TRX-2. The analysis shows that there is no coincidence among Monte Carlo results obtained in different ways: the MCS calculations with given experimental bucklings; the MCS calculations with bucklings evaluated from full-core MCS direct simulations; the full-core MCNP and MCS direct simulations; and the MCNP and MCS calculations in which the results of cell calculations are corrected by coefficients taking into account the leakage from the core. The buckling values evaluated by full-core MCS calculations also differ from the experimental ones, especially in the case of TRX-1, where the difference corresponds to a 0.5 percent increase in the Keff value.
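    For orientation, in simple diffusion theory a buckling value links a cell's infinite-medium multiplication factor to the finite core, which is why errors in the evaluated buckling feed directly into Keff; a sketch under textbook one-group assumptions (all values invented, not the MCS evaluation method):

      # k_eff = k_inf / (1 + M2 * B2), with migration area M2 and buckling B2.
      def k_eff(k_inf, migration_area, buckling):
          return k_inf / (1.0 + migration_area * buckling)

      print(k_eff(k_inf=1.18, migration_area=60.0, buckling=0.003))  # 1.0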

  17. Optical interconnection network for parallel access to multi-rank memory in future computing systems.

    PubMed

    Wang, Kang; Gu, Huaxi; Yang, Yintang; Wang, Kun

    2015-08-10

    With the number of cores increasing, there is an emerging need for a high-bandwidth low-latency interconnection network, serving core-to-memory communication. In this paper, aiming at the goal of simultaneous access to multi-rank memory, we propose an optical interconnection network for core-to-memory communication. In the proposed network, the wavelength usage is delicately arranged so that cores can communicate with different ranks at the same time and broadcast for flow control can be achieved. A distributed memory controller architecture that works in a pipeline mode is also designed for efficient optical communication and transaction address processes. The scaling method and wavelength assignment for the proposed network are investigated. Compared with traditional electronic bus-based core-to-memory communication, the simulation results based on the PARSEC benchmark show that the bandwidth enhancement and latency reduction are apparent.

  18. Core-shell Au-Pd nanoparticles as cathode catalysts for microbial fuel cell applications

    PubMed Central

    Yang, Gaixiu; Chen, Dong; Lv, Pengmei; Kong, Xiaoying; Sun, Yongming; Wang, Zhongming; Yuan, Zhenhong; Liu, Hui; Yang, Jun

    2016-01-01

    Bimetallic nanoparticles with core-shell structures usually display enhanced catalytic properties due to the lattice strain created between the core and shell regions. In this study, we demonstrate the application of bimetallic Au-Pd nanoparticles with an Au core and a thin Pd shell as cathode catalysts in microbial fuel cells, which represent a promising technology for wastewater treatment, while directly generating electrical energy. Specifically, in comparison with the hollow structured Pt nanoparticles, a benchmark for the electrocatalysis, the bimetallic core-shell Au-Pd nanoparticles are found to have superior activity and stability for the oxygen reduction reaction under neutral conditions due to the strong electronic interaction and lattice strain effect between the Au core and the Pd shell domains. The maximum power density generated in a membraneless single-chamber microbial fuel cell running on wastewater with core-shell Au-Pd as cathode catalysts is ca. 16.0 W m⁻³ and remains stable over 150 days, clearly illustrating the potential of core-shell nanostructures in the applications of microbial fuel cells. PMID:27734945

  19. I Know What You Did Last Summer

    ERIC Educational Resources Information Center

    Opalinski, Gail; Ellers, Sherry; Goodman, Amy

    2004-01-01

    This article describes the revised summer school program developed by the Anchorage (AK) School District for students who received poor grades in their core classes or low scores in the Alaska Benchmark Examinations or California Achievement Tests. More than 500 middle school students from the district spent five weeks during the summer honing…

  20. Inclusion and Human Rights in Health Policies: Comparative and Benchmarking Analysis of 51 Policies from Malawi, Sudan, South Africa and Namibia

    PubMed Central

    MacLachlan, Malcolm; Amin, Mutamad; Mannan, Hasheem; El Tayeb, Shahla; Bedri, Nafisa; Swartz, Leslie; Munthali, Alister; Van Rooy, Gert; McVeigh, Joanne

    2012-01-01

    While many health services strive to be equitable, accessible and inclusive, peoples’ right to health often goes unrealized, particularly among vulnerable groups. The extent to which health policies explicitly seek to achieve such goals sets the policy context in which services are delivered and evaluated. An analytical framework was developed – EquiFrame – to evaluate 1) the extent to which 21 Core Concepts of human rights were addressed in policy documents, and 2) coverage of 12 Vulnerable Groups who might benefit from such policies. Using this framework, analysis of 51 policies across Malawi, Namibia, South Africa and Sudan, confirmed the relevance of all Core Concepts and Vulnerable Groups. Further, our analysis highlighted some very strong policies, serious shortcomings in others as well as country-specific patterns. If social inclusion and human rights do not underpin policy formation, it is unlikely they will be inculcated in service delivery. EquiFrame facilitates policy analysis and benchmarking, and provides a means for evaluating policy revision and development. PMID:22649488

  1. A heterogeneous computing accelerated SCE-UA global optimization method using OpenMP, OpenCL, CUDA, and OpenACC.

    PubMed

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Liang, Ke; Hong, Yang

    2017-10-01

    The shuffled complex evolution optimization developed at the University of Arizona (SCE-UA) has been successfully applied in various kinds of scientific and engineering optimization applications, such as hydrological model parameter calibration, for many years. The algorithm possesses good global optimality, convergence stability and robustness. However, benchmark and real-world applications reveal the poor computational efficiency of the SCE-UA. This research aims at the parallelization and acceleration of the SCE-UA method based on powerful heterogeneous computing technology. The parallel SCE-UA is implemented on Intel Xeon multi-core CPU (by using OpenMP and OpenCL) and NVIDIA Tesla many-core GPU (by using OpenCL, CUDA, and OpenACC). The serial and parallel SCE-UA were tested based on the Griewank benchmark function. Comparison results indicate the parallel SCE-UA significantly improves computational efficiency compared to the original serial version. The OpenCL implementation obtains the best overall acceleration results, albeit with the most complex source code. The parallel SCE-UA has bright prospects to be applied in real-world applications.
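    The population-evaluation step that such a parallelization targets is embarrassingly parallel; a CPU-only sketch on the same Griewank test function (this is not the SCE-UA algorithm itself, only its costly inner loop):

      import numpy as np
      from multiprocessing import Pool

      def griewank(x):
          """Griewank: 1 + sum(x_i^2)/4000 - prod(cos(x_i / sqrt(i)))."""
          x = np.asarray(x, dtype=float)
          i = np.arange(1, x.size + 1)
          return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          population = rng.uniform(-600, 600, size=(1000, 10))
          with Pool() as pool:                        # CPU analogue of the
              costs = pool.map(griewank, population)  # OpenMP/OpenCL versions
          print(min(costs))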

  2. Rubus: A compiler for seamless and extensible parallelism.

    PubMed

    Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages, which were designed for machines with single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of code in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores. For a matrix multiplication benchmark, an average execution speedup of 84 times was achieved on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program.

  4. Flexible Tagged Architecture for Trustworthy Multi-core Platforms

    DTIC Science & Technology

    2015-06-01

    well as two kernel benchmarks for SHA-256 and GMAC, which are popular cryptographic standards. We compared the execution time of these benchmarks... [the remainder of this excerpt is table residue: FPGA resource-utilization figures for the UMC, DIFT, and BC configurations on the Flex fabric and per-benchmark slowdown ratios for sha and gmac]

  5. Developments in lithium-ion battery technology in the People's Republic of China.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patil, P. G.; Energy Systems

    2008-02-28

    Argonne National Laboratory prepared this report, under the sponsorship of the Office of Vehicle Technologies (OVT) of the U.S. Department of Energy's (DOE's) Office of Energy Efficiency and Renewable Energy, for the Vehicles Technologies Team. The information in the report is based on the author's visit to Beijing, Tianjin, and Shanghai, China, to meet with representatives from several organizations (listed in Appendix A) developing and manufacturing lithium-ion battery technology for cell phones and electronics, electric bikes, and electric and hybrid vehicle applications. The purpose of the visit was to assess the status of lithium-ion battery technology in China and to determine if lithium-ion batteries produced in China are available for benchmarking in the United States. With benchmarking, DOE and the U.S. battery development industry would be able to understand the status of the battery technology, which would enable the industry to formulate a long-term research and development program. This report also describes the state of lithium-ion battery technology in the United States, provides information on joint ventures, and includes information on government incentives and policies in the People's Republic of China (PRC).

  6. Fostering employee involvement.

    PubMed

    Beecher, G P

    1997-11-01

    Every year, the ODA's Economics of Practice Committee, with the help of an independent consulting firm, carries out the Cost of Practice Monitor, which tracks the various costs of running a dental practice in Ontario. This article is the result of a joint ODA-Arthur Andersen initiative to provide members with detailed information from the Monitor. Over the next year, there will be a series of articles published under the heading "Best practises for Ontario's Dental Practices." The article featured in this issue focuses on wage expenses in dental practices and how to foster employee involvement as a means of addressing cost-productivity issues. Furthermore, information relating to wage expenses may be used by practitioners to benchmark their practice against the average Ontario dental practice. Appendix C was developed for this purpose. Through benchmarking, the practitioner may gain insight into ways of evaluating their practice and addressing issues that could improve the management of the practice. For a long time, concepts of best business practices were applied only to manufacturing organizations or large multi-national corporations, but experience has demonstrated that these activities are universal to all organizations, including service companies, schools, government and not-for-profit organizations.

  7. 77 FR 64955 - Hardwood and Decorative Plywood From the People's Republic of China: Initiation of Countervailing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-24

    .... Appendix I Scope of the Investigation Hardwood and decorative plywood is a panel composed of an assembly of two or more layers or plies of wood veneer(s) in combination with a core. The several layers, along... decorative plywood panel can be composed of one or more species of hardwoods, softwoods, or bamboo, (in...

  8. Evaluation of Ferroelectric Materials for Memory Applications

    DTIC Science & Technology

    1990-06-01

    as automobile odometers, access counters, and flight time recorders. Detailed product information is provided in Appendix A. 3. Optical Read...volatility but by definition are not reprogrammable, which severely restricts flexibility and makes error correction difficult. Magnetic core is non...battery-backed SRAMs as well. The programs for embedded controllers, such as those increasingly used in automobiles, are kept in nonvolatile memory. The

  9. GW100: Benchmarking G0W0 for Molecular Systems.

    PubMed

    van Setten, Michiel J; Caruso, Fabio; Sharifzadeh, Sahar; Ren, Xinguo; Scheffler, Matthias; Liu, Fang; Lischner, Johannes; Lin, Lin; Deslippe, Jack R; Louie, Steven G; Yang, Chao; Weigend, Florian; Neaton, Jeffrey B; Evers, Ferdinand; Rinke, Patrick

    2015-12-08

    We present the GW100 set. GW100 is a benchmark set of the ionization potentials and electron affinities of 100 molecules computed with the GW method using three independent GW codes and different GW methodologies. The quasi-particle energies of the highest-occupied molecular orbitals (HOMO) and lowest-unoccupied molecular orbitals (LUMO) are calculated for the GW100 set at the G0W0@PBE level using the software packages TURBOMOLE, FHI-aims, and BerkeleyGW. The use of these three codes allows for a quantitative comparison of the type of basis set (plane wave or local orbital) and handling of unoccupied states, the treatment of core and valence electrons (all electron or pseudopotentials), the treatment of the frequency dependence of the self-energy (full frequency or more approximate plasmon-pole models), and the algorithm for solving the quasi-particle equation. Primary results include reference values for future benchmarks, best practices for convergence within a particular approach, and average error bars for the most common approximations.

  10. LUMA: A many-core, Fluid-Structure Interaction solver based on the Lattice-Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Harwood, Adrian R. G.; O'Connor, Joseph; Sanchez Muñoz, Jonathan; Camps Santasmasas, Marta; Revell, Alistair J.

    2018-01-01

    The Lattice-Boltzmann Method at the University of Manchester (LUMA) project was commissioned to build a collaborative research environment in which researchers of all abilities can study fluid-structure interaction (FSI) problems in engineering applications from aerodynamics to medicine. It is built on the principles of accessibility, simplicity and flexibility. The LUMA software at the core of the project is a capable FSI solver with turbulence modelling and many-core scalability as well as a wealth of input/output and pre- and post-processing facilities. The software has been validated and several major releases benchmarked on supercomputing facilities internationally. The software architecture is modular and arranged logically using a minimal amount of object-orientation to maintain a simple and accessible software.

  11. An MPI-based MoSST core dynamics model

    NASA Astrophysics Data System (ADS)

    Jiang, Weiyuan; Kuang, Weijia

    2008-09-01

    Distributed systems are among the main cost-effective and expandable platforms for high-end scientific computing. Therefore scalable numerical models are important for effective use of such systems. In this paper, we present an MPI-based numerical core dynamics model for simulation of geodynamo and planetary dynamos, and for simulation of core-mantle interactions. The model is developed based on MPI libraries. Two algorithms are used for node-node communication: a "master-slave" architecture and a "divide-and-conquer" architecture. The former is easy to implement but not scalable in communication. The latter is scalable in both computation and communication. The model scalability is tested on Linux PC clusters with up to 128 nodes. This model is also benchmarked with a published numerical dynamo model solution.
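    A minimal sketch of the "master-slave" pattern the authors contrast with their scalable divide-and-conquer scheme (generic mpi4py, not the MoSST code; assumes an MPI environment with at least two ranks):

      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      if rank == 0:
          work = list(range(100))
          # master distributes round-robin slices, then gathers results
          for dest in range(1, comm.size):
              comm.send(work[dest - 1::comm.size - 1], dest=dest)
          results = [comm.recv(source=s) for s in range(1, comm.size)]
      else:
          chunk = comm.recv(source=0)                # slave computes locally
          comm.send([c * c for c in chunk], dest=0)  # and reports back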

  12. Core Competencies for Injury and Violence Prevention

    PubMed Central

    Stephens-Stidham, Shelli; Peek-Asa, Corinne; Bou-Saada, Ingrid; Hunter, Wanda; Lindemer, Kristen; Runyan, Carol

    2009-01-01

    Efforts to reduce the burden of injury and violence require a workforce that is knowledgeable and skilled in prevention. However, there has been no systematic process to ensure that professionals possess the necessary competencies. To address this deficiency, we developed a set of core competencies for public health practitioners in injury and violence prevention programs. The core competencies address domains including public health significance, data, the design and implementation of prevention activities, evaluation, program management, communication, stimulating change, and continuing education. Specific learning objectives establish goals for training in each domain. The competencies assist in efforts to reduce the burden of injury and violence and can provide benchmarks against which to assess progress in professional capacity for injury and violence prevention. PMID:19197083

  13. Test Scheduling for Core-Based SOCs Using Genetic Algorithm Based Heuristic Approach

    NASA Astrophysics Data System (ADS)

    Giri, Chandan; Sarkar, Soumojit; Chattopadhyay, Santanu

    This paper presents a Genetic algorithm (GA) based solution to co-optimize test scheduling and wrapper design for core based SOCs. Core testing solutions are generated as a set of wrapper configurations, represented as rectangles with width equal to the number of TAM (Test Access Mechanism) channels and height equal to the corresponding testing time. A locally optimal best-fit heuristic based bin packing algorithm has been used to determine the placement of rectangles minimizing the overall test time, whereas GA has been utilized to generate the sequence of rectangles to be considered for placement. Experimental results on ITC'02 benchmark SOCs show that the proposed method provides better solutions than the recent works reported in the literature.
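    A sketch of the locally optimal best-fit placement step described above (toy data; the GA supplies the ordering of the rectangles, each being a core test of a given TAM width and test time):

      def schedule(rectangles, total_tam):
          """Place each (width, time) rectangle at the earliest-finishing
          contiguous window of TAM channels; return the overall test time."""
          finish = [0.0] * total_tam       # finish time per TAM channel
          makespan = 0.0
          for width, time in rectangles:   # ordering comes from the GA
              start, offset = min(
                  (max(finish[i:i + width]), i)
                  for i in range(total_tam - width + 1))
              for i in range(offset, offset + width):
                  finish[i] = start + time
              makespan = max(makespan, start + time)
          return makespan

      print(schedule([(16, 100.0), (8, 40.0), (8, 70.0), (16, 30.0)], 32))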

  14. Note-Taking Interventions to Assist Students with Disabilities in Content Area Classes

    ERIC Educational Resources Information Center

    Boyle, Joseph R.; Forchelli, Gina A.; Cariss, Kaitlyn

    2015-01-01

    As high-stakes testing, Common Core, and state standards become the new norms in schools, teachers are tasked with helping all students meet specific benchmarks. In conjunction with the influx of more students with disabilities being included in inclusive and general education classrooms where lectures with note-taking comprise a majority of…

  15. A Qualitative Study of Urban and Suburban Elementary Student Understandings of Pest-Related Science and Agricultural Education Benchmarks.

    ERIC Educational Resources Information Center

    Trexler, Cary J.

    2000-01-01

    Clinical interviews with nine fifth graders revealed that experiences play a pivotal role in their understanding of pests. They lack well-developed schema and language to discuss pest management. A foundation of core biological concepts was necessary for understanding pests and pest management. (Contains 34 references.) (SK)

  16. A Psychometric Analysis of Teacher-Made Benchmark Assessments in English Language Arts

    ERIC Educational Resources Information Center

    Milligan, Andrea

    2017-01-01

    The implementation of the Common Core State Standards (CCSS) has placed increased accountability for outcomes on both students and teachers. To address the current youth literacy crisis in the United States, the CCSS call for students to read increasingly complex informational and literary texts. Since teachers are held accountable for students'…

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mac Donald, Philip Elsworth; Buongiorno, Jacopo; Davis, Cliff Bybee

    The purpose of this collaborative Idaho National Engineering and Environmental Laboratory (INEEL) and Massachusetts Institute of Technology (MIT) Laboratory Directed Research and Development (LDRD) project is to investigate the suitability of lead or lead-bismuth cooled fast reactors for producing low-cost electricity as well as for actinide burning. The goal is to identify and analyze the key technical issues in core neutronics, materials, thermal-hydraulics, fuels, and economics associated with the development of this reactor concept. Work has been accomplished in four major areas of research: core neutronic design, plant engineering, material compatibility studies, and coolant activation. The publications derived from work on this project (since project inception) are listed in Appendix A.

  18. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    NASA Astrophysics Data System (ADS)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N) whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes; in doing so, it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented/tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)
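    The √N scaling of the nearest-neighbor fission bank exchange can be checked empirically under simplified assumptions (Poisson site counts, one-dimensional processor ordering; this is a sketch, not the thesis' derivation):

      import numpy as np

      rng = np.random.default_rng(1)

      def traffic(N, p=256, reps=200):
          """Average number of sites crossing processor boundaries when each
          running surplus is shifted to the next neighbor."""
          counts = rng.poisson(N / p, size=(reps, p))
          imbalance = np.cumsum(counts - counts.mean(axis=1, keepdims=True),
                                axis=1)
          return np.abs(imbalance).sum(axis=1).mean()

      for N in (10_000, 40_000, 160_000):
          print(N, round(traffic(N)))  # roughly doubles per 4x increase in N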

  19. Lattice gas methods for computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Sparrow, Victor W.

    1995-01-01

    This paper presents the lattice gas solution to the category 1 problems of the ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics. The first and second problems were solved for Δt = Δx = 1, and additionally the second problem was solved for Δt = 1/4 and Δx = 1/2. The results are striking: even for these large time and space grids the lattice gas numerical solutions are almost indistinguishable from the analytical solutions. A simple bug in the Mathematica code was found in the solutions submitted for comparison, and the comparison plots shown at the end of this volume show the bug. An Appendix to the present paper shows an example lattice gas solution with and without the bug.

  20. Development of a New 47-Group Library for the CASL Neutronics Simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Williams, Mark L; Wiarda, Dorothea

    The CASL core simulator MPACT is under development for coupled neutronics and thermal-hydraulics simulation of pressurized light water reactors. The key characteristics of the MPACT code include a subgroup method for resonance self-shielding and a whole-core solver with a 1D/2D synthesis method. The ORNL AMPX/SCALE code packages have been significantly improved to support various intermediate resonance self-shielding approximations such as the subgroup and embedded self-shielding methods. New 47-group AMPX and MPACT libraries based on ENDF/B-VII.0 have been generated for the CASL core simulator MPACT, whose group structure comes from the HELIOS library. The new 47-group MPACT library includes all nuclear data required for static and transient core simulations. This study discusses the detailed procedure used to generate the 47-group AMPX and MPACT libraries and benchmark results for the VERA progression problems.

  1. Neutron Reference Benchmark Field Specification: ACRR Free-Field Environment (ACRR-FF-CC-32-CL).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity free-field reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  2. Affinity-aware checkpoint restart

    DOE PAGES

    Saini, Ajay; Rezaei, Arash; Mueller, Frank; ...

    2014-12-08

    Current checkpointing techniques employed to overcome faults for HPC applications result in inferior application performance after restart from a checkpoint for a number of applications. This is due to a lack of page and core affinity awareness in the checkpoint/restart (C/R) mechanism: application tasks originally pinned to cores may be restarted on different cores, and in the case of non-uniform memory architectures (NUMA), quite common today, memory pages associated with tasks on a NUMA node may be associated with a different NUMA node after restart. This work contributes a novel design technique for C/R mechanisms to preserve task-to-core maps and NUMA-node-specific page affinities across restarts. Experimental results with BLCR, a C/R mechanism enhanced with affinity awareness, demonstrate significant performance benefits of 37%-73% for the NAS Parallel Benchmark codes and 6-12% for NAMD, with negligible overheads, rather than execution times up to nearly four times longer without affinity-aware restarts on 16 cores.
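    A user-level sketch of the affinity bookkeeping involved (BLCR itself works at the kernel level in C; this only illustrates the idea using the Linux-only Python API):

      import os

      # record the task-to-core map at checkpoint time...
      saved_affinity = os.sched_getaffinity(0)   # e.g. {0, 1, 2, 3}

      # ...and re-pin the restarted task to the same cores so NUMA-local
      # memory pages stay local after restart.
      os.sched_setaffinity(0, saved_affinity)
      print("pinned to cores:", sorted(saved_affinity))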

  4. Evaluation of the Pool Critical Assembly Benchmark with Explicitly-Modeled Geometry using MCNP6

    DOE PAGES

    Kulesza, Joel A.; Martz, Roger Lee

    2017-03-01

    Despite being one of the most widely used benchmarks for qualifying light water reactor (LWR) radiation transport methods and data, no benchmark calculation of the Oak Ridge National Laboratory (ORNL) Pool Critical Assembly (PCA) pressure vessel wall benchmark facility (PVWBF) using MCNP6 with explicitly modeled core geometry exists. As such, this paper provides results for such an analysis. First, a criticality calculation is used to construct the fixed source term. Next, ADVANTG-generated variance reduction parameters are used within the final MCNP6 fixed source calculations. These calculations provide unadjusted dosimetry results using three sets of dosimetry reaction cross sections of varying ages (those packaged with MCNP6, from the IRDF-2002 multi-group library, and from the ACE-formatted IRDFF v1.05 library). These results are then compared to two different sets of measured reaction rates. The comparison agrees within 2% overall and within 5% on a specific reaction and dosimetry location basis. Except for the neptunium dosimetry, the individual foil calculation-to-experiment ratios usually agree within 10% and are typically greater than unity. Finally, in the course of developing these calculations, geometry that has previously not been completely specified is provided herein for the convenience of future analysts.

  5. ViSAPy: a Python tool for biophysics-based generation of virtual spiking activity for evaluation of spike-sorting algorithms.

    PubMed

    Hagen, Espen; Ness, Torbjørn V; Khosrowshahi, Amir; Sørensen, Christina; Fyhn, Marianne; Hafting, Torkel; Franke, Felix; Einevoll, Gaute T

    2015-04-30

    New, silicon-based multielectrodes comprising hundreds or more electrode contacts offer the possibility to record spike trains from thousands of neurons simultaneously. This potential cannot be realized unless accurate, reliable automated methods for spike sorting are developed, in turn requiring benchmarking data sets with known ground-truth spike times. We here present a general simulation tool for computing benchmarking data for evaluation of spike-sorting algorithms entitled ViSAPy (Virtual Spiking Activity in Python). The tool is based on a well-established biophysical forward-modeling scheme and is implemented as a Python package built on top of the neuronal simulator NEURON and the Python tool LFPy. ViSAPy allows for arbitrary combinations of multicompartmental neuron models and geometries of recording multielectrodes. Three example benchmarking data sets are generated, i.e., tetrode and polytrode data mimicking in vivo cortical recordings and microelectrode array (MEA) recordings of in vitro activity in salamander retinas. The synthesized example benchmarking data mimics salient features of typical experimental recordings, for example, spike waveforms depending on interspike interval. ViSAPy goes beyond existing methods as it includes biologically realistic model noise, synaptic activation by recurrent spiking networks, finite-sized electrode contacts, and allows for inhomogeneous electrical conductivities. ViSAPy is optimized to allow for generation of long time series of benchmarking data, spanning minutes of biological time, by parallel execution on multi-core computers. ViSAPy is an open-ended tool as it can be generalized to produce benchmarking data for arbitrary recording-electrode geometries and with various levels of complexity.

  6. A theoretical and experimental benchmark study of core-excited states in nitrogen

    NASA Astrophysics Data System (ADS)

    Myhre, Rolf H.; Wolf, Thomas J. A.; Cheng, Lan; Nandi, Saikat; Coriani, Sonia; Gühr, Markus; Koch, Henrik

    2018-02-01

    The high resolution near edge X-ray absorption fine structure spectrum of nitrogen displays the vibrational structure of the core-excited states. This makes nitrogen well suited for assessing the accuracy of different electronic structure methods for core excitations. We report high resolution experimental measurements performed at the SOLEIL synchrotron facility. These are compared with theoretical spectra calculated using coupled cluster theory and algebraic diagrammatic construction theory. The coupled cluster singles and doubles with perturbative triples model known as CC3 is shown to accurately reproduce the experimental excitation energies as well as the spacing of the vibrational transitions. The computational results are also shown to be systematically improved within the coupled cluster hierarchy, with the coupled cluster singles, doubles, triples, and quadruples method faithfully reproducing the experimental vibrational structure.

  7. Assessment of competency in endoscopy: establishing and validating generalizable competency benchmarks for colonoscopy.

    PubMed

    Sedlack, Robert E; Coyle, Walter J

    2016-03-01

    The Mayo Colonoscopy Skills Assessment Tool (MCSAT) has previously been used to describe learning curves and competency benchmarks for colonoscopy; however, these data were limited to a single training center. The newer Assessment of Competency in Endoscopy (ACE) tool is a refinement of the MCSAT tool put forth by the Training Committee of the American Society for Gastrointestinal Endoscopy, intended to include additional important quality metrics. The goal of this study is to validate the changes made by updating this tool and establish more generalizable and reliable learning curves and competency benchmarks for colonoscopy by examining a larger national cohort of trainees. In a prospective, multicenter trial, gastroenterology fellows at all stages of training had their core cognitive and motor skills in colonoscopy assessed by staff. Evaluations occurred at set intervals of every 50 procedures throughout the 2013 to 2014 academic year. Skills were graded by using the ACE tool, which uses a 4-point grading scale defining the continuum from novice to competent. Average learning curves for each skill were established at each interval in training and competency benchmarks for each skill were established using the contrasting groups method. Ninety-three gastroenterology fellows at 10 U.S. academic institutions had 1061 colonoscopies assessed by using the ACE tool. Average scores of 3.5 were found to be inclusive of all minimal competency thresholds identified for each core skill. Cecal intubation times of less than 15 minutes and independent cecal intubation rates of 90% were also identified as additional competency thresholds during analysis. The average fellow achieved all cognitive and motor skill endpoints by 250 procedures, with >90% surpassing these thresholds by 300 procedures. Nationally generalizable learning curves for colonoscopy skills in gastroenterology fellows are described. Average ACE scores of 3.5, cecal intubation rates of 90%, and intubation times less than 15 minutes are recommended as minimal competency criteria. On average, it takes 250 procedures to achieve competence in colonoscopy. The thresholds found in this multicenter cohort by using the ACE tool are nearly identical to the previously established MCSAT benchmarks and are consistent with recent gastroenterology training recommendations but far higher than current training requirements in other specialties.
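    For illustration, the contrasting groups method sets the cutoff where the score distributions of not-yet-competent and competent trainees intersect; a sketch assuming normal score distributions with invented parameters:

      import numpy as np

      def contrasting_groups_cutoff(mu0, sd0, mu1, sd1):
          """Intersection point of two normal pdfs (a quadratic in x)."""
          a = 1 / (2 * sd0**2) - 1 / (2 * sd1**2)
          b = mu1 / sd1**2 - mu0 / sd0**2
          c = (mu0**2 / (2 * sd0**2) - mu1**2 / (2 * sd1**2)
               - np.log(sd1 / sd0))
          roots = np.roots([a, b, c])
          return roots[(roots > min(mu0, mu1)) & (roots < max(mu0, mu1))][0]

      print(contrasting_groups_cutoff(2.9, 0.4, 3.8, 0.3))  # ~3.4 here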

  8. Social Studies on the Outside Looking In: Redeeming the Neglected Curriculum

    ERIC Educational Resources Information Center

    Hermeling, Andrew Dyrli

    2013-01-01

    Many social studies teachers are nervous about the coming of Common Core State Standards. With so much emphasis placed on literacy, social studies teachers fear they will see content slashed to leave time for meeting English's non-fiction standards. Already reeling from a lack of attention from the benchmarks put in place by No Child Left Behind,…

  9. ZPR-6 Assembly 7 high ²⁴⁰Pu core experiments: a fast reactor core with mixed (Pu,U)-oxide fuel and a central high ²⁴⁰Pu zone.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lell, R. M.; Morman, J. A.; Schaefer, R.W.

    ZPR-6 Assembly 7 (ZPR-6/7) encompasses a series of experiments performed at the ZPR-6 facility at Argonne National Laboratory in 1970 and 1971 as part of the Demonstration Reactor Benchmark Program (Reference 1). Assembly 7 simulated a large sodium-cooled LMFBR with mixed oxide fuel, depleted uranium radial and axial blankets, and a core H/D near unity. ZPR-6/7 was designed to test fast reactor physics data and methods, so configurations in the Assembly 7 program were as simple as possible in terms of geometry and composition. ZPR-6/7 had a very uniform core assembled from small plates of depleted uranium, sodium, iron oxide, U₃O₈ and Pu-U-Mo alloy loaded into stainless steel drawers. The steel drawers were placed in square stainless steel tubes in the two halves of a split table machine. ZPR-6/7 had a simple, symmetric core unit cell whose neutronic characteristics were dominated by plutonium and ²³⁸U. The core was surrounded by thick radial and axial regions of depleted uranium to simulate radial and axial blankets and to isolate the core from the surrounding room. The ZPR-6/7 program encompassed 139 separate core loadings, which include the initial approach to critical and all subsequent core loading changes required to perform specific experiments and measurements. In this context a loading refers to a particular configuration of fueled drawers, radial blanket drawers and experimental equipment (if present) in the matrix of steel tubes. Two principal core configurations were established. The uniform core (Loadings 1-84) had a relatively uniform core composition. The high ²⁴⁰Pu core (Loadings 85-139) was a variant on the uniform core. The plutonium in the Pu-U-Mo fuel plates in the uniform core contains 11% ²⁴⁰Pu. In the high ²⁴⁰Pu core, all Pu-U-Mo plates in the inner core region (central 61 matrix locations per half of the split table machine) were replaced by Pu-U-Mo plates containing 27% ²⁴⁰Pu in the plutonium component to construct a central core zone with a composition closer to that in an LMFBR core with high burnup. The high ²⁴⁰Pu configuration was constructed for two reasons. First, the composition of the high ²⁴⁰Pu zone more closely matched the composition of LMFBR cores anticipated in design work in 1970. Second, comparison of measurements in the ZPR-6/7 uniform core with corresponding measurements in the high ²⁴⁰Pu zone provided an assessment of some of the effects of long-term ²⁴⁰Pu buildup in LMFBR cores. The uniform core version of ZPR-6/7 is evaluated in ZPR-LMFR-EXP-001. This document only addresses measurements in the high ²⁴⁰Pu core version of ZPR-6/7. Many types of measurements were performed as part of the ZPR-6/7 program. Measurements of criticality, sodium void worth, control rod worth and reaction rate distributions in the high ²⁴⁰Pu core configuration are evaluated here. For each category of measurements, the uncertainties are evaluated, and benchmark model data are provided.

  10. Theoretical studies of massive stars. I - Evolution of a 15-solar-mass star from the zero-age main sequence to neon ignition

    NASA Technical Reports Server (NTRS)

    Endal, A. S.

    1975-01-01

    The evolution of a star with mass 15 times that of the sun from the zero-age main sequence to neon ignition has been computed by the Henyey method. The hydrogen-rich envelope and all shell sources were explicitly included in the models. An algorithm has been developed for approximating the results of carbon burning, including the branching ratio for the C-12 + C-12 reaction and taking some secondary reactions into account. Penetration of the convective envelope into the core is found to be unimportant during the stages covered by the models. Energy transfer from the carbon-burning shell to the core by degenerate electron conduction becomes important after the core carbon-burning stage. Neon ignition will occur in a semidegenerate core and will lead to a mild 'flash.' Detailed numerical results are given in an appendix. Continuation of the calculations into later stages and variations with the total mass of the star will be discussed in later papers.

  11. 40 CFR Appendix B2 to Subpart F of... - Performance of Refrigerant Recovery, Recycling, and/or Reclaim Equipment

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...'s literature. (See Figure 2.) A 6.3 mm balance line shall be connected across the test apparatus... compressor high side. A 6.3 mm access port with a valve core shall be located in the balance line for the... recovery cylinder pressure no less than specified in 6.2.2. Place the test cylinder in liquid nitrogen for...

  12. 40 CFR Appendix B2 to Subpart F of... - Performance of Refrigerant Recovery, Recycling, and/or Reclaim Equipment

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...'s literature. (See Figure 2.) A 6.3 mm balance line shall be connected across the test apparatus... compressor high side. A 6.3 mm access port with a valve core shall be located in the balance line for the... recovery cylinder pressure no less than specified in 6.2.2. Place the test cylinder in liquid nitrogen for...

  13. 40 CFR Appendix B2 to Subpart F of... - Performance of Refrigerant Recovery, Recycling, and/or Reclaim Equipment

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...'s literature. (See Figure 2.) A 6.3 mm balance line shall be connected across the test apparatus... compressor high side. A 6.3 mm access port with a valve core shall be located in the balance line for the... recovery cylinder pressure no less than specified in 6.2.2. Place the test cylinder in liquid nitrogen for...

  14. Improving Defense Health Program Medical Research Processes

    DTIC Science & Technology

    2017-08-08

    needed for DHP medical research, such as the Army's Clinical and Translational Research Program Office, the Navy's Research Methods Training Program... research stated, "key infrastructure for a learning health system will encompass three core elements: data networks, methods, and workforce." A 2012... Research Methods Training Program, which will be further discussed in Appendix D.2. AIR FORCE Air Force Instruction 40-402, Protection of

  15. Ultrafast light matter interaction in CdSe/ZnS core-shell quantum dots

    NASA Astrophysics Data System (ADS)

    Yadav, Rajesh Kumar; Sharma, Rituraj; Mondal, Anirban; Adarsh, K. V.

    2018-04-01

    Core-shell quantum dots are important for carrier (electron and hole) confinement in the core/shell, which provides a stage to explore linear and nonlinear optical phenomena at the nanoscale limit. Here we present a comprehensive study of the ultrafast excitation dynamics and nonlinear optical absorption of CdSe/ZnS core-shell quantum dots with the help of ultrafast spectroscopy. Pump-probe and time-resolved measurements revealed reduced trapping at the CdSe surface due to the presence of the ZnS shell, which makes the photoluminescence more efficient. We have carried out femtosecond transient absorption studies of the CdSe/ZnS core-shell quantum dots by irradiation with 400 nm laser light, monitoring the transients in the visible region. The optical nonlinearity of the core-shell quantum dots was studied using the Z-scan technique with 120 fs pulses at a wavelength of 800 nm. The two-photon absorption coefficient (β) of the core-shell QDs was extracted as 80 cm/GW, and the QDs show an excellent optical limiting onset of 2.5 GW/cm2 with a low limiting differential transmittance of 0.10, an order of magnitude better than graphene-based materials.
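    A β value like the 80 cm/GW quoted above is typically extracted by fitting the open-aperture Z-scan trace to the lowest-order two-photon absorption model. Below is a minimal Python sketch of such a fit; the Rayleigh range, peak intensity, effective sample length, and noise level are illustrative assumptions, not values from this record.

        # Sketch: extract beta from an open-aperture Z-scan trace (assumed parameters).
        import numpy as np
        from scipy.optimize import curve_fit

        z0 = 1.0e-3     # Rayleigh range (m), assumed
        I0 = 2.0e11     # on-axis peak intensity at focus (W/m^2), assumed
        L_eff = 1.0e-3  # effective sample length (m), assumed

        def transmittance(z, beta):
            # lowest-order open-aperture model: T ~ 1 - q0/(2*sqrt(2)),
            # with q0 = beta*I0*L_eff / (1 + (z/z0)^2)
            q0 = beta * I0 * L_eff / (1.0 + (z / z0) ** 2)
            return 1.0 - q0 / (2.0 * np.sqrt(2.0))

        z = np.linspace(-5 * z0, 5 * z0, 101)        # scan positions
        T_meas = transmittance(z, 0.8e-9)            # synthetic data: beta = 0.8e-9 m/W = 80 cm/GW
        T_meas += np.random.normal(0, 2e-3, z.size)  # measurement noise

        beta_fit, _ = curve_fit(transmittance, z, T_meas, p0=[1e-10])
        print(f"fitted beta = {beta_fit[0] * 1e11:.1f} cm/GW")  # 1 m/W = 1e11 cm/GW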

  16. Towards the development of a consensual chronostratigraphy for Arctic Ocean sedimentary records

    NASA Astrophysics Data System (ADS)

    Hillaire-Marcel, Claude; de Vernal, Anne; Polyak, Leonid; Stein, Rüdiger; Maccali, Jenny; Jacobel, Allison; Cuny, Kristan

    2017-04-01

    Deciphering Arctic paleoceanography and paleoclimate, and linking them to global marine and atmospheric records, is much needed for comprehending the Earth's climate history. However, this task is hampered by multiple problems with dating Arctic Ocean sedimentary records, related notably to low and highly variable sedimentation rates, scarce and discontinuous biogenic proxies due to low productivity and/or poor preservation, and difficulties correlating regional records to global stacks (e.g., paleomagnetic). Despite recent advances in developing an Arctic Ocean sedimentary stratigraphy, and attempts at setting radiometric benchmark ages of respectively 300 and 150 ka based on the final decay of 230Th and 231Pa excesses (Thxs, Paxs) (Not et al., 2008), consensual age models are still missing, preventing reliable integration of Arctic records in a global paleoclimatic scheme. Here, we intend to illustrate these issues by comparing consistent Thxs-Paxs chronostratigraphic records from the Mendeleev-Alpha and Lomonosov ridges with the currently used age model based on climatostratigraphic interpretation of sedimentary records (e.g., Polyak et al., 2009; Stein et al., 2010). Data used were collected from the 2005 HOTRAX core MC-11 (northern Mendeleev Ridge) and the 2014 Polarstern core PS87-30 (Lomonosov Ridge). The depths at which Thxs and Paxs totally collapse are observed to be a factor of 3 deeper in core PS87-30 than in core MC-11, indicating average sedimentation rates 3 times higher at the Lomonosov Ridge site. Litho-biostratigraphic markers, such as foraminiferal peaks and manganese-enriched layers, show a similar pattern, with their occurrence 3 times deeper in core PS87-30 than in core MC-11. These very consistent downcore features highlight a glaring difference between the benchmark ages assigned to the total decay of Paxs and Thxs and the current age model based on a climatostratigraphic approach involving significantly higher sedimentation rates. This discrepancy calls for in-depth investigation, which could potentially result in the development of a consensual chronostratigraphy for Quaternary Arctic Ocean sediments, critical for integrating the Arctic into global paleoclimatic history.
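    The ~300 and ~150 ka benchmark ages mentioned above follow directly from the half-lives of the two isotopes: an initial excess decays exponentially and becomes undetectable after roughly four half-lives. A quick back-of-envelope check in Python (the 6% detection threshold is an assumption; the half-lives are standard values):

        # When does an exponentially decaying excess drop below a detection threshold f?
        # t = -T_half * log2(f)
        import math

        T_HALF = {"230Th": 75.38e3, "231Pa": 32.76e3}  # half-lives in years
        f_detect = 0.06  # assumed detection threshold: ~6% of the initial excess

        for iso, t_half in T_HALF.items():
            t_limit = -t_half * math.log2(f_detect)
            print(f"{iso}: excess undetectable after ~{t_limit / 1e3:.0f} ka")
        # ~306 ka for 230Th and ~133 ka for 231Pa, consistent with the ~300 and
        # ~150 ka benchmark horizons cited above.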

  17. Relevance of East African Drill Cores to Human Evolution: the Case of the Olorgesailie Drilling Project

    NASA Astrophysics Data System (ADS)

    Potts, R.

    2016-12-01

    Drill cores reaching the local basement of the East African Rift were obtained in 2012 south of the Olorgesailie Basin, Kenya, 20 km from excavations that document key benchmarks in the origin of Homo sapiens. Sediments totaling 216 m were obtained from two drilling locations representing the past 1 million years. The cores were acquired to build a detailed environmental record spatially associated with the transition from Acheulean to Middle Stone Age technology and extensive turnover in mammalian species. The project seeks precise tests of how climate dynamics and tectonic events were linked with these transitions. Core lithology (A.K. Behrensmeyer), geochronology (A. Deino), diatoms (R.B. Owen), phytoliths (R. Kinyanjui), and geochemistry (N. Rabideaux, D. Deocampo), among other indicators, show evidence of strong environmental variability in agreement with predicted high-eccentricity modulation of climate during the evolutionary transitions. Increased hominin mobility, elaboration of symbolic behavior, and concurrent turnover in mammalian species, indicating heightened adaptability to unpredictable ecosystems, point to a direct link between the evolutionary transitions and the landscape dynamics reflected in the Olorgesailie drill cores. For paleoanthropologists and Earth scientists, any link between evolutionary transitions and environmental dynamics requires robust evolutionary datasets pertinent to how selection, extinction, population divergence, and other evolutionary processes were impacted by the dynamics uncovered in drill core studies. Fossil and archeological records offer a rich source of data and of robust environment-evolution explanations that must be integrated into efforts by Earth scientists who seek to examine high-resolution climate records of human evolution. Paleoanthropological examples will illustrate the opportunities that exist for connecting evolutionary benchmarks to the data obtained from drilled African muds. Project members: R. Potts, A.K. Behrensmeyer, E. Beverly, K. Brady, J. Bright, E. Brown, J. Clark, A. Cohen, A. Deino, P. deMenocal, D. Deocampo, R. Dommain, J.T. Faith, J. King, R. Kinyanjui, N. Levin, J. Moerman, V. Muiruri, A. Noren, R.B. Owen, N. Rabideaux, R. Renaut, S. Rucina, J. Russell, J. Scott, M. Stockhecke, K. Uno

  18. Benchmarking the ATLAS software through the Kit Validation engine

    NASA Astrophysics Data System (ADS)

    De Salvo, Alessandro; Brasolin, Franco

    2010-04-01

    The measurement of the experiment software performance is a very important metric in order to choose the most effective resources to be used and to discover the bottlenecks of the code implementation. In this work we present the benchmark techniques used to measure the ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit Validation. The performance measurements, the data collection, the online analysis and display of the results will be presented. The results of the measurement on different platforms and architectures will be shown, giving a full report on the CPU power and memory consumption of the Monte Carlo generation, simulation, digitization and reconstruction of the most CPU-intensive channels. The impact of the multi-core computing on the ATLAS software performance will also be presented, comparing the behavior of different architectures when increasing the number of concurrent processes. The benchmark techniques described in this paper have been used in the HEPiX group since the beginning of 2008 to help defining the performance metrics for the High Energy Physics applications, based on the real experiment software.
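    The multi-core comparison described above boils down to measuring throughput as the number of concurrent processes grows. As a hedged illustration (this is not the Kit Validation code; the kernel is a stand-in for a CPU-intensive simulation task), a scan of this kind can be written as:

        # Measure how throughput of a CPU-bound task scales with process count.
        import time
        from multiprocessing import Pool

        def kernel(n: int) -> int:
            # stand-in for a CPU-intensive generation/reconstruction task
            s = 0
            for i in range(n):
                s += i * i
            return s

        if __name__ == "__main__":
            work = [200_000] * 16  # 16 identical tasks (size is arbitrary)
            for nproc in (1, 2, 4, 8):
                t0 = time.perf_counter()
                with Pool(processes=nproc) as pool:
                    pool.map(kernel, work)
                dt = time.perf_counter() - t0
                print(f"{nproc:2d} processes: {len(work) / dt:6.1f} tasks/s")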

  19. 100-KE REACTOR CORE REMOVAL PROJECT ALTERNATIVE ANALYSIS WORKSHOP REPORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HARRINGTON RA

    2010-01-15

    On December 15-16, 2009, a 100-KE Reactor Core Removal Project Alternative Analysis Workshop was conducted at the Washington State University Consolidated Information Center, Room 214. Colburn Kennedy, Project Director, CH2M HILL Plateau Remediation Company (CHPRC), requested the workshop and Richard Harrington provided facilitation. The purpose of the session was to select the preferred Bio Shield Alternative for integration with the Thermal Shield and Core Removal, and to develop the path forward to proceed with project delivery. Prior to this workshop, the S.A. Robotics (SAR) Obstruction Removal Alternatives Analysis (565-DLV-062) report was issued to all the team members for use prior to and throughout the session. The multidisciplinary team consisted of representatives from 100-KE Project Management, Engineering, Radcon, Nuclear Safety, Fire Protection, Crane/Rigging, SAR Project Engineering, the Department of Energy Richland Field Office, Environmental Protection Agency, Washington State Department of Ecology, Defense Nuclear Facility Safety Board, and Deactivation and Decommission subject matter experts from corporate CH2M HILL and Lucas. Appendix D contains the workshop agenda, guidelines and expectations, opening remarks, and the attendance roster used going into and followed throughout the workshop. The team was successful in selecting the preferred alternative and developing an eight-point path forward action plan to proceed with conceptual design. Conventional Demolition was selected as the preferred alternative over two other alternatives: Diamond Wire with Options, and Harmonic Delamination with Conventional Demolition. The team's preferred alternative aligned with the SAR Obstruction Removal Alternative Analysis report conclusion. However, the team identified several Path Forward actions, in Appendix A, which upon completion will solidify and potentially enhance the Conventional Demolition alternative with multiple options and approaches to achieve project delivery. In brief, the Path Forward was developed to reconsider potential open-air demolition areas; characterize to determine if any zircaloy exists; evaluate existing concrete data to determine additional characterization needs; size the new building to accommodate human-machine interface and tooling; consider a bucket thumb and the use of shape-charges in design; and finally to utilize complex-wide and industry explosive demolition lessons learned in the design approach. Appendix B documents the results from the team's use of Value Engineering process tools entitled Weighted Analysis Alternative Matrix, Matrix Conclusions, Evaluation Criteria, and Alternative Advantages and Disadvantages. These results were further supported with the team's validation of parking-lot information sheets: memories (potential ideas to consider), issues/concerns, and assumptions, contained in Appendix C. Appendix C also includes the recorded workshop flipchart notes taken from the SAR Alternatives and Project Overview presentations. The SAR workshop presentations, including a 3-D graphic illustration demonstration video, have been retained in the CHPRC project file and were not included in this report due to size limitations. The workshop concluded with a round-robin close-out where each member was engaged for any last-minute items and meeting utility. In summary, the team felt the session was value added and looked forward to proceeding with the recommended actions and conceptual design.

  20. Neutron Reference Benchmark Field Specification: ACRR 44 Inch Lead-Boron (LB44) Bucket Environment (ACRR-LB44-CC-32-CL).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the 44 inch Lead-Boron (LB44) bucket, reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  1. Neutron Reference Benchmark Field Specifications: ACRR Polyethylene-Lead-Graphite (PLG) Bucket Environment (ACRR-PLG-CC-32-CL).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the Polyethylene-Lead-Graphite (PLG) bucket, reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 37 integral dosimetry measurements in the neutron field are reported.

  2. Simulation of Benchmark Cases with the Terminal Area Simulation System (TASS)

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at; Proctor, Fred

    2011-01-01

    The hydrodynamic core of the Terminal Area Simulation System (TASS) is evaluated against different benchmark cases. In the absence of closed form solutions for the equations governing atmospheric flows, the models are usually evaluated against idealized test cases. Over the years, various authors have suggested a suite of these idealized cases which have become standards for testing and evaluating the dynamics and thermodynamics of atmospheric flow models. In this paper, simulations of three such cases are described. In addition, the TASS model is evaluated against a test case that uses an exact solution of the Navier-Stokes equations. The TASS results are compared against previously reported simulations of these benchmark cases in the literature. It is demonstrated that the TASS model is highly accurate, stable and robust.
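    A common way to run the exact-solution comparison mentioned above is the two-dimensional Taylor-Green vortex, an analytical solution of the incompressible Navier-Stokes equations (the record does not say which exact solution TASS used, so this is only an illustrative sketch; a model's output would stand in for the perturbed field):

        # Compare a "model" velocity field against the exact Taylor-Green solution.
        import numpy as np

        nu, t = 0.01, 0.5
        x = y = np.linspace(0, 2 * np.pi, 65)
        X, Y = np.meshgrid(x, y)

        def u_exact(X, Y, t):
            return np.cos(X) * np.sin(Y) * np.exp(-2 * nu * t)

        # stand-in for a simulation result: the exact field plus small noise
        u_model = u_exact(X, Y, t) + 1e-4 * np.random.randn(*X.shape)

        l2_err = np.sqrt(np.mean((u_model - u_exact(X, Y, t)) ** 2))
        print(f"L2 error in u at t={t}: {l2_err:.2e}")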

  3. Student Progress to Graduation in New York City High Schools: A Metric Designed by New Visions for Public Schools. Part I: Core Components

    ERIC Educational Resources Information Center

    Fairchild, Susan; Gunton, Brad; Donohue, Beverly; Berry, Carolyn; Genn, Ruth; Knevals, Jessica

    2011-01-01

    Students who achieve critical academic benchmarks such as high attendance rates, continuous levels of credit accumulation, and high grades have a greater likelihood of success throughout high school and beyond. However, keeping students on track toward meeting graduation requirements and quickly identifying students who are at risk of falling off…

  4. Using Localized Survey Items to Augment Standardized Benchmarking Measures: A LibQUAL+[TM] Study

    ERIC Educational Resources Information Center

    Thompson, Bruce; Cook, Colleen; Kyrillidou, Martha

    2006-01-01

    The LibQUAL+[TM] protocol solicits open-ended comments from users with regard to library service quality, gathers data on 22 core items, and, at the option of individual libraries, also garners ratings on five items drawn from a pool of more than 100 choices selected by libraries. In this article, the relationship of scores on these locally…

  5. Selecting a Benchmark Suite to Profile High-Performance Computing (HPC) Machines

    DTIC Science & Technology

    2014-11-01

    architectures. Machines now contain central processing units (CPUs), graphics processing units (GPUs), and many integrated core (MIC) architecture all...evaluate the feasibility and applicability of a new architecture just released to the market. Researchers are often unsure how available resources will...architectures. Having a suite of programs running on different architectures, such as GPUs, MICs, and CPUs, adds complexity and technical challenges

  6. LU Factorization with Partial Pivoting for a Multi-CPU, Multi-GPU Shared Memory System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurzak, Jakub; Luszczek, Piotr; Faverge, Mathieu

    2012-03-01

    LU factorization with partial pivoting is a canonical numerical procedure and the main component of the High Performance LINPACK benchmark. This article presents an implementation of the algorithm for a hybrid, shared-memory system with standard CPU cores and GPU accelerators. Performance in excess of one TeraFLOPS is achieved using four AMD Magny-Cours CPUs and four NVIDIA Fermi GPUs.
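    For reference, the factorization itself (ignoring the article's hybrid CPU/GPU scheduling) can be written compactly; this didactic NumPy sketch returns P, L, U with P @ A = L @ U:

        import numpy as np

        def lu_partial_pivot(A):
            """Gaussian elimination with partial pivoting; returns P, L, U."""
            A = A.astype(float).copy()
            n = A.shape[0]
            piv = np.arange(n)
            for k in range(n - 1):
                # pick the largest-magnitude pivot in column k
                p = k + np.argmax(np.abs(A[k:, k]))
                if p != k:
                    A[[k, p]] = A[[p, k]]
                    piv[[k, p]] = piv[[p, k]]
                # eliminate below the pivot, storing multipliers in place
                A[k + 1:, k] /= A[k, k]
                A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
            L = np.tril(A, -1) + np.eye(n)
            U = np.triu(A)
            P = np.eye(n)[piv]
            return P, L, U

        A = np.random.rand(5, 5)
        P, L, U = lu_partial_pivot(A)
        print(np.allclose(P @ A, L @ U))  # True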

  7. Energy efficient engine. Volume 2. Appendix A: Component development and integration program

    NASA Technical Reports Server (NTRS)

    Moracz, D. J.; Cook, C. R.

    1981-01-01

    The large size and the requirement for precise lightening cavities in a considerable portion of the titanium fan blades necessitated the development of a new manufacturing method. The approach which was selected for development incorporated several technologies including HIP diffusion bonding of titanium sheet laminates containing removable cores and isothermal forging of the blade form. The technology bases established in HIP/DB for composite blades and in isothermal forging for fan blades were applicable for development of the manufacturing process. The process techniques and parameters for producing and inspecting the cored diffusion bonded titanium laminate blade preform were established. The method was demonstrated with the production of twelve hollow simulated blade shapes for evaluation. Evaluations of the critical experiments conducted to establish procedures to produce hollow structures by a laminate/core/diffusion bonding approach are included. In addition the transfer of this technology to produce a hollow fan blade is discussed.

  8. Numerical modeling of fluid and electrical currents through geometries based on synchrotron X-ray tomographic images of reservoir rocks using Avizo and COMSOL

    NASA Astrophysics Data System (ADS)

    Bird, M. B.; Butler, S. L.; Hawkes, C. D.; Kotzer, T.

    2014-12-01

    The use of numerical simulations to model physical processes occurring within subvolumes of rock samples that have been characterized using advanced 3D imaging techniques is becoming increasingly common. Not only do these simulations allow for the determination of macroscopic properties like hydraulic permeability and electrical formation factor, but they also allow the user to visualize processes taking place at the pore scale, and they allow for multiple different processes to be simulated on the same geometry. Most efforts to date have used specialized research software for the purpose of simulations. In this contribution, we outline the steps taken to use the commercial software Avizo to transform a 3D synchrotron X-ray-derived tomographic image of a rock core sample to an STL (STereoLithography) file which can be imported into the commercial multiphysics modeling package COMSOL. We demonstrate the use of COMSOL to perform fluid and electrical current flow simulations through the pore spaces. The permeability and electrical formation factor of the sample are calculated and compared with laboratory-derived values and benchmark calculations. Although the simulation domains that we were able to model on a desktop computer were significantly smaller than representative elementary volumes, we were able to establish Kozeny-Carman and Archie's law trends on which laboratory measurements and previous benchmark solutions fall. The rock core samples include a Fontainebleau sandstone used for benchmarking and a marly dolostone sampled from a well in the Weyburn oil field of southeastern Saskatchewan, Canada. Such carbonates are known to have complicated pore structures compared with sandstones, yet we are able to calculate reasonable macroscopic properties. We discuss the computing resources required.
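    The two trends named above relate transport properties to porosity. A small numerical illustration follows (all parameter values are assumed, typical-order-of-magnitude numbers, not the paper's data):

        # Kozeny-Carman permeability and Archie's-law formation factor vs. porosity.
        import numpy as np

        phi = np.array([0.05, 0.10, 0.15, 0.20, 0.25])  # porosity

        # Kozeny-Carman: k = phi^3 / (c * S^2 * (1 - phi)^2)
        c, S = 5.0, 1.0e5  # Kozeny constant and specific surface (1/m), assumed
        k = phi ** 3 / (c * S ** 2 * (1.0 - phi) ** 2)

        # Archie's law: formation factor F = a / phi^m
        a, m = 1.0, 2.0  # assumed typical tortuosity factor and cementation exponent
        F = a / phi ** m

        for p, ki, Fi in zip(phi, k, F):
            print(f"phi={p:.2f}  k={ki:.2e} m^2  F={Fi:6.1f}")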

  9. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pecchia, M.; D'Auria, F.; Mazzantini, O.

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of an obliquely inserted control rod on the neutron flux in order to validate the RELAP5-3D{sup C}/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 FSAR of Atucha-2. (authors)

  10. A theoretical and experimental benchmark study of core-excited states in nitrogen

    DOE PAGES

    Myhre, Rolf H.; Wolf, Thomas J. A.; Cheng, Lan; ...

    2018-02-14

    The high resolution near edge X-ray absorption fine structure spectrum of nitrogen displays the vibrational structure of the core-excited states. This makes nitrogen well suited for assessing the accuracy of different electronic structure methods for core excitations. We report high resolution experimental measurements performed at the SOLEIL synchrotron facility. These are compared with theoretical spectra calculated using coupled cluster theory and algebraic diagrammatic construction theory. The coupled cluster singles and doubles with perturbative triples model known as CC3 is shown to accurately reproduce the experimental excitation energies as well as the spacing of the vibrational transitions. In conclusion, the computational results are also shown to be systematically improved within the coupled cluster hierarchy, with the coupled cluster singles, doubles, triples, and quadruples method faithfully reproducing the experimental vibrational structure.

  11. Coupled Neutronics Thermal-Hydraulic Solution of a Full-Core PWR Using VERA-CS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clarno, Kevin T; Palmtag, Scott; Davidson, Gregory G

    2014-01-01

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) is developing a core simulator called VERA-CS to model operating PWR reactors with high resolution. This paper describes how the development of VERA-CS is being driven by a set of progression benchmark problems that specify the delivery of useful capability in discrete steps. As part of this development, this paper describes the current capability of VERA-CS to perform a multiphysics simulation of an operating PWR at Hot Full Power (HFP) conditions using a set of existing computer codes coupled together in a novel method. Results for several single-assembly cases are shown that demonstrate coupling for different boron concentrations and power levels. Finally, high-resolution results are shown for a full-core PWR reactor modeled in quarter-symmetry.

  12. A theoretical and experimental benchmark study of core-excited states in nitrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myhre, Rolf H.; Wolf, Thomas J. A.; Cheng, Lan

    The high resolution near edge X-ray absorption fine structure spectrum of nitrogen displays the vibrational structure of the core-excited states. This makes nitrogen well suited for assessing the accuracy of different electronic structure methods for core excitations. We report high resolution experimental measurements performed at the SOLEIL synchrotron facility. These are compared with theoretical spectra calculated using coupled cluster theory and algebraic diagrammatic construction theory. The coupled cluster singles and doubles with perturbative triples model known as CC3 is shown to accurately reproduce the experimental excitation energies as well as the spacing of the vibrational transitions. In conclusion, the computational results are also shown to be systematically improved within the coupled cluster hierarchy, with the coupled cluster singles, doubles, triples, and quadruples method faithfully reproducing the experimental vibrational structure.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephen Johnson; Mehdi Salehi; Karl Eisert

    This report describes the progress of our research during the first 30 months (10/01/2004 to 03/31/2007) of the original three-year project cycle. The project was terminated early due to DOE budget cuts. This was a joint project between the Tertiary Oil Recovery Project (TORP) at the University of Kansas and the Idaho National Laboratory (INL). The objective was to evaluate the use of low-cost biosurfactants produced from agriculture process waste streams to improve oil recovery in fractured carbonate reservoirs through wettability mediation. Biosurfactant for this project was produced using Bacillus subtilis 21332 and purified potato starch as the growth medium. The INL team produced the biosurfactant and characterized it as surfactin. INL supplied surfactin as required for the tests at KU as well as providing other microbiological services. Interfacial tension (IFT) between Soltrol 130 and both potential benchmark chemical surfactants and crude surfactin was measured over a range of concentrations. The performance of the crude surfactin preparation in reducing IFT was greater than any of the synthetic compounds throughout the concentration range studied, but at low concentrations sodium laureth sulfate (SLS) was closest to the surfactin and was used as the benchmark in subsequent studies. Core characterization was carried out using both traditional flooding techniques to find porosity and permeability, and NMR/MRI to image cores and identify pore architecture and degree of heterogeneity. A cleaning regime was identified and developed to remove organic materials from cores and crushed carbonate rock. This allowed cores to be fully characterized and returned to a reproducible wettability state when coupled with a crude-oil aging regime. Rapid wettability assessments for crushed matrix material were developed, and used to inform slower Amott wettability tests. Initial static absorption experiments exposed limitations in the use of HPLC and TOC to determine surfactant concentrations. To reliably quantify both benchmark surfactants and surfactin, a surfactant ion-selective electrode was used as an indicator in the potentiometric titration of the anionic surfactants with Hyamine 1622. The wettability change mediated by dilute solutions of a commercial preparation of SLS (STEOL CS-330) and surfactin was assessed using two-phase separation and water flotation techniques, and surfactant loss due to retention and adsorption on the rock was determined. Qualitative tests indicated that on a molar basis, surfactin is more effective than STEOL CS-330 in altering the wettability of crushed Lansing-Kansas City carbonates from an oil-wet to a water-wet state. Adsorption isotherms of STEOL CS-330 and surfactin on crushed Lansing-Kansas City outcrop and reservoir material showed that surfactin has higher specific adsorption on these oomoldic carbonates. Amott wettability studies confirmed that cleaned cores are mixed-wet, and that the aging procedure renders them oil-wet. Tests of aged cores with no initial water saturation resulted in very little spontaneous oil production, suggesting that water-wet pathways into the matrix are required for wettability change to occur. Further investigation of spontaneous imbibition and forced imbibition of water and surfactant solutions into LKC cores under a variety of conditions--cleaned vs. crude oil-aged; oil saturated vs. initial water saturation; flooded with surfactant vs. not flooded--indicated that in water-wet or intermediate-wet cores, sodium laureth sulfate is more effective at enhancing spontaneous imbibition through wettability change. However, in more oil-wet systems, surfactin at the same concentration performs significantly better.

  14. A System Approach to Navy Medical Education and Training. Appendix 45. Competency Curricula for Dental Prosthetic Assistant and Dental Prosthetic Technician.

    DTIC Science & Technology

    1974-08-31

    Removable Partial Dentures ..................... 34 XI. Fixed Partial Denture Construction .. ........ 35 1. Construct Master Cast with Removable...Dies . . . 36 2. Construct Patterns for Fixed Partial Dentures .. . ..... 37 3. Spruing and Investing . . . 38 4. Wax Elimination and Casting...42 8. Resin Jacket Crowns . . ............ 43 9. Temporary Crowns and Fixed Partial Dentures . . 44 10. Post and Core Techniques . . .

  15. Common Core State Standards for English Language Arts & Literacy in History/Social Studies, Science, and Technical Subjects. Appendix B: Text Exemplars and Sample Performance Tasks

    ERIC Educational Resources Information Center

    Common Core State Standards Initiative, 2010

    2010-01-01

    The text samples presented in this document primarily serve to exemplify the level of complexity and quality that the Standards require all students in a given grade band to engage with. Additionally, they are suggestive of the breadth of texts that students should encounter in the text types required by the Standards. The choices should serve as…

  16. DOD Weapon Systems Software Management Study, Appendix B. Shipborne Systems

    DTIC Science & Technology

    1975-06-01

    program management, from inception to development maintenance, 2. Detailed documentation requirements, 3. Standard high-level language development (CS-1...the Guided Missile School (GMS) at Dam Neck. The APL Land-Based Test Site (LETS) consisted of a Mk 152 digital fire control computer, SPG-55B radar...instruction and data segments are respectively placed in low and high core addresses to take advantage of UYK-7 memory accessing time savings. UYK-7

  17. Benchmark CCSD(T) and DFT study of binding energies in Be7 - 12: in search of reliable DFT functional for beryllium clusters

    NASA Astrophysics Data System (ADS)

    Labanc, Daniel; Šulka, Martin; Pitoňák, Michal; Černušák, Ivan; Urban, Miroslav; Neogrády, Pavel

    2018-05-01

    We present a computational study of the stability of small homonuclear beryllium clusters Be7 - 12 in singlet electronic states. Our predictions are based on highly correlated CCSD(T) coupled cluster calculations. Basis set convergence towards the complete basis set limit as well as the role of 1s core electron correlation are carefully examined. Our CCSD(T) data for binding energies of Be7 - 12 clusters serve as a benchmark for performance assessment of several density functional theory (DFT) methods frequently used in beryllium cluster chemistry. We observe that, from Be10 clusters on, the deviation from the CCSD(T) benchmarks is stable with respect to size, fluctuating within a 0.02 eV error bar for most examined functionals. This opens up the possibility of scaling the DFT binding energies for large Be clusters using CCSD(T) benchmark values for smaller clusters. We also tried to find analogies between the performance of DFT functionals for Be clusters and for the valence-isoelectronic Mg clusters investigated recently in Truhlar's group. We conclude that it is difficult to find DFT functionals that perform reasonably well for both beryllium and magnesium clusters. Out of the 12 functionals examined, only the M06-2X functional gives reasonably accurate and balanced binding energies for both Be and Mg clusters.

  18. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Ediger, David; Jiang, Karl

    2009-02-15

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
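    For orientation, the kernel being parallelized is Brandes' algorithm: a BFS from each source counts shortest paths, then pair dependencies are accumulated in reverse BFS order. A compact sequential Python sketch follows (the article's contribution is a lock-free parallel variant, which this is not):

        from collections import deque

        def betweenness(adj):
            """Exact betweenness centrality; adj maps vertex -> neighbor list."""
            bc = {v: 0.0 for v in adj}
            for s in adj:
                sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
                dist = {v: -1 for v in adj}; dist[s] = 0
                preds = {v: [] for v in adj}
                order, q = [], deque([s])
                while q:  # BFS from s
                    v = q.popleft(); order.append(v)
                    for w in adj[v]:
                        if dist[w] < 0:
                            dist[w] = dist[v] + 1; q.append(w)
                        if dist[w] == dist[v] + 1:
                            sigma[w] += sigma[v]; preds[w].append(v)
                delta = {v: 0.0 for v in adj}
                for w in reversed(order):  # accumulate pair dependencies
                    for v in preds[w]:
                        delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
                    if w != s:
                        bc[w] += delta[w]
            return bc

        g = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
        print(betweenness(g))  # vertex 3 scores highest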

  19. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Ediger, David; Jiang, Karl

    2009-05-29

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in the HPCS SSCA#2 Graph Analysis benchmark, which has been extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the ThreadStorm processor, and a single-socket Sun multicore server with the UltraSparc T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.

  20. Core Collapse: The Race Between Stellar Evolution and Binary Heating

    NASA Astrophysics Data System (ADS)

    Converse, Joseph M.; Chandar, R.

    2012-01-01

    The dynamical formation of binary stars can dramatically affect the evolution of their host star clusters. In relatively small clusters (M < 6000 Msun) the most massive stars rapidly form binaries, heating the cluster and preventing any significant contraction of the core. The situation in much larger globular clusters (M ~ 10^5 Msun) is quite different, with many showing collapsed cores, implying that binary formation did not affect them as severely as it did lower mass clusters. More massive clusters, however, should take longer to form their binaries, allowing stellar evolution more time to prevent the heating by causing the larger stars to die off. Here, we simulate the evolution of clusters between those of open and globular clusters in order to find at what size a star cluster is able to experience true core collapse. Our simulations make use of a new GPU-based computing cluster recently purchased at the University of Toledo. We also present some benchmarks of this new computational resource.

  1. Benchmarking of Neutron Flux Parameters at the USGS TRIGA Reactor in Lakewood, Colorado

    NASA Astrophysics Data System (ADS)

    Alzaabi, Osama E.

    The USGS TRIGA Reactor (GSTR) located at the Denver Federal Center in Lakewood, Colorado provides opportunities for Colorado School of Mines students to do experimental research in the field of neutron activation analysis. The scope of this thesis is to obtain precise knowledge of the neutron flux parameters at the GSTR. The Colorado School of Mines Nuclear Physics group intends to develop several research projects at the GSTR, which require precise knowledge of the neutron fluxes and energy distributions in several irradiation locations. The fuel burn-up of the new GSTR fuel configuration and the thermal neutron flux of the core were recalculated since the GSTR core configuration had been changed with the addition of two new fuel elements. The MCNP software package was used to incorporate the burn-up of reactor fuel and to determine the neutron flux at different irradiation locations and at flux monitoring bores. These simulation results were compared with neutron activation analysis results using activated diluted gold wires. A well calibrated and stable germanium detector setup as well as fourteen samplers were designed and built to achieve accuracy in the measurement of the neutron flux. Furthermore, the flux monitoring bores of the GSTR core were used for the first time to measure the neutron flux experimentally and to compare it to the MCNP simulation. In addition, International Atomic Energy Agency (IAEA) standard materials were used along with USGS national standard materials in a previously well calibrated irradiation location to benchmark the simulation, germanium detector calibration, and sample measurements against international standards.
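    In gold-wire activation analysis of the kind described above, the flux follows from the measured activity through A = N*sigma*phi*(1 - exp(-lam*t_irr))*exp(-lam*t_dec). A worked back-calculation in Python (wire mass, activity, and timing are assumed illustrative values, not GSTR data; the cross section and half-life are standard nuclear data):

        # Infer thermal neutron flux from the activity of an activated Au-197 wire.
        import math

        N_A = 6.022e23
        m_g, M = 1.0e-3, 196.97        # 1 mg of gold (assumed), molar mass in g/mol
        sigma = 98.65e-24              # Au-197 thermal (n,gamma) cross section, cm^2
        t_half = 2.6947 * 86400.0      # Au-198 half-life, s
        lam = math.log(2) / t_half

        t_irr, t_dec = 3600.0, 7200.0  # 1 h irradiation, 2 h decay (assumed)
        A_meas = 2.0e6                 # measured activity, Bq (assumed)

        N = m_g / M * N_A              # number of Au-197 atoms in the wire
        phi = A_meas / (N * sigma * (1 - math.exp(-lam * t_irr)) * math.exp(-lam * t_dec))
        print(f"thermal flux ~ {phi:.2e} n/cm^2/s")  # ~6e11 for these inputs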

  2. Beyond core count: a look at new mainstream computing platforms for HEP workloads

    NASA Astrophysics Data System (ADS)

    Szostek, P.; Nowak, A.; Bitzes, G.; Valsan, L.; Jarp, S.; Dotti, A.

    2014-06-01

    As Moore's Law continues to deliver more and more transistors, the mainstream processor industry is preparing to expand its investments in areas other than simple core count. These new interests include deep integration of on-chip components, advanced vector units, memory, cache and interconnect technologies. We examine these moving trends with parallelized and vectorized High Energy Physics workloads in mind. In particular, we report on practical experience resulting from experiments with scalable HEP benchmarks on the Intel "Ivy Bridge-EP" and "Haswell" processor families. In addition, we examine the benefits of the new "Haswell" microarchitecture and its impact on multiple facets of HEP software. Finally, we report on the power efficiency of new systems.

  3. Defining College Readiness: Where Are We Now, and Where Do We Need to Be? The Progress of Education Reform. Volume 13, Number 2

    ERIC Educational Resources Information Center

    Zinth, Jennifer Dounay

    2012-01-01

    Multiple catalysts are fueling states' increased urgency to establish a definition of "college readiness". Some states are creating a "college readiness" definition that describes what a student will know and be able to do in such core academic courses as English language arts and math, and that identifies items or benchmarks on state assessments…

  4. Analysis of dosimetry from the H.B. Robinson unit 2 pressure vessel benchmark using RAPTOR-M3G and ALPAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, G.A.

    2011-07-01

    Document available in abstract form only, full text of document follows: The dosimetry from the H. B. Robinson Unit 2 Pressure Vessel Benchmark is analyzed with a suite of Westinghouse-developed codes and data libraries. The radiation transport from the reactor core to the surveillance capsule and ex-vessel locations is performed by RAPTOR-M3G, a parallel deterministic radiation transport code that calculates high-resolution neutron flux information in three dimensions. The cross-section library used in this analysis is the ALPAN library, an Evaluated Nuclear Data File (ENDF)/B-VII.0-based library designed for reactor dosimetry and fluence analysis applications. Dosimetry is evaluated with the industry-standard SNLRML reactor dosimetry cross-section data library. (authors)

  5. Qualitative Analysis of Common Definitions for Core Advanced Pharmacy Practice Experiences

    PubMed Central

    Danielson, Jennifer; Weber, Stanley S.

    2014-01-01

    Objective. To determine how colleges and schools of pharmacy interpreted the Accreditation Council for Pharmacy Education’s (ACPE’s) Standards 2007 definitions for core advanced pharmacy practice experiences (APPEs), and how they differentiated community and institutional practice activities for introductory pharmacy practice experiences (IPPEs) and APPEs. Methods. A cross-sectional, qualitative, thematic analysis was done of survey data obtained from experiential education directors in US colleges and schools of pharmacy. Open-ended responses to invited descriptions of the 4 core APPEs were analyzed using grounded theory to determine common themes. Type of college or school of pharmacy (private vs public) and size of program were compared. Results. Seventy-one schools (72%) with active APPE programs at the time of the survey responded. Lack of strong frequent themes describing specific activities for the acute care/general medicine core APPE indicated that most respondents agreed on the setting (hospital or inpatient) but the student experience remained highly variable. Themes were relatively consistent between public and private institutions, but there were differences across programs of varying size. Conclusion. Inconsistencies existed in how colleges and schools of pharmacy defined the core APPEs as required by ACPE. More specific descriptions of core APPEs would help to standardize the core practice experiences across institutions and provide an opportunity for quality benchmarking. PMID:24954931

  6. Use of non-invasive genetics to generate core-area population estimates of a threatened predator in the Superior National Forest, USA

    USGS Publications Warehouse

    Barber-Meyer, Shannon; Ryan, Daniel; Grosshuesch, David; Catton, Timothy; Malick-Wahls, Sarah

    2018-01-01

    core areas and averaged 52.3 (SD=8.3, range=43-59) during 2015-2017 in the larger core areas. We found no evidence for a decrease or increase in abundance during either period. Lynx density estimates were approximately 7-10 times lower than densities of lynx in northern populations at the low of the snowshoe hare (Lepus americanus) population cycle. To our knowledge, our results are the first attempt to estimate abundance, trend and density of lynx in Minnesota using non-invasive genetic capture-mark-recapture. Estimates such as ours provide useful benchmarks for future comparisons by providing a context with which to assess 1) potential changes in forest management that may affect lynx recovery and conservation, and 2) possible effects of climate change on the depth, density, and duration of annual snow cover and correspondingly, potential effects on snowshoe hares as well.

  7. Experimental physics characteristics of a heavy-metal-reflected fast-spectrum critical assembly

    NASA Technical Reports Server (NTRS)

    Heneveld, W. H.; Paschall, R. K.; Springer, T. H.; Swanson, V. A.; Thiele, A. W.; Tuttle, R. J.

    1972-01-01

    A zero-power critical assembly was designed, constructed, and operated for the purpose of conducting a series of benchmark experiments dealing with the physics characteristics of a UN-fueled, Li-cooled, Mo-reflected, drum-controlled compact fast reactor for use with a space-power electric conversion system. The range of the previous experimental investigations has been expanded to include the reactivity effects of: (1) surrounding the reactor with 15.24 cm (6 in.) of polyethylene, (2) reducing the heights of a portion of the upper and lower axial reflectors by factors of 2 and 4, (3) adding 45 kg of W to the core uniformly in two steps, (4) adding 9.54 kg of Ta to the core uniformly, and (5) inserting 2.3 kg of polyethylene into the core proper and determining the effect of a Ta addition on the polyethylene worth.

  8. Structure analysis for hole-nuclei close to 132Sn by a large-scale shell-model calculation

    NASA Astrophysics Data System (ADS)

    Wang, Han-Kui; Sun, Yang; Jin, Hua; Kaneko, Kazunari; Tazaki, Shigeru

    2013-11-01

    The structure of neutron-rich nuclei with a few holes with respect to the doubly magic nucleus 132Sn is investigated by means of large-scale shell-model calculations. For a considerably large model space, including orbitals allowing both neutron and proton core excitations, an effective interaction for the extended pairing-plus-quadrupole model with monopole corrections is tested through detailed comparison between the calculation and experimental data. By using the experimental energy of the core-excited 21/2+ level in 131In as a benchmark, monopole corrections are determined that describe the size of the neutron N=82 shell gap. The level spectra, up to 5 MeV of excitation in 131In, 131Sn, 130In, 130Cd, and 130Sn, are well described and clearly explained by couplings of single-hole orbitals and by core excitations.

  9. Ecological Investigation of a Greentree Reservoir in the Delta National Forest, Mississippi.

    DTIC Science & Technology

    1981-09-01

    elm (Ulmus americana) and slippery elm (U. rubra) occurred in the study area, with American elm the more abundant species. No distinction was made...spectively, on both areas (see Appendix C for nomenclature). American elm and water hickory ranked fifth and sixth in importance on the reference area...between these two species; the label "American elm" includes data for both. 18 boundaries, water hickory increment cores were not analyzed. Table 4

  10. Resolved-particle simulation by the Physalis method: Enhancements and new capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierakowski, Adam J., E-mail: sierakowski@jhu.edu; Prosperetti, Andrea; Faculty of Science and Technology and J.M. Burgers Centre for Fluid Dynamics, University of Twente, P.O. Box 217, 7500 AE Enschede

    2016-03-15

    We present enhancements and new capabilities of the Physalis method for simulating disperse multiphase flows using particle-resolved simulation. The current work enhances the previous method by incorporating a new type of pressure-Poisson solver that couples with a new Physalis particle pressure boundary condition scheme and a new particle interior treatment to significantly improve overall numerical efficiency. Further, we implement a more efficient method of calculating the Physalis scalar products and incorporate short-range particle interaction models. We provide validation and benchmarking for the Physalis method against experiments of a sedimenting particle and of normal wall collisions. We conclude with an illustrative simulation of 2048 particles sedimenting in a duct. In the appendix, we present a complete and self-consistent description of the analytical development and numerical methods.

  11. Calculation of the Phenix end-of-life test 'Control Rod Withdrawal' with the ERANOS code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiberi, V.

    2012-07-01

    The Institute of Radiological Protection and Nuclear Safety (IRSN) acts as technical support to the French public authorities. As such, IRSN is in charge of the safety assessment of operating and under-construction reactors, as well as future projects. In this framework, one current objective of IRSN is to evaluate the ability and accuracy of numerical tools to foresee the consequences of accidents. Neutronic studies enter the safety assessment from different points of view, among which are the core design and its protection system. They are necessary to evaluate the core behavior in case of accident in order to assess the integrity of the first barrier and the absence of a prompt criticality risk. To reach this objective one main physical quantity has to be evaluated accurately: the neutronic power distribution in the core during the whole reactor lifetime. The Phenix end-of-life tests, carried out in 2009, aim at increasing the experience feedback on sodium-cooled fast reactors. These experiments were done in the framework of the development of the 4th generation of nuclear reactors. Ten tests were carried out: 6 on neutronic and fuel aspects, 2 on thermal hydraulics, and 2 for the emergency shutdown. Two of them were chosen for an international exercise on thermal hydraulics and neutronics in the frame of an IAEA Coordinated Research Project. Concerning neutronics, the Control Rod Withdrawal test is relevant for safety because it allows evaluating the capability of calculation tools to compute the radial power distribution for fast reactor core configurations in which the flux field is very deformed. IRSN participated in this benchmark with the ERANOS code developed by CEA for fast reactor studies. This paper presents the results obtained in the framework of the benchmark activity. A relatively good agreement was found with the available measurements, considering the approximations made in the modeling. The work underlines the importance of burn-up calculations in order to have a fine mesh of core concentrations for the calculation of the power distribution. (authors)

  12. Automatic Thread-Level Parallelization in the Chombo AMR Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christen, Matthias; Keen, Noel; Ligocki, Terry

    2011-05-26

    The increasing on-chip parallelism has some substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage the hardware's architectural features. In this paper, we present an approach that automatically introduces thread-level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite difference type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language forms an already used target language for an automatic migration of the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to an auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.

  13. New core-reflector boundary conditions for transient nodal reactor calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, E.K.; Kim, C.H.; Joo, H.K.

    1995-09-01

    New core-reflector boundary conditions designed for the exclusion of the reflector region in transient nodal reactor calculations are formulated. Spatially flat frequency approximations for the temporal neutron behavior and two types of transverse leakage approximations in the reflector region are introduced to solve the transverse-integrated time-dependent one-dimensional diffusion equation and then to obtain relationships between net current and flux at the core-reflector interfaces. To examine the effectiveness of the new core-reflector boundary conditions in transient nodal reactor computations, nodal expansion method (NEM) computations with and without explicit representation of the reflector are performed for Laboratorium fuer Reaktorregelung und Anlagen (LRA) boiling water reactor (BWR) and Nuclear Energy Agency Committee on Reactor Physics (NEACRP) pressurized water reactor (PWR) rod ejection kinetics benchmark problems. Good agreement between the two NEM computations is demonstrated in all the important transient parameters of the two benchmark problems. A significant amount of CPU time saving is also demonstrated with the boundary condition model with transverse leakage (BCMTL) approximations in the reflector region. In the three-dimensional LRA BWR, the BCMTL and the explicit reflector model computations differ by ~4% in transient peak power density, while the BCMTL results in >40% CPU time saving by excluding both the axial and the radial reflector regions from explicit computational nodes. In the NEACRP PWR problem, which includes six different transient cases, the largest difference is 24.4% in the transient maximum power in the one-node-per-assembly B1 transient results. This difference in the transient maximum power of the B1 case is shown to reduce to 11.7% in the four-node-per-assembly computations. As for the computing time, BCMTL is shown to reduce the CPU time by >20% in all six transient cases of the NEACRP PWR.

  14. RAMONA-4B a computer code with three-dimensional neutron kinetics for BWR and SBWR system transient - user`s manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.

    This document is the User's Manual for the Boiling Water Reactor (BWR) and Simplified Boiling Water Reactor (SBWR) systems transient code RAMONA-4B. The code uses a three-dimensional neutron-kinetics model coupled with a multichannel, nonequilibrium, drift-flux, phase-flow model of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients. Chapter 1 gives an overview of the code's capabilities and limitations; Chapter 2 describes the code's structure, lists major subroutines, and discusses the computer requirements. Chapter 3 covers the code, auxiliary codes, and instructions for running RAMONA-4B on Sun SPARC and IBM workstations. Chapter 4 contains component descriptions and detailed card-by-card input instructions. Chapter 5 provides samples of the tabulated output for the steady-state and transient calculations and discusses the plotting procedures for the steady-state and transient calculations. Three appendices contain important user and programmer information: lists of plot variables (Appendix A), listings of the input deck for the sample problem (Appendix B), and a description of the plotting program PAD (Appendix C). 24 refs., 18 figs., 11 tabs.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rouxelin, Pascal Nicolas; Strydom, Gerhard

    Best-estimate plus uncertainty analysis of reactors is replacing the traditional conservative (stacked uncertainty) method for safety and licensing analysis. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied. High temperature gas-cooled reactors (HTGRs) have several features that require techniques not used in light-water reactor analysis (e.g., coated-particle design and large graphite quantities at high temperatures). The International Atomic Energy Agency has therefore launched the Coordinated Research Project on HTGR Uncertainty Analysis in Modeling to study uncertainty propagation in the HTGR analysis chain. The benchmark problem defined for the prismatic design is represented by the General Atomics Modular HTGR 350. The main focus of this report is the compilation and discussion of the results obtained for various permutations of Exercise I-2c and the use of the cross section data in Exercise II-1a of the prismatic benchmark, which are defined as the last and first steps of the lattice and core simulation phases, respectively. The report summarizes the Idaho National Laboratory (INL) best-estimate results obtained for Exercise I-2a (fresh single-fuel block), Exercise I-2b (depleted single-fuel block), and Exercise I-2c (supercell), in addition to the first results of an investigation into the cross section generation effects for the supercell problem. The two-dimensional deterministic code known as the New ESC-based Weighting Transport (NEWT), included in the Standardized Computer Analyses for Licensing Evaluation (SCALE) 6.1.2 package, was used for the cross section evaluation, and the results obtained were compared to the three-dimensional stochastic SCALE module KENO-VI. The NEWT cross section libraries were generated for several permutations of the current benchmark supercell geometry and were then provided as input to the Phase II core calculation of the stand-alone neutronics Exercise II-1a. The steady-state core calculations were simulated with the INL coupled-code system known as the Parallel and Highly Innovative Simulation for INL Code System (PHISICS) and the system thermal-hydraulics code known as the Reactor Excursion and Leak Analysis Program (RELAP5-3D), using the nuclear data libraries previously generated with NEWT. It was observed that significant differences in terms of multiplication factor and neutron flux exist between the various permutations of the Phase I supercell lattice calculations. The use of these cross section libraries only leads to minor changes in the Phase II core simulation results for fresh fuel but shows significantly larger discrepancies for spent fuel cores. Furthermore, large incongruities were found between the SCALE NEWT and KENO-VI results for the supercells, and while some trends could be identified, a final conclusion on this issue could not yet be reached. This report will be revised in mid-2016 with more detailed analyses of the supercell problems and their effects on the core models, using the latest version of SCALE (6.2). The supercell models seem to show substantial improvements in terms of neutron flux as compared to single-block models, particularly at thermal energies.

  16. Fabrication and Benchmarking of a Stratix V FPGA with Monolithic Integrated Microfluidic Cooling

    DTIC Science & Technology

    2017-03-01

    run. The output from all cores was monitored through the Altera SignalTap tool in order to detect glitches which occurred in the output... dependence on temperature, and static/leakage power, which comes from several components, such as subthreshold leakage, gate leakage, and reverse-bias junction current. Subthreshold leakage current tends to be the most significant temperature-dependent component of the power [6,7] and is given by
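
    The excerpt cuts off before the expression itself. For reference, the standard textbook form of the subthreshold leakage current, which may differ in detail from the expression used in the report, is

        I_{\mathrm{sub}} = I_0 \exp\!\left(\frac{V_{GS}-V_{th}}{n V_T}\right)\left(1 - \exp\!\left(-\frac{V_{DS}}{V_T}\right)\right), \qquad V_T = \frac{k_B T}{q}

    where I_0 is a process- and geometry-dependent prefactor and n is the subthreshold slope factor; the exponential dependence on the thermal voltage V_T is what makes this component so strongly temperature dependent.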

  17. Interior Head Impact Protective Components and Materials for Use in US Army Vehicles

    DTIC Science & Technology

    2015-08-01

    benchmarked the automotive industry to identify potential commercial-off-the-shelf (COTS) materials. TARDEC initially tested the energy attenuating... this effort leverages the performance criterion used in the automotive industry according to SAE TP201U-01, FMVSS (Federal Motor Vehicle Safety... of the core material not being fully engaged on the Ancra track. The backing of material ID 14 was reinforced with steel; this resulted in the

  18. A formative evaluation of CU-SeeMe

    NASA Astrophysics Data System (ADS)

    Bibeau, Michael

    1995-02-01

    CU-SeeMe is a video conferencing software package that was designed and programmed at Cornell University. The program works with the TCP/IP network protocol and allows two or more parties to conduct a real-time video conference with full audio support. In this paper we evaluate CU-SeeMe through the process of Formative Evaluation. We first perform a Critical Review of the software using a subset of the Smith and Mosier Guidelines for Human-Computer Interaction. Next, we empirically review the software interface through a series of benchmark tests that are derived directly from a set of scenarios. The scenarios attempt to model real world situations that might be encountered by an individual in the target user class. Designing benchmark tasks becomes a natural and straightforward process when they are derived from the scenario set. Empirical measures are taken for each task, including completion times and error counts. These measures are accompanied by critical incident analysis [2,7,13], which serves to identify problems with the interface and the cognitive roots of those problems. The critical incidents reported by participants are accompanied by explanations of what caused the problem and why. This helps in the process of formulating solutions for observed usability problems. All the testing results are combined in the Appendix in an illustrated partial redesign of the CU-SeeMe interface.

  19. UI Review Results and NARAC Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisher, J.; Eme, B.; Kim, S.

    2017-03-08

    This report describes the results of an inter-program design review completed February 16th, 2017, during the second year of a FY16-FY18 NA-84 Technology Integration (TI) project to modernize the core software system used in DOE/NNSA's National Atmospheric Release Advisory Center (NARAC, narac.llnl.gov). This review focused on the graphical user interfaces (GUI) frameworks. Reviewers (described in Appendix 2) were selected from multiple areas of the LLNL Computation directorate, based on their expertise in GUI and Web technologies.

  20. Systematic void fraction studies with RELAP5, FRANCESCA and HECHAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stosic, Z.; Preusche, G.

    1996-08-01

    In enhancing the scope of standard thermal-hydraulic code applications beyond their capabilities, i.e., coupling with a one- and/or three-dimensional kinetics core model, the void fraction transferred from the thermal hydraulics to the core model plays a determining role in the normal operating range and at high core flow, as the generated heat and axial power profiles are direct functions of the void distribution in the core. Hence, it is very important to know whether the void-quality models in the programs to be coupled are compatible, to allow the interactive exchange of data based on these constitutive void-quality relations. The presented void fraction study is performed in order to give the basis for the conclusion whether a transient core simulation using the RELAP5 void fractions can calculate the axial power shapes adequately. To that end, the void fractions calculated with RELAP5 are compared with those calculated by the BWR safety code for licensing, FRANCESCA, and by the best-estimate model for pre- and post-dryout calculation in a BWR heated channel, HECHAN. In addition, a comparison with standard experimental void-quality benchmark tube data is performed for the HECHAN code.
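
    As a concrete illustration of the kind of constitutive void-quality relation discussed above, the following Python sketch evaluates the classic Zuber-Findlay drift-flux form; the distribution parameter and drift velocity defaults are illustrative assumptions, not the correlations actually implemented in RELAP5, FRANCESCA, or HECHAN.

        def drift_flux_void_fraction(x, G, rho_l, rho_g, C0=1.13, v_gj=0.24):
            """Zuber-Findlay drift-flux void-quality relation.

            x     : flow quality [-]
            G     : mass flux [kg/(m^2 s)]
            rho_l : liquid density [kg/m^3]
            rho_g : vapor density [kg/m^3]
            C0    : distribution parameter (assumed value)
            v_gj  : drift velocity [m/s] (assumed value)
            """
            jg = x * G / rho_g              # superficial vapor velocity
            j = jg + (1.0 - x) * G / rho_l  # total superficial velocity
            return jg / (C0 * j + v_gj)

        # Example: 10% quality at roughly 7 MPa steam-water densities.
        print(drift_flux_void_fraction(x=0.10, G=1000.0, rho_l=740.0, rho_g=36.5))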

  1. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    DOE PAGES

    Yankov, Artem; Collins, Benjamin; Klein, Markus; ...

    2012-01-01

    For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of Phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
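
    A minimal Python sketch of the stochastic-sampling idea underlying the XSUSA-style approach: cross sections are drawn from an assumed covariance matrix and propagated through a deliberately trivial one-group k-infinity model. All numbers are hypothetical; a real application samples full multigroup libraries and reruns the actual core simulator for each sample.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical one-group data: mean nu-fission and absorption cross
        # sections plus a covariance matrix (all values made up).
        mean_xs = np.array([0.0120, 0.0100])
        cov_xs = np.array([[1.0e-8, 2.0e-9],
                           [2.0e-9, 4.0e-8]])

        def k_inf(xs):
            nu_fission, absorption = xs
            return nu_fission / absorption  # toy one-group k-infinity

        samples = rng.multivariate_normal(mean_xs, cov_xs, size=500)
        k = np.array([k_inf(s) for s in samples])
        print(f"k-inf = {k.mean():.4f} +/- {k.std(ddof=1):.4f}")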

  2. A solid reactor core thermal model for nuclear thermal rockets

    NASA Astrophysics Data System (ADS)

    Rider, William J.; Cappiello, Michael W.; Liles, Dennis R.

    1991-01-01

    A Helium/Hydrogen Cooled Reactor Analysis (HERA) computer code has been developed. HERA has the ability to model arbitrary geometries in three dimensions, which allows the user to easily analyze reactor cores constructed of prismatic graphite elements. The code accounts for heat generation in the fuel, control rods, and other structures; conduction and radiation across gaps; convection to the coolant; and a variety of boundary conditions. The numerical solution scheme has been optimized for vector computers, making long transient analyses economical. Time integration is either explicit or implicit, which allows the use of the model to accurately calculate both short- and long-term transients with an efficient use of computer time. Both the basic spatial and temporal integration schemes have been benchmarked against analytical solutions.
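
    The explicit/implicit option mentioned above can be illustrated with a one-dimensional theta-scheme conduction step in Python (theta = 0 gives the explicit scheme, theta = 1 the implicit one); this is a toy stand-in under simplified assumptions, not HERA's three-dimensional solver.

        import numpy as np

        def conduction_step(T, dt, dx, alpha, theta=1.0):
            """Advance 1D heat conduction one step with a theta scheme.

            Ends are held at fixed temperature (Dirichlet); alpha is the
            thermal diffusivity. theta=0 explicit, theta=1 implicit.
            """
            n = len(T)
            r = alpha * dt / dx**2
            L = np.zeros((n, n))            # discrete Laplacian, interior rows only
            for i in range(1, n - 1):
                L[i, i - 1:i + 2] = [1.0, -2.0, 1.0]
            A = np.eye(n) - theta * r * L
            b = (np.eye(n) + (1.0 - theta) * r * L) @ T
            return np.linalg.solve(A, b)

        T = np.zeros(11); T[0] = 600.0      # hot end, cold elsewhere
        for _ in range(100):
            T = conduction_step(T, dt=0.1, dx=0.01, alpha=1e-5, theta=1.0)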

  3. Benchmark results and theoretical treatments for valence-to-core x-ray emission spectroscopy in transition metal compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mortensen, D. R.; Seidler, G. T.; Kas, Joshua J.

    We report measurement of the valence-to-core (VTC) region of the K-shell x-ray emission spectra from several Zn and Fe inorganic compounds, and their critical comparison with several existing theoretical treatments. We find generally good agreement between the respective theories and experiment, and in particular find an important admixture of dipole and quadrupole character for Zn materials that is much weaker in Fe-based systems. These results on materials whose simple crystal structures should not, a priori, pose deep challenges to theory will prove useful in guiding the further development of DFT and time-dependent DFT methods for VTC-XES predictions and their comparison to experiment.

  4. CORAL: aligning conserved core regions across domain families.

    PubMed

    Fong, Jessica H; Marchler-Bauer, Aron

    2009-08-01

    Homologous protein families share highly conserved sequence and structure regions that are frequent targets for comparative analysis of related proteins and families. Many protein families, such as the curated domain families in the Conserved Domain Database (CDD), exhibit similar structural cores. To improve accuracy in aligning such protein families, we propose a profile-profile method CORAL that aligns individual core regions as gap-free units. CORAL computes optimal local alignment of two profiles with heuristics to preserve continuity within core regions. We benchmarked its performance on curated domains in CDD, which have pre-defined core regions, against COMPASS, HHalign and PSI-BLAST, using structure superpositions and comprehensive curator-optimized alignments as standards of truth. CORAL improves alignment accuracy on core regions over general profile methods, returning a balanced score of 0.57 for over 80% of all domain families in CDD, compared with the highest balanced score of 0.45 from other methods. Further, CORAL provides E-values to aid in detecting homologous protein families and, by respecting block boundaries, produces alignments with improved 'readability' that facilitate manual refinement. CORAL will be included in future versions of the NCBI Cn3D/CDTree software, which can be downloaded at http://www.ncbi.nlm.nih.gov/Structure/cdtree/cdtree.shtml. Supplementary data are available at Bioinformatics online.

  5. Monodisperse core/shell Ni/FePt nanoparticles and their conversion to Ni/Pt to catalyze oxygen reduction

    DOE PAGES

    Zhang, Sen; Hao, Yizhou; Su, Dong; ...

    2014-10-28

    We report a size-controllable synthesis of monodisperse core/shell Ni/FePt nanoparticles (NPs) via a seed-mediated growth and their subsequent conversion to Ni/Pt NPs. Preventing surface oxidation of the Ni seeds is essential for the growth of uniform FePt shells. These Ni/FePt NPs have a thin (≈1 nm) FePt shell, and can be converted to Ni/Pt by acetic acid wash to yield active catalysts for the oxygen reduction reaction (ORR). Tuning the core size allows for optimization of their electrocatalytic activity. The specific activity and mass activity of 4.2 nm/0.8 nm core/shell Ni/FePt reach 1.95 mA/cm² and 490 mA/mg Pt at 0.9 V (vs. reversible hydrogen electrode, RHE), which are much higher than those of the benchmark commercial Pt catalyst (0.34 mA/cm² and 92 mA/mg Pt at 0.9 V). Our studies provide a robust approach to monodisperse core/shell NPs with a non-precious metal core, making it possible to develop advanced NP catalysts with ultralow Pt content for the ORR and many other heterogeneous reactions.

  6. Highly Efficient Parallel Multigrid Solver For Large-Scale Simulation of Grain Growth Using the Structural Phase Field Crystal Model

    NASA Astrophysics Data System (ADS)

    Guan, Zhen; Pekurovsky, Dmitry; Luce, Jason; Thornton, Katsuyo; Lowengrub, John

    The structural phase field crystal (XPFC) model can be used to model grain growth in polycrystalline materials at diffusive time-scales while maintaining atomic scale resolution. However, the governing equation of the XPFC model is an integral-partial-differential-equation (IPDE), which poses challenges in implementation onto high performance computing (HPC) platforms. In collaboration with the XSEDE Extended Collaborative Support Service, we developed a distributed memory HPC solver for the XPFC model, which combines parallel multigrid and P3DFFT. The performance benchmarking on the Stampede supercomputer indicates near linear strong and weak scaling for both multigrid and transfer time between multigrid and FFT modules up to 1024 cores. Scalability of the FFT module begins to decline at 128 cores, but it is sufficient for the type of problem we will be examining. We have demonstrated simulations using 1024 cores, and we expect to achieve 4096 cores and beyond. Ongoing work involves optimization of MPI/OpenMP-based codes for the Intel KNL Many-Core Architecture. This optimizes the code for coming pre-exascale systems, in particular many-core systems such as Stampede 2.0 and Cori 2 at NERSC, without sacrificing efficiency on other general HPC systems.

  7. The subdwarf B star SB 290 - A fast rotator on the extreme horizontal branch

    NASA Astrophysics Data System (ADS)

    Geier, S.; Heber, U.; Heuser, C.; Classen, L.; O'Toole, S. J.; Edelmann, H.

    2013-03-01

    Hot subdwarf B stars (sdBs) are evolved core helium-burning stars with very thin hydrogen envelopes. To form an sdB, the progenitor has to lose almost all of its hydrogen envelope right at the tip of the red giant branch. In close binary systems, mass transfer to the companion provides the extraordinary mass loss required for their formation. However, apparently single sdBs exist as well, and their formation has been unclear for decades. The merger of helium white dwarfs leading to an ignition of core helium-burning or the merger of a helium core and a low-mass star during the common envelope phase have been proposed. Here we report the discovery of SB 290 as the first apparently single, fast-rotating sdB star located on the extreme horizontal branch, indicating that those stars may form from mergers. Appendix A is available in electronic form at http://www.aanda.org

  8. A Split Forcing Technique to Reduce Log-layer Mismatch in Wall-modeled Turbulent Channel Flows

    NASA Astrophysics Data System (ADS)

    Deleon, Rey; Senocak, Inanc

    2016-11-01

    The conventional approach to sustain a flow field in a periodic channel flow seems to be the culprit behind the log-law mismatch problem that has been reported in many studies hybridizing Reynolds-averaged Navier-Stokes (RANS) and large-eddy simulation (LES) techniques, commonly referred to as hybrid RANS-LES. To address this issue, we propose a split-forcing approach that relies only on the conservation of mass principle. We adopt a basic hybrid RANS-LES technique on a coarse mesh with wall-stress boundary conditions to simulate turbulent channel flows at friction Reynolds numbers of 2000 and 5200 and demonstrate good agreement with benchmark data. We also report a duality in velocity scale that is a specific consequence of the split-forcing framework applied to hybrid RANS-LES. The first scale is the friction velocity derived from the wall shear stress. The second scale arises in the core LES region, with a value different from that at the wall. Second-order turbulence statistics agree well with the benchmark data when normalized by the core friction velocity, whereas the friction velocity at the wall remains the appropriate scale for the mean velocity profile. Based on our findings, we suggest reevaluating more sophisticated hybrid RANS-LES approaches within the split-forcing framework. Work funded by National Science Foundation under Grant No. 1056110 and 1229709. First author acknowledges the University of Idaho President's Doctoral Scholars Award.
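
    A minimal sketch of the two velocity scales described above, assuming the wall shear stress and a core-region shear stress are both available from the simulation (the variable names and values here are illustrative):

        import numpy as np

        def friction_velocity(tau, rho):
            # u_tau = sqrt(tau / rho)
            return np.sqrt(tau / rho)

        u_tau_wall = friction_velocity(tau=1.2e-3, rho=1.0)  # hypothetical values
        u_tau_core = friction_velocity(tau=1.0e-3, rho=1.0)  # hypothetical values

        # Per the abstract: normalize the mean profile by the wall value and
        # the second-order statistics by the core value.
        def u_plus(U):
            return U / u_tau_wall

        def uu_plus(uu):
            return uu / u_tau_core**2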

  9. Spatial distribution and potential biological risk of some metals in relation to granulometric content in core sediments from Chilika Lake, India.

    PubMed

    Barik, Saroja K; Muduli, Pradipta R; Mohanty, Bita; Rath, Prasanta; Samanta, Srikanta

    2018-01-01

    The article presents the first systematic report on the concentration of selected major elements [iron (Fe) and manganese (Mn)] and minor elements [zinc (Zn), copper (Cu), chromium (Cr), lead (Pb), nickel (Ni), and cobalt (Co)] from the core sediment of Chilika Lake, India. The analyzed samples revealed higher content of Pb than the background levels in the entire study area. The extent of contamination from minor and major elements is expressed by assessing (i) the metal enrichments in the sediment through calculations of the anthropogenic factor (AF), pollution load index (PLI), enrichment factor (EF), and geoaccumulation index (Igeo) and (ii) potential biological risks through the use of sediment quality guidelines such as the effect range median (ERM) and effect range low (ERL) benchmarks. The estimated indices indicated that the sediment is enriched with Pb, Ni, Cr, Cu, and Co. The enrichment of these elements seems to be due to the fine granulometric characteristics of the sediment, with Fe and Mn oxyhydroxides being the main metal carriers, and to fishing boats using low-grade paints and fuel and fishing technology using lead beads fixed to fishing nets. Trace element input to Chilika Lake needs to be monitored with due emphasis on Cr and Pb contamination, since the ERM and ERL benchmarks indicated potential biological risk from these metals.
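
    The indices named above have standard definitions in the sediment-quality literature; the Python sketch below assumes the conventional forms (EF normalized to Fe, Igeo with the usual factor of 1.5 on the background, PLI as the geometric mean of contamination factors), which may differ slightly from the exact variants used in the paper.

        import math

        def enrichment_factor(c_sample, fe_sample, c_bg, fe_bg):
            """EF = (C/Fe)_sample / (C/Fe)_background."""
            return (c_sample / fe_sample) / (c_bg / fe_bg)

        def igeo(c_sample, c_bg):
            """Geoaccumulation index: log2(C / (1.5 * background))."""
            return math.log2(c_sample / (1.5 * c_bg))

        def pli(contamination_factors):
            """Pollution load index: geometric mean of C/background ratios."""
            n = len(contamination_factors)
            return math.prod(contamination_factors) ** (1.0 / n)

        # Illustrative numbers only (mg/kg): a metal at twice its background.
        print(enrichment_factor(40.0, 30000.0, 20.0, 35000.0))
        print(igeo(40.0, 20.0))
        print(pli([2.0, 1.2, 0.8, 1.5]))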

  10. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr; CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex; Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr

    2014-12-15

    In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists in numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with the finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model presents moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constants) are taken into account by interpolations with respect to the velocity of the control rods. The parallelism across time is achieved by applying the parareal-in-time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines the use of two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rods model, while the fine propagator is assumed to be a high order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.
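
    A generic Python sketch of the parareal predictor-corrector iteration described above, applied to the scalar test equation dy/dt = -y rather than the diffusion model (the coarse propagator takes one Euler step per window, the fine one takes many):

        import numpy as np

        def parareal(y0, t, coarse, fine, iterations=5):
            """Parareal: propagators map (y, t0, t1) -> y(t1)."""
            n = len(t) - 1
            y = np.empty(n + 1)
            y[0] = y0
            for i in range(n):              # initial coarse prediction
                y[i + 1] = coarse(y[i], t[i], t[i + 1])
            for _ in range(iterations):
                # Fine solves are independent: this loop is the part that
                # would run in parallel across time windows.
                f = [fine(y[i], t[i], t[i + 1]) for i in range(n)]
                y_new = np.empty_like(y)
                y_new[0] = y0
                for i in range(n):          # sequential correction sweep
                    y_new[i + 1] = (coarse(y_new[i], t[i], t[i + 1])
                                    + f[i] - coarse(y[i], t[i], t[i + 1]))
                y = y_new
            return y

        coarse = lambda y, t0, t1: y * (1.0 - (t1 - t0))  # one Euler step

        def fine(y, t0, t1, m=100):                       # m Euler steps
            h = (t1 - t0) / m
            for _ in range(m):
                y *= 1.0 - h
            return y

        t = np.linspace(0.0, 2.0, 11)
        print(parareal(1.0, t, coarse, fine)[-1], np.exp(-2.0))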

  11. CHIC - Coupling Habitability, Interior and Crust

    NASA Astrophysics Data System (ADS)

    Noack, Lena; Labbe, Francois; Boiveau, Thomas; Rivoldini, Attilio; Van Hoolst, Tim

    2014-05-01

    We present a new code developed for simulating convection in terrestrial planets and icy moons. The code CHIC is written in Fortran and employs the finite volume method and finite difference method for solving energy, mass and momentum equations in either silicate or icy mantles. The code uses either Cartesian (2D and 3D box) or spherical coordinates (2D cylinder or annulus). It furthermore contains a 1D parametrised model to obtain temperature profiles in specific regions, for example in the iron core or in the silicate mantle (solving only the energy equation). The 2D/3D convection model uses the same input parameters as the 1D model, which allows for comparison of the different models and adaptation of the 1D model, if needed. The code has already been benchmarked for the following aspects: viscosity-dependent rheology (Blankenbach et al., 1989); pseudo-plastic deformation (Tosi et al., in preparation); subduction mechanism and plastic deformation (Quinquis et al., in preparation). New features that are currently being developed and benchmarked include: compressibility (following King et al., 2009 and Leng and Zhong, 2008); different melt modules (Plesa et al., in preparation); freezing of an inner core (comparison with the GAIA code, Huettig and Stemmer, 2008); build-up of oceanic and continental crust (Noack et al., in preparation). The code represents a useful tool to couple the interior with the surface of a planet (e.g. via build-up and erosion of crust) and its atmosphere (via outgassing on the one hand and subduction of hydrated crust and carbonates back into the mantle on the other). It will be applied to investigate several factors that might influence the habitability of a terrestrial planet, and will also be used to simulate icy bodies with high-pressure ice phases. References: Blankenbach et al. (1989). A benchmark comparison for mantle convection codes. GJI 98, 23-38. Huettig and Stemmer (2008). Finite volume discretization for dynamic viscosities on Voronoi grids. PEPI 171(1-4), 137-146. King et al. (2009). A Community Benchmark for 2D Cartesian Compressible Convection in the Earth's Mantle. GJI 179, 1-11. Leng and Zhong (2008). Viscous heating, adiabatic heating and energetic consistency in compressible mantle convection. GJI 173, 693-702.

  12. 1995 Pacific Northwest Loads and Resources Study, Technical Appendix: Volume 1.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    United States. Bonneville Power Administration.

    1995-12-01

    The Pacific Northwest Loads and Resources Study (White Book) is published annually by BPA and establishes the planning basis for supplying electricity to customers. It serves a dual purpose. First, the White Book presents projections of regional and Federal system load and resource capabilities, along with relevant definitions and explanations. Second, the White Book serves as a benchmark for annual BPA determinations made pursuant to the 1981 regional power sales contracts. Specifically, BPA uses the information in the White Book for determining the notice required when customers request to increase or decrease the amount of power purchased from BPA. Aside from these purposes, the White Book is used for input to BPA's resource planning process. The White Book compiles information obtained from several formalized resource planning reports and data submittals, including those from the Northwest Power Planning Council (Council) and the Pacific Northwest Utilities Conference Committee (PNUCC).

  13. Demography of human supercentenarians.

    PubMed

    Coles, L Stephen

    2004-06-01

    An international committee of demographers has created a carefully documented list of worldwide living supercentenarians (> or =110 years old) that has been published by the Los Angeles Gerontology Research Group on its web site and updated on a weekly basis for the past 6 years [see "snapshot" for the year 2003 in the Appendix]. What can be learned by studying this distinguished group of individuals? Also, what are the implications for understanding the fundamental biological limits to human longevity and maximum life span? Our conclusion: Although everyone agrees that average life expectancy has systematically advanced linearly over the last century, it is not realistic to expect that this pace can continue indefinitely. Our data suggest that, without the invention of some new unknown form of medical breakthrough, the Guinness Book of World Records benchmark established by French woman Jeanne Calment of 122 years, set back in 1997, will be exceedingly difficult to break in our lifetime.

  14. The impact of Moore's Law and loss of Dennard scaling: Are DSP SoCs an energy efficient alternative to x86 SoCs?

    NASA Astrophysics Data System (ADS)

    Johnsson, L.; Netzer, G.

    2016-10-01

    Moore's law, the doubling of transistors per unit area for each CMOS technology generation, is expected to continue throughout the decade, while Dennard voltage scaling resulting in constant power per unit area stopped about a decade ago. The semiconductor industry's response to the loss of Dennard scaling and the consequent challenges in managing power distribution and dissipation has been leveled-off clock rates, a die performance gain reduced from about a factor of 2.8 to 1.4 per technology generation, and multi-core processor dies with increased cache sizes. Increased cache sizes offer performance benefits for many applications as well as energy savings. Accessing data in cache is considerably more energy efficient than main memory accesses. Further, caches consume less power than a corresponding amount of functional logic. As feature sizes continue to be scaled down, an increasing fraction of the die must be “underutilized” or “dark” due to power constraints. With power being a prime design constraint, there is a concerted effort to find significantly more energy efficient chip architectures than those dominant in servers today, with chips potentially incorporating several types of cores to cover a range of applications, or different functions in an application, as is already common for the mobile processor market. Digital Signal Processors (DSPs), largely targeting the embedded and mobile processor markets, typically have been designed for a power consumption of 10% or less of a typical x86 CPU, yet with much more than 10% of the floating-point capability of the same technology generation x86 CPUs. Thus, DSPs could potentially offer an energy efficient alternative to x86 CPUs. Here we report an assessment of the Texas Instruments TMS320C6678 DSP in regards to its energy efficiency for two common HPC benchmarks: STREAM (memory system benchmark) and HPL (CPU benchmark).
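
    A STREAM-style triad kernel is straightforward to sketch; dividing the measured bandwidth (or, for HPL, the flop rate) by the power drawn during the run, which requires an external power meter not shown here, gives the energy-efficiency figure of merit discussed above. The array length below is an arbitrary assumption.

        import time
        import numpy as np

        n = 10_000_000                       # assumed working-set size
        a = np.empty(n)
        b = np.random.rand(n)
        c = np.random.rand(n)
        scalar = 3.0

        t0 = time.perf_counter()
        a[:] = b + scalar * c                # triad: a = b + scalar * c
        dt = time.perf_counter() - t0

        # STREAM convention counts two reads and one write per element;
        # NumPy's temporary adds extra traffic, so treat this as a rough
        # lower bound on the machine's sustainable bandwidth.
        bytes_moved = 3 * n * 8
        print(f"triad bandwidth (STREAM counting): {bytes_moved / dt / 1e9:.2f} GB/s")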

  15. Comparison of the PHISICS/RELAP5-3D Ring and Block Model Results for Phase I of the OECD MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom

    2014-04-01

    The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR-350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark, and presents selected results of the three steady-state exercises 1-3 defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D “ring” model approach vs. a much more detailed model that includes kinetics feedback on individual block level and thermal feedbacks on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.

  16. Visualization assisted by parallel processing

    NASA Astrophysics Data System (ADS)

    Lange, B.; Rey, H.; Vasques, X.; Puech, W.; Rodriguez, N.

    2011-01-01

    This paper discusses the experimental results of our visualization model for data extracted from sensors. The objective of this paper is to find a computationally efficient method to produce a real-time rendering visualization for a large amount of data. We develop a visualization method to monitor the temperature variance of a data center. Sensors are placed on three layers and do not cover the whole room. We use the particle paradigm to interpolate sensor data; particles model the "space" of the room. In this work we use a partition of the particle set, using two mathematical methods presented by Avis and Bhattacharya: Delaunay triangulation and Voronoï cells. Particles provide information on the room temperature at different coordinates over time. To locate and update particle data we define a computational cost function. To solve this function in an efficient way, we use a client-server paradigm: the server computes the data and clients display it on different kinds of hardware. This paper is organized as follows. The first part presents related algorithms used to visualize large flows of data. The second part presents the different platforms and methods used, which were evaluated in order to determine the best solution for the task proposed. The benchmark uses the computational cost of our algorithm, which is based on locating particles relative to sensors and on updating particle values. The benchmark was done on a personal computer using CPU, multi-core programming, GPU programming, and hybrid GPU/CPU programming. GPU programming is a growing method in the research field; it allows real-time rendering instead of a precomputed rendering. To improve our results, we ran our algorithm on a High Performance Computing (HPC) platform; this benchmark was used to improve the multi-core method. HPC is commonly used in data visualization (astronomy, physics, etc.) to improve rendering and achieve real-time performance.
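
    As a simple illustration of interpolating sparse sensor readings onto a particle set, here is an inverse-distance-weighting sketch in Python; the paper itself partitions particles with Delaunay/Voronoï structures, so this is a generic stand-in rather than the authors' algorithm.

        import numpy as np

        def idw(sensor_xyz, sensor_temp, particle_xyz, power=2.0):
            """Inverse-distance weighting of sensor values onto particles."""
            d = np.linalg.norm(
                particle_xyz[:, None, :] - sensor_xyz[None, :, :], axis=2)
            w = 1.0 / np.maximum(d, 1e-9) ** power
            return (w @ sensor_temp) / w.sum(axis=1)

        sensors = np.random.rand(30, 3)          # 30 sensors in a unit "room"
        temps = 20.0 + 5.0 * np.random.rand(30)  # hypothetical readings, deg C
        particles = np.random.rand(10000, 3)
        print(idw(sensors, temps, particles)[:5])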

  17. RELAP5-3D Results for Phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom

    2012-06-01

    The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the Relap5-3D model developed for Exercise 2.

  18. RELAP5-3D results for phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydom, G.; Epiney, A. S.

    2012-07-01

    The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the Relap5-3D model developed for Exercise 2. (authors)

  19. ZPPR-20 phase D : a cylindrical assembly of polyethylene moderated U metal reflected by beryllium oxide and polyethylene.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lell, R.; Grimm, K.; McKnight, R.

    The Zero Power Physics Reactor (ZPPR) fast critical facility was built at the Argonne National Laboratory-West (ANL-W) site in Idaho in 1969 to obtain neutron physics information necessary for the design of fast breeder reactors. The ZPPR-20D Benchmark Assembly was part of a series of cores built in Assembly 20 (References 1 through 3) of the ZPPR facility to provide data for developing a nuclear power source for space applications (SP-100). The assemblies were beryllium oxide reflected and had core fuel compositions containing enriched uranium fuel, niobium and rhenium. ZPPR-20 Phase C (HEU-MET-FAST-075) was built as the reference flight configuration. Two other configurations, Phases D and E, simulated accident scenarios. Phase D modeled the water immersion scenario during a launch accident, and Phase E (SUB-HEU-MET-FAST-001) modeled the earth burial scenario during a launch accident. Two configurations were recorded for the simulated water immersion accident scenario (Phase D): the critical configuration, documented here, and the subcritical configuration (SUB-HEU-MET-MIXED-001). Experiments in Assembly 20 Phases 20A through 20F were performed in 1988. The reference water immersion configuration for the ZPPR-20D assembly was obtained as reactor loading 129 on October 7, 1988 with a fissile mass of 167.477 kg and a reactivity of -4.626 ± 0.044¢ (k ≈ 0.9997). The SP-100 core was to be constructed of highly enriched uranium nitride, niobium, rhenium and depleted lithium. The core design called for two enrichment zones with niobium-1% zirconium alloy fuel cladding and core structure. Rhenium was to be used as a fuel pin liner to provide shutdown in the event of water immersion and flooding. The core coolant was to be depleted lithium metal (⁷Li). The core was to be surrounded radially with a niobium reactor vessel and bypass which would carry the lithium coolant to the forward inlet plenum. Immediately inside the reactor vessel was a rhenium baffle which would act as a neutron curtain in the event of water immersion. A fission gas plenum and coolant inlet plenum were located axially forward of the core. Some material substitutions had to be made in mocking up the SP-100 design. The ZPPR-20 critical assemblies were fueled by 93% enriched uranium metal because uranium nitride, which was the SP-100 fuel type, was not available. ZPPR Assembly 20D was designed to simulate a water immersion accident. The water was simulated by polyethylene (CH₂), which contains a similar amount of hydrogen and has a similar density. A very accurate transformation to a simplified model is needed to make any of the ZPPR assemblies a practical criticality-safety benchmark. There is simply too much geometric detail in an exact model of a ZPPR assembly, particularly as complicated an assembly as ZPPR-20D. The transformation must reduce the detail to a practical level without masking any of the important features of the critical experiment. And it must do this without increasing the total uncertainty far beyond that of the original experiment. Such a transformation will be described in a later section. First, Assembly 20D was modeled in full detail--every plate, drawer, matrix tube, and air gap was modeled explicitly. Then the regionwise compositions and volumes from this model were converted to an RZ model. ZPPR Assembly 20D has been determined to be an acceptable criticality-safety benchmark experiment.

  20. HACC: Extreme Scaling and Performance Across Diverse Architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Morozov, Vitali; Frontiere, Nicholas; Finkel, Hal; Pope, Adrian; Heitmann, Katrin

    2013-11-01

    Supercomputing is evolving towards hybrid and accelerator-based architectures with millions of cores. The HACC (Hardware/Hybrid Accelerated Cosmology Code) framework exploits this diverse landscape at the largest scales of problem size, obtaining high scalability and sustained performance. Developed to satisfy the science requirements of cosmological surveys, HACC melds particle and grid methods using a novel algorithmic structure that flexibly maps across architectures, including CPU/GPU, multi/many-core, and Blue Gene systems. We demonstrate the success of HACC on two very different machines, the CPU/GPU system Titan and the BG/Q systems Sequoia and Mira, attaining unprecedented levels of scalable performance. We demonstrate strong and weak scaling on Titan, obtaining up to 99.2% parallel efficiency, evolving 1.1 trillion particles. On Sequoia, we reach 13.94 PFlops (69.2% of peak) and 90% parallel efficiency on 1,572,864 cores, with 3.6 trillion particles, the largest cosmological benchmark yet performed. HACC design concepts are applicable to several other supercomputer applications.

  1. Modeling Cardiac Electrophysiology at the Organ Level in the Peta FLOPS Computing Age

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Lawrence; Bishop, Martin; Hoetzl, Elena

    2010-09-30

    Despite a steep increase in available compute power, in-silico experimentation with highly detailed models of the heart remains challenging due to the high computational cost involved. It is hoped that next generation high performance computing (HPC) resources will lead to significant reductions in execution times to leverage a new class of in-silico applications. However, performance gains with these new platforms can only be achieved by engaging a much larger number of compute cores, necessitating strongly scalable numerical techniques. So far strong scalability has been demonstrated only for a moderate number of cores, orders of magnitude below the range required to achieve the desired performance boost. In this study, strong scalability of currently used techniques to solve the bidomain equations is investigated. Benchmark results suggest that scalability is limited to 512-4096 cores within the range of relevant problem sizes, even when systems are carefully load-balanced and advanced IO strategies are employed.
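
    The reported saturation between 512 and 4096 cores is qualitatively what a simple strong-scaling (Amdahl) model predicts once a small non-parallelizable fraction is present; the 0.1% serial fraction below is an assumed value chosen only to illustrate the shape of the curve.

        def amdahl_speedup(p, serial_fraction):
            """Upper bound on strong-scaling speedup with p cores."""
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

        for p in (512, 1024, 4096, 16384):
            print(f"{p:6d} cores -> speedup <= {amdahl_speedup(p, 0.001):7.1f}")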

  2. Energy Efficiency Evaluation and Benchmarking of AFRL’s Condor High Performance Computer

    DTIC Science & Technology

    2011-08-01

    ... 1716 Sony PlayStation 3s (PS3s), adding up to a total of 69,940 cores and a theoretical peak performance of 500 TFLOPS. There are 84 subcluster head... Thus, a critical component to achieving maximum performance is to find the optimum division of processing load between the CPU and GPU.

  3. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures

    PubMed Central

    Manolakos, Elias S.

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub. PMID:26605332
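
    For reference, the balanced F-measure quoted above is the harmonic mean of precision and recall; the precision/recall pair in the example below is purely illustrative, chosen only to reproduce F ≈ 0.91.

        def f_measure(precision, recall):
            """F1 = harmonic mean of precision and recall."""
            return 2.0 * precision * recall / (precision + recall)

        print(round(f_measure(0.93, 0.89), 2))  # -> 0.91 (illustrative inputs)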

  4. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures.

    PubMed

    Sharma, Anuj; Manolakos, Elias S

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub.

  5. Results of a Neutronic Simulation of HTR-Proteus Core 4.2 using PEBBED and other INL Reactor Physics Tools: FY-09 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hans D. Gougar

    The Idaho National Laboratory’s deterministic neutronics analysis codes and methods were applied to the computation of the core multiplication factor of the HTR-Proteus pebble bed reactor critical facility. A combination of unit cell calculations (COMBINE-PEBDAN), 1-D discrete ordinates transport (SCAMP), and nodal diffusion calculations (PEBBED) were employed to yield keff and flux profiles. Preliminary results indicate that these tools, as currently configured and used, do not yield satisfactory estimates of keff. If control rods are not modeled, these methods can deliver much better agreement with experimental core eigenvalues, which suggests that development efforts should focus on modeling control rod and other absorber regions. Under some assumptions and in 1D subcore analyses, diffusion theory agrees well with transport. This suggests that developments in specific areas can produce a viable core simulation approach. Some corrections have been identified and can be further developed, specifically: treatment of the upper void region, treatment of inter-pebble streaming, and explicit (multiscale) transport modeling of TRISO fuel particles as a first step in cross section generation. Until corrections are made that yield better agreement with experiment, conclusions from core design and burnup analyses should be regarded as qualitative and not benchmark quality.

  6. Experimental results from the VENUS-F critical reference state for the GUINEVERE accelerator driven system project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uyttenhove, W.; Baeten, P.; Ban, G.

    The GUINEVERE (Generation of Uninterrupted Intense Neutron pulses at the lead Venus Reactor) project was launched in 2006 within the framework of FP6 EUROTRANS in order to validate on-line reactivity monitoring and subcriticality level determination in Accelerator Driven Systems. Therefore the VENUS reactor at SCK.CEN in Mol (Belgium) was modified towards a fast core (VENUS-F) and coupled to the GENEPI-3C accelerator built by CNRS. The accelerator can operate in both continuous and pulsed mode. The VENUS-F core is loaded with enriched Uranium and reflected with solid lead. A well-chosen critical reference state is indispensable for the validation of the on-line subcriticality monitoring methodology. Moreover a benchmarking tool is required for nuclear data research and code validation. In this paper the design and the importance of the critical reference state for the GUINEVERE project are motivated. The results of the first experimental phase on the critical core are presented. The control rods worth is determined by the rod drop technique and the application of the Modified Source Multiplication (MSM) method allows the determination of the worth of the safety rods. The results are implemented in the VENUS-F core certificate for full exploitation of the critical core. (authors)

  7. Experimental results from the VENUS-F critical reference state for the GUINEVERE accelerator driven system project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uyttenhove, W.; Baeten, P.; Kochetkov, A.

    The GUINEVERE (Generation of Uninterrupted Intense Neutron pulses at the lead Venus Reactor) project was launched in 2006 within the framework of FP6 EUROTRANS in order to validate online reactivity monitoring and subcriticality level determination in accelerator driven systems (ADS). Therefore, the VENUS reactor at SCK.CEN in Mol, Belgium, was modified towards a fast core (VENUS-F) and coupled to the GENEPI-3C accelerator built by CNRS. The accelerator can operate in both continuous and pulsed mode. The VENUS-F core is loaded with enriched Uranium and reflected with solid lead. A well-chosen critical reference state is indispensable for the validation of the online subcriticality monitoring methodology. Moreover, a benchmarking tool is required for nuclear data research and code validation. In this paper, the design and the importance of the critical reference state for the GUINEVERE project are motivated. The results of the first experimental phase on the critical core are presented. The control rods worth is determined by the positive period method and the application of the Modified Source Multiplication (MSM) method allows the determination of the worth of the safety rods. The results are implemented in the VENUS-F core certificate for full exploitation of the critical core. (authors)

  8. Orthogonal recursive bisection as data decomposition strategy for massively parallel cardiac simulations.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Pitman, Michael C; Rice, John J

    2011-06-01

    We present the orthogonal recursive bisection algorithm that hierarchically segments the anatomical model structure into subvolumes that are distributed to cores. The anatomy is derived from the Visible Human Project, with electrophysiology based on the FitzHugh-Nagumo (FHN) and ten Tusscher (TT04) models with monodomain diffusion. Benchmark simulations with up to 16,384 and 32,768 cores on IBM Blue Gene/P and L supercomputers for both FHN and TT04 show good load balancing with almost perfect speedup factors that are close to linear with the number of cores. Hence, strong scaling is demonstrated. With 32,768 cores, a 1000 ms simulation of a full heart beat requires about 6.5 min of wall clock time for a simulation of the FHN model. For the largest machine partitions, the simulations execute at a rate of 0.548 s (BG/P) and 0.394 s (BG/L) of wall clock time per 1 ms of simulation time. To our knowledge, these simulations show strong scaling to substantially higher numbers of cores than reported previously for organ-level simulation of the heart, thus significantly reducing run times. The ability to reduce runtimes could play a critical role in enabling wider use of cardiac models in research and clinical applications.
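
    A compact Python sketch of orthogonal recursive bisection on a point cloud: each part is split along its longest axis into equal-count halves until the requested number of parts (assumed here to be a power of two) is reached. This mirrors the idea of the decomposition, not the authors' implementation.

        import numpy as np

        def orb(points, n_parts):
            """Orthogonal recursive bisection into n_parts equal-count bins."""
            parts = [points]
            while len(parts) < n_parts:
                next_parts = []
                for p in parts:
                    axis = np.argmax(p.max(axis=0) - p.min(axis=0))  # longest extent
                    order = np.argsort(p[:, axis])
                    half = len(p) // 2
                    next_parts.append(p[order[:half]])
                    next_parts.append(p[order[half:]])
                parts = next_parts
            return parts

        pts = np.random.rand(100000, 3)       # stand-in for model voxels
        for i, part in enumerate(orb(pts, 8)):
            print(i, len(part))               # near-equal loads per "core"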

  9. Surface Water Investigations in Afghanistan: A Summary of Activities from 1952 to 1969. Appendix 14: Hydrology Training Manual Number 1: Basic Streamaging

    DTIC Science & Technology

    1966-01-01

    insulated core is covered by 33 galvanized wires, of which the inner 15 are wrapped in one direction and the outer 18 are wrapped in the reverse... foot marks with Porcelain Enamel Iron Figure Plates. [Figure 2: Non-recording staff gages.] Shelters should be well ventilated, especially in... vertical staff gage usually consists of porcelain-enameled iron sections. The sections are usually screwed to a board which is fastened to a suitable

  10. Phase Equilibrium Investigations of Planetary Materials

    NASA Technical Reports Server (NTRS)

    Grove, T. L.

    1997-01-01

    This grant provided funds to carry out experimental studies designed to illuminate the conditions of melting and chemical differentiation that have occurred in planetary interiors. Studies focused on the conditions of mare basalt generation in the moon's interior and on processes that led to core formation in the Shergottite Parent Body (Mars). Studies also examined physical processes that could lead to the segregation of metal-rich sulfide melts in an olivine-rich solid matrix. The major results of each paper are discussed below and copies of the papers are attached as Appendix I.

  11. Environment, Safety, and Health Self-Assessment Report, Fiscal Year 2008

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chernowski, John

    2009-02-27

    Lawrence Berkeley National Laboratory's Environment, Safety, and Health (ES&H) Self-Assessment Program ensures that Integrated Safety Management (ISM) is implemented institutionally and by all divisions. The Self-Assessment Program, managed by the Office of Contract Assurance (OCA), provides for an internal evaluation of all ES&H programs and systems at LBNL. The functions of the program are to ensure that work is conducted safely, and with minimal negative impact to workers, the public, and the environment. The Self-Assessment Program is also the mechanism used to institute continuous improvements to the Laboratory's ES&H programs. The program is described in LBNL/PUB 5344, Environment, Safety, and Health Self-Assessment Program, and is composed of four distinct assessments: the Division Self-Assessment, the Management of Environment, Safety, and Health (MESH) review, ES&H Technical Assurance, and the Appendix B Self-Assessment. The Division Self-Assessment uses the five core functions and seven guiding principles of ISM as the basis of evaluation. Metrics are created to measure performance in fulfilling ISM core functions and guiding principles, as well as promoting compliance with applicable regulations. The five core functions of ISM are as follows: (1) Define the Scope of Work; (2) Identify and Analyze Hazards; (3) Control the Hazards; (4) Perform the Work; and (5) Feedback and Improvement. The seven guiding principles of ISM are as follows: (1) Line Management Responsibility for ES&H; (2) Clear Roles and Responsibilities; (3) Competence Commensurate with Responsibilities; (4) Balanced Priorities; (5) Identification of ES&H Standards and Requirements; (6) Hazard Controls Tailored to the Work Performed; and (7) Operations Authorization. Performance indicators are developed by consensus with OCA, representatives from each division, and Environment, Health, and Safety (EH&S) Division program managers. Line management of each division performs the Division Self-Assessment annually. The primary focus of the review is workplace safety. The MESH review is an evaluation of division management of ES&H in its research and operations, focusing on implementation and effectiveness of the division's ISM plan. It is a peer review performed by members of the LBNL Safety Review Committee (SRC), with staff support from OCA. Each division receives a MESH review every two to four years, depending on the results of the previous review. The ES&H Technical Assurance Program (TAP) provides the framework for systematic reviews of ES&H programs and processes. The intent of ES&H Technical Assurance assessments is to provide assurance that ES&H programs and processes comply with their guiding regulations, are effective, and are properly implemented by LBNL divisions. The Appendix B Performance Evaluation and Measurement Plan (PEMP) requires that LBNL sustain and enhance the effectiveness of integrated safety, health, and environmental protection through a strong and well-deployed system. Information required for Appendix B is provided by EH&S Division functional managers. The annual Appendix B report is submitted at the close of the fiscal year. This assessment is the Department of Energy's (DOE) primary mechanism for evaluating LBNL's contract performance in ISM.

  12. FY2012 summary of tasks completed on PROTEUS-thermal work.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C.H.; Smith, M.A.

    2012-06-06

    PROTEUS is a suite of the neutronics codes, both old and new, that can be used within the SHARP codes being developed under the NEAMS program. Discussion here is focused on updates and verification and validation activities of the SHARP neutronics code, DeCART, for application to thermal reactor analysis. As part of the development of SHARP tools, the different versions of the DeCART code created for PWR, BWR, and VHTR analysis were integrated. Verification and validation tests for the integrated version were started, and the generation of cross section libraries based on the subgroup method was revisited for the targeted reactor types. The DeCART code has been reorganized in preparation for an efficient integration of the different versions for PWR, BWR, and VHTR analysis. In DeCART, the old-fashioned common blocks and header files have been replaced by advanced memory structures. However, the changing of variable names was minimized in order to limit problems with the code integration. Since the remaining stability problems of DeCART were mostly caused by the CMFD methodology and modules, significant work was performed to determine whether they could be replaced by more stable methods and routines. The cross section library is a key element to obtain accurate solutions. Thus, the procedure for generating cross section libraries was revisited to provide libraries tailored for the targeted reactor types. To improve accuracy in the cross section library, an attempt was made to replace the CENTRM code by the MCNP Monte Carlo code as a tool for obtaining reference resonance integrals. The use of the Monte Carlo code allows us to minimize problems or approximations that CENTRM introduces, since the accuracy of the subgroup data is limited by that of the reference solutions. The use of MCNP requires an additional set of libraries without resonance cross sections so that reference calculations can be performed for a unit cell in which only one isotope of interest includes resonance cross sections, among the isotopes in the composition. The OECD MHTGR-350 benchmark core was simulated using DeCART as the initial focus of the verification/validation efforts. Among the benchmark problems, Exercise 1 of Phase 1 is a steady-state benchmark case for the neutronics calculation for which block-wise cross sections were provided in 26 energy groups. This type of problem was designed for a homogenized geometry solver like DIF3D rather than the high-fidelity code DeCART. Instead of the homogenized block cross sections given in the benchmark, the VHTR-specific 238-group ENDF/B-VII.0 library of DeCART was directly used for preliminary calculations. Initial results showed that the multiplication factors of a fuel pin and a fuel block with or without a control rod hole were off by 6, -362, and -183 pcm Δk from comparable MCNP solutions, respectively. The 2-D and 3-D one-third core calculations were also conducted for the all-rods-out (ARO) and all-rods-in (ARI) configurations, producing reasonable results. Figure 1 illustrates the intermediate (1.5 eV - 17 keV) and thermal (below 1.5 eV) group flux distributions. As seen in VHTR cores with annular fuels, the intermediate group fluxes are relatively high in the fuel region, but the thermal group fluxes are higher in the inner and outer graphite reflector regions than in the fuel region.
To support the current project, a new three-year I-NERI collaboration involving ANL and KAERI was started in November 2011, focused on performing in-depth verification and validation of high-fidelity multi-physics simulation codes for LWR and VHTR. The work scope includes generating improved cross section libraries for the targeted reactor types, developing benchmark models for verification and validation of the neutronics code with or without thermo-fluid feedback, and performing detailed comparisons of predicted reactor parameters against both Monte Carlo solutions and experimental measurements. The following list summarizes the work conducted so far for the PROTEUS-Thermal tasks: (1) Unification of the different versions of DeCART was initiated, and at the same time the code was modernized to make the unification efficient; (2) Regeneration of cross section libraries was attempted for the targeted reactor types, and the procedure for generating cross section libraries was updated by replacing CENTRM with MCNP for reference resonance integrals; (3) The MHTGR-350 benchmark core was simulated using DeCART with the VHTR-specific 238-group ENDF/B-VII.0 library, and MCNP calculations were performed for comparison; and (4) Benchmark problems for PWR and BWR analysis were prepared for the DeCART verification/validation effort. In the coming months, the work listed above will be completed. Cross section libraries will be generated with optimized group structures for specific reactor types.
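
    A minimal sketch (added for illustration, not part of the record): eigenvalue comparisons such as the 6, -362, and -183 pcm figures above express differences between code and Monte Carlo multiplication factors in per cent mille (1 pcm = 1e-5). The k-eff values below are hypothetical placeholders, and the reactivity-difference convention shown is one common choice.

      def pcm_diff(k_code: float, k_ref: float) -> float:
          """Reactivity difference in pcm: 1e5 * (1/k_ref - 1/k_code)."""
          return 1.0e5 * (1.0 / k_ref - 1.0 / k_code)

      # Hypothetical DeCART and MCNP eigenvalues for a fuel-pin case.
      k_decart, k_mcnp = 1.10231, 1.10224
      print(f"difference: {pcm_diff(k_decart, k_mcnp):+.1f} pcm")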

  13. Antibody to endotoxin core glycolipid reverses reticuloendothelial system depression in an animal model of severe sepsis and surgical injury

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldridge, M.C.; Chadwick, S.J.; Cheslyn-Curtis, S.

    To study the effect of severe sepsis on the function of the reticuloendothelial system (RES), we measured the clearance kinetics and organ distribution of both low-dose technetium tin colloid (TTC) and [⁷⁵Se]selenomethionine-labelled E. coli in rabbits 24 hours after either sham laparotomy or appendix devascularization. Sepsis resulted in similar delayed blood clearance and reduced liver (Kupffer cell) uptake of both TTC and E. coli. To investigate the ability of polyclonal antibody to E. coli J-5 (core glycolipid) to improve RES function in the same model of sepsis, further animals were pretreated with either core glycolipid antibody or control serum (10 ml IV) 2 hours before induction of sepsis. TTC clearance kinetics were determined 24 hours later. Antibody-pretreated animals showed a reduced incidence of bacteremia; normalization of the rate of blood clearance and liver uptake of TTC; and a 'rebound' increase in splenic uptake of TTC. We conclude that antibody to E. coli J-5 enhances bacterial clearance by the RES.

  14. Quiet Clean Short-haul Experimental Engine (QCSEE). Aerodynamic and aeromechanical performance of a 50.8 cm (20 inch) diameter 1.34 PR variable pitch fan with core flow

    NASA Technical Reports Server (NTRS)

    Giffin, R. G.; Mcfalls, R. A.; Beacher, B. F.

    1977-01-01

    The fan aerodynamic and aeromechanical performance tests of the quiet clean short-haul experimental engine (QCSEE) under-the-wing fan and inlet with a simulated core flow are described. Overall forward-mode fan performance is presented at each rotor pitch angle setting with conventional flow-pressure ratio-efficiency fan maps, distinguishing the performance characteristics of the fan bypass and fan core regions. Effects of off-design bypass ratio, hybrid inlet geometry, and tip radial inlet distortion on fan performance are determined. The nonaxisymmetric bypass OGV and pylon configuration is assessed relative to both total pressure loss and induced circumferential flow distortion. Reverse-mode performance, obtained by resetting the rotor blades through both the stall-pitch and flat-pitch directions, is discussed in terms of the conventional flow-pressure ratio relationship and its implications for achievable reverse thrust. Core performance in reverse-mode operation is presented in terms of overall recovery levels and the radial profiles existing at the simulated core inlet plane. Observations of the starting phenomena associated with the initiation of stable rotor flow during acceleration in the reverse mode are briefly discussed. Aeromechanical response characteristics of the fan blades are presented in a separate appendix, along with a description of the vehicle instrumentation and the method of data reduction.

  15. Verification of ARES transport code system with TAKEDA benchmarks

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Zhang, Bin; Zhang, Penghe; Chen, Mengteng; Zhao, Jingchang; Zhang, Shun; Chen, Yixue

    2015-10-01

    Neutron transport modeling and simulation are central to many areas of nuclear technology, including reactor core analysis, radiation shielding, and radiation detection. In this paper the series of TAKEDA benchmarks is modeled to verify the criticality calculation capability of ARES, a discrete ordinates neutral particle transport code system. The SALOME platform is coupled with ARES to provide geometry modeling and mesh generation functions. The Koch-Baker-Alcouffe parallel sweep algorithm is applied to accelerate the traditional transport calculation process. The results show that the eigenvalues calculated by ARES are in excellent agreement with the reference values presented in NEACRP-L-330, with differences of less than 30 pcm except for the first case of model 3. Additionally, ARES provides accurate flux distributions compared to the reference values, with deviations of less than 2% for region-averaged fluxes in all cases. All of these results confirm the feasibility of the ARES-SALOME coupling and demonstrate that ARES performs well in criticality calculations.
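
    A minimal sketch (added for illustration, not from the record) of the kind of acceptance check implied by the quoted criteria: eigenvalue differences under 30 pcm and region-averaged flux deviations under 2%. All numbers are hypothetical placeholders.

      import numpy as np

      k_ares, k_ref = 0.97740, 0.97720            # hypothetical eigenvalues
      pcm = 1.0e5 * (k_ares - k_ref)              # simple difference convention
      assert abs(pcm) < 30.0

      flux_ares = np.array([1.02, 0.87, 0.455])   # hypothetical region-averaged fluxes
      flux_ref  = np.array([1.01, 0.88, 0.46])    # hypothetical reference values
      rel_dev = 100.0 * np.abs(flux_ares - flux_ref) / flux_ref
      assert (rel_dev < 2.0).all()
      print(f"eigenvalue diff: {pcm:.1f} pcm, max flux dev: {rel_dev.max():.2f}%")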

  16. Accelerating cardiac bidomain simulations using graphics processing units.

    PubMed

    Neic, A; Liebmann, M; Hoetzl, E; Mitchell, L; Vigmond, E J; Haase, G; Plank, G

    2012-08-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations, where large sparse linear systems have to be solved in parallel with advanced numerical techniques, are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element method (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation, which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility.
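
    A small worked calculation (added, not from the abstract) of the core-equivalence figure implied by the last sentence: 476 CPU cores were needed to match a 20-GPU run.

      cpu_cores, gpus = 476, 20
      print(f"one GPU did the work of ~{cpu_cores / gpus:.1f} CPU cores here")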

  17. FFTF Passive Safety Test Data for Benchmarks for New LMR Designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wootan, David W.; Casella, Andrew M.

    Liquid Metal Reactors (LMRs) continue to be considered as an attractive concept for advanced reactor design. Software packages such as SASSYS are being used to im-prove new LMR designs and operating characteristics. Significant cost and safety im-provements can be realized in advanced liquid metal reactor designs by emphasizing inherent or passive safety through crediting the beneficial reactivity feedbacks associ-ated with core and structural movement. This passive safety approach was adopted for the Fast Flux Test Facility (FFTF), and an experimental program was conducted to characterize the structural reactivity feedback. The FFTF passive safety testing pro-gram was developed to examine howmore » specific design elements influenced dynamic re-activity feedback in response to a reactivity input and to demonstrate the scalability of reactivity feedback results to reactors of current interest. The U.S. Department of En-ergy, Office of Nuclear Energy Advanced Reactor Technology program is in the pro-cess of preserving, protecting, securing, and placing in electronic format information and data from the FFTF, including the core configurations and data collected during the passive safety tests. Benchmarks based on empirical data gathered during operation of the Fast Flux Test Facility (FFTF) as well as design documents and post-irradiation examination will aid in the validation of these software packages and the models and calculations they produce. Evaluation of these actual test data could provide insight to improve analytical methods which may be used to support future licensing applications for LMRs« less

  18. Accelerating Cardiac Bidomain Simulations Using Graphics Processing Units

    PubMed Central

    Neic, Aurel; Liebmann, Manfred; Hoetzl, Elena; Mitchell, Lawrence; Vigmond, Edward J.; Haase, Gundolf

    2013-01-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations, where large sparse linear systems have to be solved in parallel with advanced numerical techniques, are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element method (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6–20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation, which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility. PMID:22692867

  19. Accelerating finite-rate chemical kinetics with coprocessors: Comparing vectorization methods on GPUs, MICs, and CPUs

    NASA Astrophysics Data System (ADS)

    Stone, Christopher P.; Alferman, Andrew T.; Niemeyer, Kyle E.

    2018-05-01

    Accurate and efficient methods for solving stiff ordinary differential equations (ODEs) are a critical component of turbulent combustion simulations with finite-rate chemistry. The ODEs governing the chemical kinetics at each mesh point are decoupled by operator-splitting, allowing each to be solved concurrently. An efficient ODE solver must then take into account the available thread and instruction-level parallelism of the underlying hardware, especially on many-core coprocessors, as well as the numerical efficiency. A stiff Rosenbrock and a nonstiff Runge-Kutta ODE solver are both implemented using the single instruction, multiple thread (SIMT) and single instruction, multiple data (SIMD) paradigms within OpenCL. Both methods solve multiple ODEs concurrently within the same instruction stream. The performance of these parallel implementations was measured on three chemical kinetic models of increasing size across several multicore and many-core platforms. Two separate benchmarks were conducted to clearly determine any performance advantage offered by either method. The first benchmark measured the run-time of evaluating the right-hand-side source terms in parallel, and the second benchmark integrated a series of constant-pressure, homogeneous reactors using the Rosenbrock and Runge-Kutta solvers. The right-hand-side evaluations with SIMD parallelism on the host multicore Xeon CPU and many-core Xeon Phi coprocessor performed approximately three times faster than the baseline multithreaded C++ code. The SIMT parallel model on the host and Phi was 13%-35% slower than the baseline, while the SIMT model on the NVIDIA Kepler GPU provided approximately the same performance as the SIMD model on the Phi. The runtimes for both ODE solvers decreased significantly with the SIMD implementations on the host CPU (2.5-2.7x) and Xeon Phi coprocessor (4.7-4.9x) compared to the baseline parallel code. The SIMT implementations on the GPU ran 1.5-1.6 times faster than the baseline multithreaded CPU code; however, this was significantly slower than the SIMD versions on the host CPU or the Xeon Phi. The performance difference between the three platforms was attributed to thread divergence caused by the adaptive step-sizes within the ODE integrators. Analysis showed that the wider vector width of the GPU incurs a higher level of divergence than the narrower Sandy Bridge or Xeon Phi. The significant performance improvement provided by the SIMD parallel strategy motivates further research into ODE solver methods that are both SIMD-friendly and computationally efficient.
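
    A minimal sketch (added for illustration; the paper's solvers are OpenCL, this toy is plain NumPy) of the SIMD idea described above: many independent ODE systems advanced in lockstep by vectorizing the state across reactors. A linear decay dy/dt = -k*y stands in for the chemical source terms, integrated with a fixed-step classical Runge-Kutta (RK4) scheme rather than the paper's adaptive solvers.

      import numpy as np

      def rk4_step(f, y, t, dt):
          """One classical RK4 step applied to a whole batch of states at once."""
          k1 = f(t, y)
          k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
          k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
          k4 = f(t + dt, y + dt * k3)
          return y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

      rates = np.linspace(0.5, 2.0, 1024)   # one rate constant per "reactor"
      rhs = lambda t, y: -rates * y         # batched right-hand side
      y, t, dt = np.ones_like(rates), 0.0, 1e-2
      for _ in range(100):                  # all 1024 ODEs advance in lockstep
          y = rk4_step(rhs, y, t, dt)
          t += dt
      print(np.max(np.abs(y - np.exp(-rates * t))))   # error vs. exact solution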

  20. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D₂O-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  1. Investigating the impact of the cielo cray XE6 architecture on scientific application codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajan, Mahesh; Barrett, Richard; Pedretti, Kevin Thomas Tauke

    2010-12-01

    Cielo, a Cray XE6, is the Department of Energy NNSA Advanced Simulation and Computing (ASC) campaign's newest capability machine. Rated at 1.37 PFLOPS, it consists of 8,944 dual-socket oct-core AMD Magny-Cours compute nodes, linked using Cray's Gemini interconnect. Its primary mission objective is to enable a suite of the ASC applications implemented using MPI to scale to tens of thousands of cores. Cielo is an evolutionary improvement to a successful architecture previously available to many of our codes, thus providing a basis for understanding the capabilities of this new architecture. Using three codes strategically important to the ASC campaign, and supplemented with some micro-benchmarks that expose the fundamental capabilities of the XE6, we report on the performance characteristics and capabilities of Cielo.

  2. Examining national trends in worker health with the National Health Interview Survey.

    PubMed

    Luckhaupt, Sara E; Sestito, John P

    2013-12-01

    To describe data from the National Health Interview Survey (NHIS), both the annual core survey and periodic occupational health supplements (OHSs), available for examining national trends in worker health. The NHIS is an annual in-person household survey with a cross-sectional multistage clustered sample design to produce nationally representative health data. The 2010 NHIS included an OHS. Prevalence rates of various health conditions and health behaviors among workers based on multiple years of NHIS core data are available. In addition, the 2010 NHIS-OHS data provide prevalence rates of selected health conditions, work organization factors, and occupational exposures among US workers by industry and occupation. The publicly available NHIS data can be used to identify areas of concern for various industries and for benchmarking data from specific worker groups against national averages.

  3. Atoms and Molecules Interacting with Light

    NASA Astrophysics Data System (ADS)

    van der Straten, Peter; Metcalf, Harold

    2016-02-01

    Part I. Atom-Light Interaction: 1. The classical physics pathway; Appendix 1.A. Damping force on an accelerating charge; Appendix 1.B. Hanle effect; Appendix 1.C. Optical tweezers; 2. Interaction of two-level atoms and light; Appendix 2.A. Pauli matrices for motion of the Bloch vector; Appendix 2.B. The Ramsey method; Appendix 2.C. Echoes and interferometry; Appendix 2.D. Adiabatic rapid passage; Appendix 2.E. Superposition and entanglement; 3. The atom-light interaction; Appendix 3.A. Proof of the oscillator strength theorem; Appendix 3.B. Electromagnetic fields; Appendix 3.C. The dipole approximation; Appendix 3.D. Time resolved fluorescence from multi-level atoms; 4. 'Forbidden' transitions; Appendix 4.A. Higher order approximations; 5. Spontaneous emission; Appendix 5.A. The quantum mechanical harmonic oscillator; Appendix 5.B. Field quantization; Appendix 5.C. Alternative theories to QED; 6. The density matrix; Appendix 6.A. The Liouville-von Neumann equation; Part II. Internal Structure: 7. The hydrogen atom; Appendix 7.A. Center-of-mass motion; Appendix 7.B. Coordinate systems; Appendix 7.C. Commuting operators; Appendix 7.D. Matrix elements of the radial wavefunctions; 8. Fine structure; Appendix 8.A. The Sommerfeld fine-structure constant; Appendix 8.B. Measurements of the fine structure; 9. Effects of the nucleus; Appendix 9.A. Interacting magnetic dipoles; Appendix 9.B. Hyperfine structure for two spin-1/2 particles; Appendix 9.C. The hydrogen maser; 10. The alkali-metal atoms; Appendix 10.A. Quantum defects for the alkalis; Appendix 10.B. Numerov method; 11. Atoms in magnetic fields; Appendix 11.A. The ground state of atomic hydrogen; Appendix 11.B. Positronium; Appendix 11.C. The non-crossing theorem; Appendix 11.D. Passage through an anticrossing: Landau-Zener transitions; 12. Atoms in electric fields; 13. Rydberg atoms; 14. The helium atom; Appendix 14.A. Variational calculations; Appendix 14.B. Detail on the variational calculations of the ground state; 15. The periodic system of the elements; Appendix 15.A. Paramagnetism; Appendix 15.B. The color of gold; 16. Molecules; Appendix 16.A. Morse potential; 17. Binding in the hydrogen molecule; Appendix 17.A. Confocal elliptical coordinates; Appendix 17.B. One-electron two-center integrals; Appendix 17.C. Electron-electron interaction in molecular hydrogen; 18. Ultra-cold chemistry; Part III. Applications: 19. Optical forces and laser cooling; 20. Confinement of neutral atoms; 21. Bose-Einstein condensation; Appendix 21.A. Distribution functions; Appendix 21.B. Density of states; 22. Cold molecules; 23. Three level systems; Appendix 23.A. General case for _1, _2; 24. Fundamental physics; Part IV. Appendices: Appendix A. Notation and definitions; Appendix B. Units and notation; Appendix C. Angular momentum in quantum mechanics; Appendix D. Transition strengths; References; Index.

  4. Physics of Electronic Materials

    NASA Astrophysics Data System (ADS)

    Rammer, Jørgen

    2017-03-01

    1. Quantum mechanics; 2. Quantum tunneling; 3. Standard metal model; 4. Standard conductor model; 5. Electric circuit theory; 6. Quantum wells; 7. Particle in a periodic potential; 8. Bloch currents; 9. Crystalline solids; 10. Semiconductor doping; 11. Transistors; 12. Heterostructures; 13. Mesoscopic physics; 14. Arithmetic, logic and machines; Appendix A. Principles of quantum mechanics; Appendix B. Dirac's delta function; Appendix C. Fourier analysis; Appendix D. Classical mechanics; Appendix E. Wave function properties; Appendix F. Transfer matrix properties; Appendix G. Momentum; Appendix H. Confined particles; Appendix I. Spin and quantum statistics; Appendix J. Statistical mechanics; Appendix K. The Fermi-Dirac distribution; Appendix L. Thermal current fluctuations; Appendix M. Gaussian wave packets; Appendix N. Wave packet dynamics; Appendix O. Screening by symmetry method; Appendix P. Commutation and common eigenfunctions; Appendix Q. Interband coupling; Appendix R. Common crystal structures; Appendix S. Effective mass approximation; Appendix T. Integral doubling formula; Bibliography; Index.

  5. Highly Enriched Uranium Metal Cylinders Surrounded by Various Reflector Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernard Jones; J. Blair Briggs; Leland Montierth

    A series of experiments was performed at Los Alamos Scientific Laboratory in 1958 to determine critical masses of cylinders of Oralloy (Oy) reflected by a number of materials. The experiments were all performed on the Comet Universal Critical Assembly Machine and consisted of discs of highly enriched uranium (93.3 wt.% ²³⁵U) reflected by half-inch- and one-inch-thick cylindrical shells of various reflector materials. The experiments were performed by members of Group N-2, particularly K. W. Gallup, G. E. Hansen, H. C. Paxton, and R. H. White. This experiment was intended to ascertain critical masses for criticality safety purposes, as well as to compare neutron transport cross sections to those obtained from danger coefficient measurements with the Topsy Oralloy-Tuballoy reflected and Godiva unreflected critical assemblies. The reflector materials examined in this series of experiments are as follows: magnesium, titanium, aluminum, graphite, mild steel, nickel, copper, cobalt, molybdenum, natural uranium, tungsten, beryllium, aluminum oxide, molybdenum carbide, and polythene (polyethylene). Also included are two special configurations of composite beryllium and iron reflectors. Analyses were performed in which the uncertainty associated with six different parameters was evaluated, namely: extrapolation to the uranium critical mass, uranium density, ²³⁵U enrichment, reflector density, reflector thickness, and reflector impurities. In addition to the idealizations made by the experimenters (removal of the platen and diaphragm), two simplifications were also made to the benchmark models that resulted in a small bias and additional uncertainty. First, since impurities in core and reflector materials are only estimated, they are not included in the benchmark models. Second, the room, support structure, and other possible surrounding equipment were not included in the model. Bias values that result from these two simplifications were determined, and the associated uncertainty in the bias values was included in the overall uncertainty in the benchmark keff values. Bias values were very small, ranging from 0.0004 Δk low to 0.0007 Δk low. Overall uncertainties range from ±0.0018 to ±0.0030. Major contributors to the overall uncertainty include the uncertainty in the extrapolation to the uranium critical mass and in the uranium density. Results are summarized in Figure 1 (Experimental, Benchmark-Model, and MCNP/KENO Calculated Results). The 32 configurations described and evaluated under ICSBEP Identifier HEU-MET-FAST-084 are judged to be acceptable for use as criticality safety benchmark experiments and should be valuable integral benchmarks for nuclear data testing of the various reflector materials. Details of the benchmark models, uncertainty analyses, and final results are given in this paper.
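
    A minimal sketch (added for illustration) of the standard quadrature combination behind an "overall uncertainty" figure: independent one-sigma contributions are root-sum-squared. The component values are hypothetical placeholders, not the HEU-MET-FAST-084 numbers.

      import math

      components = {                      # hypothetical 1-sigma contributions (delta-k)
          "critical-mass extrapolation": 0.0015,
          "uranium density":             0.0012,
          "235U enrichment":             0.0005,
          "reflector density":           0.0008,
          "reflector thickness":         0.0004,
          "reflector impurities":        0.0003,
      }
      total = math.sqrt(sum(s * s for s in components.values()))
      print(f"overall uncertainty: +/- {total:.4f} delta-k")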

  6. Benchmarking GPU and CPU codes for Heisenberg spin glass over-relaxation

    NASA Astrophysics Data System (ADS)

    Bernaschi, M.; Parisi, G.; Parisi, L.

    2011-06-01

    We present a set of possible implementations for Graphics Processing Units (GPU) of the over-relaxation technique applied to the 3D Heisenberg spin glass model. The results show that a carefully tuned code can achieve more than 100 GFlops/s of sustained performance and update a single spin in about 0.6 nanoseconds. A multi-hit technique that exploits the GPU shared memory further reduces this time. These results are compared with those obtained by means of a highly tuned vector-parallel code on latest-generation multi-core CPUs.
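
    A minimal sketch (added; the paper's code targets GPUs and vector CPUs, this is plain NumPy) of the over-relaxation move itself: each unit spin is reflected about its local field h, which preserves the energy term s.h and the spin length, making it a microcanonical update.

      import numpy as np

      def overrelax(s, h):
          """Reflect spin s about local field h: s' = 2 (s.h / h.h) h - s."""
          return 2.0 * np.dot(s, h) / np.dot(h, h) * h - s

      s = np.array([0.0, 0.0, 1.0])       # unit spin
      h = np.array([0.3, -0.2, 0.5])      # hypothetical local field from neighbors
      s_new = overrelax(s, h)
      assert abs(np.dot(s_new, h) - np.dot(s, h)) < 1e-12   # energy term unchanged
      assert abs(np.linalg.norm(s_new) - 1.0) < 1e-12       # unit length preserved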

  7. CERN Computing in Commercial Clouds

    NASA Astrophysics Data System (ADS)

    Cordeiro, C.; Field, L.; Garrido Bear, B.; Giordano, D.; Jones, B.; Keeble, O.; Manzi, A.; Martelli, E.; McCance, G.; Moreno-García, D.; Traylen, S.

    2017-10-01

    By the end of 2016, more than 10 million core-hours of computing resources had been delivered by several commercial cloud providers to the four LHC experiments to run their production workloads, from simulation to full-chain processing. In this paper we describe the experience gained at CERN in procuring and exploiting commercial cloud resources for the computing needs of the LHC experiments. The mechanisms used for provisioning, monitoring, accounting, alarming, and benchmarking will be discussed, as well as the involvement of the LHC collaborations in terms of managing the workflows of the experiments within a multicloud environment.

  8. Core competencies for pharmaceutical physicians and drug development scientists

    PubMed Central

    Silva, Honorio; Stonier, Peter; Buhler, Fritz; Deslypere, Jean-Paul; Criscuolo, Domenico; Nell, Gerfried; Massud, Joao; Geary, Stewart; Schenk, Johanna; Kerpel-Fronius, Sandor; Koski, Greg; Clemens, Norbert; Klingmann, Ingrid; Kesselring, Gustavo; van Olden, Rudolf; Dubois, Dominique

    2013-01-01

    Professional groups, such as IFAPP (International Federation of Associations of Pharmaceutical Physicians and Pharmaceutical Medicine), are expected to produce the defined core competencies to orient the discipline and the academic programs for the development of future competent professionals and to advance the profession. On the other hand, PharmaTrain, an Innovative Medicines Initiative project, has become the largest public-private partnership in biomedicine on the European continent and aims to provide postgraduate courses that are designed to meet the needs of professionals working in medicines development. A working group was formed within IFAPP including representatives from PharmaTrain, academic institutions, and national member associations, with special interest and experience in quality improvement through education. The objectives were: to define a set of core competencies for pharmaceutical physicians and drug development scientists, to be summarized in a Statement of Competence, and to benchmark and align these identified core competencies with the Learning Outcomes (LO) of the PharmaTrain Base Course. The objectives were successfully achieved. Seven domains and 60 core competencies were identified and aligned accordingly. The effective implementation of training programs using the competencies or the PharmaTrain LO anywhere in the world may transform the drug development process into an efficient and integrated process for better and safer medicines. The PharmaTrain Base Course might provide the cognitive framework to achieve the desired Statement of Competence for Pharmaceutical Physicians and Drug Development Scientists worldwide. PMID:23986704

  9. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    NASA Astrophysics Data System (ADS)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the Xeon Phi MIC processor approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.

  10. Soft-core processor study for node-based architectures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Houten, Jonathan Roger; Jarosz, Jason P.; Welch, Benjamin James

    2008-09-01

    Node-based architecture (NBA) designs for future satellite projects hold the promise of decreasing system development time and costs, size, weight, and power, and of positioning the laboratory to address other emerging mission opportunities quickly. Reconfigurable Field Programmable Gate Array (FPGA) based modules will comprise the core of several of the NBA nodes. Microprocessing capabilities will be necessary on these nodes, with varying degrees of mission-specific performance requirements. To enable the flexibility of these reconfigurable nodes, it is advantageous to incorporate the microprocessor into the FPGA itself, either as a hard-core processor built into the FPGA or as a soft-core processor built out of FPGA elements. This document describes the evaluation of three reconfigurable FPGA-based processors for use in future NBA systems--two soft cores (MicroBlaze and non-fault-tolerant LEON) and one hard core (PowerPC 405). Two standard performance benchmark applications were developed for each processor. The first, Dhrystone, is a fixed-point operation metric. The second, Whetstone, is a floating-point operation metric. Several trials were run at varying code locations, loop counts, processor speeds, and cache configurations. FPGA resource utilization was recorded for each configuration. Cache configurations impacted the results greatly; for optimal processor efficiency it is necessary to enable caches on the processors. Processor caches carry a penalty: cache error mitigation is necessary when operating in a radiation environment.
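
    A small worked example (added, not from the report) of how raw Dhrystone results are usually reported: the score is normalized to the VAX 11/780 reference of 1757 Dhrystones per second, giving DMIPS. The loop count and elapsed time below are hypothetical.

      VAX_11_780_DHRYSTONES_PER_SEC = 1757     # the 1-DMIPS reference machine

      iterations, seconds = 500_000, 4.2       # hypothetical benchmark run
      dhrystones_per_sec = iterations / seconds
      print(f"{dhrystones_per_sec / VAX_11_780_DHRYSTONES_PER_SEC:.1f} DMIPS")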

  11. Coupled-cluster based approach for core-level states in condensed phase: Theory and application to different protonated forms of aqueous glycine

    DOE PAGES

    Sadybekov, Arman; Krylov, Anna I.

    2017-07-07

    A theoretical approach for calculating core-level states in condensed phase is presented. The approach is based on equation-of-motion coupled-cluster (EOM-CC) theory and the effective fragment potential (EFP) method. By introducing an approximate treatment of double excitations in the EOM-CCSD (EOM-CC with single and double substitutions) ansatz, we address the poor convergence issues that are encountered for core-level states and significantly reduce computational costs. While the approximations introduce relatively large errors in the absolute values of transition energies, the errors are systematic. Consequently, chemical shifts, i.e., changes in ionization energies relative to reference systems, are reproduced reasonably well. By using different protonation forms of solvated glycine as a benchmark system, we show that our protocol is capable of reproducing the experimental chemical shifts with quantitative accuracy. The results demonstrate that chemical shifts are very sensitive to the solvent interactions and that explicit treatment of the solvent, such as with EFP, is essential for achieving quantitative accuracy.

  12. Accelerating 3D Hall MHD Magnetosphere Simulations with Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Bard, C.; Dorelli, J.

    2017-12-01

    The resolution required to simulate planetary magnetospheres with Hall magnetohydrodynamics results in program sizes approaching several hundred million grid cells. These would take years to run on a single computational core and require hundreds or thousands of computational cores to complete in a reasonable time; however, this requires access to the largest supercomputers. Graphics processing units (GPUs) provide a viable alternative: one GPU can do the work of roughly 100 cores, bringing Hall MHD simulations of Ganymede within reach of modest GPU clusters (~8 GPUs). We report our progress in developing a GPU-accelerated, three-dimensional Hall magnetohydrodynamic code and present Hall MHD simulation results for both Ganymede (run on 8 GPUs) and Mercury (56 GPUs). We benchmark our Ganymede simulation against previous results for the Galileo G8 flyby, namely that adding the Hall term to ideal MHD simulations changes the global convection pattern within the magnetosphere. Additionally, we present new results for the G1 flyby as well as initial results from Hall MHD simulations of Mercury, and compare them with the corresponding ideal MHD runs.
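
    For reference (added, not part of the record), the "Hall term" is the J x B contribution to the generalized Ohm's law; in a common form that neglects electron inertia and electron pressure,

      \mathbf{E} = -\mathbf{v}\times\mathbf{B} + \frac{\mathbf{J}\times\mathbf{B}}{n_e e} + \eta\,\mathbf{J},

    where n_e is the electron number density, e the elementary charge, and η the resistivity; dropping the middle term recovers the ideal (resistive) MHD Ohm's law, which is why including it can change the global convection pattern.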

  13. Targeted proteomics coming of age - SRM, PRM and DIA performance evaluated from a core facility perspective.

    PubMed

    Kockmann, Tobias; Trachsel, Christian; Panse, Christian; Wahlander, Asa; Selevsek, Nathalie; Grossmann, Jonas; Wolski, Witold E; Schlapbach, Ralph

    2016-08-01

    Quantitative mass spectrometry is a rapidly evolving methodology applied in a large number of omics-type research projects. During the past years, new designs of mass spectrometers have been developed and launched as commercial systems, while in parallel new data acquisition schemes and data analysis paradigms have been introduced. Core facilities provide access to such technologies, but also actively support researchers in finding and applying the best-suited analytical approach. In order to implement a solid fundament for this decision-making process, core facilities need to constantly compare and benchmark the various approaches. In this article we compare the quantitative accuracy and precision of the current state-of-the-art targeted proteomics approaches: single reaction monitoring (SRM), parallel reaction monitoring (PRM), and data-independent acquisition (DIA), across multiple liquid chromatography mass spectrometry (LC-MS) platforms, using a readily available commercial standard sample. All workflows are able to reproducibly generate accurate quantitative data. However, SRM and PRM workflows show higher accuracy and precision compared to DIA approaches, especially when analyzing analytes at low concentrations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Coupled-cluster based approach for core-level states in condensed phase: Theory and application to different protonated forms of aqueous glycine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadybekov, Arman; Krylov, Anna I.

    A theoretical approach for calculating core-level states in condensed phase is presented. The approach is based on equation-of-motion coupled-cluster (EOM-CC) theory and the effective fragment potential (EFP) method. By introducing an approximate treatment of double excitations in the EOM-CCSD (EOM-CC with single and double substitutions) ansatz, we address the poor convergence issues that are encountered for core-level states and significantly reduce computational costs. While the approximations introduce relatively large errors in the absolute values of transition energies, the errors are systematic. Consequently, chemical shifts, i.e., changes in ionization energies relative to reference systems, are reproduced reasonably well. By using different protonation forms of solvated glycine as a benchmark system, we show that our protocol is capable of reproducing the experimental chemical shifts with quantitative accuracy. The results demonstrate that chemical shifts are very sensitive to the solvent interactions and that explicit treatment of the solvent, such as with EFP, is essential for achieving quantitative accuracy.

  15. Waste-Management Education and Research Consortium (WERC) annual progress report, 1991--1992. Appendixes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-04-07

    This report contains the following appendices: Appendix A - Requirements for Undergraduate Level; Appendix B - Requirements for Graduate Level; Appendix C - Graduate Degree in Environmental Engineering; Appendix D - Non-degree Certificate Program; Appendix E - Curriculum for Associate Degree Program; Appendix F - Curriculum for NCC Program; Appendix G - Information on 1991 Teleconference Series; Appendix H - Information on 1992 Teleconference Series; Appendix I - WERC Interactive Television Courses; Appendix J - WERC Research Seminar Series; Appendix K - Sites for Hazardous/Radioactive Waste Management Series; Appendix L - Summary of Technology Development of the Second Year; Appendix M - List of Major Publications Resulting from WERC; Appendix N - Types of Equipment at WERC Laboratories.

  16. Space Station Furnace Facility Preliminary Project Implementation Plan (PIP). Volume 2, Appendix 2

    NASA Technical Reports Server (NTRS)

    Perkey, John K.

    1992-01-01

    The Space Station Furnace Facility (SSFF) is an advanced facility for materials research in the microgravity environment of the Space Station Freedom and will consist of Core equipment and various sets of Furnace Module (FM) equipment in a three-rack configuration. This Project Implementation Plan (PIP) document was developed to satisfy the requirements of Data Requirement Number 4 for the SSFF study (Phase B). This PIP shall address the planning of the activities required to perform the detailed design and development of the SSFF for the Phase C/D portion of this contract.

  17. Comparison of the PHISICS/RELAP5-3D ring and block model results for phase I of the OECD/NEA MHTGR-350 benchmark

    DOE PAGES

    Strydom, G.; Epiney, A. S.; Alfonsi, Andrea; ...

    2015-12-02

    The PHISICS code system has been under development at INL since 2010. It consists of several modules providing improved coupled core simulation capability: INSTANT (3D nodal transport core calculations), MRTAU (depletion and decay heat generation), and modules performing criticality searches, fuel shuffling, and generalized perturbation. Coupling of the PHISICS code suite to the thermal-hydraulics system code RELAP5-3D was finalized in 2013, and as part of the verification and validation effort the first phase of the OECD/NEA MHTGR-350 Benchmark has now been completed. The theoretical basis and latest development status of the coupled PHISICS/RELAP5-3D tool are described in more detail in a concurrent paper. This paper provides an overview of the OECD/NEA MHTGR-350 Benchmark and presents the results of Exercises 2 and 3 defined for Phase I. Exercise 2 required the modelling of a stand-alone thermal fluids solution at End of Equilibrium Cycle for the Modular High Temperature Gas-Cooled Reactor (MHTGR). The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 required a coupled neutronics and thermal fluids solution, and the PHISICS/RELAP5-3D code suite was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of results obtained with the traditional RELAP5-3D “ring” model approach against a much more detailed model that includes kinetics feedback on the individual block level and thermal feedback on a triangular sub-mesh. The higher fidelity that can be obtained by this “block” model is illustrated with comparison results on the temperature, power density, and flux distributions. Furthermore, it is shown that the ring model leads to significantly lower fuel temperatures (up to 10%) when compared with the higher-fidelity block model, and that the additional model development and run-time efforts are worth the gains obtained in the improved spatial temperature and flux distributions.

  18. Using in-situ observations of atmospheric water vapor isotopes to benchmark isotope-enabled General Circulation Models and improve ice core paleo-climate reconstruction

    NASA Astrophysics Data System (ADS)

    Steen-Larsen, Hans Christian; Sveinbjörnsdottir, Arny; Masson-Delmotte, Valerie; Werner, Martin; Risi, Camille; Yoshimura, Kei

    2016-04-01

    We have since 2010 carried out in-situ continuous water vapor isotope observations on top of the Greenland Ice Sheet (3 seasons at NEEM), in Svalbard (1 year), in Iceland (4 years), and in Bermuda (4 years). The expansive dataset, containing high-accuracy and high-precision measurements of δ18O, δD, and the d-excess, allows us to validate and benchmark the treatment of the atmospheric hydrological cycle's processes in General Circulation Models using simulations nudged to reanalysis products. Recent findings from both Antarctica and Greenland have documented strong interaction between the snow surface isotopes and the near-surface atmospheric water vapor isotopes on diurnal to synoptic time scales. In fact, it has been shown that the snow surface isotopes take up the synoptically driven atmospheric water vapor isotopic signal in between precipitation events, erasing the precipitation isotope signal in the surface snow. This highlights the importance of using General or Regional Climate Models that are able to simulate the atmospheric water vapor isotopic composition accurately in order to understand and interpret the ice core isotope signal. With this in mind, we have used three isotope-enabled General Circulation Models (isoGSM, ECHAM5-wiso, and LMDZiso) nudged to reanalysis products. We have compared the simulations of daily mean isotope values directly with our in-situ observations. This has allowed us to characterize the variability of the isotopic composition in the models and to compare it with our observations. We have specifically focused on the d-excess in order to characterize why both the mean and the variability are significantly lower in the models than in our observations. We argue that using water vapor isotopes to benchmark General Circulation Models offers an excellent tool for improving the treatment and parameterization of the atmospheric hydrological cycle. Recent studies have documented a very large inter-model dispersion in the treatment of the Arctic water cycle under a future global warming and greenhouse gas emission scenario. Our results call for action to create an international pan-Arctic water vapor isotope monitoring network in order to improve future projections of Arctic climate.
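
    A minimal sketch (added for illustration) of the deuterium excess quoted above; the standard Dansgaard definition is d = δD - 8·δ18O, with both deltas in permil. The sample values are hypothetical.

      def d_excess(delta_D: float, delta_18O: float) -> float:
          """Deuterium excess in permil: d = dD - 8 * d18O."""
          return delta_D - 8.0 * delta_18O

      print(d_excess(-150.0, -20.0))   # hypothetical vapor sample -> 10.0 permil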

  19. Correlation consistent valence basis sets for use with the Stuttgart-Dresden-Bonn relativistic effective core potentials: The atoms Ga-Kr and In-Xe

    NASA Astrophysics Data System (ADS)

    Martin, Jan M. L.; Sundermann, Andreas

    2001-02-01

    We propose large-core correlation-consistent (cc) pseudopotential basis sets for the heavy p-block elements Ga-Kr and In-Xe. The basis sets are of cc-pVTZ and cc-pVQZ quality and have been optimized for use with the large-core (valence-electrons-only) Stuttgart-Dresden-Bonn (SDB) relativistic pseudopotentials. Validation calculations on a variety of third-row and fourth-row diatomics suggest that they are comparable in quality to the all-electron cc-pVTZ and cc-pVQZ basis sets for lighter elements. Especially the SDB-cc-pVQZ basis set in conjunction with a core polarization potential (CPP) yields excellent agreement with experiment for compounds of the later heavy p-block elements. For accurate calculations on Ga (and, to a lesser extent, Ge) compounds, explicit treatment of 13 valence electrons appears to be desirable, while it seems inevitable for In compounds. For Ga and Ge, we propose correlation-consistent basis sets extended for (3d) correlation. For accurate calculations on organometallic complexes of interest to homogeneous catalysis, we recommend a combination of the standard cc-pVTZ basis set for first- and second-row elements, the presently derived SDB-cc-pVTZ basis set for heavier p-block elements, and, for transition metals, the small-core [6s5p3d] Stuttgart-Dresden basis set-relativistic effective core potential combination supplemented by (2f1g) functions with exponents given in the Appendix to the present paper.

  20. Container Technology Study : Volume 2. Appendixes.

    DOT National Transportation Integrated Search

    1980-10-01

    Volume II has nine appendixes as follows: Appendix A - Railroad Flatcar Data; Appendix B - Calculations; Appendix C - Record of Telephone Calls; Appendix D - Industry Interviews; Appendix E - Field Trips and Conferences; Appendix F - Annotated biblio...

  1. Waste-Management Education and Research Consortium (WERC) annual progress report, 1991--1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maji, A. K.; Thomson, Bruce M.; Samani, Zohrab A.

    1992-04-07

    This report contains the following appendices: Appendix A - Requirements for Undergraduate Level; Appendix B - Requirements for Graduate Level; Appendix C - Graduate Degree in Environmental Engineering; Appendix D - Non-degree Certificate Program; Appendix E - Curriculum for Associate Degree Program; Appendix F - Curriculum for NCC Program; Appendix G - Information on 1991 Teleconference Series; Appendix H - Information on 1992 Teleconference Series; Appendix I - WERC Interactive Television Courses; Appendix J - WERC Research Seminar Series; Appendix K - Sites for Hazardous/Radioactive Waste Management Series; Appendix L - Summary of Technology Development of the Second Year; Appendix M - List of Major Publications Resulting from WERC; Appendix N - Types of Equipment at WERC Laboratories.

  2. Flowing gas, non-nuclear experiments on the gas core reactor

    NASA Technical Reports Server (NTRS)

    Kunze, J. F.; Suckling, D. H.; Copper, C. G.

    1972-01-01

    Flow tests were conducted on models of the gas core (cavity) reactor. Variations in cavity wall and injection configurations were aimed at establishing flow patterns that maximize the nuclear criticality eigenvalue. Correlation with the nuclear effect was made using multigroup diffusion theory normalized by previous benchmark critical experiments. Air was used to simulate the hydrogen propellant in the flow tests, and smoked air, argon, or freon was used to simulate the central nuclear fuel gas. All tests were run in the down-firing direction so that gravitational effects simulated the acceleration of a rocket. Results show that acceptable flow patterns, with a high volume fraction for the simulated nuclear fuel gas and high flow-rate ratios of propellant to fuel, can be obtained. Using a point injector for the fuel, good flow patterns are obtained by directing the outer gas at high velocity along the cavity wall, using louvered or oblique-angle-honeycomb injection schemes.

  3. Recommendations for Training in Pediatric Psychology: Defining Core Competencies Across Training Levels

    PubMed Central

    Janicke, David M.; McQuaid, Elizabeth L.; Mullins, Larry L.; Robins, Paul M.; Wu, Yelena P.

    2014-01-01

    Objective As a field, pediatric psychology has focused considerable efforts on the education and training of students and practitioners. Alongside a broader movement toward competency attainment in professional psychology and within the health professions, the Society of Pediatric Psychology commissioned a Task Force to establish core competencies in pediatric psychology and address the need for contemporary training recommendations. Methods The Task Force adapted the framework proposed by the Competency Benchmarks Work Group on preparing psychologists for health service practice and defined competencies applicable across training levels ranging from initial practicum training to entry into the professional workforce in pediatric psychology. Results Competencies within 6 cluster areas, including science, professionalism, interpersonal, application, education, and systems, and 1 crosscutting cluster, crosscutting knowledge competencies in pediatric psychology, are presented in this report. Conclusions Recommendations for the use of, and the further refinement of, these suggested competencies are discussed. PMID:24719239

  4. Standards for vision science libraries: 2014 revision.

    PubMed

    Motte, Kristin; Caldwell, C Brooke; Lamson, Karen S; Ferimer, Suzanne; Nims, J Chris

    2014-10-01

    This Association of Vision Science Librarians revision of the "Standards for Vision Science Libraries" aspires to provide benchmarks to address the needs for the services and resources of modern vision science libraries (academic, medical or hospital, pharmaceutical, and so on), which share a core mission, are varied by type, and are located throughout the world. Through multiple meeting discussions, member surveys, and a collaborative revision process, the standards have been updated for the first time in over a decade. While the range of types of libraries supporting vision science services, education, and research is wide, all libraries, regardless of type, share core attributes, which the standards address. The current standards can and should be used to help develop new vision science libraries or to expand the growth of existing libraries, as well as to support vision science librarians in their work to better provide services and resources to their respective users.

  5. Standards for vision science libraries: 2014 revision

    PubMed Central

    Motte, Kristin; Caldwell, C. Brooke; Lamson, Karen S.; Ferimer, Suzanne; Nims, J. Chris

    2014-01-01

    Objective: This Association of Vision Science Librarians revision of the “Standards for Vision Science Libraries” aspires to provide benchmarks to address the needs for the services and resources of modern vision science libraries (academic, medical or hospital, pharmaceutical, and so on), which share a core mission, are varied by type, and are located throughout the world. Methods: Through multiple meeting discussions, member surveys, and a collaborative revision process, the standards have been updated for the first time in over a decade. Results: While the range of types of libraries supporting vision science services, education, and research is wide, all libraries, regardless of type, share core attributes, which the standards address. Conclusions: The current standards can and should be used to help develop new vision science libraries or to expand the growth of existing libraries, as well as to support vision science librarians in their work to better provide services and resources to their respective users. PMID:25349547

  6. Reversibility of Pt-Skin and Pt-Skeleton Nanostructures in Acidic Media.

    PubMed

    Durst, Julien; Lopez-Haro, Miguel; Dubau, Laetitia; Chatenet, Marian; Soldo-Olivier, Yvonne; Guétaz, Laure; Bayle-Guillemaud, Pascale; Maillard, Frédéric

    2014-02-06

    Following a well-defined series of acid and heat treatments on a benchmark Pt3Co/C sample, three different nanostructures of interest for the electrocatalysis of the oxygen reduction reaction were tailored. These nanostructures could be sorted into the "Pt-skin" structure, made of one pure Pt overlayer, and the "Pt-skeleton" structure, made of 2-3 Pt overlayers surrounding the Pt-Co alloy core. Using a unique combination of high-resolution aberration-corrected STEM-EELS, XRD, EXAFS, and XANES measurements, we provide atomically resolved pictures of these different nanostructures, including measurement of the Pt-shell thickness forming in acidic media and the resulting changes of the bulk and core chemical composition. It is shown that the Pt-skin is reverted toward the Pt-skeleton upon contact with acid electrolyte. This change in structure causes strong variations of the chemical composition.

  7. Network evolution model for supply chain with manufactures as the core.

    PubMed

    Fang, Haiyang; Jiang, Dali; Yang, Tinghong; Fang, Ling; Yang, Jian; Li, Wu; Zhao, Jing

    2018-01-01

    Building an evolution model of supply chain networks can help us understand their development laws. However, specific characteristics and attributes of real supply chains are often neglected in existing evolution models. This work proposes a new evolution model of a supply chain with manufactures as the core, based on external market demand and internal competition-cooperation. The evolution model assumes that the external market environment is relatively stable and considers several factors, including the specific topology of the supply chain, external market demand, ecological growth, and flow conservation. The simulation results suggest that the networks evolved by our model have structures similar to those of real supply chains. Meanwhile, the influences of external market demand and internal competition-cooperation on network evolution are analyzed. Additionally, 38 benchmark data sets are applied to validate the rationality of our evolution model; among them, nine manufacturing supply chains match the features of the networks constructed by our model.

  8. Network evolution model for supply chain with manufactures as the core

    PubMed Central

    Jiang, Dali; Fang, Ling; Yang, Jian; Li, Wu; Zhao, Jing

    2018-01-01

    Building an evolution model of supply chain networks can help us understand their development laws. However, specific characteristics and attributes of real supply chains are often neglected in existing evolution models. This work proposes a new evolution model of a supply chain with manufactures as the core, based on external market demand and internal competition-cooperation. The evolution model assumes that the external market environment is relatively stable and considers several factors, including the specific topology of the supply chain, external market demand, ecological growth, and flow conservation. The simulation results suggest that the networks evolved by our model have structures similar to those of real supply chains. Meanwhile, the influences of external market demand and internal competition-cooperation on network evolution are analyzed. Additionally, 38 benchmark data sets are applied to validate the rationality of our evolution model; among them, nine manufacturing supply chains match the features of the networks constructed by our model. PMID:29370201

  9. Computational Performance of a Parallelized Three-Dimensional High-Order Spectral Element Toolbox

    NASA Astrophysics Data System (ADS)

    Bosshard, Christoph; Bouffanais, Roland; Clémençon, Christian; Deville, Michel O.; Fiétier, Nicolas; Gruber, Ralf; Kehtari, Sohrab; Keller, Vincent; Latt, Jonas

    In this paper, a comprehensive performance review of an MPI-based high-order three-dimensional spectral element method C++ toolbox is presented. The focus is on the performance evaluation of several aspects, with a particular emphasis on parallel efficiency. The performance evaluation is analyzed with the help of a time prediction model based on a parameterization of the application and the hardware resources. A tailor-made CFD computation benchmark case is introduced and used to carry out this review, with particular attention to clusters with up to 8192 cores. Some problems in the parallel implementation have been detected and corrected. The theoretical complexities with respect to the number of elements, to the polynomial degree, and to communication needs are correctly reproduced. It is concluded that this type of code has a nearly perfect speed-up on machines with thousands of cores, and is ready to make the step to next-generation petaflop machines.
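
    Time prediction models of this kind typically split the wall time into a divisible compute term and a communication term. The toy model below uses a simple latency/bandwidth communication cost; the actual parameterization in the paper is more detailed, and all machine constants here are assumptions.

        def predicted_time(p, work_flops, flop_rate, msg_bytes, latency, bandwidth):
            """Toy model: perfectly divisible compute plus one
            latency/bandwidth message exchange per iteration."""
            t_comp = work_flops / (p * flop_rate)
            t_comm = latency + msg_bytes / bandwidth if p > 1 else 0.0
            return t_comp + t_comm

        def parallel_efficiency(p, **kw):
            return predicted_time(1, **kw) / (p * predicted_time(p, **kw))

        for p in (1, 64, 1024, 8192):
            eff = parallel_efficiency(p, work_flops=1e13, flop_rate=1e9,
                                      msg_bytes=1e6, latency=5e-6, bandwidth=1e9)
            print(f"p={p:5d}  efficiency={eff:.3f}")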

  10. Interactive high-resolution isosurface ray casting on multicore processors.

    PubMed

    Wang, Qin; JaJa, Joseph

    2008-01-01

    We present a new method for the interactive rendering of isosurfaces using ray casting on multi-core processors. This method consists of a combination of an object-order traversal that coarsely identifies possible candidate 3D data blocks for each small set of contiguous pixels, and an isosurface ray casting strategy tailored for the resulting limited-size lists of candidate 3D data blocks. While static screen partitioning is widely used in the literature, our scheme performs dynamic allocation of groups of ray casting tasks to ensure almost equal loads among the different threads running on the multiple cores while maintaining spatial locality. We also make careful use of the memory management environment commonly present in multi-core processors. We test our system on a two-processor Clovertown platform, each processor a Quad-Core 1.86-GHz Intel Xeon, for a number of widely different benchmarks. The detailed experimental results show that our system is efficient and scalable, and achieves high cache performance and excellent load balancing, resulting in an overall performance that is superior to any of the previous algorithms. In fact, we achieve interactive isosurface rendering on a 1024² screen for all the datasets tested, up to the maximum size of the main memory of our platform.
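
    The dynamic-allocation idea can be illustrated with a shared work queue from which idle threads pull the next group of ray-casting tasks, instead of fixing a static screen partition up front. The sketch below is a generic stand-in: the per-tile workload is a placeholder, not the paper's renderer.

        import queue
        import threading

        def render_tile(tile_id):
            # placeholder workload standing in for casting rays in one tile
            return sum(i * i for i in range(10_000))

        def worker(tasks, results):
            while True:
                try:
                    tile = tasks.get_nowait()  # dynamic: pull work when idle
                except queue.Empty:
                    return
                results[tile] = render_tile(tile)

        tasks = queue.Queue()
        for tile in range(256):    # e.g., 256 pixel-tile groups of a screen
            tasks.put(tile)
        results = {}
        threads = [threading.Thread(target=worker, args=(tasks, results))
                   for _ in range(8)]  # one thread per core, assumed 8 cores
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(f"rendered {len(results)} tiles")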

  11. Deploy Nalu/Kokkos algorithmic infrastructure with performance benchmarking.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Domino, Stefan P.; Ananthan, Shreyas; Knaus, Robert C.

    The former Nalu interior heterogeneous algorithm design, which was originally designed to manage matrix assembly operations over all elemental topology types, has been modified to operate over homogeneous collections of mesh entities. This newly templated kernel design allows for removal of workset variable resize operations that were formerly required at each loop over a Sierra ToolKit (STK) bucket (nominally, 512 entities in size). Extensive usage of the Standard Template Library (STL) std::vector has been removed in favor of intrinsic Kokkos memory views. In this milestone effort, the transition to Kokkos as the underlying infrastructure to support performance and portability on many-core architectures has been deployed for key matrix algorithmic kernels. A unit-test driven design effort has developed a homogeneous entity algorithm that employs a team-based thread parallelism construct. The STK Single Instruction Multiple Data (SIMD) infrastructure is used to interleave data for improved vectorization. The collective algorithm design, which allows for concurrent threading and SIMD management, has been deployed for the core low-Mach element-based algorithm. Several tests to ascertain SIMD performance on Intel KNL and Haswell architectures have been carried out. The performance test matrix includes evaluation of both low- and higher-order methods. The higher-order low-Mach methodology builds on polynomial promotion of the core low-order control volume finite element method (CVFEM). Performance testing of the Kokkos-view/SIMD design indicates low-order matrix assembly kernel speed-up ranging between two and four times depending on mesh loading and node count. Better speedups are observed for higher-order meshes (currently only P=2 has been tested), especially on KNL. The increased workload per element on higher-order meshes benefits from the wide SIMD width on KNL machines. Combining multiple threads with SIMD on KNL achieves a 4.6x speedup over the baseline, with assembly timings faster than those observed on the Haswell architecture. The computational workload of higher-order meshes, therefore, seems ideally suited for the many-core architecture and justifies further exploration of higher-order methods on NGP platforms. A Trilinos/Tpetra-based multi-threaded GMRES preconditioned by symmetric Gauss-Seidel (SGS) represents the core solver infrastructure for the low-Mach advection/diffusion implicit solves. The threaded solver stack has been tested on small problems on NREL's Peregrine system using the newly developed and deployed Kokkos-view/SIMD kernels. Efforts are underway to deploy the Tpetra-based solver stack on the NERSC Cori system to benchmark its performance at scale on KNL machines.

  12. SpaceCubeX: A Framework for Evaluating Hybrid Multi-Core CPU FPGA DSP Architectures

    NASA Technical Reports Server (NTRS)

    Schmidt, Andrew G.; Weisz, Gabriel; French, Matthew; Flatley, Thomas; Villalpando, Carlos Y.

    2017-01-01

    The SpaceCubeX project is motivated by the need for high performance, modular, and scalable on-board processing to help scientists answer critical 21st century questions about global climate change, air quality, ocean health, and ecosystem dynamics, while adding new capabilities such as low-latency data products for extreme event warnings. These goals translate into on-board processing throughput requirements that are on the order of 100-1,000 times those of previous Earth Science missions for standard processing, compression, storage, and downlink operations. To study possible future architectures to achieve these performance requirements, the SpaceCubeX project provides an evolvable testbed and framework that enables a focused design space exploration of candidate hybrid CPU/FPGA/DSP processing architectures. The framework includes ArchGen, an architecture generator tool populated with candidate architecture components, performance models, and IP cores, that allows an end user to specify the type, number, and connectivity of a hybrid architecture. The framework requires minimal extensions to integrate new processors, such as the anticipated High Performance Spaceflight Computer (HPSC), reducing the time to initiate benchmarking by months. To evaluate the framework, we leverage a wide suite of high performance embedded computing benchmarks and Earth science scenarios to ensure robust architecture characterization. We report on our project's Year 1 efforts and demonstrate the capabilities across four simulation testbed models: a baseline SpaceCube 2.0 system, a dual ARM A9 processor system, a hybrid quad ARM A53 and FPGA system, and a hybrid quad ARM A53 and DSP system.

  13. Disaster metrics: quantitative benchmarking of hospital surge capacity in trauma-related multiple casualty events.

    PubMed

    Bayram, Jamil D; Zuabi, Shawki; Subbarao, Italo

    2011-06-01

    Hospital surge capacity in multiple casualty events (MCE) is the core of hospital medical response, and an integral part of the total medical capacity of the community affected. To date, however, there has been no consensus regarding the definition or quantification of hospital surge capacity. The first objective of this study was to quantitatively benchmark the various components of hospital surge capacity pertaining to the care of critically and moderately injured patients in trauma-related MCE. The second objective was to illustrate the applications of those quantitative parameters in local, regional, national, and international disaster planning; in the distribution of patients to various hospitals by prehospital medical services; and in the decision-making process for ambulance diversion. A 2-step approach was adopted in the methodology of this study. First, an extensive literature search was performed, followed by mathematical modeling. Quantitative studies on hospital surge capacity for trauma injuries were used as the framework for our model. The North Atlantic Treaty Organization triage categories (T1-T4) were used in the modeling process for simplicity. Hospital Acute Care Surge Capacity (HACSC) was defined as the maximum number of critical (T1) and moderate (T2) casualties a hospital can adequately care for per hour, after recruiting all possible additional medical assets. HACSC was modeled as the number of emergency department beds (#EDB) divided by the emergency department time (EDT); HACSC = #EDB/EDT. In trauma-related MCE, the EDT was quantitatively benchmarked to be 2.5 hours. Because most critical and moderate casualties requiring admission arrive at hospitals within a 6-hour period (by definition), the hospital bed surge capacity must match the HACSC at 6 hours to ensure coordinated care, and it was mathematically benchmarked to be 18% of the staffed hospital bed capacity. Defining and quantitatively benchmarking the different components of hospital surge capacity is vital to hospital preparedness in MCE. Prospective studies of our mathematical model are needed to verify its applicability, generalizability, and validity.
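
    Because the abstract gives the two benchmarked constants explicitly (EDT = 2.5 hours; bed surge capacity = 18% of staffed beds), the arithmetic is easy to make concrete. The hospital sizes below are hypothetical inputs for illustration.

        def hacsc(ed_beds, ed_time_hours=2.5):
            """Hospital Acute Care Surge Capacity: T1+T2 casualties per hour,
            per the abstract's definition HACSC = #EDB / EDT."""
            return ed_beds / ed_time_hours

        ed_beds = 30         # hypothetical emergency department beds
        staffed_beds = 400   # hypothetical staffed hospital beds
        # 30 / 2.5 = 12 casualties/hour; over 6 hours that is 72 beds,
        # consistent with the 18% benchmark: 0.18 * 400 = 72.
        print(f"HACSC = {hacsc(ed_beds):.0f} casualties/hour")
        print(f"6-hour bed surge capacity = {0.18 * staffed_beds:.0f} beds")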

  14. Waste-Management Education and Research Consortium (WERC) annual progress report, 1992--1993. Appendices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1993-02-15

    This report contains the following appendices: Appendix A - Requirements for Undergraduate Level; Appendix B - Requirements for Graduate Level; Appendix C - Graduate Degree in Environmental Engineering at New Mexico State University; Appendix D - Non-degree Certificate Program; Appendix E - Curriculum for Associate Degree Program in Radioactive & Hazardous Waste Materials; Appendix F - Curriculum for NCC Program in Earth & Environmental Sciences; Appendix G - Brochure of 1992 Teleconference Series; Appendix H - Sites for Hazardous/Radioactive Waste Management Series; Appendix I - WERC Interactive Television Courses; Appendix J - WERC Research Seminar Series Brochures; Appendix K - Summary of Technology Development of the Third Year; Appendix L - List of Major Publications Resulting From WERC; Appendix M - Types of Equipment at WERC Laboratories; and Appendix N - WERC Newsletter Examples.

  15. Waste-Management Education and Research Consortium (WERC) annual progress report, 1992--1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eiceman, Gary A.; King, J. Phillip; Smith, Geoffrey B.

    1993-02-15

    This report contains the following appendices: Appendix A - Requirements for Undergraduate Level; Appendix B - Requirements for Graduate Level; Appendix C - Graduate Degree in Environmental Engineering at New Mexico State University; Appendix D - Non-degree Certificate Program; Appendix E - Curriculum for Associate Degree Program in Radioactive & Hazardous Waste Materials; Appendix F - Curriculum for NCC Program in Earth & Environmental Sciences; Appendix G - Brochure of 1992 Teleconference Series; Appendix H - Sites for Hazardous/Radioactive Waste Management Series; Appendix I - WERC Interactive Television Courses; Appendix J - WERC Research Seminar Series Brochures; Appendix K - Summary of Technology Development of the Third Year; Appendix L - List of Major Publications Resulting From WERC; Appendix M - Types of Equipment at WERC Laboratories; and Appendix N - WERC Newsletter Examples.

  16. Columbia Accident Investigation Board Report. Volume Two

    NASA Technical Reports Server (NTRS)

    Barry, J. R.; Jenkins, D. R.; White, D. J.; Goodman, P. A.; Reingold, L. A.

    2003-01-01

    Volume II of the Report contains appendices that were cited in Volume I. The Columbia Accident Investigation Board produced many of these appendices as working papers during the investigation into the February 1, 2003 destruction of the Space Shuttle Columbia. Other appendices were produced by other organizations (mainly NASA) in support of the Board investigation. In the case of documents that have been published by others, they are included here in the interest of establishing a complete record, but often at less than full page size. Contents include: CAIB Technical Documents Cited in the Report: Reader's Guide to Volume II; Appendix D.a Supplement to the Report; Appendix D.b Corrections to Volume I of the Report; Appendix D.1 STS-107 Training Investigation; Appendix D.2 Payload Operations Checklist 3; Appendix D.3 Fault Tree Closure Summary; Appendix D.4 Fault Tree Elements - Not Closed; Appendix D.5 Space Weather Conditions; Appendix D.6 Payload and Payload Integration; Appendix D.7 Working Scenario; Appendix D.8 Debris Transport Analysis; Appendix D.9 Data Review and Timeline Reconstruction Report; Appendix D.10 Debris Recovery; Appendix D.11 STS-107 Columbia Reconstruction Report; Appendix D.12 Impact Modeling; Appendix D.13 STS-107 In-Flight Options Assessment; Appendix D.14 Orbiter Major Modification (OMM) Review; Appendix D.15 Maintenance, Material, and Management Inputs; Appendix D.16 Public Safety Analysis; Appendix D.17 MER Manager's Tiger Team Checklist; Appendix D.18 Past Reports Review; Appendix D.19 Qualification and Interpretation of Sensor Data from STS-107; Appendix D.20 Bolt Catcher Debris Analysis.

  17. ADDITIONAL STRESS AND FRACTURE MECHANICS ANALYSES OF PRESSURIZED WATER REACTOR PRESSURE VESSEL NOZZLES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, Matthew; Yin, Shengjun; Stevens, Gary

    2012-01-01

    In past years, the authors have undertaken various studies of nozzles in both boiling water reactors (BWRs) and pressurized water reactors (PWRs) located in the reactor pressure vessel (RPV) adjacent to the core beltline region. Those studies described stress and fracture mechanics analyses performed to assess various RPV nozzle geometries, which were selected based on their proximity to the core beltline region, i.e., those nozzle configurations that are located close enough to the core region such that they may receive sufficient fluence prior to end-of-life (EOL) to require evaluation of embrittlement as part of the RPV analyses associated with pressure-temperature (P-T) limits. In this paper, additional stress and fracture analyses are summarized that were performed for additional PWR nozzles with the following objectives: To expand the population of PWR nozzle configurations evaluated, which was limited in the previous work to just two nozzles (one inlet and one outlet nozzle). To model and understand differences in stress results obtained for an internal pressure load case using a two-dimensional (2-D) axisymmetric finite element model (FEM) vs. a three-dimensional (3-D) FEM for these PWR nozzles. In particular, the ovalization (stress concentration) effect of two intersecting cylinders, which is typical of RPV nozzle configurations, was investigated. To investigate the applicability of previously recommended linear elastic fracture mechanics (LEFM) hand solutions for calculating the Mode I stress intensity factor for a postulated nozzle corner crack under pressure loading for these PWR nozzles. These analyses were performed to further expand earlier work completed to support potential revision and refinement of Title 10 of the U.S. Code of Federal Regulations (CFR), Part 50, Appendix G, Fracture Toughness Requirements, and are intended to supplement similar evaluations of nozzles presented at the 2008, 2009, and 2011 Pressure Vessels and Piping (PVP) Conferences. This work is also relevant to the ongoing efforts of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel (B&PV) Code, Section XI, Working Group on Operating Plant Criteria (WGOPC) to incorporate nozzle fracture mechanics solutions into a revision to ASME B&PV Code, Section XI, Nonmandatory Appendix G.

  18. Roofline model toolkit: A practical tool for architectural and program analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lo, Yu Jung; Williams, Samuel; Van Straalen, Brian

    We present preliminary results of the Roofline Toolkit for multicore, many-core, and accelerated architectures. This paper focuses on the processor architecture characterization engine, a collection of portable instrumented microbenchmarks implemented with the Message Passing Interface (MPI) and OpenMP to express thread-level parallelism. These benchmarks are specialized to quantify the behavior of different architectural features. Compared to previous work on performance characterization, these microbenchmarks focus on capturing the performance of each level of the memory hierarchy, along with thread-level parallelism, instruction-level parallelism, and explicit SIMD parallelism, measured in the context of the compilers and run-time environments. We also measure sustained PCIe throughput with four GPU memory management mechanisms. By combining results from the architecture characterization with the Roofline model based solely on architectural specifications, this work offers insights for performance prediction of current and future architectures and their software systems. To that end, we instrument three applications and plot their resultant performance on the corresponding Roofline model when run on a Blue Gene/Q architecture.
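
    The Roofline model itself reduces to one line of arithmetic: attainable performance is the minimum of the peak compute rate and arithmetic intensity times peak memory bandwidth. A sketch, with illustrative (not measured) machine numbers:

        def roofline(ai, peak_gflops, peak_gbs):
            """Attainable GFLOP/s at arithmetic intensity ai (FLOPs/byte):
            min(peak compute, ai * peak bandwidth)."""
            return min(peak_gflops, ai * peak_gbs)

        peak_gflops, peak_gbs = 200.0, 30.0   # assumed machine balance
        for ai in (0.25, 1.0, 200.0 / 30.0, 16.0):
            perf = roofline(ai, peak_gflops, peak_gbs)
            print(f"AI = {ai:6.2f} FLOP/byte -> {perf:6.1f} GFLOP/s")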

  19. Recovery of time evolution of Grad-Shafranov equilibria from single-spacecraft data: Benchmarking and application to a flux transfer event

    NASA Astrophysics Data System (ADS)

    Hasegawa, Hiroshi; Sonnerup, Bengt U. Ö.; Nakamura, Takuma K. M.

    2010-11-01

    First results are presented of a method, developed by Sonnerup and Hasegawa (2010), for analyzing time evolution of magnetohydrostatic Grad-Shafranov (GS) equilibria, using data recorded by an observing probe as it traverses a quasi-static, two-dimensional (2D), magnetic-field/plasma structure. The method recovers spatial initial values used in the classical GS reconstruction for an interval before and after the time of actual measurements, by advancing them backward and forward in time based on a set of equations for an incompressible plasma; the consequence is generation of multiple GS maps or a movie of the 2D field structure. The method is successfully benchmarked by use of a 2D magnetohydrodynamic simulation of time-dependent magnetic reconnection, and then is applied to a flux transfer event (FTE) seen by the Cluster spacecraft at the dayside high-latitude magnetopause. The application shows that the field lines constituting the FTE flux rope were contracting toward its center as a result of modest convective flow in the region around the core of the flux rope.

  20. Validation of the BUGJEFF311.BOLIB, BUGENDF70.BOLIB and BUGLE-B7 broad-group libraries on the PCA-Replica (H2O/Fe) neutron shielding benchmark experiment

    NASA Astrophysics Data System (ADS)

    Pescarini, Massimo; Orsi, Roberto; Frisoni, Manuela

    2016-03-01

    The PCA-Replica 12/13 (H2O/Fe) neutron shielding benchmark experiment was analysed using the TORT-3.2 3D SN code. PCA-Replica reproduces a PWR ex-core radial geometry with alternate layers of water and steel, including a pressure vessel simulator. Three broad-group coupled neutron/photon working cross section libraries in FIDO-ANISN format with the same energy group structure (47 n + 20 γ) and based on different nuclear data were alternatively used: the ENEA BUGJEFF311.BOLIB (JEFF-3.1.1) and BUGENDF70.BOLIB (ENDF/B-VII.0) libraries and the ORNL BUGLE-B7 (ENDF/B-VII.0) library. Dosimeter cross sections derived from the IAEA IRDF-2002 dosimetry file were employed. The calculated reaction rates for the Rh-103(n,n')Rh-103m, In-115(n,n')In-115m and S-32(n,p)P-32 threshold activation dosimeters and the calculated neutron spectra are compared with the corresponding experimental results.
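
    In a broad-group calculation of this kind, a dosimeter reaction rate is the group-wise folding of the dosimeter cross section with the calculated flux, RR = Σ_g σ_g φ_g over the 47 neutron groups. The sketch below shows the folding only; the group values are placeholders, not IRDF-2002 data or TORT output.

        import numpy as np

        def reaction_rate(sigma_g, phi_g):
            """Fold group cross sections (cm^2) with group fluxes
            (n cm^-2 s^-1) to get reactions per second per target atom."""
            return float(np.sum(np.asarray(sigma_g) * np.asarray(phi_g)))

        rng = np.random.default_rng(0)
        sigma = rng.random(47) * 1e-27   # placeholder group cross sections
        phi = rng.random(47) * 1e8       # placeholder group fluxes
        print(f"RR = {reaction_rate(sigma, phi):.3e} s^-1 per atom")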

  1. Coupled Monte Carlo neutronics and thermal hydraulics for power reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernnat, W.; Buck, M.; Mattes, M.

    The availability of high performance computing resources enables more and more the use of detailed Monte Carlo models even for full-core power reactors. The detailed structure of the core can be described by lattices, modeled by so-called repeated structures in Monte Carlo codes such as MCNP5 or MCNPX. For cores with mainly uniform material compositions and fuel and moderator temperatures, there is no problem in constructing core models. However, when the material composition and the temperatures vary strongly, a huge number of different material cells must be described, which complicates the input and in many cases exceeds code or memory limits. The second problem arises with the preparation of corresponding temperature-dependent cross sections and thermal scattering laws. Only if these problems can be solved is a realistic coupling of Monte Carlo neutronics with an appropriate thermal-hydraulics model possible. In this paper a method for the treatment of detailed material and temperature distributions in MCNP5 is described based on user-specified internal functions which assign distinct elements of the core cells to material specifications (e.g. water density) and temperatures from a thermal-hydraulics code. The core grid itself can be described with a uniform material specification. The temperature dependency of cross sections and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Applications will be shown for the stationary part of the Purdue PWR benchmark using ATHLET for thermal-hydraulics and for a generic Modular High Temperature reactor using THERMIX for thermal-hydraulics. (authors)
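
    The interpolation step can be sketched as follows: cross sections are pre-generated on a small temperature grid and values in between are interpolated. Interpolating linearly in sqrt(T) is a common choice for Doppler-broadened data; whether the authors use exactly this scheme is an assumption, and the tabulated values below are placeholders.

        import bisect
        import math

        def interp_xs(T, T_grid, xs_grid):
            """Interpolate a cross section linearly in sqrt(T) between
            tabulated temperatures (T_grid ascending, in kelvin)."""
            s = math.sqrt(T)
            s_grid = [math.sqrt(t) for t in T_grid]
            i = min(max(bisect.bisect_left(s_grid, s), 1), len(s_grid) - 1)
            w = (s - s_grid[i - 1]) / (s_grid[i] - s_grid[i - 1])
            return (1.0 - w) * xs_grid[i - 1] + w * xs_grid[i]

        # placeholder capture cross sections (barns) at 300/600/900/1200 K
        print(interp_xs(750.0, [300.0, 600.0, 900.0, 1200.0],
                        [10.0, 7.2, 5.9, 5.1]))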

  2. Global Futures: a multithreaded execution model for Global Arrays-based applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Krishnamoorthy, Sriram; Vishnu, Abhinav

    2012-05-31

    We present Global Futures (GF), an execution model extension to Global Arrays, which is based on a PGAS-compatible Active Message-based paradigm. We describe the design and implementation of Global Futures and illustrate its use in a computational chemistry application benchmark (Hartree-Fock matrix construction using the Self-Consistent Field method). Our results show how we used GF to increase the scalability of the Hartree-Fock matrix build up to 6,144 cores of an InfiniBand cluster. We also show how GF's multithreaded execution achieves performance comparable to the traditional process-based SPMD model.
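
    As a language-level analogy only: the "futures" idea expresses work as asynchronous tasks whose results are claimed later, rather than a fixed process-based SPMD sequence. The Python sketch below illustrates that concept; it is not the Global Futures API, and the task body is a stand-in for a matrix-block computation.

        from concurrent.futures import ThreadPoolExecutor, as_completed

        def build_block(i):
            # stand-in for constructing one block of the Fock matrix
            return i, sum(k * k for k in range(1000))

        with ThreadPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(build_block, i) for i in range(16)]
            blocks = dict(f.result() for f in as_completed(futures))
        print(f"assembled {len(blocks)} blocks")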

  3. Development and Applications of Orthogonality Constrained Density Functional Theory for the Accurate Simulation of X-Ray Absorption Spectroscopy

    NASA Astrophysics Data System (ADS)

    Derricotte, Wallace D.

    The aim of this dissertation is to address the theoretical challenges of calculating core-excited states within the framework of orthogonality constrained density functional theory (OCDFT). OCDFT is a well-established variational, time-independent formulation of DFT for the computation of electronic excited states. In this work, the theory is first extended to compute core-excited states and generalized to calculate multiple excited state solutions. An initial benchmark is performed on a set of 40 unique core excitations, highlighting that OCDFT excitation energies have a mean absolute error of 1.0 eV. Next, a novel implementation of the spin-free exact-two-component (X2C) one-electron treatment of scalar relativistic effects is presented and combined with OCDFT in an effort to calculate core-excited states of transition metal complexes. The X2C-OCDFT spectra of three organotitanium complexes (TiCl4, TiCpCl3, and TiCp2Cl2) are shown to be in good agreement with experimental results and show a maximum absolute error of 5-6 eV. Next, the issue of assigning core-excited states is addressed by introducing an automated approach to analyzing the excited state MO by quantifying its local contributions using a unique orbital basis known as localized intrinsic valence virtual orbitals (LIVVOs). The utility of this approach is highlighted by studying sulfur core excitations in ethanethiol and benzenethiol, as well as the hydrogen bonding in the water dimer. Finally, an approach to selectively target specific core-excited states in OCDFT based on atomic orbital subspace projection is presented in an effort to target core-excited states of chemisorbed organic molecules. The core excitation spectrum of pyrazine chemisorbed on Si(100) is calculated using OCDFT and further characterized using the LIVVO approach.

  4. Kinematics of Parsec-scale Jets of Gamma-Ray Blazars at 43 GHz within the VLBA-BU-BLAZAR Program

    NASA Astrophysics Data System (ADS)

    Jorstad, Svetlana G.; Marscher, Alan P.; Morozova, Daria A.; Troitsky, Ivan S.; Agudo, Iván; Casadio, Carolina; Foord, Adi; Gómez, José L.; MacDonald, Nicholas R.; Molina, Sol N.; Lähteenmäki, Anne; Tammi, Joni; Tornikoski, Merja

    2017-09-01

    We analyze the parsec-scale jet kinematics from 2007 June to 2013 January of a sample of γ-ray bright blazars monitored roughly monthly with the Very Long Baseline Array at 43 GHz. In a total of 1929 images, we measure apparent speeds of 252 emission knots in 21 quasars, 12 BL Lacertae objects (BL Lacs), and 3 radio galaxies, ranging from 0.02c to 78c; 21% of the knots are quasi-stationary. Approximately one-third of the moving knots execute non-ballistic motions, with the quasars exhibiting acceleration along the jet within 5 pc (projected) of the core, and knots in BL Lacs tending to decelerate near the core. Using the apparent speeds of the components and the timescales of variability from their light curves, we derive the physical parameters of 120 superluminal knots, including variability Doppler factors, Lorentz factors, and viewing angles. We estimate the half-opening angle of each jet based on the projected opening angle and scatter of intrinsic viewing angles of knots. We determine characteristic values of the physical parameters for each jet and active galactic nucleus class based on the range of values obtained for individual features. We calculate the intrinsic brightness temperatures of the cores, T_{b,int}^{core}, at all epochs, finding that the radio galaxies usually maintain equipartition conditions in the cores, while ~30% of T_{b,int}^{core} measurements in the quasars and BL Lacs deviate from equipartition values by a factor >10. This probably occurs during transient events connected with active states. In the Appendix, we briefly describe the behavior of each blazar during the period analyzed.
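
    The derivation of Lorentz factors and viewing angles from the measured quantities uses the standard relations between apparent speed β_app (in units of c) and variability Doppler factor δ: Γ = (β_app² + δ² + 1) / (2δ) and tan θ = 2β_app / (β_app² + δ² − 1). A sketch with made-up sample values:

        import math

        def jet_parameters(beta_app, delta):
            """Lorentz factor and viewing angle (deg) from apparent speed
            and variability Doppler factor (standard relations)."""
            gamma = (beta_app**2 + delta**2 + 1.0) / (2.0 * delta)
            theta = math.degrees(math.atan2(2.0 * beta_app,
                                            beta_app**2 + delta**2 - 1.0))
            return gamma, theta

        gamma, theta = jet_parameters(beta_app=15.0, delta=20.0)  # made up
        print(f"Gamma = {gamma:.1f}, theta = {theta:.2f} deg")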

  5. Compression After Impact on Honeycomb Core Sandwich Panels with Thin Facesheets, Part 2: Analysis

    NASA Technical Reports Server (NTRS)

    Mcquigg, Thomas D.; Kapania, Rakesh K.; Scotti, Stephen J.; Walker, Sandra P.

    2012-01-01

    A two-part research study has been completed on the topic of compression after impact (CAI) of thin facesheet honeycomb core sandwich panels. The research has focused on both experiments and analysis in an effort to establish and validate a new understanding of the damage tolerance of these materials. Part 2, the subject of the current paper, is focused on the analysis, which corresponds to the CAI testing described in Part 1. Of interest are sandwich panels with aerospace applications, which consist of very thin, woven S2-fiberglass (with MTM45-1 epoxy) facesheets adhered to a Nomex honeycomb core. Two sets of materials, which were identical with the exception of the density of the honeycomb core, were tested in Part 1. The results highlighted the need for analysis methods that take into account multiple failure modes. A finite element model (FEM) is developed here, in Part 2. A commercial implementation of the Multicontinuum Failure Theory (MCT) for progressive failure analysis (PFA) in composite laminates, Helius:MCT, is included in this model. The inclusion of PFA in the present model provided a new, unique ability to account for multiple failure modes. In addition, significant impact damage detail is included in the model. A sensitivity study, used to assess the effect of each damage parameter on overall analysis results, is included in an appendix. Analysis results are compared to the experimental results for each of the 32 CAI sandwich panel specimens tested to failure. The failure of each specimen is predicted using the high-fidelity, physics-based analysis model developed here, and the results highlight key improvements in the understanding of honeycomb core sandwich panel CAI failure. Finally, a parametric study highlights the strength benefits versus the mass penalty for various core densities.

  6. Glacial to interglacial surface nutrient variations of Bering Deep Basins recorded by δ13C and δ15N of sedimentary organic matter

    NASA Astrophysics Data System (ADS)

    Nakatsuka, Takeshi; Watanabe, Kazuki; Handa, Nobuhiko; Matsumoto, Eiji; Wada, Eitaro

    1995-12-01

    Stable carbon and nitrogen isotopic ratios (δ13C and δ15N) of organic matter were measured in three sediment cores from deep basins of the Bering Sea to investigate past changes in surface nutrient conditions. For surface water reconstructions, hemipelagic layers in the cores were distinguished from turbidite layers (on the basis of their sedimentary structures and 14C ages) and analyzed for isotopic studies. Although δ13C profiles may have been affected by diagenesis, both δ15N and δ13C values showed common positive anomalies during the last deglaciation. We explain these anomalies as reflecting suppressed vertical mixing and low nutrient concentrations in surface waters caused by injection of meltwater from alpine glaciers around the Bering Sea. Appendix tables are available with the entire article on microfiche. Order from American Geophysical Union, 2000 Florida Avenue, N.W., Washington, DC 20009. Document P95-003; $2.50. Payment must accompany order.

  7. Benchmark Credentialing Results for NRG-BR001: The First National Cancer Institute-Sponsored Trial of Stereotactic Body Radiation Therapy for Multiple Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Hallaq, Hania A., E-mail: halhallaq@radonc.uchicago.edu; Chmura, Steven J.; Salama, Joseph K.

    Purpose: The NRG-BR001 trial is the first National Cancer Institute-sponsored trial to treat multiple (range 2-4) extracranial metastases with stereotactic body radiation therapy. Benchmark credentialing is required to ensure adherence to this complex protocol, in particular, for metastases in close proximity. The present report summarizes the dosimetric results and approval rates. Methods and Materials: The benchmark used anonymized data from a patient with bilateral adrenal metastases, separated by <5 cm of normal tissue. Because the planning target volume (PTV) overlaps with organs at risk (OARs), institutions must use the planning priority guidelines to balance PTV coverage (45 Gy in 3 fractions) against OAR sparing. Submitted plans were processed by the Imaging and Radiation Oncology Core and assessed by the protocol co-chairs by comparing the doses to targets, OARs, and conformity metrics using nonparametric tests. Results: Of 63 benchmarks submitted through October 2015, 94% were approved, with 51% approved at the first attempt. Most used volumetric arc therapy (VMAT) (78%), a single plan for both PTVs (90%), and prioritized the PTV over the stomach (75%). The median dose to 95% of the volume was 44.8 ± 1.0 Gy and 44.9 ± 1.0 Gy for the right and left PTV, respectively. The median dose to 0.03 cm³ was 14.2 ± 2.2 Gy to the spinal cord and 46.5 ± 3.1 Gy to the stomach. Plans that spared the stomach significantly reduced the dose to the left PTV and stomach. Conformity metrics were significantly better for single plans that simultaneously treated both PTVs with VMAT, intensity modulated radiation therapy, or 3-dimensional conformal radiation therapy compared with separate plans. No significant differences existed in the dose at 2 cm from the PTVs. Conclusions: Although most plans used VMAT, the range of conformity and dose falloff was large. The decision to prioritize either OARs or PTV coverage varied considerably, suggesting that the toxicity outcomes in the trial could be affected. Several benchmarks met the dose-volume histogram metrics but produced unacceptable plans owing to low conformity. Dissemination of a frequently-asked-questions document improved the approval rate at the first attempt. Benchmark credentialing was found to be a valuable tool for educating institutions about the protocol requirements.
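
    Two of the plan metrics referenced here are easy to make concrete: the "dose to 95% of the volume" (D95, the dose received by at least 95% of PTV voxels) and a simple conformity index (prescription isodose volume over PTV volume). The voxel doses below are synthetic stand-ins for a planning system's dose grid.

        import numpy as np

        def d95(ptv_doses):
            """Dose (Gy) received by at least 95% of the PTV voxels,
            i.e. the 5th percentile of the PTV dose distribution."""
            return float(np.percentile(ptv_doses, 5))

        def conformity_index(all_doses, n_ptv_voxels, rx_dose):
            """Prescription isodose volume / PTV volume (voxel counts)."""
            return float(np.count_nonzero(all_doses >= rx_dose)) / n_ptv_voxels

        rng = np.random.default_rng(7)
        ptv = rng.normal(46.0, 1.0, 5_000)       # synthetic PTV voxel doses
        body = np.concatenate([ptv, rng.normal(15.0, 10.0, 200_000)])
        print(f"D95 = {d95(ptv):.1f} Gy")
        print(f"CI  = {conformity_index(body, 5_000, 45.0):.2f}")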

  8. Benchmark Credentialing Results for NRG-BR001: The First National Cancer Institute-Sponsored Trial of Stereotactic Body Radiation Therapy for Multiple Metastases.

    PubMed

    Al-Hallaq, Hania A; Chmura, Steven J; Salama, Joseph K; Lowenstein, Jessica R; McNulty, Susan; Galvin, James M; Followill, David S; Robinson, Clifford G; Pisansky, Thomas M; Winter, Kathryn A; White, Julia R; Xiao, Ying; Matuszak, Martha M

    2017-01-01

    The NRG-BR001 trial is the first National Cancer Institute-sponsored trial to treat multiple (range 2-4) extracranial metastases with stereotactic body radiation therapy. Benchmark credentialing is required to ensure adherence to this complex protocol, in particular, for metastases in close proximity. The present report summarizes the dosimetric results and approval rates. The benchmark used anonymized data from a patient with bilateral adrenal metastases, separated by <5 cm of normal tissue. Because the planning target volume (PTV) overlaps with organs at risk (OARs), institutions must use the planning priority guidelines to balance PTV coverage (45 Gy in 3 fractions) against OAR sparing. Submitted plans were processed by the Imaging and Radiation Oncology Core and assessed by the protocol co-chairs by comparing the doses to targets, OARs, and conformity metrics using nonparametric tests. Of 63 benchmarks submitted through October 2015, 94% were approved, with 51% approved at the first attempt. Most used volumetric arc therapy (VMAT) (78%), a single plan for both PTVs (90%), and prioritized the PTV over the stomach (75%). The median dose to 95% of the volume was 44.8 ± 1.0 Gy and 44.9 ± 1.0 Gy for the right and left PTV, respectively. The median dose to 0.03 cm³ was 14.2 ± 2.2 Gy to the spinal cord and 46.5 ± 3.1 Gy to the stomach. Plans that spared the stomach significantly reduced the dose to the left PTV and stomach. Conformity metrics were significantly better for single plans that simultaneously treated both PTVs with VMAT, intensity modulated radiation therapy, or 3-dimensional conformal radiation therapy compared with separate plans. No significant differences existed in the dose at 2 cm from the PTVs. Although most plans used VMAT, the range of conformity and dose falloff was large. The decision to prioritize either OARs or PTV coverage varied considerably, suggesting that the toxicity outcomes in the trial could be affected. Several benchmarks met the dose-volume histogram metrics but produced unacceptable plans owing to low conformity. Dissemination of a frequently-asked-questions document improved the approval rate at the first attempt. Benchmark credentialing was found to be a valuable tool for educating institutions about the protocol requirements. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. A core curriculum for clinical fellowship training in pathology informatics

    PubMed Central

    McClintock, David S.; Levy, Bruce P.; Lane, William J.; Lee, Roy E.; Baron, Jason M.; Klepeis, Veronica E.; Onozato, Maristela L.; Kim, JiYeon; Dighe, Anand S.; Beckwith, Bruce A.; Kuo, Frank; Black-Schaffer, Stephen; Gilbertson, John R.

    2012-01-01

    Background: In 2007, our healthcare system established a clinical fellowship program in Pathology Informatics. In 2010 a core didactic course was implemented to supplement the fellowship research and operational rotations. In 2011, the course was enhanced by a formal, structured core curriculum and reading list. We present and discuss our rationale and development process for the Core Curriculum and the role it plays in our Pathology Informatics Fellowship Training Program. Materials and Methods: The Core Curriculum for Pathology Informatics was developed, and is maintained, through the combined efforts of our Pathology Informatics Fellows and Faculty. The curriculum was created with a three-tiered structure, consisting of divisions, topics, and subtopics. Primary (required) and suggested readings were selected for each subtopic in the curriculum and incorporated into a curated reading list, which is reviewed and maintained on a regular basis. Results: Our Core Curriculum is composed of four major divisions, 22 topics, and 92 subtopics that cover the wide breadth of Pathology Informatics. The four major divisions include: (1) Information Fundamentals, (2) Information Systems, (3) Workflow and Process, and (4) Governance and Management. A detailed, comprehensive reading list for the curriculum is presented in the Appendix to the manuscript and contains 570 total readings (current as of March 2012). Discussion: The adoption of a formal, core curriculum in a Pathology Informatics fellowship has significant impacts on both fellowship training and the general field of Pathology Informatics itself. For a fellowship, a core curriculum defines a basic, common scope of knowledge that the fellowship expects all of its graduates will know, while at the same time enhancing and broadening the traditional fellowship experience of research and operational rotations. For the field of Pathology Informatics itself, a core curriculum defines to the outside world, including departments, companies, and health systems considering hiring a pathology informatician, the core knowledge set expected of a person trained in the field and, more fundamentally, it helps to define the scope of the field within Pathology and healthcare in general. PMID:23024890

  10. Molecular diffusion of stable water isotopes in polar firn as a proxy for past temperatures

    NASA Astrophysics Data System (ADS)

    Holme, Christian; Gkinis, Vasileios; Vinther, Bo M.

    2018-03-01

    Polar precipitation archived in ice caps contains information on past temperature conditions. Such information can be retrieved by measuring the water isotopic signals of δ18O and δD in ice cores. These signals have been attenuated during densification due to molecular diffusion in the firn column, where the magnitude of the diffusion is isotopologue-specific and temperature-dependent. By utilizing the differential diffusion signal, dual isotope measurements of δ18O and δD enable multiple temperature reconstruction techniques. This study assesses how well six different methods can be used to reconstruct past surface temperatures from the diffusion-based temperature proxies. Two of the methods are based on the single diffusion lengths of δ18O and δD, three of the methods employ the differential diffusion signal, while the last uses the ratio between the single diffusion lengths. All techniques are tested on synthetic data in order to evaluate their accuracy and precision. We perform a benchmark test on thirteen high-resolution Holocene data sets from Greenland and Antarctica, which represent a broad range of mean annual surface temperatures and accumulation rates. Based on the benchmark test, we comment on the accuracy and precision of the methods. Both the benchmark test and the synthetic data test demonstrate that the most precise reconstructions are obtained when using the single isotope diffusion lengths, with precisions of approximately 1.0 °C. In the benchmark test, the single isotope diffusion lengths are also found to reconstruct consistent temperatures with a root-mean-square deviation of 0.7 °C. The techniques employing the differential diffusion signals are more uncertain, where the most precise method has a precision of 1.9 °C. The diffusion length ratio method is the least precise, with a precision of 13.7 °C. The absolute temperature estimates from this method are also shown to be highly sensitive to the choice of fractionation factor parameterization.
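
    Whatever proxy is used, the final step is inverting a forward model σ(T) that predicts diffusion length from temperature. The sketch below shows that inversion generically by bisection on a caller-supplied monotonic forward model; the toy exponential model is purely an assumption for demonstration, not the firn physics used in the paper.

        import math

        def invert_temperature(sigma_obs, forward, t_lo=-60.0, t_hi=0.0,
                               tol=1e-4):
            """Find T (deg C) such that forward(T) == sigma_obs, assuming
            forward is monotonically increasing on [t_lo, t_hi]."""
            while t_hi - t_lo > tol:
                t_mid = 0.5 * (t_lo + t_hi)
                if forward(t_mid) < sigma_obs:
                    t_lo = t_mid
                else:
                    t_hi = t_mid
            return 0.5 * (t_lo + t_hi)

        toy_forward = lambda T: 8.0 * math.exp(0.03 * T)  # cm, made up
        print(f"T = {invert_temperature(4.0, toy_forward):.2f} deg C")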

  11. Polymerizable Molecular Silsesquioxane Cage Armored Hybrid Microcapsules with In Situ Shell Functionalization.

    PubMed

    Xing, Yuxiu; Peng, Jun; Xu, Kai; Lin, Weihong; Gao, Shuxi; Ren, Yuanyuan; Gui, Xuefeng; Liang, Shengyuan; Chen, Mingcai

    2016-02-01

    We prepared core-shell polymer-silsesquioxane hybrid microcapsules from cage-like methacryloxypropyl silsesquioxanes (CMSQs) and styrene (St). The presence of CMSQ can moderately reduce the interfacial tension between St and water and help to emulsify the monomer prior to polymerization. Dynamic light scattering (DLS) and TEM analysis demonstrated that uniform core-shell latex particles were achieved. The polymer latex particles were subsequently transformed into well-defined hollow nanospheres by removing the polystyrene (PS) core with 1:1 ethanol/cyclohexane. High-resolution TEM and nitrogen adsorption-desorption analysis showed that the final nanospheres possessed hollow cavities and had porous shells; the pore size was approximately 2-3 nm. The nanospheres exhibited large surface areas (up to 486 m² g⁻¹) and preferential adsorption, and they demonstrated the highest reported methylene blue adsorption capacity (95.1 mg g⁻¹). Moreover, the uniform distribution of the methacryloyl moiety on the hollow nanospheres endowed them with further potential functionality. These results could provide a new benchmark for preparing hollow microspheres by a facile one-step template-free method for various applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, P. T.; Shadid, J. N.; Hu, J. J.

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.
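
    The role the preconditioner plays in a Krylov iteration can be seen in a much simpler setting. The sketch below is Jacobi-preconditioned conjugate gradients on a small symmetric positive-definite system; it shows where the preconditioner application enters each iteration, and is a stand-in illustration only, not algebraic multigrid or the Trilinos solver stack.

        import numpy as np

        def pcg(A, b, M_inv_diag, tol=1e-10, max_iters=500):
            """Conjugate gradients with a diagonal (Jacobi) preconditioner."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv_diag * r          # preconditioner application
            p = z.copy()
            rz = r @ z
            for _ in range(max_iters):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv_diag * r      # preconditioner application
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        n = 100   # 1D Poisson stencil as a toy SPD system
        A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
             + np.diag(-np.ones(n - 1), -1))
        x = pcg(A, np.ones(n), 1.0 / np.diag(A))
        print(np.linalg.norm(A @ x - np.ones(n)))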

  13. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE PAGES

    Lin, P. T.; Shadid, J. N.; Hu, J. J.; ...

    2017-11-06

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.

  14. Some conservation issues for the dynamical cores of NWP and climate models

    NASA Astrophysics Data System (ADS)

    Thuburn, J.

    2008-03-01

    The rationale for designing atmospheric numerical model dynamical cores with certain conservation properties is reviewed. The conceptual difficulties associated with the multiscale nature of realistic atmospheric flow, and its lack of time-reversibility, are highlighted. A distinction is made between robust invariants, which are conserved or nearly conserved in the adiabatic and frictionless limit, and non-robust invariants, which are not conserved in the limit even though they are conserved by exactly adiabatic frictionless flow. For non-robust invariants, a further distinction is made between processes that directly transfer some quantity from large to small scales, and processes involving a cascade through a continuous range of scales; such cascades may either be explicitly parameterized, or handled implicitly by the dynamical core numerics, accepting the implied non-conservation. An attempt is made to estimate the relative importance of different conservation laws. It is argued that satisfactory model performance requires spurious sources of a conservable quantity to be much smaller than any true physical sources; for several conservable quantities the magnitudes of the physical sources are estimated in order to provide benchmarks against which any spurious sources may be measured.

  15. Reengineering of waste management at the Oak Ridge National Laboratory. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myrick, T.E.

    1997-08-01

    A reengineering evaluation of the waste management program at the Oak Ridge National Laboratory (ORNL) was conducted during the months of February through July 1997. The goal of the reengineering was to identify ways in which the waste management process could be streamlined and improved to reduce costs while maintaining full compliance and customer satisfaction. A Core Team conducted preliminary evaluations and determined that eight particular aspects of the ORNL waste management program warranted focused investigations during the reengineering. The eight areas included Pollution Prevention, Waste Characterization, Waste Certification/Verification, Hazardous/Mixed Waste Stream, Generator/WM Teaming, Reporting/Records, Disposal End Points, and On-Site Treatment/Storage. The Core Team commissioned and assembled Process Teams to conduct in-depth evaluations of each of these eight areas. The Core Team then evaluated the Process Team results and consolidated the 80 process-specific recommendations into 15 overall recommendations. Benchmarking of a commercial nuclear facility, a commercial research facility, and a DOE research facility was conducted to both validate the efficacy of these findings and seek additional ideas for improvement. The outcome of this evaluation is represented by the 15 final recommendations that are described in this report.

  16. Initial Coupling of the RELAP-7 and PRONGHORN Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Ortensi; D. Andrs; A.A. Bingham

    2012-10-01

    Modern nuclear reactor safety codes require the ability to solve detailed coupled neutronic-thermal fluids problems. For larger cores, this implies fully coupled higher-dimensionality spatial dynamics with appropriate feedback models that can provide enough resolution to accurately compute core heat generation and removal during steady and unsteady conditions. The reactor analysis code PRONGHORN is being coupled to RELAP-7 as a first step to extend RELAP's current capabilities. This report details the mathematical models, the type of coupling, and the testing results from the integrated system. RELAP-7 is a MOOSE-based application that solves the continuity, momentum, and energy equations in 1-D for a compressible fluid. The pipe and joint capabilities enable it to model parts of the power conversion unit. The PRONGHORN application, also developed on the MOOSE infrastructure, solves the coupled equations that define the neutron diffusion, fluid flow, and heat transfer in a full core model. The two systems are loosely coupled to simplify the transition towards a more complex infrastructure. The integration is tested on a simplified version of the OECD/NEA MHTGR-350 Coupled Neutronics-Thermal Fluids benchmark model.
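
    Loose coupling of this kind is typically a Picard-style fixed-point iteration: alternate a neutronics solve and a thermal-fluids solve, exchanging power and temperature fields until they stop changing. The one-line "solvers" below are placeholders with made-up feedback coefficients, not PRONGHORN or RELAP-7.

        def coupled_solve(T0=600.0, tol=1e-6, max_iters=50):
            """Alternate placeholder neutronics and thermal-fluids solves
            until the exchanged temperature field converges."""
            T = T0
            for it in range(1, max_iters + 1):
                power = 1.0e3 / (1.0 + 1.0e-3 * (T - 600.0))  # stand-in neutronics
                T_new = 600.0 + 0.05 * power                  # stand-in thermal fluids
                if abs(T_new - T) < tol:
                    return T_new, power, it
                T = T_new
            return T, power, max_iters

        T, power, iters = coupled_solve()
        print(f"converged: T = {T:.3f} K after {iters} Picard iterations")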

  17. Core-Noise

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.

    2010-01-01

    This presentation is a technical progress report and near-term outlook for NASA-internal and NASA-sponsored external work on core (combustor and turbine) noise funded by the Fundamental Aeronautics Program Subsonic Fixed Wing (SFW) Project. Sections of the presentation cover: the SFW system level noise metrics for the 2015, 2020, and 2025 timeframes; the emerging importance of core noise and its relevance to the SFW Reduced-Noise-Aircraft Technical Challenge; the current research activities in the core-noise area, with some additional details given about the development of a high-fidelity combustion-noise prediction capability; the need for a core-noise diagnostic capability to generate benchmark data for validation of both high-fidelity work and improved models, as well as testing of future noise-reduction technologies; relevant existing core-noise tests using real engines and auxiliary power units; and examples of possible scenarios for a future diagnostic facility. The NASA Fundamental Aeronautics Program has the principal objective of overcoming today's national challenges in air transportation. The SFW Reduced-Noise-Aircraft Technical Challenge aims to enable concepts and technologies to dramatically reduce the perceived aircraft noise outside of airport boundaries. This reduction of aircraft noise is critical for enabling the anticipated large increase in future air traffic. Noise generated in the jet engine core, by sources such as the compressor, combustor, and turbine, can be a significant contribution to the overall noise signature at low-power conditions, typical of approach flight. At high engine power during takeoff, jet and fan noise have traditionally dominated over core noise. However, current design trends and expected technological advances in engine-cycle design as well as noise-reduction methods are likely to reduce non-core noise even at engine-power points higher than approach. In addition, future low-emission combustor designs could increase the combustion-noise component. The trend towards high-power-density cores also means that the noise generated in the low-pressure turbine will likely increase. Consequently, the combined result from these emerging changes will be to elevate the overall importance of turbomachinery core noise, which will need to be addressed in order to meet future noise goals.

  18. Selection criteria for using nighttime construction and maintenance operations : appendices.

    DOT National Transportation Integrated Search

    2003-05-01

    Appendix A: literature review and bibliography of related research; : Appendix B: survey instrument; : Appendix C: survey results; : Appendix D: Oregon crash analysis; : Appendix E: userguide to estimate road user costs; : Appendix F: the study of wo...

  19. Sedimentary and geochemical signature of the 2016 Kaikōura Tsunami at Little Pigeon Bay: A depositional benchmark for the Banks Peninsula region, New Zealand

    NASA Astrophysics Data System (ADS)

    Williams, Shaun; Zhang, Tianran; Chagué, Catherine; Williams, James; Goff, James; Lane, Emily M.; Bind, Jochen; Qasim, Ilyas; Thomas, Kristie-Lee; Mueller, Christof; Hampton, Sam; Borella, Josh

    2018-07-01

    The 14 November 2016 Kaikōura Tsunami inundated Little Pigeon Bay in Banks Peninsula, New Zealand, and left a distinct sedimentary deposit, on the ground and within the cottage near the shore. Sedimentary (grain size) and geochemical (electrical conductivity and X-Ray Fluorescence) analyses on samples collected over successive field campaigns are used to characterize the deposits. Sediment distribution observed in the cottage in combination with flow direction indicators suggests that sediment and debris laid down within the building were predominantly the result of a single wave that had been channeled up the stream bed rather than from offshore. Salinity data indicated that the maximum tsunami-wetted and/or seawater-sprayed area extended 12.5 m farther inland than the maximum inundation distance inferred from the debris line observed a few days after the event. In addition, the salinity signature was short-lived. An overall inland waning of tsunami energy was indicated by the mean grain size and portable X-Ray Fluorescence elemental results. ITRAX data collected from three cores along an inland transect indicated a distinct elevated elemental signature at the surfaces of the cores, with an associated increase in magnetic susceptibility. Comparable signatures were also identified within subsurface stratigraphic sequences, and likely represent older tsunamis known to have inundated this bay as well as adjacent bays in Banks Peninsula. The sedimentary and geochemical signatures of the 2016 Kaikōura Tsunami at Little Pigeon Bay provide a modern benchmark that can be used to identify older tsunami deposits in the Banks Peninsula region.

  20. Computation of the free energy due to electron density fluctuation of a solute in solution: A QM/MM method with perturbation approach combined with a theory of solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuoka, Daiki; Takahashi, Hideaki, E-mail: hideaki@m.tohoku.ac.jp; Morita, Akihiro

    2014-04-07

    We developed a perturbation approach to compute the solvation free energy Δμ within the framework of the QM (quantum mechanical)/MM (molecular mechanical) method combined with a theory of energy representation (QM/MM-ER). The energy shift η of the whole system due to the electronic polarization of the solute is evaluated using second-order perturbation theory (PT2), where the electric field formed by the surrounding solvent molecules is treated as the perturbation to the electronic Hamiltonian of the isolated solute. The point of our approach is that the energy shift η, thus obtained, is adopted as a novel energy coordinate of the distribution functions which serve as fundamental variables in the free energy functional developed in our previous work. The most time-consuming part of the QM/MM-ER simulation can thus be avoided without serious loss of accuracy. For our benchmark set of molecules, it is demonstrated that the PT2 approach coupled with QM/MM-ER gives hydration free energies in excellent agreement with those given by the conventional method utilizing the Kohn-Sham SCF procedure, except for a few molecules in the benchmark set. A variant of the approach is also proposed to deal with the difficulties associated with such problematic systems. The present approach is also advantageous for parallel implementations. We examined the parallel efficiency of our PT2 code on multi-core processors and found that the speedup increases almost linearly with respect to the number of cores. Thus, it was demonstrated that QM/MM-ER coupled with PT2 deserves practical applications to systems of interest.
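
    The generic form of the second-order correction evaluated here is E(2) = Σ_{n≠0} |⟨0|V|n⟩|² / (E_0 − E_n), with the solvent electric field supplying the perturbation V. The sketch below computes that sum from a synthetic spectrum and couplings; all numbers are placeholders, not the paper's data.

        import numpy as np

        def pt2_shift(E0, E_excited, V_0n):
            """Second-order perturbation-theory energy correction to the
            ground state from couplings <0|V|n> and excited energies E_n."""
            E_excited = np.asarray(E_excited, dtype=float)
            V_0n = np.asarray(V_0n, dtype=float)
            return float(np.sum(np.abs(V_0n) ** 2 / (E0 - E_excited)))

        E0 = -1.0                            # hartree, synthetic ground state
        E_n = np.array([-0.4, -0.2, 0.1])    # synthetic excited-state energies
        V = np.array([0.05, 0.02, 0.01])     # synthetic couplings <0|V|n>
        print(f"PT2 energy shift = {pt2_shift(E0, E_n, V):.6f} hartree")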

  1. Tracking millennial-scale Holocene glacial advance and retreat using osmium isotopes: Insights from the Greenland ice sheet

    USGS Publications Warehouse

    Rooney, Alan D.; Selby, David; Lloyd, Jeremy M.; Roberts, David H.; Luckge, Andreas; Sageman, Bradley B.; Prouty, Nancy G.

    2016-01-01

    High-resolution Os isotope stratigraphy can aid in reconstructing Pleistocene ice sheet fluctuation and elucidating the role of local and regional weathering fluxes on the marine Os residence time. This paper presents new Os isotope data from ocean cores adjacent to the West Greenland ice sheet that have excellent chronological controls. Cores MSM-520 and DA00-06 represent distal to proximal sites adjacent to two West Greenland ice streams. Core MSM-520 has a steadily decreasing Os signal over the last 10 kyr (¹⁸⁷Os/¹⁸⁸Os = 1.35–0.81). In contrast, Os isotopes from core DA00-06 (proximal to the calving front of Jakobshavn Isbræ) highlight four stages of ice stream retreat and advance over the past 10 kyr (¹⁸⁷Os/¹⁸⁸Os = 2.31; 1.68; 2.09; 1.47). Our high-resolution chemostratigraphic records provide vital benchmarks for ice-sheet modelers as we attempt to better constrain the future response of major ice sheets to climate change. Variations in Os isotope composition from sediment and macro-algae (seaweed) sourced from regional and global settings serve to emphasize the overwhelming effect weathering sources have on seawater Os isotope composition. Further, these findings demonstrate that the residence time of Os is shorter than previous estimates of ∼10⁴ yr.

  2. Hardware accelerated high performance neutron transport computation based on AGENT methodology

    NASA Astrophysics Data System (ADS)

    Xiao, Shanjie

    The spatial heterogeneity of next-generation Gen-IV nuclear reactor core designs brings challenges to neutron transport analysis. The Arbitrary Geometry Neutron Transport (AGENT) code is a three-dimensional neutron transport analysis code being developed at the Laboratory for Neutronics and Geometry Computation (NEGE) at Purdue University. It can accurately describe spatial heterogeneity in a hierarchical structure through the R-function solid modeler. The previous version of AGENT coupled the 2D transport MOC solver and the 1D diffusion NEM solver to solve the three-dimensional Boltzmann transport equation. In this research, the 2D/1D coupling methodology was expanded to couple two transport solvers, the radial 2D MOC solver and the axial 1D MOC solver, for better accuracy. The expansion was benchmarked against the widely applied C5G7 benchmark models and two fast breeder reactor models, and showed good agreement with the reference Monte Carlo results. In practice, accurate neutron transport analysis for a full reactor core is still time-consuming, which limits its application. Therefore, the second component of this research focused on designing specific hardware, based on the reconfigurable computing technique, to accelerate AGENT computations. This is the first application of this technique to reactor physics and neutron transport for reactor design. The most time-consuming part of the AGENT algorithm was identified, and the architecture of the AGENT acceleration system was designed based on that analysis. Through parallel computation on the specially designed, highly efficient architecture, the FPGA-based acceleration design achieves high performance at a much lower working frequency than CPUs. Whole-design simulations show that the acceleration design would be able to speed up large-scale AGENT computations by about 20 times. The high-performance AGENT acceleration system will drastically shorten the computation time for 3D full-core neutron transport analysis, making the AGENT methodology unique and advantageous, and thus opens the possibility of extending the application range of neutron transport analysis in both industrial engineering and academic research.

  3. Columbia Accident Investigation Board Report. Volume Six

    NASA Technical Reports Server (NTRS)

    Barry, J. L.; Gehmann, H. W.; Deal, D. W.; Hallock, J. N.; Hess, K. W.

    2003-01-01

    In the course of its inquiry into the February 1, 2003 destruction of the Space Shuttle Columbia, the Columbia Accident Investigation Board conducted a series of public hearings at Houston, Texas; Cape Canaveral, Florida; and Washington, DC. Testimony from these hearings was recorded and then transcribed. This appendix, Volume VI of the Report, is a compilation of those transcripts. Contents: Transcripts of Board Public Hearings; Appendix H.1 March 6, 2003 Houston, Texas; Appendix H.2 March 17, 2003 Houston, Texas; Appendix H.3 March 18, 2003 Houston, Texas; Appendix H.4 March 25, 2003 Cape Canaveral, Florida; Appendix H.5 March 26, 2003 Cape Canaveral, Florida; Appendix H.6 April 7, 2003 Houston, Texas; Appendix H.7 April 8, 2003 Houston, Texas; Appendix H.8 April 23, 2003 Houston, Texas; Appendix H.9 May 6, 2003 Houston, Texas; Appendix H.10 June 12, 2003 Washington, DC.

  4. Motivational Interviewing Support for a Behavioral Health Internet Intervention for Drivers with Type 1 Diabetes

    PubMed Central

    Ingersoll, Karen S.; Banton, Thomas; Gorlin, Eugenia; Vajda, Karen; Singh, Harsimran; Peterson, Ninoska; Gonder-Frederick, Linda; Cox, Daniel J.

    2015-01-01

    While Internet interventions can improve health behaviors, their impact is limited by program adherence. Supporting program adherence through telephone counseling may be useful, but there have been few direct tests of the impact of such support. We describe a telephone Motivational Interviewing (MI) intervention targeting adherence to an Internet intervention for drivers with Type 1 Diabetes (DD.com), and compare completion of intervention benchmarks by those randomized to DD.com plus MI vs. DD.com only. The goal of the pre-intervention MI session was to increase the participant's motivation to complete the Internet intervention and all its assignments, while the goal of the post-treatment MI session was to plan for maintaining changes made during the intervention. Sessions were semi-structured and partially scripted to maximize consistency. MI fidelity was coded using a standard coding system, the MITI. We examined the effects of MI support vs. no support on the number of days from enrollment to program benchmarks. Results show that MI sessions were provided with good fidelity. Users who received MI support completed some program benchmarks, such as Core 4 (t(176) = -2.25; p < .03) and 11 of 12 monthly driving diaries, significantly sooner, but support did not significantly affect time to intervention completion (t(177) = -1.69; p < .10) or rates of completion. These data suggest that there is little benefit to therapist guidance for Internet interventions that include automated email prompts and other automated minimal supports, but that a booster MI session may enhance collection of follow-up data. PMID:25774342

  5. Core Noise - Increasing Importance

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.

    2011-01-01

    This presentation is a technical summary of and outlook for NASA-internal and NASA-sponsored external research on core (combustor and turbine) noise funded by the Fundamental Aeronautics Program Subsonic Fixed Wing (SFW) Project. Sections of the presentation cover: the SFW system-level noise metrics for the 2015, 2020, and 2025 timeframes; turbofan design trends and their aeroacoustic implications; the emerging importance of core noise and its relevance to the SFW Reduced-Perceived-Noise Technical Challenge; and the current research activities in the core-noise area, with additional details given about the development of a high-fidelity combustor-noise prediction capability as well as activities supporting the development of improved reduced-order, physics-based models for combustor-noise prediction. The need for benchmark data for validation of high-fidelity and modeling work and the value of a potential future diagnostic facility for testing of core-noise-reduction concepts are indicated. The NASA Fundamental Aeronautics Program has the principal objective of overcoming today's national challenges in air transportation. The SFW Reduced-Perceived-Noise Technical Challenge aims to develop concepts and technologies to dramatically reduce the perceived aircraft noise outside of airport boundaries. This reduction of aircraft noise is critical to enabling the anticipated large increase in future air traffic. Noise generated in the jet engine core, by sources such as the compressor, combustor, and turbine, can be a significant contribution to the overall noise signature at low-power conditions, typical of approach flight. At high engine power during takeoff, jet and fan noise have traditionally dominated over core noise. However, current design trends and expected technological advances in engine-cycle design as well as noise-reduction methods are likely to reduce non-core noise even at engine-power points higher than approach. In addition, future low-emission combustor designs could increase the combustion-noise component. The trend towards high-power-density cores also means that the noise generated in the low-pressure turbine will likely increase. Consequently, the combined result from these emerging changes will be to elevate the overall importance of turbomachinery core noise, which will need to be addressed in order to meet future noise goals.

  6. Block-Parallel Data Analysis with DIY2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Peterka, Tom

    DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
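    The block-parallel pattern described above (decompose data into blocks, iterate computation over blocks, exchange between neighbors) can be sketched generically. The following is a schematic Python rendering of the programming model only, not DIY2's actual C++ API; all names are illustrative:

    ```python
    from multiprocessing.dummy import Pool  # thread pool stands in for the runtime

    class Block:
        """A block owns one chunk of the decomposed data plus neighbor links."""
        def __init__(self, gid, data, neighbors):
            self.gid = gid              # global block id
            self.data = data            # this block's chunk of the data
            self.neighbors = neighbors  # gids of neighboring blocks
            self.inbox = []             # values received from neighbors

    def local_compute(block):
        """Phase 1: independent computation over the block's own data."""
        block.partial = sum(block.data)
        return block

    blocks = [Block(g, list(range(g, g + 4)), [(g + 1) % 4]) for g in range(4)]
    with Pool(2) as pool:                         # more blocks than threads:
        blocks = pool.map(local_compute, blocks)  # the runtime schedules them

    # Phase 2: a reusable neighbor-exchange communication pattern.
    for b in blocks:
        for n in b.neighbors:
            blocks[n].inbox.append(b.partial)
    print([b.partial + sum(b.inbox) for b in blocks])
    ```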

  7. Characterizing complexity in socio-technical systems: a case study of a SAMU Medical Regulation Center.

    PubMed

    Righi, Angela Weber; Wachs, Priscila; Saurin, Tarcísio Abreu

    2012-01-01

    Complexity theory has been adopted by a number of studies as a benchmark to investigate the performance of socio-technical systems, especially those characterized by substantial cognitive work. However, there is little guidance on how to assess, systematically, the extent to which a system is complex. The main objective of this study is to carry out a systematic analysis of a SAMU (Mobile Emergency Medical Service) Medical Regulation Center in Brazil, based on the core characteristics of complex systems presented by previous studies. The assessment was based on direct observations and nine interviews: three with medical doctors who act as emergency regulators, three with radio operators, and three with telephone attendants. The results indicated that, to a great extent, the core characteristics of complexity are magnified due to basic shortcomings in the design of the work system. Thus, some recommendations are put forward with a view to reducing the unnecessary complexity that hinders the performance of the socio-technical system.

  8. The Italian corporate system in a network perspective (1952-1983)

    NASA Astrophysics Data System (ADS)

    Bargigli, L.; Giannetti, R.

    2018-03-01

    We study the Italian network of boards in four benchmark years covering different decades, in which important economic structural shifts occurred. We find that the latter did not significantly disturb its structure as a small world. At the same time, we do not find a strong peculiarity of the Italian variety of capitalism and its corporate governance system. Typical properties of small-world networks are at levels not dissimilar from those of other developed economies. Even the steady decrease of density that we observe is recurrent in many other national systems. The composition of the core of the most connected boards also remains quite stable over time. Among the most central boards we always find those of banks and insurance companies, as well as those of State Owned Enterprises (SOEs). At the same time, the system underwent two significant dynamic adjustments in the Sixties (nationalization of the electrical industry) and Seventies (financial restructuring after the "big inflation"), which are revealed by modifications in the core and in the community structure.

  9. Cancer cell profiling by barcoding allows multiplexed protein analysis in fine-needle aspirates.

    PubMed

    Ullal, Adeeti V; Peterson, Vanessa; Agasti, Sarit S; Tuang, Suan; Juric, Dejan; Castro, Cesar M; Weissleder, Ralph

    2014-01-15

    Immunohistochemistry-based clinical diagnoses require invasive core biopsies and use a limited number of protein stains to identify and classify cancers. We introduce a technology that allows analysis of hundreds of proteins from minimally invasive fine-needle aspirates (FNAs), which contain much smaller numbers of cells than core biopsies. The method capitalizes on DNA-barcoded antibody sensing, where barcodes can be photocleaved and digitally detected without any amplification steps. After extensive benchmarking in cell lines, this method showed high reproducibility and achieved single-cell sensitivity. We used this approach to profile ~90 proteins in cells from FNAs and subsequently map patient heterogeneity at the protein level. Additionally, we demonstrate how the method could be used as a clinical tool to identify pathway responses to molecularly targeted drugs and to predict drug response in patient samples. This technique combines specificity with ease of use to offer a new tool for understanding human cancers and designing future clinical trials.

  10. Cancer cell profiling by barcoding allows multiplexed protein analysis in fine needle aspirates

    PubMed Central

    Ullal, Adeeti V.; Peterson, Vanessa; Agasti, Sarit S.; Tuang, Suan; Juric, Dejan; Castro, Cesar M.; Weissleder, Ralph

    2014-01-01

    Immunohistochemistry-based clinical diagnoses require invasive core biopsies and use a limited number of protein stains to identify and classify cancers. Here, we introduce a technology that allows analysis of hundreds of proteins from minimally invasive fine needle aspirates (FNA), which contain much smaller numbers of cells than core biopsies. The method capitalizes on DNA-barcoded antibody sensing where barcodes can be photo-cleaved and digitally detected without any amplification steps. Following extensive benchmarking in cell lines, this method showed high reproducibility and achieved single cell sensitivity. We used this approach to profile ~90 proteins in cells from FNAs and subsequently map patient heterogeneity at the protein level. Additionally, we demonstrate how the method could be used as a clinical tool to identify pathway responses to molecularly targeted drugs and to predict drug response in patient samples. This technique combines specificity with ease of use to offer a new tool for understanding human cancers and designing future clinical trials. PMID:24431113

  11. Spectral Element Method for the Simulation of Unsteady Compressible Flows

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo Tibor; Murman, Scott M.

    2013-01-01

    This work uses a discontinuous-Galerkin spectral-element method (DGSEM) to solve the compressible Navier-Stokes equations [1-3]. The inviscid flux is computed using the approximate Riemann solver of Roe [4]. The viscous fluxes are computed using the second form of Bassi and Rebay (BR2) [5] in a manner consistent with the spectral-element approximation. The method of lines with the classical 4th-order explicit Runge-Kutta scheme is used for time integration. Results for polynomial orders up to p = 15 (16th order) are presented. The code is parallelized using the Message Passing Interface (MPI). The computations presented in this work are performed using the Sandy Bridge nodes of the NASA Pleiades supercomputer at NASA Ames Research Center. Each Sandy Bridge node consists of 2 eight-core Intel Xeon E5-2670 processors with a clock speed of 2.6 GHz and 2 GB of memory per core. On a Sandy Bridge node the Tau Benchmark [6] runs in a time of 7.6 s.
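    The classical 4th-order explicit Runge-Kutta scheme used here for time integration is standard; a minimal sketch of a single step, checked against an exponential-decay problem with a known solution:

    ```python
    import numpy as np

    def rk4_step(f, t, u, dt):
        """One step of the classical 4th-order explicit Runge-Kutta scheme."""
        k1 = f(t, u)
        k2 = f(t + 0.5 * dt, u + 0.5 * dt * k1)
        k3 = f(t + 0.5 * dt, u + 0.5 * dt * k2)
        k4 = f(t + dt, u + dt * k3)
        return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    # Example: du/dt = -u with u(0) = 1, exact solution exp(-t).
    u, t, dt = np.array([1.0]), 0.0, 0.1
    for _ in range(10):
        u, t = rk4_step(lambda t, u: -u, t, u, dt), t + dt
    print(u, np.exp(-t))  # both ~0.36788 at t = 1
    ```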

  12. Numerical Simulations of Close and Contact Binary Systems Having Bipolytropic Equation of State

    NASA Astrophysics Data System (ADS)

    Kadam, Kundan; Clayton, Geoffrey C.; Motl, Patrick M.; Marcello, Dominic; Frank, Juhan

    2017-01-01

    I present the results of numerical simulations of mass transfer in close and contact binary systems with both stars having a bipolytropic (composite polytropic) equation of state. The initial binary systems are obtained by modifying Hachisu's self-consistent field technique. Both stars have fully resolved cores with a molecular weight jump at the core-envelope interface. The initial properties of these simulations are chosen such that they satisfy the mass-radius relation, composition, and period of a late W-type contact binary system. The simulations are carried out using two different Eulerian hydrocodes: Flow-ER, with a fixed cylindrical grid, and Octo-tiger, with an AMR-capable Cartesian grid. A detailed comparison of the simulations shows agreement between the results obtained from the two codes at different resolutions. The set of simulations can be treated as a benchmark, enabling us to reliably simulate mass transfer and merger scenarios of binary systems involving bipolytropic components.

  13. Simulation of X-ray absorption spectra with orthogonality constrained density functional theory.

    PubMed

    Derricotte, Wallace D; Evangelista, Francesco A

    2015-06-14

    Orthogonality constrained density functional theory (OCDFT) [F. A. Evangelista, P. Shushkov and J. C. Tully, J. Phys. Chem. A, 2013, 117, 7378] is a variational time-independent approach for the computation of electronic excited states. In this work we extend OCDFT to compute core-excited states and generalize the original formalism to determine multiple excited states. Benchmark computations on a set of 13 small molecules and 40 excited states show that unshifted OCDFT/B3LYP excitation energies have a mean absolute error of 1.0 eV. Contrary to time-dependent DFT, OCDFT excitation energies for first- and second-row elements are computed with near-uniform accuracy. OCDFT core excitation energies are insensitive to the choice of the functional and the amount of Hartree-Fock exchange. We show that OCDFT is a powerful tool for the assignment of X-ray absorption spectra of large molecules by simulating the gas-phase near-edge spectrum of adenine and thymine.

  14. Coulomb Excitation of Neutron-Rich Zn Isotopes: First Observation of the 2₁⁺ State in ⁸⁰Zn

    NASA Astrophysics Data System (ADS)

    van de Walle, J.; Aksouh, F.; Ames, F.; Behrens, T.; Bildstein, V.; Blazhev, A.; Cederkäll, J.; Clément, E.; Cocolios, T. E.; Davinson, T.; Delahaye, P.; Eberth, J.; Ekström, A.; Fedorov, D. V.; Fedosseev, V. N.; Fraile, L. M.; Franchoo, S.; Gernhauser, R.; Georgiev, G.; Habs, D.; Heyde, K.; Huber, G.; Huyse, M.; Ibrahim, F.; Ivanov, O.; Iwanicki, J.; Jolie, J.; Kester, O.; Köster, U.; Kröll, T.; Krücken, R.; Lauer, M.; Lisetskiy, A. F.; Lutter, R.; Marsh, B. A.; Mayet, P.; Niedermaier, O.; Nilsson, T.; Pantea, M.; Perru, O.; Raabe, R.; Reiter, P.; Sawicka, M.; Scheit, H.; Schrieder, G.; Schwalm, D.; Seliverstov, M. D.; Sieber, T.; Sletten, G.; Smirnova, N.; Stanoiu, M.; Stefanescu, I.; Thomas, J.-C.; Valiente-Dobón, J. J.; van Duppen, P.; Verney, D.; Voulot, D.; Warr, N.; Weisshaar, D.; Wenander, F.; Wolf, B. H.; Zielińska, M.

    2007-10-01

    Neutron-rich, radioactive Zn isotopes were investigated at the Radioactive Ion Beam facility REX-ISOLDE (CERN) using low-energy Coulomb excitation. The energy of the 2₁⁺ state in ⁷⁸Zn could be firmly established, and for the first time the 2₁⁺ → 0₁⁺ transition in ⁸⁰Zn was observed at 1492(1) keV. B(E2; 2₁⁺ → 0₁⁺) values were extracted for ⁷⁴,⁷⁶,⁷⁸,⁸⁰Zn and compared to large-scale shell model calculations. With only two protons outside the Z=28 proton core, ⁸⁰Zn is the lightest N=50 isotone for which spectroscopic information has been obtained to date. Two sets of advanced shell model calculations reproduce the observed B(E2) systematics. The results for N=50 isotones indicate a good N=50 shell closure and a strong Z=28 proton core polarization. The new results serve as benchmarks to establish theoretical models predicting the nuclear properties of the doubly magic nucleus ⁷⁸Ni.

  15. A phylo-functional core of gut microbiota in healthy young Chinese cohorts across lifestyles, geography and ethnicities.

    PubMed

    Zhang, Jiachao; Guo, Zhuang; Xue, Zhengsheng; Sun, Zhihong; Zhang, Menghui; Wang, Lifeng; Wang, Guoyang; Wang, Fang; Xu, Jie; Cao, Hongfang; Xu, Haiyan; Lv, Qiang; Zhong, Zhi; Chen, Yongfu; Qimuge, Sudu; Menghe, Bilige; Zheng, Yi; Zhao, Liping; Chen, Wei; Zhang, Heping

    2015-09-01

    Structural profiling of healthy human gut microbiota across heterogeneous populations is necessary for benchmarking and characterizing the potential ecosystem services provided by particular gut symbionts for maintaining the health of their hosts. Here we performed a large structural survey of fecal microbiota in 314 healthy young adults, covering 20 rural and urban cohorts from 7 ethnic groups living in 9 provinces throughout China. Canonical analysis of unweighted UniFrac principal coordinates clustered the subjects mainly by their ethnicities/geography and less so by lifestyles. Nine predominant genera, all of which are known to contain short-chain fatty acid producers, co-occurred in all individuals and collectively represented nearly half of the total sequences. Interestingly, species-level compositional profiles within these nine genera still discriminated the subjects according to their ethnicities/geography and lifestyles. Therefore, a phylogenetically diverse core of gut microbiota at the genus level may be commonly shared by distinctive healthy populations as functionally indispensable ecosystem service providers for the hosts.

  16. A phylo-functional core of gut microbiota in healthy young Chinese cohorts across lifestyles, geography and ethnicities

    PubMed Central

    Zhang, Jiachao; Guo, Zhuang; Xue, Zhengsheng; Sun, Zhihong; Zhang, Menghui; Wang, Lifeng; Wang, Guoyang; Wang, Fang; Xu, Jie; Cao, Hongfang; Xu, Haiyan; Lv, Qiang; Zhong, Zhi; Chen, Yongfu; Qimuge, Sudu; Menghe, Bilige; Zheng, Yi; Zhao, Liping; Chen, Wei; Zhang, Heping

    2015-01-01

    Structural profiling of healthy human gut microbiota across heterogeneous populations is necessary for benchmarking and characterizing the potential ecosystem services provided by particular gut symbionts for maintaining the health of their hosts. Here we performed a large structural survey of fecal microbiota in 314 healthy young adults, covering 20 rural and urban cohorts from 7 ethnic groups living in 9 provinces throughout China. Canonical analysis of unweighted UniFrac principal coordinates clustered the subjects mainly by their ethnicities/geography and less so by lifestyles. Nine predominant genera, all of which are known to contain short-chain fatty acid producers, co-occurred in all individuals and collectively represented nearly half of the total sequences. Interestingly, species-level compositional profiles within these nine genera still discriminated the subjects according to their ethnicities/geography and lifestyles. Therefore, a phylogenetically diverse core of gut microbiota at the genus level may be commonly shared by distinctive healthy populations as functionally indispensable ecosystem service providers for the hosts. PMID:25647347

  17. In-core flux sensor evaluations at the ATR critical facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Troy Unruh; Benjamin Chase; Joy Rempe

    2014-09-01

    Flux detector evaluations were completed as part of a joint Idaho State University (ISU) / Idaho National Laboratory (INL) / French Atomic Energy Commission (CEA) ATR National Scientific User Facility (ATR NSUF) project to compare the accuracy, response time, and long-duration performance of several flux detectors. Special fixturing developed by INL allows real-time flux detectors to be inserted into various ATRC core positions to perform lobe power measurements, axial flux profile measurements, and detector cross-calibrations. Detectors initially evaluated in this program include CEA-developed miniature fission chambers; specialized self-powered neutron detectors (SPNDs) developed by the Argentinean National Energy Commission (CNEA); and specially developed commercial SPNDs from Argonne National Laboratory. As shown in this article, data obtained from this program provide important insights related to flux detector accuracy and resolution for subsequent ATR and CEA experiments, and flux data required for benchmarking models in the ATR V&V Upgrade Initiative.

  18. Game playing.

    PubMed

    Rosin, Christopher D

    2014-03-01

    Game playing has been a core domain of artificial intelligence research since the beginnings of the field. Game playing provides clearly defined arenas within which computational approaches can be readily compared to human expertise through head-to-head competition and other benchmarks. Game playing research has identified several simple core algorithms that provide successful foundations, with development focused on the challenges of defeating human experts in specific games. Key developments include minimax search in chess, machine learning from self-play in backgammon, and Monte Carlo tree search in Go. These approaches have generalized successfully to additional games. While computers have surpassed human expertise in a wide variety of games, open challenges remain, and research focuses on identifying and developing new successful algorithmic foundations. WIREs Cogn Sci 2014, 5:193-205. doi: 10.1002/wcs.1278
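    Of the core algorithms named above, minimax search is the simplest to state. A minimal depth-limited sketch (no alpha-beta pruning; the evaluate, moves, and apply_move callbacks are placeholders to be supplied for a concrete game):

    ```python
    def minimax(state, depth, maximizing, evaluate, moves, apply_move):
        """Plain minimax search to a fixed depth; returns (score, best_move)."""
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state), None  # leaf: static evaluation
        best_move = None
        if maximizing:
            best = float('-inf')
            for m in legal:
                score, _ = minimax(apply_move(state, m), depth - 1, False,
                                   evaluate, moves, apply_move)
                if score > best:
                    best, best_move = score, m
            return best, best_move
        best = float('inf')
        for m in legal:
            score, _ = minimax(apply_move(state, m), depth - 1, True,
                               evaluate, moves, apply_move)
            if score < best:
                best, best_move = score, m
        return best, best_move
    ```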

  19. Liquid Rocket Booster (LRB) for the Space Transportation System (STS) systems study. Appendix F: Performance and trajectory for ALS/LRB launch vehicles

    NASA Technical Reports Server (NTRS)

    1989-01-01

    By simply combining two baseline pump-fed LOX/RP-1 Liquid Rocket Boosters (LRBs) with the Denver core, a launch vehicle (Option 1 Advanced Launch System (ALS)) is obtained that can perform both the 28.5 deg (ALS) mission and the polar orbit ALS mission. The Option 2 LRB was obtained by finding the optimum LOX/LH2 engine for the STS/LRB reference mission (70.5 K lb payload). Then this engine and booster were used to estimate ALS payload for the 28.5 deg inclination ALS mission. Previous studies indicated that the optimum number of STS/LRB engines is four. When the engine/booster sizing was performed, each engine had 478 K lb sea level thrust and the booster carried 625,000 lb of useable propellant. Two of these LRBs combined with the Denver core provided a launch vehicle that meets the payload requirements for both the ALS and STS reference missions. The Option 3 LRB uses common engines for the cores and boosters. The booster engines do not have the nozzle extension. These engines were sized as common ALS engines. An ALS launch vehicle that has six core engines and five engines per booster provides 109,100 lb payload for the 28.5 deg mission. Each of these LOX/LH2 LRBs carries 714,100 lb of useable propellant. It is estimated that the STS/LRB reference mission payload would be 75,900 lb.

  20. T1 bright appendix sign to exclude acute appendicitis in pregnant women.

    PubMed

    Shin, Ilah; An, Chansik; Lim, Joon Seok; Kim, Myeong-Jin; Chung, Yong Eun

    2017-08-01

    To evaluate the diagnostic value of the T1 bright appendix sign for the diagnosis of acute appendicitis in pregnant women. This retrospective study included 125 pregnant women with suspected appendicitis who underwent magnetic resonance (MR) imaging. The T1 bright appendix sign was defined as a high-intensity signal filling more than half the length of the appendix on T1-weighted imaging. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the T1 bright appendix sign for normal appendix identification were calculated in all patients and in those with borderline-sized appendices (6-7 mm). The T1 bright appendix sign was seen in 51% of patients with normal appendices, but in only 4.5% of patients with acute appendicitis. The overall sensitivity, specificity, PPV, and NPV of the T1 bright appendix sign for normal appendix diagnosis were 44.9%, 95.5%, 97.6%, and 30.0%, respectively. All four patients with appendicitis and a borderline-sized appendix showed a negative T1 bright appendix sign. The T1 bright appendix sign is a specific finding for the diagnosis of a normal appendix in pregnant women with suspected acute appendicitis. • Magnetic resonance imaging is increasingly used in emergency settings. • Acute appendicitis is the most common cause of acute abdomen. • Magnetic resonance imaging is widely used in the pregnant population. • The T1 bright appendix sign can be a specific sign representing a normal appendix.
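    The reported sensitivity, specificity, PPV, and NPV follow from the standard 2x2 confusion-table definitions, where a "positive" is a present T1 bright appendix sign and the condition being detected is a normal appendix. A minimal sketch (the example counts are invented for illustration, not the study's data):

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion table."""
        return {
            "sensitivity": tp / (tp + fn),  # true positives / all with condition
            "specificity": tn / (tn + fp),  # true negatives / all without it
            "ppv": tp / (tp + fp),          # positive predictive value
            "npv": tn / (tn + fn),          # negative predictive value
        }

    # Invented counts for illustration only.
    print(diagnostic_metrics(tp=44, fp=1, fn=54, tn=26))
    ```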

  1. Continuous flame aerosol synthesis of carbon-coated nano-LiFePO4 for Li-ion batteries

    PubMed Central

    Waser, Oliver; Büchel, Robert; Hintennach, Andreas; Novák, Petr; Pratsinis, Sotiris E.

    2013-01-01

    Core-shell, nanosized LiFePO4-carbon particles were made in one step by scalable flame aerosol technology at 7 g/h. Core LiFePO4 particles were made in an enclosed flame spray pyrolysis (FSP) unit and were coated in situ downstream by autothermal carbonization (pyrolysis) of swirl-fed C2H2 in an O2-controlled atmosphere. The formation of the acetylene carbon black (ACB) shell was investigated as a function of the process fuel-oxidant equivalence ratio (EQR). The core-shell morphology was obtained at slightly fuel-rich conditions (1.0 < EQR < 1.07), whereas segregated ACB and LiFePO4 particles were formed at fuel-lean conditions (0.8 < EQR < 1). Post-annealing of core-shell particles in a reducing environment (5 vol% H2 in argon) at 700 °C for up to 4 hours established phase-pure, monocrystalline LiFePO4 with a crystal size of 65 nm and 30 wt% ACB content. Uncoated LiFePO4 or segregated LiFePO4-ACB grew to 250 nm under these conditions. Annealing at 800 °C induced carbothermal reduction of LiFePO4 to Fe2P by ACB shell consumption, which resulted in cavities between the carbon shell and the LiFePO4 core, slight LiFePO4 crystal growth, and better electrochemical performance. The present carbon-coated LiFePO4 showed superior cycle stability and higher rate capability than the benchmark, commercially available LiFePO4. PMID:23407817

  2. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks, and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of an asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB of SDRAM. The basic image correlation algorithm was chosen for benchmarking, as it finds widespread application in various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images, and performance results are presented measuring the speedup obtained due to the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
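    For reference, correlation-based template matching of the kind benchmarked here maps directly onto OpenCV's matchTemplate call; a minimal host-side example (the file names are placeholders):

    ```python
    import cv2

    # Load a search image and a template (paths are placeholders).
    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
    templ = cv2.imread("face_template.png", cv2.IMREAD_GRAYSCALE)
    assert img is not None and templ is not None, "missing input images"

    # Normalized cross-correlation; DFT-based variants suit the DSP core.
    result = cv2.matchTemplate(img, templ, cv2.TM_CCORR_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    print("best match at", max_loc, "score", max_val)
    ```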

  3. Porting a Hall MHD Code to a Graphic Processing Unit

    NASA Technical Reports Server (NTRS)

    Dorelli, John C.

    2011-01-01

    We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd-order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.
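    The HLL Riemann solver referenced above selects between the upwind fluxes and an intermediate state using two wave-speed estimates. A minimal sketch of the standard HLL flux formula (generic, not the authors' GPU kernel); the same formula applies component-wise to the full vector of conserved variables:

    ```python
    import numpy as np

    def hll_flux(UL, UR, FL, FR, SL, SR):
        """HLL approximate Riemann flux from left/right states U, fluxes F,
        and fastest left/right wave-speed estimates SL <= SR."""
        if SL >= 0.0:
            return FL                      # all waves move right: upwind left
        if SR <= 0.0:
            return FR                      # all waves move left: upwind right
        return (SR * FL - SL * FR + SL * SR * (UR - UL)) / (SR - SL)

    # Scalar advection example: F(u) = a*u with a = 1.
    print(hll_flux(np.array([1.0]), np.array([0.0]),
                   np.array([1.0]), np.array([0.0]), -0.5, 1.5))
    ```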

  4. Multigroup cross section library for GFR2400

    NASA Astrophysics Data System (ADS)

    Čerba, Štefan; Vrban, Branislav; Lüley, Jakub; Haščík, Ján; Nečas, Vladimír

    2017-09-01

    In this paper the development and optimization of the SBJ_E71 multigroup cross section library for GFR2400 applications are discussed. A cross section processing scheme merging Monte Carlo and deterministic codes was developed. Several fine and coarse group structures and two weighting-flux options were analysed through 18 benchmark experiments selected from the ICSBEP handbook on the basis of similarity assessments. The performance of the collapsed version of the SBJ_E71 library was compared with MCNP5 CE ENDF/B-VII.1 and the Korean KAFAX-E70 library. The comparison was made on the basis of integral parameters calculated on full-core homogeneous models.

  5. Anisn-Dort Neutron-Gamma Flux Intercomparison Exercise for a Simple Testing Model

    NASA Astrophysics Data System (ADS)

    Boehmer, B.; Konheiser, J.; Borodkin, G.; Brodkin, E.; Egorov, A.; Kozhevnikov, A.; Zaritsky, S.; Manturov, G.; Voloschenko, A.

    2003-06-01

    The ability of transport codes ANISN, DORT, ROZ-6, MCNP and TRAMO, as well as nuclear data libraries BUGLE-96, ABBN-93, VITAMIN-B6 and ENDF/B-6 to deliver consistent gamma and neutron flux results was tested in the calculation of a one-dimensional cylindrical model consisting of a homogeneous core and an outer zone with a single material. Model variants with H2O, Fe, Cr and Ni in the outer zones were investigated. The results are compared with MCNP-ENDF/B-6 results. Discrepancies are discussed. The specified test model is proposed as a computational benchmark for testing calculation codes and data libraries.

  6. Multispan Elevated Guideway Design for Passenger Transport Vehicles : Volume 2. Appendixes.

    DOT National Transportation Integrated Search

    1975-04-01

    Contents: Appendix A - derivation of vehicle-guideway interaction equations; Appendix B - evaluation of pier support dynamics; Appendix C - computer simulation program of two-dimensional vehicle over a multi-span guideway; Appendix D - computer progr...

  7. Emergency Victim Care. A Training Manual for Emergency Medical Technicians. Module 14. Appendix I: Communicating with Deaf and Hearing Impaired Patients. Appendix II: Medical Terminology. Appendix III: EMS Organizations. Appendix IV: Legislation (Ohio). Glossary of Terms. Index. Revised.

    ERIC Educational Resources Information Center

    Ohio State Dept. of Education, Columbus. Div. of Vocational Education.

    This training manual for emergency medical technicians, one of 14 modules that comprise the Emergency Victim Care textbook, contains appendixes, a glossary, and an index. The first appendix is an article on communicating with deaf and hearing-impaired patients. Appendix 2, the largest section in this manual, is an introduction to medical…

  8. Tracking the emergence of synthetic biology.

    PubMed

    Shapira, Philip; Kwon, Seokbeom; Youtie, Jan

    2017-01-01

    Synthetic biology is an emerging domain that combines biological and engineering concepts and which has seen rapid growth in research, innovation, and policy interest in recent years. This paper contributes to efforts to delineate this emerging domain by presenting a newly constructed bibliometric definition of synthetic biology. Our approach builds outward from a core set of papers in synthetic biology, using procedures to obtain benchmark synthetic biology publication records, extract keywords from these benchmark records, and refine the keywords, supplemented with articles published in dedicated synthetic biology journals. We compare our search strategy with other recent bibliometric approaches to defining synthetic biology, using a common source of publication data for the period from 2000 to 2015. The paper details the rapid growth and international spread of research in synthetic biology in recent years, demonstrates that diverse research disciplines are contributing to the multidisciplinary development of synthetic biology research, and visualizes this by profiling synthetic biology research on the map of science. We further show the roles of a relatively concentrated set of research sponsors in funding the growth and trajectories of synthetic biology. In addition to discussing these analyses, the paper notes limitations and suggests lines for further work.

  9. Structural and Sequence Similarity Makes a Significant Impact on Machine-Learning-Based Scoring Functions for Protein-Ligand Interactions.

    PubMed

    Li, Yang; Yang, Jianyi

    2017-04-24

    The prediction of protein-ligand binding affinity has recently been improved remarkably by machine-learning-based scoring functions. For example, using a set of simple descriptors representing the atomic distance counts, the RF-Score improves the Pearson correlation coefficient to about 0.8 on the core set of the PDBbind 2007 database, significantly higher than the performance of any conventional scoring function on the same benchmark. A few studies have discussed the performance of machine-learning-based methods, but the reason for this improvement remains unclear. In this study, by systematically controlling the structural and sequence similarity between the training and test proteins of the PDBbind benchmark, we demonstrate that protein structural and sequence similarity has a significant impact on machine-learning-based methods. After removal of training proteins that are highly similar to the test proteins, as identified by structure alignment and sequence alignment, machine-learning-based methods trained on the new training sets no longer outperform the conventional scoring functions. On the contrary, the performance of conventional functions such as X-Score is relatively stable no matter what training data are used to fit the weights of their energy terms.
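    The similarity control described above amounts to pruning the training set against the test set before fitting. A minimal sketch of such a filter, assuming a hypothetical pairwise identity() helper (e.g., a wrapper around a sequence or structure alignment tool; both the helper and the threshold are illustrative):

    ```python
    def filter_training_set(train, test, identity, threshold=0.3):
        """Drop training proteins too similar to any test protein.

        identity(a, b) is assumed to return pairwise similarity in [0, 1],
        e.g. sequence identity from an alignment tool (hypothetical helper).
        """
        return [p for p in train
                if all(identity(p, q) < threshold for q in test)]
    ```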

  10. Development of a Computing Cluster At the University of Richmond

    NASA Astrophysics Data System (ADS)

    Carbonneau, J.; Gilfoyle, G. P.; Bunn, E. F.

    2010-11-01

    The University of Richmond has developed a computing cluster to support the massive simulation and data analysis requirements of programs in intermediate-energy nuclear physics and cosmology. It is a 20-node, 240-core system running Red Hat Enterprise Linux 5. We have built and installed the physics software packages (Geant4, gemc, MADmap...) and developed shell and Perl scripts for running those programs on the remote nodes. The system has a theoretical processing peak of about 2500 GFLOPS. Testing with the High Performance Linpack (HPL) benchmarking program (one of the standard benchmarks used by the TOP500 list of fastest supercomputers) resulted in speeds of over 900 GFLOPS. The difference between the maximum and measured speeds is due to limitations in the communication speed among the nodes, creating a bottleneck for large-memory problems: as HPL sends data between nodes, the gigabit Ethernet connection cannot keep up with the processing power. We will show how both the theoretical and actual performance of the cluster compare with other current and past clusters, as well as the cost per GFLOP. We will also examine the scaling of the performance when distributed to increasing numbers of nodes.
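    The quoted theoretical peak is consistent with a simple cores x clock x FLOPs-per-cycle estimate. A back-of-the-envelope check (the per-core clock and FLOPs per cycle are assumptions for illustration; the abstract states only the node and core counts):

    ```python
    cores = 240            # 20 nodes x 12 cores, as stated in the abstract
    clock_hz = 2.6e9       # assumed per-core clock (not stated)
    flops_per_cycle = 4    # assumed double-precision FLOPs/cycle (not stated)

    peak_gflops = cores * clock_hz * flops_per_cycle / 1e9
    print(peak_gflops)     # ~2496, consistent with the quoted ~2500 GFLOPS
    ```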

  11. Propane in nitrogen, 1000 μmol/mol

    NASA Astrophysics Data System (ADS)

    Konopelko, L. A.; Kustikov, Y. A.; Kolobova, A. V.; Pankratov, V. V.; Pankov, A. A.; Efremova, O. V.; Rozhnov, M. S.; Melnyk, D. M.; Petryshyn, P. V.; Levbarg, O. S.; Kisel, S. P.; Shpilnyi, S. A.; Yakubov, S. Ye; Bakovec, N. V.; Mironchik, A. M.; Aleksandrov, V. V.

    2017-01-01

    This article presents the report on the COOMET key comparison COOMET.QM-K111, which links to the corresponding CCQM comparison, CCQM-K111 'Propane in nitrogen 1000 μmol/mol'. CCQM-K111 was carried out in 2014-2016 and was one of a series of key comparisons in the gas analysis area assessing core competences. The main text of this paper is the Final Report, which appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org). The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  12. Core Program in the Joint Institute for Advancement of Flight Sciences

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Following the precedent started several years ago, each of the graduating MS and DSc candidates in JIAFS presents a seminar which is advertised throughout the area. Following the formal seminar, the attendees are excused and the review committee examines the student as in a standard thesis defense. This allows the students to gain experience in presenting their research and disseminating the Institute's research results to a wider audience. A list of seminars is given in Appendix B. Some 172 excellent applications for the Graduate Research Scholar Assistantships were received during this period. Forty-nine new GRSAs were appointed by Professor Whitesides to JIAFS under the various research grants and contracts.

  13. Summary of data acquisition and field operations: Terra Resources, Anderson Canyon No. 3-17, Lincoln County, Wyoming; Terra Resources, North Anderson Canyon No. 40-16, Sweetwater County, Wyoming. Topical report, August 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1989-08-01

    A summary is presented of open-hole data collected on two cooperative wells for the GRI Tight Gas Sands Program. The overall objective of gathering well data in the Frontier Formation is to identify and evaluate technological problems in formation evaluation and hydraulic fracturing. Open-hole data acquisition is emphasized for the Anderson Canyon No. 3-17, a full cooperative well (i.e., coring, logging, cased-hole stress testing, fracture monitoring). Data collected on the North Anderson Canyon No. 40-16, a partial cooperative well (i.e., logging only), is described in an appendix.

  14. A comparison of journal coverage in Psychological Abstracts and the primary health sciences indexes: implications for cooperative serials acquisition and retention.

    PubMed Central

    Sekerak, R J

    1986-01-01

    An overlap study was performed to identify important psychology journals that are also of interest to biomedical scientists and health care practitioners. The journal lists of Index Medicus, Hospital Literature Index, Cumulative Index to Nursing and Allied Health Literature, and International Nursing Index were compared with the journal list of Psychological Abstracts. A total of 357 Psychological Abstracts titles were also in one or more of the health sciences indexes. A core list of forty-five titles covered by all of the indexes is presented in the Appendix. Results of the study are discussed vis-à-vis cooperative serials acquisition and retention efforts. PMID:3742117

  15. Liquid rocket booster study. Volume 2, book 5, appendix 9: LRB alternate applications and evolutionary growth

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The analyses performed in assessing the merit of the Liquid Rocket Booster concept for use in alternate applications such as for Shuttle C, for Standalone Expendable Launch Vehicles, and possibly for use with the Air Force's Advanced Launch System are presented. A comparison is also presented of the three LRB candidate designs, namely: (1) the LO2/LH2 pump fed, (2) the LO2/RP-1 pump fed, and (3) the LO2/RP-1 pressure fed propellant systems in terms of evolution along with design and cost factors, and other qualitative considerations. A further description is also presented of the recommended LRB standalone, core-to-orbit launch vehicle concept.

  16. Phase Equilibrium Investigations of Planetary Materials

    NASA Technical Reports Server (NTRS)

    Grove, T. L.

    2005-01-01

    This grant provided funds to carry out phase equilibrium studies on the processes of chemical differentiation of the moon and the meteorite parent bodies, during their early evolutionary history. Several experimental studies examined processes that led to the formation of lunar ultramafic glasses. Phase equilibrium studies were carried out on selected low-Ti and high-Ti lunar ultramafic glass compositions to provide constraints on the depth range, temperature and processes of melt generation and/or assimilation. A second set of experiments examined the role of sulfide melts in core formation processes in the earth and terrestrial planets. The major results of each paper are discussed, and copies of the papers are attached as Appendix I.

  17. Hydrologic Observatories: Design, Operation, and the Neuse Basin Prototype

    NASA Astrophysics Data System (ADS)

    Reckhow, K.; Band, L.

    2003-12-01

    Hydrologic observatories are conceived as major research facilities that will be available to the full hydrologic community, to facilitate the comprehensive, cross-disciplinary, and multi-scale measurements necessary to address the current and next generation of critical science and management issues. A network of hydrologic observatories is proposed that both develops nationally comparable, multidisciplinary data sets and provides study areas that allow scientists, through their own creativity, to make scientific breakthroughs that would be impossible without the proposed observatories. The core objective of an observatory is to improve predictive understanding of the flow paths, fluxes, and residence times of water, sediment, and nutrients (the "core data") across a range of spatial and temporal scales and across interfaces. To assess attainment of this objective, a benchmark will be established in the first year and evaluated periodically. The benchmark should provide an estimate of prediction uncertainty at points in the stream across scales; the general principle is that predictive understanding must be demonstrated internal to the catchment as well as at its outlet. The core data will be needed for practically any hydrologic study, yet the absence of these data has been a barrier to larger-scale studies in the past. However, advancement of hydrologic science facilitated by the network of hydrologic observatories is expected to focus on a set of science drivers, drawn from the major scientific questions posed by the set of NRC reports and refined into CUAHSI themes. These hypotheses will be tested at all observatories and will be used in the design to ensure the sufficiency of the data set. To make the observatories a national (and international) resource, a key aspect of their operation is the support of remote PIs. This support will include a resident staff of scientists and technicians on the order of 10 FTEs, availability of dormitory, laboratory, and workshop space for all scientists, and the awarding of travel support out of observatory funds. The conflicting goals of support for a PI-designed observatory and a network of community-available observatories will be achieved by allocating resources to assure that both goals are met. It is proposed that these resources be divided into three pools: (1) a core data pool, for data to be collected by the observatory PIs and staff and, where possible, augmented by existing (e.g., USGS) data collection; (2) a design pool, available to support the designs of observatory PIs; and (3) a community pool, available to non-PI scientists to test cross-observatory hypotheses. Application of these design and operation concepts to the design of the Neuse basin prototype hydrologic observatory is briefly discussed.

  18. 18 CFR Appendix A to Subpart H of... - Appendix A to Subpart H of Part 35

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Appendix A to Subpart H of Part 35 A Appendix A to Subpart H of Part 35 Conservation of Power and Water Resources FEDERAL... Rates Pt. 35, Subpt. H, App. A Appendix A to Subpart H of Part 35 Appendix A Standard Screen Format...

  19. Coupled thermo-chemical boundary conditions in double-diffusive geodynamo models at arbitrary Lewis numbers.

    NASA Astrophysics Data System (ADS)

    Bouffard, M.

    2016-12-01

    Convection in the Earth's outer core is driven by the combination of two buoyancy sources: a thermal source directly related to the Earth's secular cooling, the release of latent heat, and possibly the heat generated by radioactive decay; and a compositional source due to the crystallization of the growing inner core, which releases light elements into the liquid outer core. The dynamics of fusion/crystallization being dependent on the heat flux distribution, the thermochemical boundary conditions are coupled at the inner core boundary, which may affect the dynamo in various ways, particularly if heterogeneous conditions are imposed at one boundary. In addition, the thermal and compositional molecular diffusivities differ by three orders of magnitude. This can produce significant differences in the convective dynamics compared to pure thermal or compositional convection, due to the potential occurrence of double-diffusive phenomena. Traditionally, temperature and composition have been combined into one single variable called codensity, under the assumption that turbulence mixes all physical properties at an "eddy-diffusion" rate. This description does not allow for a proper treatment of the thermochemical coupling and is certainly incorrect within stratified layers, in which double-diffusive phenomena can be expected. For a more general and rigorous approach, two distinct transport equations should therefore be solved for temperature and composition. However, the weak compositional diffusivity is technically difficult to handle in current geodynamo codes and requires the use of a semi-Lagrangian description to minimize numerical diffusion. We implemented a "particle-in-cell" method in a geodynamo code to properly describe the compositional field. The code is suitable for highly parallel computing architectures and was successfully tested on two benchmarks. Following the work of Aubert et al. (2008), we use this new tool to perform dynamo simulations including thermochemical coupling at the inner core boundary, as well as an exploration of the infinite-Lewis-number limit, to study the effect of a heterogeneous core-mantle-boundary heat flow on inner core growth.
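    A particle-in-cell treatment carries composition on Lagrangian particles that are advected by the grid velocity and only deposited back onto the grid for output or coupling, which keeps numerical diffusion minimal. A deliberately simplified 1-D sketch under assumed parameters (not the authors' geodynamo implementation):

    ```python
    import numpy as np

    n_cells, n_part, dt = 64, 64 * 8, 0.01
    x = np.random.rand(n_part)           # particle positions in [0, 1)
    c = (x < 0.5).astype(float)          # composition carried by particles
    u_grid = np.full(n_cells, 0.3)       # prescribed grid velocity (assumed)

    for _ in range(100):
        u_p = u_grid[(x * n_cells).astype(int) % n_cells]  # gather to particles
        x = (x + dt * u_p) % 1.0                           # advect, periodic box

    # Deposit composition back onto the grid (nearest-cell average).
    cell = (x * n_cells).astype(int) % n_cells
    c_grid = np.bincount(cell, weights=c, minlength=n_cells) / \
             np.maximum(np.bincount(cell, minlength=n_cells), 1)
    print(c_grid.round(2))
    ```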

  20. Preliminary Physical Stratigraphy and Geophysical Data of the USGS Hope Plantation Core (BE-110), Bertie County, North Carolina

    USGS Publications Warehouse

    Weems, Robert E.; Seefelt, Ellen L.; Wrege, Beth M.; Self-Trail, Jean M.; Prowell, David C.; Durand, Colleen; Cobbs, Eugene F.; McKinney, Kevin C.

    2007-01-01

    Introduction: In March and April 2004, the U.S. Geological Survey (USGS), in cooperation with the North Carolina Geological Survey (NCGS) and the Raleigh Water Resources Discipline (WRD), drilled a stratigraphic test hole and well in Bertie County, North Carolina (fig. 1). The Hope Plantation test hole (BE-110-2004) was cored on the property of Hope Plantation near Windsor, North Carolina. The drill site is located on the Republican 7.5-minute quadrangle at lat 36°01'58"N., long 77°01'09"W. (decimal degrees 36.0329 and 77.0192) (fig. 2). The altitude of the site is 48 ft above mean sea level as determined by Paulin Precise altimeter. This test hole was continuously cored by Eugene F. Cobbs, III and Kevin C. McKinney (USGS) to a total depth of 1094.5 ft. Later, a groundwater observation well was installed with a screened interval between 315-329 feet below land surface (fig. 3). Upper Triassic, Lower Cretaceous, Upper Cretaceous, Tertiary, and Quaternary sediments were recovered from the site. The core is stored at the NCGS Coastal Plain core storage facility in Raleigh, North Carolina. In this report, we provide the initial lithostratigraphic summary recorded at the drill site along with site core photographs, data from the geophysical logger, calcareous nannofossil biostratigraphic correlations (Table 1), and initial hydrogeologic interpretations. The lithostratigraphy from this core can be compared to previous investigations of the Elizabethtown corehole, near Elizabethtown, North Carolina, in Bladen County (Self-Trail, Wrege, and others, 2004), the Kure Beach corehole, near Wilmington, North Carolina, in New Hanover County (Self-Trail, Prowell, and Christopher, 2004), the Esso #1, Esso #2, Mobil #1, and Mobil #2 cores in the Albemarle and Pamlico Sounds (Zarra, 1989), and the Cape Fear River outcrops in Bladen County (Farrell, 1998; Farrell and others, 2001). This core is the third in a series of planned benchmark coreholes that will be used to elucidate the physical stratigraphy, facies, thickness, and hydrogeology of the Tertiary and Cretaceous Coastal Plain sediments of North Carolina.

  1. Experimental and Theoretical Investigations on Viscosity of Fe-Ni-C Liquids at High Pressures

    NASA Astrophysics Data System (ADS)

    Chen, B.; Lai, X.; Wang, J.; Zhu, F.; Liu, J.; Kono, Y.

    2016-12-01

    Understanding and modeling of Earth's core processes such as the geodynamo and heat flow via convection in the liquid outer core hinge on the viscosity of candidate liquid iron alloys under core conditions. Viscosity estimates for the metallic liquid of the outer core from various methods, however, span up to 12 orders of magnitude. Due to experimental challenges, viscosity measurements of iron liquids alloyed with lighter elements are scarce and conducted at conditions far below those expected for the outer core. In this study, we adopt a synergistic approach by integrating experiments at experimentally achievable conditions with computations up to core conditions. We performed viscosity measurements based on the modified Stokes' floating sphere viscometry method for Fe-Ni-C liquids at high pressures in a Paris-Edinburgh press at Sector 16 of the Advanced Photon Source, Argonne National Laboratory. Our results show that the addition of 3-5 wt.% carbon to iron-nickel liquids has a negligible effect on viscosity at pressures lower than 5 GPa. The viscosity of the Fe-Ni-C liquids, however, becomes notably higher and increases by a factor of 3 at 5-8 GPa. Similarly, our first-principles molecular dynamics calculations up to Earth's core pressures show a viscosity change in Fe-Ni-C liquids at 5 GPa. The significant change in viscosity is likely due to a liquid structural transition of the Fe-Ni-C liquids, as revealed by our X-ray diffraction measurements and first-principles molecular dynamics calculations. The observed correlation between the structure and physical properties of liquids permits stringent benchmark tests of computational liquid models and contributes to a more comprehensive understanding of liquid properties under high pressures. The interplay between experiments and first-principles-based modeling is shown to be a practical and effective methodology for studying liquid properties under outer core conditions that are difficult to reach with current static high-pressure capabilities. The new viscosity data from experiments and computations provide new insights into the internal dynamics of the outer core.
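    Floating-sphere viscometry rests on Stokes' law: a sphere of radius r and density contrast Δρ moving at terminal velocity v in a liquid of viscosity η satisfies v = 2r²Δρg/(9η), so η = 2r²Δρg/(9v). A worked example with invented, order-of-magnitude values (the actual experiment tracks the sphere radiographically at high pressure):

    ```python
    # Stokes' law rearranged for viscosity: eta = 2 r^2 delta_rho g / (9 v).
    r = 50e-6           # sphere radius (m), assumed
    delta_rho = 3000.0  # sphere-liquid density contrast (kg/m^3), assumed
    g = 9.81            # gravitational acceleration (m/s^2)
    v = 1e-3            # measured terminal velocity (m/s), assumed

    eta = 2 * r**2 * delta_rho * g / (9 * v)
    print(f"viscosity ~ {eta:.2e} Pa s")  # ~1.6e-02 Pa s for these inputs
    ```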

  2. Benchmarking reference services: step by step.

    PubMed

    Buchanan, H S; Marshall, J G

    1996-01-01

    This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.

  3. Nuclear Data Needs for Generation IV Nuclear Energy Systems

    NASA Astrophysics Data System (ADS)

    Rullhusen, Peter

    2006-04-01

    Nuclear data needs for generation IV systems. Future of nuclear energy and the role of nuclear data / P. Finck. Nuclear data needs for generation IV nuclear energy systems-summary of U.S. workshop / T. A. Taiwo, H. S. Khalil. Nuclear data needs for the assessment of gen. IV systems / G. Rimpault. Nuclear data needs for generation IV-lessons from benchmarks / S. C. van der Marck, A. Hogenbirk, M. C. Duijvestijn. Core design issues of the supercritical water fast reactor / M. Mori ... [et al.]. GFR core neutronics studies at CEA / J. C. Bosq ... [et al.]. Comparative study on different phonon frequency spectra of graphite in GCR / Young-Sik Cho ... [et al.]. Innovative fuel types for minor actinides transmutation / D. Haas, A. Fernandez, J. Somers. The importance of nuclear data in modeling and designing generation IV fast reactors / K. D. Weaver. The GIF and Mexico-"everything is possible" / C. Arrenondo Sánchez -- Benchmarks, sensitivity calculations, uncertainties. Sensitivity of advanced reactor and fuel cycle performance parameters to nuclear data uncertainties / G. Aliberti ... [et al.]. Sensitivity and uncertainty study for thermal molten salt reactors / A. Biduad ... [et al.]. Integral reactor physics benchmarks - the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPHEP) / J. B. Briggs, D. W. Nigg, E. Sartori. Computer model of an error propagation through micro-campaign of fast neutron gas cooled nuclear reactor / E. Ivanov. Combining differential and integral experiments on [symbol] for reducing uncertainties in nuclear data applications / T. Kawano ... [et al.]. Sensitivity of activation cross sections of the hafnium, tantalum and tungsten stable isotopes to nuclear reaction mechanisms / V. Avrigeanu ... [et al.]. Generating covariance data with nuclear models / A. J. Koning. Sensitivity of CANDU-SCWR reactor physics calculations to nuclear data files / K. S. Kozier, G. R. Dyck. The lead cooled fast reactor benchmark BREST-300: analysis with sensitivity method / V. Smirnov ... [et al.]. Sensitivity analysis of neutron cross-sections considered for design and safety studies of LFR and SFR generation IV systems / K. Tucek, J. Carlsson, H. Wider -- Experiments. INL capabilities for nuclear data measurements using the Argonne intense pulsed neutron source facility / J. D. Cole ... [et al.]. Cross-section measurements in the fast neutron energy range / A. Plompen. Recent measurements of neutron capture cross sections for minor actinides by a JNC and Kyoto University group / H. Harada ... [et al.]. Determination of minor actinides fission cross sections by means of transfer reactions / M. Aiche ... [et al.] -- Evaluated data libraries. Nuclear data services from the NEA / H. Henriksson, Y. Rugama. Nuclear databases for energy applications: an IAEA perspective / R. Capote Noy, A. L. Nichols, A. Trkov. Nuclear data evaluation for generation IV / G. Noguère ... [et al.]. Improved evaluations of neutron-induced reactions on americium isotopes / P. Talou ... [et al.]. Using improved ENDF-based nuclear data for CANDU reactor calculations / J. Prodea. A comparative study on the graphite-moderated reactors using different evaluated nuclear data / Do Heon Kim ... [et al.].

  4. Voyager electronic parts radiation program. Volume 2: Test requirements and procedures

    NASA Technical Reports Server (NTRS)

    Stanley, A. G.; Martin, K. E.; Price, W. E.

    1978-01-01

    Documents are presented outlining the conditions and requirements of the test program. The Appendixes are as follows: appendix A -- Electron Simulation Radiation Test Specification for Voyager Electronic Parts and Devices, appendix B -- Electronic Piece-Part Testing Program for Voyager, appendix C -- Test Procedure for Radiation Screening of Voyager Piece Parts, appendix D -- Boeing In Situ Test Fixture, and appendix E -- Irradiate - Anneal (IRAN) Screening Documents.

  5. Complementary and Alternative Medicine in the Military Health System: Appendixes

    DTIC Science & Technology

    2017-01-01

    report. Appendixes F through H supply other supplementary material. Appendixes F and G contain the CAM survey instrument and a glossary of CAM services...respectively. Appendix H contains tables of detailed results from the CAM survey and the MHS administrative data analyses. Individual tables in...this appendix are referenced in Chapters Three through Seven of the main report.

  6. Parallel Agent-Based Simulations on Clusters of GPUs and Multi-Core Processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaby, Brandon G; Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    An effective latency-hiding mechanism is presented for the parallelization of agent-based model simulations (ABMS) with millions of agents. The mechanism is designed to accommodate the hierarchical organization as well as the heterogeneity of current state-of-the-art parallel computing platforms. We use it to explore the computation vs. communication trade-off continuum available with the deep computational and memory hierarchies of extant platforms and present a novel analytical model of the trade-off. We describe our implementation and report preliminary performance results on two distinct parallel platforms suitable for ABMS: CUDA threads on multiple, networked graphical processing units (GPUs), and pthreads on multi-core processors. The Message Passing Interface (MPI) is used for inter-GPU as well as inter-socket communication on a cluster of multiple GPUs and multi-core processors. Results indicate the benefits of our latency-hiding scheme, delivering more than 100-fold improvement in runtime for certain benchmark ABMS application scenarios with several million agents. This speed improvement is obtained on a system that is already two to three orders of magnitude faster on one GPU than an equivalent CPU-based execution in a popular Java-based simulator. Thus, the overall execution of our current work is over four orders of magnitude faster when executed on multiple GPUs.
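
    The core idea of such latency hiding, sketched generically below (this is not the authors' CUDA/pthreads code), is to launch boundary communication asynchronously and overlap it with updates of agents that need no remote data:

    ```python
    import threading
    import time

    def exchange_ghost_agents(boundary, out):
        """Stand-in for an MPI/GPU boundary exchange; sleep models link latency."""
        time.sleep(0.001)
        out["ghosts"] = list(boundary)

    def step(agents):
        """Trivial stand-in for one agent-update sweep."""
        return [a + 1 for a in agents]

    def simulate(interior, boundary, steps):
        for _ in range(steps):
            out = {}
            # Start the boundary exchange asynchronously ...
            t = threading.Thread(target=exchange_ghost_agents, args=(boundary, out))
            t.start()
            # ... and hide its latency behind the interior update.
            interior = step(interior)
            t.join()                   # ghost agents now available in out["ghosts"]
            boundary = step(boundary)  # boundary update may consume out["ghosts"]
        return interior, boundary

    print(simulate([0, 1, 2], [3, 4], steps=10))
    ```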

  7. Taming Wild Horses: The Need for Virtual Time-based Scheduling of VMs in Network Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J

    2012-01-01

    The next generation of scalable network simulators employ virtual machines (VMs) to act as high-fidelity models of traffic producer/consumer nodes in simulated networks. However, network simulations can be inaccurate if VMs are not scheduled according to virtual time, especially when many VMs are hosted per simulator core in a multi-core simulator environment. Since VMs are by default free-running, at the outset it is not clear if, and to what extent, their untamed execution affects the results in simulated scenarios. Here, we provide the first quantitative basis for establishing the need for generalized virtual-time scheduling of VMs in network simulators, based on actual prototyped implementations. To exercise breadth, our system is tested with multiple disparate applications: (a) a set of message passing parallel programs, (b) a computer worm propagation phenomenon, and (c) a mobile ad-hoc wireless network simulation. We define and use error metrics and benchmarks in scaled tests to empirically report the poor match of traditional, fairness-based VM scheduling to VM-based network simulation, and also clearly show the better performance of our simulation-specific scheduler, with up to 64 VMs hosted on a 12-core simulator node.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, J. Grant, E-mail: grant.hill@sheffield.ac.uk, E-mail: kipeters@wsu.edu; Peterson, Kirk A., E-mail: grant.hill@sheffield.ac.uk, E-mail: kipeters@wsu.edu

    New correlation consistent basis sets, cc-pVnZ-PP-F12 (n = D, T, Q), for all the post-d main group elements Ga–Rn have been optimized for use in explicitly correlated F12 calculations. The new sets, which include not only orbital basis sets but also the matching auxiliary sets required for density fitting both conventional and F12 integrals, are designed for correlation of valence sp, as well as the outer-core d, electrons. The basis sets are constructed for use with the previously published small-core relativistic pseudopotentials of the Stuttgart-Cologne variety. Benchmark explicitly correlated coupled-cluster singles and doubles with perturbative triples [CCSD(T)-F12b] calculations of the spectroscopic properties of numerous diatomic molecules involving 4p, 5p, and 6p elements have been carried out and compared to the analogous conventional CCSD(T) results. In general, the F12 results obtained with an n-zeta F12 basis set were comparable to conventional aug-cc-pVxZ-PP or aug-cc-pwCVxZ-PP basis set calculations obtained with x = n + 1 or even x = n + 2. The new sets used in CCSD(T)-F12b calculations are particularly efficient at accurately recovering the large correlation effects of the outer-core d electrons.

  9. Constraining axion-like-particles with hard X-ray emission from magnetars

    NASA Astrophysics Data System (ADS)

    Fortin, Jean-François; Sinha, Kuver

    2018-06-01

    Axion-like particles (ALPs) produced in the core of a magnetar will convert to photons in the magnetosphere, leading to possible signatures in the hard X-ray band. We perform a detailed calculation of the ALP-to-photon conversion probability in the magnetosphere, recasting the coupled differential equations that describe ALP-photon propagation into a form that is efficient for large scale numerical scans. We show the dependence of the conversion probability on the ALP energy, mass, ALP-photon coupling, magnetar radius, surface magnetic field, and the angle between the magnetic field and direction of propagation. Along the way, we develop an analytic formalism to perform similar calculations in more general n-state oscillation systems. Assuming ALP emission rates from the core that are just subdominant to neutrino emission, we calculate the resulting constraints on the ALP mass versus ALP-photon coupling space, taking SGR 1806-20 as an example. In particular, we take benchmark values for the magnetar radius and core temperature, and constrain the ALP parameter space by the requirement that the luminosity from ALP-to-photon conversion should not exceed the total observed luminosity from the magnetar. The resulting constraints are competitive with constraints from helioscope experiments in the relevant part of ALP parameter space.
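
    In the standard two-state treatment, a simplified constant-field stand-in for the magnetar calculation above, the ALP amplitude and photon amplitude evolve under a 2x2 mixing Hamiltonian, and the conversion probability is the photon amplitude squared after propagating a distance L. A minimal sketch follows; the symbols and values are generic illustrations, not taken from the paper, and plasma/QED terms are dropped:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def conversion_probability(L, omega, m_a, g_agamma, B):
        """ALP -> photon probability after path length L in a uniform field B.

        Two-state mixing only; units are assumed mutually consistent
        (e.g. natural units chosen by the caller).
        """
        delta_a = -m_a**2 / (2.0 * omega)  # ALP phase term
        delta_m = 0.5 * g_agamma * B       # ALP-photon mixing term
        H = np.array([[delta_a, delta_m], [delta_m, 0.0]], dtype=complex)

        rhs = lambda z, psi: -1j * (H @ psi)
        psi0 = np.array([1.0 + 0j, 0.0 + 0j])  # pure ALP initial state
        sol = solve_ivp(rhs, (0.0, L), psi0, rtol=1e-10, atol=1e-12)
        return float(abs(sol.y[1, -1]) ** 2)

    # Rabi-type analytic cross-check for this constant-field case:
    #   P = (4 dm^2 / (da^2 + 4 dm^2)) * sin^2(sqrt(da^2 + 4 dm^2) * L / 2)
    print(conversion_probability(L=1.0e3, omega=100.0, m_a=0.1, g_agamma=1e-5, B=1e2))
    ```

    The paper's varying-field, n-state formalism generalizes exactly this kind of integration.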

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dokhane, A.; Canepa, S.; Ferroukhi, H.

    For stability analyses of the Swiss operating Boiling Water Reactors (BWRs), the methodology employed and validated so far at the Paul Scherrer Institut (PSI) was based on the RAMONA-3 code with a hybrid upstream static lattice/core analysis approach using CASMO-4 and PRESTO-2. More recently, steps were undertaken towards a new methodology based on the SIMULATE-3K (S3K) code for the dynamical analyses, combined with the CMSYS system relying on the CASMO/SIMULATE-3 suite of codes, which was established at PSI to serve as a framework for the development and validation of reference core models of all the Swiss reactors and operated cycles. This paper presents a first validation of the new methodology on the basis of a benchmark recently organised by a Swiss utility and including the participation of several international organisations with various codes/methods. In parallel, a transition from CASMO-4E (C4E) to CASMO-5M (C5M) as the basis for the CMSYS core models was also recently initiated at PSI. Consequently, it was considered adequate to address the impact of this transition both for the steady-state core analyses and for the stability calculations, and to achieve thereby an integral approach for the validation of the new S3K methodology. Therefore, a comparative assessment of C4E versus C5M is also presented in this paper, with particular emphasis on the void coefficients and their impact on the downstream stability analysis results. (authors)

  11. Trailing Vortex Measurements in the Wake of a Hovering Rotor Blade with Various Tip Shapes

    NASA Technical Reports Server (NTRS)

    Martin, Preston B.; Leishman, J. Gordon

    2003-01-01

    This work examined the wake aerodynamics of a single helicopter rotor blade with several tip shapes operating on a hover test stand. Velocity field measurements were conducted using three-component laser Doppler velocimetry (LDV). The objective of these measurements was to document the vortex velocity profiles and then extract the core properties, such as the core radius, peak swirl velocity, and axial velocity. The measured test cases covered a wide range of wake-ages and several tip shapes, including rectangular, tapered, swept, and a subwing tip. One of the primary differences shown by the change in tip shape was the wake geometry. The effect of blade taper reduced the initial peak swirl velocity by a significant fraction. It appears that this is accomplished by decreasing the vortex strength for a given blade loading. The subwing measurements showed that the interaction and merging of the subwing and primary vortices created a less coherent vortical structure. A source of vortex core instability is shown to be the ratio of the peak swirl velocity to the axial velocity deficit. The results show that if there is a turbulence producing region of the vortex structure, it will be outside of the core boundary. The LDV measurements were supported by laser light-sheet flow visualization. The results provide several benchmark test cases for future validation of theoretical vortex models, numerical free-wake models, and computational fluid dynamics results.
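
    One common way to extract the core properties named above from measured swirl-velocity profiles is to fit a Lamb-Oseen vortex model, whose peak swirl occurs at the core radius. A minimal sketch with hypothetical LDV data; none of these numbers are from the study:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lamb_oseen_swirl(r, gamma, rc):
        """Swirl velocity V(r) = Gamma/(2 pi r) * (1 - exp(-alpha r^2 / rc^2)),
        with alpha = 1.25643 so that V peaks at r = rc (the core radius)."""
        alpha = 1.25643
        r = np.asarray(r, dtype=float)
        return gamma / (2.0 * np.pi * r) * (1.0 - np.exp(-alpha * (r / rc) ** 2))

    # Hypothetical LDV traverse: radius (m) vs. measured swirl velocity (m/s).
    r_data = np.array([0.002, 0.004, 0.006, 0.008, 0.012, 0.020])
    v_data = np.array([4.1, 6.8, 7.3, 6.9, 5.4, 3.6])

    (gamma_fit, rc_fit), _ = curve_fit(lamb_oseen_swirl, r_data, v_data, p0=(0.5, 0.005))
    v_peak = lamb_oseen_swirl(rc_fit, gamma_fit, rc_fit)
    print(f"core radius ~ {rc_fit*1000:.2f} mm, peak swirl ~ {v_peak:.2f} m/s")
    ```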

  12. Virtualizing Super-Computation On-Board Uas

    NASA Astrophysics Data System (ADS)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have a great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on-board UAS as a method to ease the operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware must satisfy size, power, and weight constraints. Several technologies are appearing with promising results for high-performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on-board includes benchmarking for hardware selection, the software architecture, and the communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. The results are obtained for payload image-processing algorithms and determine in real time the data snapshot to gather and transfer to the ground according to the needs of the mission, the processing time, and consumed watts.

  13. Convergence studies of deterministic methods for LWR explicit reflector methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canepa, S.; Hursin, M.; Ferroukhi, H.

    2013-07-01

    The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are a priori produced with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified to potentially constitute one of the main sources of errors for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is to first recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)

  14. Experimental detailed power distribution in a fast spectrum thermionic reactor fuel element at the core/BeO reflector interface region

    NASA Technical Reports Server (NTRS)

    Klann, P. G.; Lantz, E.

    1973-01-01

    A zero-power critical assembly was designed, constructed, and operated for the purpose of conducting a series of benchmark experiments dealing with the physics characteristics of a UN-fueled, Li-7-cooled, Mo-reflected, drum-controlled compact fast reactor for use with a space-power conversion system. The critical assembly was modified to simulate a fast spectrum advanced thermionics reactor by: (1) using BeO as a reflector in place of some of the existing molybdenum, (2) substituting Nb-1Zr tubing for some of the existing Ta tubing, and (3) inserting four full-scale mockups of thermionic-type fuel elements near the core and BeO reflector boundary. These mockups were surrounded by a buffer zone having the equivalent thermionic core composition. In addition to measuring the critical mass of this thermionic configuration, a detailed power distribution in one of the thermionic element stages in the mixed spectrum region was measured. A power peak-to-average ratio of two was observed for this fuel stage at the midplane of the core and adjacent to the reflector. Also, the power on the outer surface adjacent to the BeO was slightly more than a factor of two larger than the power on the inside surface of a 5.08 cm (2.0 in.) high annular fuel segment with a 2.52 cm (0.993 in.) o.d. and a 1.86 cm (0.731 in.) i.d.

  15. Method for VAWT Placement on a Complex Building Structure

    DTIC Science & Technology

    2013-06-01

    85 APPENDIX C: ANSYS CFX SPECIFICATIONS FOR WIND FLOW ANALYSIS .....87 APPENDIX D: SINGLE ROTOR ANALYSIS ANSYS CFX MESH DETAILS...89 APPENDIX E: SINGLE ROTOR ANALYSIS, ANSYS CFX SPECIFICS .....................91 APPENDIX F: DETAILED RESULTS OF SINGLE ROTOR...101 APPENDIX I: DUAL ROTOR ANALYSIS- ANSYS CFX SPECIFICATIONS (6 BLADED VAWTS

  16. 40 CFR Appendix B to Part 66 - Instruction Manual

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 15 2011-07-01 2011-07-01 false Instruction Manual B Appendix B to...) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. B Appendix B to Part 66—Instruction Manual Note: For text of appendix B see appendix B to part 67. ...

  17. Comparative Roles of Overexpressed and Mutated H- and K-ras in Mammary Carcinogenesis.

    DTIC Science & Technology

    1996-08-01

    transgene of these tumors. Subject terms: breast cancer, mammary carcinogenesis, oncogenes, ras genes, replication defective... Number of pages: 44. Appendices 5 through 10. Introduction: Breast cancer development involves multiple poorly...understood steps (25). Currently, several genes that may participate in breast cancer development are under investigation. The ras family of genes

  18. US Department of Energy Nevada Operations Office annual site environmental report: 1993. Volume 2: Appendices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Black, S.C.; Glines, W.M.; Townsend, Y.E.

    1994-09-01

    This report comprises appendices which support monitoring and surveillance on and around the Nevada Test Site (NTS) during 1993. Appendix A contains onsite Pu-238, gross beta, and gamma-emitting radionuclides in air. Appendix B contains onsite tritium in air. Appendix C contains onsite Pu-238, Sr-90, gross alpha and beta, gamma-emitting radionuclides, Ra-226, Ra-228 and tritium in water. A summary of 1993 results of offsite radiological monitoring is included in Appendix D. Appendix E contains radioactive noble gases in air onsite. Appendix F contains onsite thermoluminescent dosimeter data. Historical trends in onsite thermoluminescent dosimeter data are contained in Appendix G. Appendix H summarizes 1993 compliance at the DOE/NV NTS and non-NTS facilities. Appendix I summarizes the 1993 results of nonradiological monitoring.

  19. Statistical Mechanics and Applications in Condensed Matter

    NASA Astrophysics Data System (ADS)

    Di Castro, Carlo; Raimondi, Roberto

    2015-08-01

    Preface; 1. Thermodynamics: a brief overview; 2. Kinetics; 3. From Boltzmann to Gibbs; 4. More ensembles; 5. The thermodynamic limit and its thermodynamic stability; 6. Density matrix and quantum statistical mechanics; 7. The quantum gases; 8. Mean-field theories and critical phenomena; 9. Second quantization and Hartree-Fock approximation; 10. Linear response and fluctuation-dissipation theorem in quantum systems: equilibrium and small deviations; 11. Brownian motion and transport in disordered systems; 12. Fermi liquids; 13. The Landau theory of the second order phase transitions; 14. The Landau-Wilson model for critical phenomena; 15. Superfluidity and superconductivity; 16. The scaling theory; 17. The renormalization group approach; 18. Thermal Green functions; 19. The microscopic foundations of Fermi liquids; 20. The Luttinger liquid; 21. Quantum interference effects in disordered electron systems; Appendix A. The central limit theorem; Appendix B. Some useful properties of the Euler Gamma function; Appendix C. Proof of the second theorem of Yang and Lee; Appendix D. The most probable distribution for the quantum gases; Appendix E. Fermi-Dirac and Bose-Einstein integrals; Appendix F. The Fermi gas in a uniform magnetic field: Landau diamagnetism; Appendix G. Ising and gas-lattice models; Appendix H. Sum over discrete Matsubara frequencies; Appendix I. Hydrodynamics of the two-fluid model of superfluidity; Appendix J. The Cooper problem in the theory of superconductivity; Appendix K. Superconductive fluctuations phenomena; Appendix L. Diagrammatic aspects of the exact solution of the Tomonaga Luttinger model; Appendix M. Details on the theory of the disordered Fermi liquid; References; Author index; Index.

  20. Fast Neutron Spectrum Potassium Worth for Space Power Reactor Design Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Marshall, Margaret A.; Briggs, J. Blair

    2015-03-01

    A variety of critical experiments were constructed of enriched uranium metal (oralloy) during the 1960s and 1970s at the Oak Ridge Critical Experiments Facility (ORCEF) in support of criticality safety operations at the Y-12 Plant. The purposes of these experiments included the evaluation of storage, casting, and handling limits for the Y-12 Plant and providing data for verification of calculation methods and cross sections for nuclear criticality safety applications. These included solid cylinders of various diameters, annuli of various inner and outer diameters, two and three interacting cylinders of various diameters, and graphite- and polyethylene-reflected cylinders and annuli. Of the hundreds of delayed critical experiments, one was performed that consisted of uranium metal annuli surrounding a potassium-filled, stainless steel can. The outer diameter of the annuli was approximately 13 inches (33.02 cm) with an inner diameter of 7 inches (17.78 cm). The diameter of the stainless steel can was 7 inches (17.78 cm). The critical height of the configurations was approximately 5.6 inches (14.224 cm). The uranium annulus consisted of multiple stacked rings, each with a radial thickness of 1 inch (2.54 cm) and varying heights. A companion measurement was performed using empty stainless steel cans; the primary purpose of these experiments was to test the fast neutron cross sections of potassium, as it was a candidate coolant in some early space power reactor designs. The experimental measurements were performed on July 11, 1963, by J. T. Mihalczo and M. S. Wyatt (Ref. 1), with additional information in the corresponding logbook. Unreflected and unmoderated experiments with the same set of highly enriched uranium metal parts were performed at the Oak Ridge Critical Experiments Facility in the 1960s and are evaluated in the International Handbook for Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) with the identifier HEU-MET-FAST-051. Thin graphite-reflected (2 inches or less) experiments also using the same set of highly enriched uranium metal parts are evaluated in HEU-MET-FAST-071. Polyethylene-reflected configurations are evaluated in HEU-MET-FAST-076. A stack of highly enriched metal discs with a thick beryllium top reflector is evaluated in HEU-MET-FAST-069, and two additional highly enriched uranium annuli with beryllium cores are evaluated in HEU-MET-FAST-059. Both detailed and simplified model specifications are provided in this evaluation. Both of these fast neutron spectrum assemblies were determined to be acceptable benchmark experiments. The calculated eigenvalues for both the detailed and the simple benchmark models are within ~0.26% of the benchmark values for Configuration 1 (calculations performed using MCNP6 with ENDF/B-VII.1 neutron cross section data), but under-calculate the benchmark values by ~7σ because the uncertainty in the benchmark is very small: ~0.0004 (1σ); for Configuration 2, the under-calculation is ~0.31% and ~8σ. Comparison of detailed and simple model calculations for the potassium worth measurement and potassium mass coefficient yields results approximately 70-80% lower (~6σ to 10σ) than the benchmark values for the various nuclear data libraries utilized. Both the potassium worth and mass coefficient are also deemed to be acceptable benchmark experiment measurements.

  1. Analysis of the Impact of ’People Programs’ Upon Retention of Enlisted Personnel in the Air Force. Appendices K, L, N.

    DTIC Science & Technology

    1982-06-09

    V. An Econometric Model of Retention; Bibliography; Appendix A; Appendix B; Volume II: Appendix C, Appendix D, Appendix E; Volume III: Appendix F...

  2. 40 CFR Appendix C to Part 66 - Computer Program

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 15 2011-07-01 2011-07-01 false Computer Program C Appendix C to Part...) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer Program Note: For text of appendix C see appendix C to part 67. ...

  3. Listening to the occupants: a Web-based indoor environmental quality survey.

    PubMed

    Zagreus, Leah; Huizenga, Charlie; Arens, Edward; Lehrer, David

    2004-01-01

    Building occupants are a rich source of information about indoor environmental quality and its effect on comfort and productivity. The Center for the Built Environment has developed a Web-based survey and accompanying online reporting tools to quickly and inexpensively gather, process and present this information. The core questions assess occupant satisfaction with the following IEQ areas: office layout, office furnishings, thermal comfort, indoor air quality, lighting, acoustics, and building cleanliness and maintenance. The survey can be used to assess the performance of a building, identify areas needing improvement, and provide useful feedback to designers and operators about specific aspects of building design features and operating strategies. The survey has been extensively tested and refined and has been conducted in more than 70 buildings, creating a rapidly growing database of standardized survey data that is used for benchmarking. We present three case studies that demonstrate different applications of the survey: a pre/post analysis of occupants moving to a new building, a survey used in conjunction with physical measurements to determine how environmental factors affect occupants' perceived comfort and productivity levels, and a benchmarking example of using the survey to establish how new buildings are meeting a client's design objectives. In addition to its use in benchmarking a building's performance against other buildings, the CBE survey can be used as a diagnostic tool to identify specific problems and their sources. Whenever a respondent indicates dissatisfaction with an aspect of building performance, a branching page follows with more detailed questions about the nature of the problem. This systematically collected information provides a good resource for solving indoor environmental problems in the building. By repeating the survey after a problem has been corrected it is also possible to assess the effectiveness of the solution.

  4. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  5. Northeastern Gulf of Mexico coastal and marine ecosystem program: Data search and synthesis, annotated bibliography. Appendix A: Physical oceanography. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This study summarizes environmental and socioeconomic information related to the Florida Panhandle Outer Continental Shelf (OCS). It contains a conceptual model of active processes and identification of information gaps that will be useful in the design of future environmental studies in the geographic area. The annotated bibliography for this study is printed in six volumes, each pertaining to a specific topic, as follows: Appendix A--Physical Oceanography; Appendix B--Meteorology; Appendix C--Geology; Appendix D--Chemistry; Appendix E--Biology; and Appendix F--Socioeconomics. This volume contains bibliographic references pertaining to physical oceanography.

  6. 14 CFR Appendix G to Part 151 - Appendix G to Part 151

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Appendix G to Part 151 G Appendix G to Part 151 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRPORTS FEDERAL AID TO AIRPORTS Pt. 151, App. G Appendix G to Part 151 There is set forth below an...

  7. 14 CFR Appendix G to Part 151 - Appendix G to Part 151

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Appendix G to Part 151 G Appendix G to Part 151 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRPORTS FEDERAL AID TO AIRPORTS Pt. 151, App. G Appendix G to Part 151 There is set forth below an...

  8. 14 CFR Appendix G to Part 151 - Appendix G to Part 151

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Appendix G to Part 151 G Appendix G to Part 151 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRPORTS FEDERAL AID TO AIRPORTS Pt. 151, App. G Appendix G to Part 151 There is set forth below an...

  9. 14 CFR Appendix G to Part 151 - Appendix G to Part 151

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Appendix G to Part 151 G Appendix G to Part 151 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRPORTS FEDERAL AID TO AIRPORTS Pt. 151, App. G Appendix G to Part 151 There is set forth below an...

  10. 47 CFR Appendix - Technical Appendix 2

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... PROGRAM Waiver of household eligibility. Pt. 301, App. 2 Technical Appendix 2 TECHNICAL APPENDIX 2—NTIA... promotional prices Equipment cannot be sold conditioned on the purchase of a Smart Antenna or other equipment...

  11. Validation Data and Model Development for Fuel Assembly Response to Seismic Loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bardet, Philippe; Ricciardi, Guillaume

    2016-01-31

    Vibrations are inherently present in nuclear reactors, especially in the cores and steam generators of pressurized water reactors (PWR). They can have significant effects on local heat transfer and on wear and tear in the reactor, and often set safety margins. The simulation of these multiphysics phenomena from first principles requires the coupling of several codes, which is one of the most challenging tasks in modern computer simulation. Here an ambitious multiphysics, multidisciplinary validation campaign is conducted. It relied on an integrated team of experimentalists and code developers to acquire benchmark and validation data for fluid-structure interaction codes. Data are focused on PWR fuel bundle behavior during seismic transients.

  12. Supercomputer simulations of structure formation in the Universe

    NASA Astrophysics Data System (ADS)

    Ishiyama, Tomoaki

    2017-06-01

    We describe the implementation and performance results of our massively parallel MPI/OpenMP hybrid TreePM code for large-scale cosmological N-body simulations. For domain decomposition, a recursive multi-section algorithm is used, and the sizes of the domains are automatically set so that the total calculation time is the same for all processes. We developed a highly tuned gravity kernel for short-range forces and a novel communication algorithm for long-range forces. For a two-trillion-particle benchmark simulation, the average performance on the full system of the K computer (82,944 nodes; 663,552 cores in total) is 5.8 Pflops, which corresponds to 55% of the peak speed.
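
    A recursive multi-section decomposition can be sketched as follows, here in a simplified bisection-style Python version; the actual code sections each axis into more than two pieces and balances measured calculation time rather than a static per-particle weight:

    ```python
    import numpy as np

    def decompose(positions, weights, n_domains, axis=0):
        """Recursively cut particles along alternating axes so that each
        domain receives approximately the same total work.

        positions: (N, 3) coordinates; weights: (N,) per-particle cost estimates.
        Returns n_domains index arrays into the input arrays.
        """
        order = np.argsort(positions[:, axis])
        if n_domains == 1:
            return [order]
        # Cut where cumulative work reaches the left half's share.
        n_left = n_domains // 2
        target = weights[order].sum() * n_left / n_domains
        cut = int(np.searchsorted(np.cumsum(weights[order]), target))
        cut = min(max(cut, 1), len(order) - 1)  # keep both halves non-empty
        next_axis = (axis + 1) % positions.shape[1]
        left, right = order[:cut], order[cut:]
        sub_l = decompose(positions[left], weights[left], n_left, next_axis)
        sub_r = decompose(positions[right], weights[right], n_domains - n_left, next_axis)
        # Map child index arrays back into the parent's index space.
        return [left[s] for s in sub_l] + [right[s] for s in sub_r]

    rng = np.random.default_rng(42)
    pos, w = rng.random((10000, 3)), np.ones(10000)
    print([len(d) for d in decompose(pos, w, 8)])  # ~1250 particles per domain
    ```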

  13. An efficient implementation of semi-numerical computation of the Hartree-Fock exchange on the Intel Phi processor

    NASA Astrophysics Data System (ADS)

    Liu, Fenglai; Kong, Jing

    2018-07-01

    Unique technical challenges and their solutions for implementing semi-numerical Hartree-Fock exchange on the Intel Phi processor are discussed, especially concerning the single-instruction-multiple-data type of processing and the small cache size. Benchmark calculations on a series of buckyball molecules with various Gaussian basis sets on a Phi processor and a six-core CPU show that the Phi processor provides as much as a 12-fold speedup with large basis sets compared with the conventional four-center electron repulsion integration approach performed on the CPU. The accuracy of the semi-numerical scheme is also evaluated and found to be comparable to that of the resolution-of-identity approach.
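
    For context, semi-numerical (pseudospectral-style) exchange evaluates one electron coordinate on a numerical grid and the other analytically. A generic form of the working equation is sketched below with the usual symbols (χ basis functions, P the density matrix, w_g grid weights); this is the textbook formulation, not necessarily the authors' exact one:

    ```latex
    % One-coordinate-on-grid exchange build:
    K_{\mu\nu} \approx \sum_{g} w_g\, \chi_\mu(\mathbf{r}_g)
      \sum_{\lambda\sigma} P_{\lambda\sigma}\, \chi_\lambda(\mathbf{r}_g)\,
      A_{\nu\sigma}(\mathbf{r}_g),
    \qquad
    A_{\nu\sigma}(\mathbf{r}_g) = \int
      \frac{\chi_\nu(\mathbf{r})\, \chi_\sigma(\mathbf{r})}
           {\lvert \mathbf{r}-\mathbf{r}_g \rvert}\, d\mathbf{r}.
    ```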

  14. Space Station Furnace Facility. Experiment/Facility Requirements Document (E/FRD), volume 2, appendix 5

    NASA Technical Reports Server (NTRS)

    Kephart, Nancy

    1992-01-01

    The function of the Space Station Furnace Facility (SSFF) is to support materials research into the crystal growth and solidification processes of electronic and photonic materials, metals and alloys, and glasses and ceramics. To support this broad base of research requirements, the SSFF will employ a variety of furnace modules operated, regulated, and supported by a core of common subsystems. Furnace modules may be reconfigured or specifically developed to provide unique solidification conditions for each set of experiments. The SSFF modular approach permits the addition of new or scaled-up furnace modules to support the evolution of the facility as new science requirements are identified. The SSFF Core is of modular design to permit augmentation for enhanced capabilities. The fully integrated configuration of the SSFF will consist of three racks with the capability of supporting up to two furnace modules per rack. The initial configuration of the SSFF will consist of two of the three racks and one furnace module. This Experiment/Facility Requirements Document (E/FRD) describes the integrated facility requirements for the Space Station Freedom (SSF) Integrated Configuration-1 (IC1) mission. The IC1 SSFF will consist of two racks: the Core Rack, with the centralized subsystem equipment, and Experiment Rack-1, with Furnace Module-1 and the distributed subsystem equipment to support the furnace.

  15. A study of the required Rayleigh number to sustain dynamo with various inner core radius

    NASA Astrophysics Data System (ADS)

    Nishida, Y.; Katoh, Y.; Matsui, H.; Kumamoto, A.

    2017-12-01

    It is widely accepted that the geomagnetic field is sustained by thermally and compositionally driven convection of a liquid iron alloy in the outer core. The generation process of the geomagnetic field has been studied by a number of MHD dynamo simulations. Recent studies of the Earth's core evolution suggest that the ratio of the inner solid core radius ri to the outer liquid core radius ro changed from ri/ro = 0 to 0.35 during the last one billion years. There are some studies of dynamos in the early Earth with a smaller inner core than at present. Heimpel et al. (2005) determined from simulations the Rayleigh number Ra at the onset of the dynamo process as a function of ri/ro, while paleomagnetic observations show that the geomagnetic field has been sustained for 3.5 billion years. While Heimpel and Evans (2013) studied dynamo processes taking into account the thermal history of the Earth's interior, there were few cases corresponding to the early Earth. Driscoll (2016) performed a series of dynamo simulations based on a thermal evolution model. Despite a number of dynamo simulations, the dynamo process occurring in the interior of the early Earth has not been fully understood, because the magnetic Prandtl numbers in these simulations are much larger than that of the actual outer core. In the present study, we performed thermally driven dynamo simulations with different aspect ratios ri/ro = 0.15, 0.25 and 0.35 to evaluate the critical Ra for the thermal convection and the Ra required to maintain the dynamo. For this purpose, we performed simulations with various Ra and fixed the other control parameters, such as the Ekman, Prandtl, and magnetic Prandtl numbers. For the initial condition and boundary conditions, we followed dynamo benchmark case 1 of Christensen et al. (2001). The results show that the critical Ra increases with smaller aspect ratio ri/ro. It is confirmed that a larger buoyancy amplitude is required to maintain a dynamo when the inner core is smaller.
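
    Locating such a threshold amounts to bracketing: run cases at increasing Ra until the dynamo is sustained, then bisect. A minimal sketch, where `is_sustained` is a hypothetical callback that runs (or looks up) a simulation at a given Ra and reports whether magnetic energy is maintained:

    ```python
    def critical_rayleigh(is_sustained, ra_fail, ra_ok, rel_tol=0.05):
        """Bracket the smallest Ra at which the dynamo is sustained.

        is_sustained(ra) -> bool is a hypothetical, expensive call that
        runs a dynamo simulation at Rayleigh number ra.
        ra_fail must be a failing value and ra_ok a sustaining one.
        """
        assert not is_sustained(ra_fail) and is_sustained(ra_ok)
        while (ra_ok - ra_fail) / ra_fail > rel_tol:
            ra_mid = 0.5 * (ra_fail + ra_ok)
            if is_sustained(ra_mid):
                ra_ok = ra_mid
            else:
                ra_fail = ra_mid
        return 0.5 * (ra_fail + ra_ok)

    # Toy stand-in with a known threshold of 3.2e6:
    print(critical_rayleigh(lambda ra: ra >= 3.2e6, 1e6, 1e7))
    ```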

  16. 29 CFR Appendix B to Subpart Y of... - Guidelines for Scientific Diving

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 8 2013-07-01 2013-07-01 false Guidelines for Scientific Diving B Appendix B to Subpart Y..., Subpt. Y, App. B Appendix B to Subpart Y of Part 1926—Guidelines for Scientific Diving Note: The requirements applicable to construction work under this appendix B are identical to those set forth at appendix...

  17. 29 CFR Appendix B to Subpart Y of... - Guidelines for Scientific Diving

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 8 2012-07-01 2012-07-01 false Guidelines for Scientific Diving B Appendix B to Subpart Y..., Subpt. Y, App. B Appendix B to Subpart Y of Part 1926—Guidelines for Scientific Diving Note: The requirements applicable to construction work under this appendix B are identical to those set forth at appendix...

  18. 14 CFR Appendix B to Part 25 - Appendix B to Part 25

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Appendix B to Part 25 B Appendix B to Part 25 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Pt. 25, App. B Appendix B to Part 25 EC28SE91.055 EC28SE91...

  19. 45 CFR Appendix A to Part 13 - Appendix A to Part 13

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Appendix A to Part 13 A Appendix A to Part 13 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION IMPLEMENTATION OF THE EQUAL ACCESS TO JUSTICE ACT IN AGENCY PROCEEDINGS Pt. 13, App. A Appendix A to Part 13 Proceedings covered Statutory authority Applicable regulations...

  20. 45 CFR Appendix A to Part 13 - Appendix A to Part 13

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Appendix A to Part 13 A Appendix A to Part 13 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION IMPLEMENTATION OF THE EQUAL ACCESS TO JUSTICE ACT IN AGENCY PROCEEDINGS Pt. 13, App. A Appendix A to Part 13 Proceedings covered Statutory authority Applicable regulations...

  1. 45 CFR Appendix A to Part 13 - Appendix A to Part 13

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Appendix A to Part 13 A Appendix A to Part 13 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION IMPLEMENTATION OF THE EQUAL ACCESS TO JUSTICE ACT IN AGENCY PROCEEDINGS Pt. 13, App. A Appendix A to Part 13 Proceedings covered Statutory authority Applicable regulations...

  2. 14 CFR Appendix C to Part 151 - Appendix C to Part 151

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Appendix C to Part 151 C Appendix C to Part...) AIRPORTS FEDERAL AID TO AIRPORTS Pt. 151, App. C Appendix C to Part 151 There is set forth below an... Items 1. Maintenance-type work, including: (a) Seal coats. (b) Crack filling. (c) Resealing joints. (d...

  3. 14 CFR Appendix C to Part 25 - Appendix C to Part 25

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Appendix C to Part 25 C Appendix C to Part... AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Pt. 25, App. C Appendix C to Part 25 Part I—Atmospheric....062 EC28SE91.063 (c) Takeoff maximum icing. The maximum intensity of atmospheric icing conditions for...

  4. 14 CFR Appendix C to Part 25 - Appendix C to Part 25

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Appendix C to Part 25 C Appendix C to Part... AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Pt. 25, App. C Appendix C to Part 25 Part I—Atmospheric....062 EC28SE91.063 (c) Takeoff maximum icing. The maximum intensity of atmospheric icing conditions for...

  5. 14 CFR Appendix C to Part 25 - Appendix C to Part 25

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Appendix C to Part 25 C Appendix C to Part... AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Pt. 25, App. C Appendix C to Part 25 Part I—Atmospheric....062 EC28SE91.063 (c) Takeoff maximum icing. The maximum intensity of atmospheric icing conditions for...

  6. 14 CFR Appendix H to Part 151 - Appendix H to Part 151

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Appendix H to Part 151 H Appendix H to Part...) AIRPORTS FEDERAL AID TO AIRPORTS Pt. 151, App. H Appendix H to Part 151 There is set forth below the...)). H. Withholding for unpaid wages and liquidated damages, and priority of payment (1) The FAA may...

  7. Test Report: Direct and Indirect Lightning Effects on Composite Materials

    NASA Technical Reports Server (NTRS)

    Evans, R. W.

    1997-01-01

    Lightning tests were performed on composite materials as a part of an investigation of electromagnetic effects on the materials. Samples were subjected to direct and remote simulated lightning strikes. Samples included various thicknesses of graphite filament reinforced plastic (GFRP), material enhanced by expanded aluminum foil layers, and material with an aluminum honeycomb core. Shielding properties of the material and damage to the sample surfaces and joints were investigated. Adding expanded aluminum foil layers and increasing the thickness of GFRP improves the shielding effectiveness against lightning induced fields and the ability to withstand lightning strikes. A report describing the lightning strike tests performed by the U.S. Army Redstone Technical Test Center, Redstone Arsenal, AL, STERT-TE-E-EM, is included as an appendix.

  8. Integrated Battlefield Effects Research for the National Training Center. Appendix B. Requirements Design Specification for the Addition of Nuclear and Chemical Capabilities to the National Training Center (NTC) Core Instrumentation Subsystem (CIS)

    DTIC Science & Technology

    1984-12-31

    [Unrecoverable OCR fragment: report documentation page and unit-conversion table.] References cited include: Division (How to Fight), FM 11-50; 8. Military Symbols, FM 21-30; 9. NBC (Nuclear, Biological and Chemical) Defense, FM 21-40; 10. Combat Communications

  9. Benchmarking specialty hospitals, a scoping review on theory and practice.

    PubMed

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.

  10. All inclusive benchmarking.

    PubMed

    Ellis, Judith

    2006-07-01

    The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles, to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement, and to promote its acceptance as an integral and effective part of benchmarking activity in health services. Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being used effectively by some frontline staff. However, use is inconsistent, and the value of the tool kit, or the support that clinical practice benchmarking requires to be effective, is not always recognized or provided by National Health Service managers, who are absorbed with the use of quantitative benchmarking approaches and the measurability of comparative performance data. This review of the published benchmarking literature was obtained through an ever-narrowing search strategy, commencing with benchmarking within the quality improvement literature, moving through benchmarking activity in health services, and including not only published examples of benchmarking approaches and models but also web-based benchmarking data. This supported identification of how benchmarking approaches have developed and been used, remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative, and specifically performance, benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and then applied to the health service (Bullivant 1998). The literature is also, in the main, descriptive in its support of the effectiveness of benchmarking activity, and although this does not seem to have restricted the popularity of quantitative activity, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks mean that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach which needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.

  11. Scale-4 Analysis of Pressurized Water Reactor Critical Configurations: Volume 2-Sequoyah Unit 2 Cycle 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowman, S.M.

    1995-01-01

    The requirements of ANSI/ANS 8.1 specify that calculational methods for away-from-reactor criticality safety analyses be validated against experimental measurements. If credit for the negative reactivity of the depleted (or spent) fuel isotopics is desired, it is necessary to benchmark computational methods against spent fuel critical configurations. This report summarizes a portion of the ongoing effort to benchmark away-from-reactor criticality analysis methods using critical configurations from commercial pressurized-water reactors. The analysis methodology selected for all the calculations reported herein is based on the codes and data provided in the SCALE-4 code system. The isotopic densities for the spent fuel assemblies in the critical configurations were calculated using the SAS2H analytical sequence of the SCALE-4 system. The sources of data and the procedures for deriving SAS2H input parameters are described in detail. The SNIKR code module was used to extract the necessary isotopic densities from the SAS2H results and provide the data in the format required by the SCALE criticality analysis modules. The CSASN analytical sequence in SCALE-4 was used to perform resonance processing of the cross sections. The KENO V.a module of SCALE-4 was used to calculate the effective multiplication factor (keff) of each case. The SCALE-4 27-group burnup library containing ENDF/B-IV (actinides) and ENDF/B-V (fission products) data was used for all the calculations. This volume of the report documents the SCALE system analysis of three reactor critical configurations for Sequoyah Unit 2 Cycle 3. This unit and cycle were chosen because of their relevance to spent fuel benchmark applications: (1) the unit had a significantly long downtime of 2.7 years during the middle of cycle (MOC) 3, and (2) the core consisted entirely of burned fuel at the MOC restart. The first benchmark critical calculation was the MOC restart at hot, full-power (HFP) critical conditions. The other two benchmark critical calculations were the beginning-of-cycle (BOC) startup at both hot, zero-power (HZP) and HFP critical conditions. These latter calculations were used to check for consistency in the calculated results for different burnups and downtimes. The keff results were in the range of 1.00014 to 1.00259 with a standard deviation of less than 0.001.

  12. 24 CFR Appendix B to 24 Cfr Part 3400 - Appendix B to 24 CFR Part 3400

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 5 2012-04-01 2012-04-01 false Appendix B to 24 CFR Part 3400 B Appendix B to 24 CFR Part 3400 Housing and Urban Development Regulations Relating to Housing and Urban... HOUSING AND URBAN DEVELOPMENT SAFE MORTGAGE LICENSING ACT Pt. 3400, App. B Appendix B to 24 CFR Part 3400...

  13. 30 CFR Appendix A to Subpart J of... - Appendix A to Subpart J of Part 75

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Appendix A to Subpart J of Part 75 A Appendix A to Subpart J of Part 75 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... Medium-Voltage Alternating Current Circuits Pt. 75, Subpt. J, App. A Appendix A to Subpart J of Part 75...

  14. 24 CFR Appendix C to 24 Cfr Part 3400 - Appendix C to 24 CFR Part 3400

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 5 2012-04-01 2012-04-01 false Appendix C to 24 CFR Part 3400 C Appendix C to 24 CFR Part 3400 Housing and Urban Development Regulations Relating to Housing and Urban... HOUSING AND URBAN DEVELOPMENT SAFE MORTGAGE LICENSING ACT Pt. 3400, App. C Appendix C to 24 CFR Part 3400...

  15. 12 CFR Appendixes A-H to Subpart A... - Appendixes A-H to Subpart A of Part 702

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Appendixes A-H to Subpart A of Part 702 A Appendixes A-H to Subpart A of Part 702 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS PROMPT CORRECTIVE ACTION Net Worth Classification Pt. 702, Apps. Appendixes A-H to...

  16. Morphological variations of the vermiform appendix in Iranian cadavers: a study from developing countries.

    PubMed

    Mohammadi, Shabnam; Hedjazi, Arya; Sajjadian, Maryam; Rahmani, Mahboobeh; Mohammadi, Maryam; Moghadam, Maliheh Dadgar

    2017-03-29

    The vermiform appendix is a worm-like tube containing a large amount of lymphoid follicles. To our knowledge, there is little standard data about the vermiform appendix in the Iranian population. Therefore, the objective of this study was to investigate normal appendix size in Iranian cadavers. A cross-sectional study was undertaken between June 2014 and July 2015 in the autopsy laboratory, Legal Medicine Organization, Razavi Khorasan province, Iran. A total of 693 cadavers with a mean age of 40.46 ± 20.99 years were divided into 10 groups. After recording the position of the appendix, the length, diameter, and weight of the appendix were measured. Statistical analysis was performed using SPSS software. The mean values of the demographic characteristics were: age = 40.46 ± 20.99 years; weight = 63.47 ± 17.84 kg; height = 159.95 ± 28.23 cm. The mean appendix length, diameter, weight, and index in the cadavers were 8.52 ± 2.99 cm, 12.17 ± 4.53 mm, 6.43 ± 3.26 grams, and 0.013 ± 0.01, respectively. The most common position of the appendix was retrocecal, in 71.7% of cases. Significant correlations were evident between the demographic data and appendix size (P<0.05). The diameter (P=0.002) and index of the appendix (P=0.003) showed significant differences between males and females. Standard data on the vermiform appendix are useful for clinicians as well as anthropologists. The findings of the present study provide information about morphologic variations of the appendix in the Iranian population.

  17. International Game '99: Crisis in South Asia, 28-30 January 1999

    DTIC Science & Technology

    1999-01-01

    APPENDIX A: INDIA/PAKISTAN: MILITARY ASSUMPTIONS IN 2003; APPENDIX B: INDIA-PAKISTAN CHRONOLOGY; APPENDIX C: INDIA COUNTRY PROFILE; ... APPENDIX E: INDIA AND PAKISTAN SANCTIONS. EXECUTIVE SUMMARY: The primary purpose

  18. A Qualitative Analysis of the Spontaneous Volunteer Response to the 2013 Sudan Floods: Changing the Paradigm.

    PubMed

    Albahari, Amin; Schultz, Carl H

    2017-06-01

    Introduction: While the concept of community resilience is gaining traction, the role of spontaneous volunteers during the initial response to disasters remains controversial. In an attempt to resolve some of the debate, investigators examined the activities of a spontaneous volunteer group called Nafeer after the Sudan floods around the city of Khartoum in August of 2013. Hypothesis: Can spontaneous volunteers successfully initiate, coordinate, and deliver sustained assistance immediately after a disaster? This retrospective, descriptive case study involved: (1) interviews with Nafeer members who participated in the disaster response to the Khartoum floods; (2) examination of documents generated during the event; and (3) subsequent benchmarking of their efforts against the Sphere Handbook. Members who agreed to participate were requested to provide all documents in their possession relating to Nafeer. The response by Nafeer was then benchmarked against the Sphere Handbook's six core standards, as well as the 11 minimum standards in essential health services. A total of 11 individuals were interviewed (six from leadership and five from active members). Nafeer's activities included: food provision; delivery of basic health care; environmental sanitation campaigns; efforts to raise awareness; and construction and strengthening of flood barricades. Its use of electronic platforms and social media to collect data and coordinate the organization's response was effective. Nafeer adopted a flat management structure, dividing itself into 14 committees. A Coordination Committee was in charge of liaising between all committees. The Health and Sanitation Committee supervised two health days, which included mobile medical and dentistry clinics supported by a mobile laboratory and pharmacy. The Engineering Committee managed to construct and maintain flood barricades. Nafeer used crowd-sourcing to fund its activities, receiving donations locally and internationally through supporters outside Sudan. Nafeer completely fulfilled three of Sphere's core standards and partially fulfilled the other three, but none of the essential health services standards were fulfilled. Even though the Sphere Handbook was chosen as the best available "gold standard" against which to benchmark Nafeer's efforts, it showed significant limitations in effectively measuring this group. It appears that independent spontaneous volunteer initiatives, like Nafeer, can potentially improve community resilience and play a significant role in the humanitarian response. Such organizations should be the subject of increased research activity. Relevant bodies should consider issuing separate guidelines supporting spontaneous volunteer organizations. Albahari A, Schultz CH. A qualitative analysis of the spontaneous volunteer response to the 2013 Sudan floods: changing the paradigm. Prehosp Disaster Med. 2017;32(3):240-248.

  19. Recurrence Interval and Event Age Data for Type A Faults

    USGS Publications Warehouse

    Dawson, Timothy E.; Weldon, Ray J.; Biasi, Glenn P.

    2008-01-01

    This appendix summarizes available recurrence interval, event age, and timing of most recent event data for Type A faults considered in the Earthquake Rate Model 2 (ERM 2) and used in the ERM 2 Appendix C analysis as well as Appendix N (time-dependent probabilities). These data have been compiled into an Excel workbook named Appendix B A-fault event ages_recurrence_V5.0 (herein referred to as the Appendix B workbook). For convenience, the Appendix B workbook is attached to the end of this document as a series of tables. The tables within the Appendix B workbook include site locations, event ages, and recurrence data, and in some cases, the interval of time between earthquakes is also reported. The Appendix B workbook is organized as individual worksheets, with each worksheet named by fault and paleoseismic site. Each worksheet contains the site location in latitude and longitude, as well as information on event ages, and a summary of recurrence data. Because the data has been compiled from different sources with different presentation styles, descriptions of the contents of each worksheet within the Appendix B spreadsheet are summarized.
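
    Since the workbook is organized as one worksheet per fault and paleoseismic site, a script consuming it would iterate over sheets. The pandas sketch below shows one plausible way to do that; the file name spelling and the column names are assumptions, not the workbook's documented layout.

```python
# Hypothetical reader for an Appendix-B-style workbook: one worksheet per
# fault/paleoseismic site. Column names below are assumed for illustration.
import pandas as pd

# sheet_name=None loads every worksheet into a dict: {sheet name: DataFrame}
sheets = pd.read_excel("Appendix_B_A-fault_event_ages_recurrence_V5.0.xls",
                       sheet_name=None)

for site, df in sheets.items():
    if "event_age_ka" in df.columns:          # assumed column of event ages
        ages = df["event_age_ka"].dropna().sort_values()
        intervals = ages.diff().dropna()      # time between successive events
        print(f"{site}: mean recurrence = {intervals.mean():.2f} ka")
```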

  20. SPOC Benchmark Case: SNRE Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vishal Patel; Michael Eades; Claude Russel Joyner II

    The Small Nuclear Rocket Engine (SNRE) was modeled in the Center for Space Nuclear Research’s (CSNR) Space Propulsion Optimization Code (SPOC). SPOC aims to create nuclear thermal propulsion (NTP) geometries quickly to perform parametric studies on design spaces of historic and new NTP designs. The SNRE geometry was modeled in SPOC and a critical core with a reasonable amount of criticality margin was found. The fuel, tie-tubes, reflector, and control drum masses were predicted rather well. These are all very important for neutronics calculations so the active reactor geometries created with SPOC can continue to be trusted. Thermal calculations of the average and hot fuel channels agreed very well. The specific impulse calculations used historically and in SPOC disagree, so mass flow rates and impulses differed. Modeling peripheral and power balance components that do not affect nuclear characteristics of the core is not a feature of SPOC and as such, these components should continue to be designed using other tools. A full paper detailing the available SNRE data and comparisons with SPOC outputs will be submitted as a follow-up to this abstract.

  1. Recommendations for training in pediatric psychology: defining core competencies across training levels.

    PubMed

    Palermo, Tonya M; Janicke, David M; McQuaid, Elizabeth L; Mullins, Larry L; Robins, Paul M; Wu, Yelena P

    2014-10-01

    As a field, pediatric psychology has focused considerable efforts on the education and training of students and practitioners. Alongside a broader movement toward competency attainment in professional psychology and within the health professions, the Society of Pediatric Psychology commissioned a Task Force to establish core competencies in pediatric psychology and address the need for contemporary training recommendations. The Task Force adapted the framework proposed by the Competency Benchmarks Work Group on preparing psychologists for health service practice and defined competencies applicable across training levels ranging from initial practicum training to entry into the professional workforce in pediatric psychology. Competencies within 6 cluster areas, including science, professionalism, interpersonal, application, education, and systems, and 1 crosscutting cluster, crosscutting knowledge competencies in pediatric psychology, are presented in this report. Recommendations for the use of, and the further refinement of, these suggested competencies are discussed. © The Author 2014. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swaminarayan, Sriram; Germann, Timothy C; Kadau, Kai

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlops/Watt at a price of approximately 3.69 MFlops/dollar. The authors demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
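
    The quoted efficiency implies the machine's peak rate, a sanity check worth making explicit. The snippet below is plain arithmetic on the numbers in the abstract, nothing more.

```python
# 369 TFlop/s sustained at 27.7% of peak implies the full-system peak rate.
sustained_tflops = 369.0
fraction_of_peak = 0.277
print(f"implied peak ~ {sustained_tflops / fraction_of_peak:.0f} TFlop/s")  # ~1332
```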

  3. M3D-K Simulations of Beam-Driven Alfven Eigenmodes in ASDEX-U

    NASA Astrophysics Data System (ADS)

    Wang, Ge; Fu, Guoyong; Lauber, Philipp; Schneller, Mirjam

    2013-10-01

    Core-localized Alfven eigenmodes are often observed in neutral beam-heated plasmas in the ASDEX-U tokamak. In this work, hybrid simulations with the global kinetic/MHD hybrid code M3D-K have been carried out to investigate the linear stability and nonlinear dynamics of beam-driven Alfven eigenmodes using experimental parameters and profiles of an ASDEX-U discharge. The safety factor q profile is weakly reversed, with a minimum q value of about qmin = 3.0. The simulation results show that the n = 3 mode transits from a reversed shear Alfven eigenmode (RSAE) to a core-localized toroidal Alfven eigenmode (TAE) as qmin drops from 3.0 to 2.79, consistent with results from the stability code NOVA as well as the experimental measurement. The M3D-K results are being compared with those of the linear gyrokinetic stability code LIGKA for benchmarking. The simulation results will also be compared with the measured mode frequency and mode structure. This work was funded by the Max-Planck/Princeton Center for Plasma Physics.

  4. The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS

    DOE PAGES

    Ward, Andrew; Downar, Thomas J.; Xu, Y.; ...

    2015-04-22

    The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. Lastly, the capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.
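
    The abstract does not spell out the three-equation drift-flux model, but its defining closure is standard in the two-phase flow literature; a textbook form is reproduced below for orientation only, since PATHS' exact formulation may differ.

```latex
% Textbook drift-flux closure (illustrative; not necessarily PATHS' exact form):
\[
  \langle v_g \rangle \,=\, C_0 \, \langle j \rangle \,+\, V_{gj}
\]
% \langle v_g \rangle : area-averaged gas-phase velocity
% \langle j \rangle   : mixture volumetric flux
% C_0                 : distribution parameter
% V_{gj}              : drift velocity
```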

  5. Physics-based multiscale coupling for full core nuclear reactor simulation

    DOE PAGES

    Gaston, Derek R.; Permann, Cody J.; Peterson, John W.; ...

    2015-10-01

    Numerical simulation of nuclear reactors is a key technology in the quest for improvements in efficiency, safety, and reliability of both existing and future reactor designs. Historically, simulation of an entire reactor was accomplished by linking together multiple existing codes that each simulated a subset of the relevant multiphysics phenomena. Recent advances in the MOOSE (Multiphysics Object Oriented Simulation Environment) framework have enabled a new approach: multiple domain-specific applications, all built on the same software framework, are efficiently linked to create a cohesive application. This is accomplished with a flexible coupling capability that allows for a variety of different data exchanges to occur simultaneously on high performance parallel computational hardware. Examples based on the KAIST-3A benchmark core, as well as a simplified Westinghouse AP-1000 configuration, demonstrate the power of this new framework for tackling, in a coupled, multiscale manner, crucial reactor phenomena such as CRUD-induced power shift and fuel shuffle. © 2014 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-SA license.
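
    The data-exchange pattern described here, multiple single-physics solvers iterating toward a coupled solution, can be pictured with a tiny Picard (fixed-point) loop. The sketch below uses two toy stand-in solvers and is not MOOSE code; the solver models and relaxation factor are invented for illustration.

```python
# Generic Picard coupling of two stand-in single-physics solvers that
# exchange fields each iteration (the kind of exchange a framework like
# MOOSE orchestrates). Both "solvers" are toy models, not MOOSE APIs.
import numpy as np

def solve_neutronics(T):            # toy: power rises where fuel is cooler
    return 1.0 + 0.5 * (T.mean() - T) / T.mean()

def solve_thermal(power):           # toy: temperature follows local power
    return 600.0 + 300.0 * power / power.max()

T = np.linspace(800.0, 1000.0, 10)  # initial temperature field (K)
for it in range(100):
    power = solve_neutronics(T)     # field exchange: T -> neutronics
    T_new = solve_thermal(power)    # field exchange: power -> thermal
    if np.max(np.abs(T_new - T)) < 1e-8:
        break                       # converged coupled solution
    T = 0.5 * T + 0.5 * T_new       # under-relaxed field update
```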

  6. Massively parallel algorithm and implementation of RI-MP2 energy calculation for peta-scale many-core supercomputers.

    PubMed

    Katouda, Michio; Naruse, Akira; Hirano, Yukihiko; Nakajima, Takahito

    2016-11-15

    A new parallel algorithm and its implementation for the RI-MP2 energy calculation utilizing peta-flop-class many-core supercomputers are presented. Several improvements over the previous algorithm (J. Chem. Theory Comput. 2013, 9, 5373) have been made: (1) a dual-level hierarchical parallelization scheme that enables the use of more than 10,000 Message Passing Interface (MPI) processes and (2) a new data communication scheme that reduces network communication overhead. A multi-node and multi-GPU implementation of the present algorithm is presented for calculations on a central processing unit (CPU)/graphics processing unit (GPU) hybrid supercomputer. Benchmark results of the new algorithm and its implementation using the K computer (CPU clustering system) and TSUBAME 2.5 (CPU/GPU hybrid system) demonstrate high efficiency. The peak performance of 3.1 PFLOPS is attained using 80,199 nodes of the K computer. The peak performance of the multi-node and multi-GPU implementation is 514 TFLOPS using 1349 nodes and 4047 GPUs of TSUBAME 2.5. © 2016 Wiley Periodicals, Inc.
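
    A dual-level hierarchical MPI scheme of the kind described is typically built by splitting the world communicator into groups. The mpi4py sketch below shows only this generic pattern; the group count is arbitrary, and nothing here comes from the RI-MP2 implementation itself.

```python
# Generic dual-level MPI parallelization: split COMM_WORLD into groups,
# then parallelize across groups (level 1) and within each group (level 2).
# Illustrative pattern only; not the RI-MP2 implementation.
from mpi4py import MPI

world = MPI.COMM_WORLD
n_groups = 4                                   # assumed number of work groups

group_id = world.rank % n_groups
intra = world.Split(color=group_id, key=world.rank)   # ranks within one group
inter = world.Split(color=intra.rank, key=world.rank) # peer ranks across groups

# Level 1: independent work units are distributed across groups;
# Level 2: each unit is parallelized over the ranks inside its group.
print(f"world rank {world.rank}: group {group_id}, "
      f"intra rank {intra.rank}, inter rank {inter.rank}")
```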

  7. Re-visiting the tympanic membrane vicinity as core body temperature measurement site

    PubMed Central

    Gan, Chee Wee; Liang, Wenyu

    2017-01-01

    Core body temperature (CBT) is an important and commonly used indicator of human health and endurance performance. A rise in baseline CBT can be attributed to an onset of flu, infection or even thermoregulatory failure when it becomes excessive. Sites which have been used for measurement of CBT include the pulmonary artery, the esophagus, the rectum and the tympanic membrane. Among them, the tympanic membrane is an attractive measurement site for CBT due to its unobtrusive nature and ease of measurement, especially when continuous CBT measurements are needed for monitoring such as during military, occupational and sporting settings. However, to date, there are still polarizing views on the suitability of the tympanic membrane as a CBT site. This paper revisits a number of key unresolved issues in the literature and also presents, for the first time, a benchmark of the middle ear temperature against temperature measurements from other sites. Results from experiments carried out on human and primate subjects will be presented to draw a fresh set of insights against the backdrop of hypotheses and controversies. PMID:28414722

  8. Re-visiting the tympanic membrane vicinity as core body temperature measurement site.

    PubMed

    Yeoh, Wui Keat; Lee, Jason Kai Wei; Lim, Hsueh Yee; Gan, Chee Wee; Liang, Wenyu; Tan, Kok Kiong

    2017-01-01

    Core body temperature (CBT) is an important and commonly used indicator of human health and endurance performance. A rise in baseline CBT can be attributed to an onset of flu, infection or even thermoregulatory failure when it becomes excessive. Sites which have been used for measurement of CBT include the pulmonary artery, the esophagus, the rectum and the tympanic membrane. Among them, the tympanic membrane is an attractive measurement site for CBT due to its unobtrusive nature and ease of measurement, especially when continuous CBT measurements are needed for monitoring such as during military, occupational and sporting settings. However, to date, there are still polarizing views on the suitability of the tympanic membrane as a CBT site. This paper revisits a number of key unresolved issues in the literature and also presents, for the first time, a benchmark of the middle ear temperature against temperature measurements from other sites. Results from experiments carried out on human and primate subjects will be presented to draw a fresh set of insights against the backdrop of hypotheses and controversies.

  9. An Evaluation of One-Sided and Two-Sided Communication Paradigms on Relaxed-Ordering Interconnect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Khaled Z.; Hargrove, Paul H.; Iancu, Costin

    The Cray Gemini interconnect hardware provides multiple transfer mechanisms and out-of-order message delivery to improve communication throughput. In this paper we quantify the performance of one-sided and two-sided communication paradigms with respect to: 1) the optimal available hardware transfer mechanism, 2) message ordering constraints, 3) per node and per core message concurrency. In addition to using Cray native communication APIs, we use UPC and MPI micro-benchmarks to capture one- and two-sided semantics respectively. Our results indicate that relaxing the message delivery order can improve performance up to 4.6x when compared with strict ordering. When hardware allows it, high-level one-sided programming models can already take advantage of message reordering. Enforcing the ordering semantics of two-sided communication comes with a performance penalty. Furthermore, we argue that exposing out-of-order delivery at the application level is required for the next-generation programming models. Any ordering constraints in the language specifications reduce communication performance for small messages and increase the number of active cores required for peak throughput.
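
    The semantic difference being measured can be seen in miniature below: a two-sided exchange pairs a send with a matching receive (and inherits its ordering rules), while a one-sided put needs no matching call at the target. This mpi4py sketch is illustrative only; the paper's micro-benchmarks were written in UPC and MPI, and this example must be launched with at least two ranks.

```python
# Two-sided vs. one-sided communication in miniature (run with >= 2 ranks).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.zeros(1024, dtype='d')

# Two-sided: both endpoints participate; message ordering is enforced.
if rank == 0:
    comm.Send(buf, dest=1, tag=0)
elif rank == 1:
    comm.Recv(buf, source=0, tag=0)

# One-sided: the origin writes into the target's exposed window without a
# matching call on the target, leaving the runtime free to reorder transfers.
win = MPI.Win.Create(buf, comm=comm)
win.Fence()
if rank == 0:
    win.Put(buf, target_rank=1)
win.Fence()
win.Free()
```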

  10. Development of Subspace-based Hybrid Monte Carlo-Deterministic Algorithms for Reactor Physics Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Zhang, Qiong

    2014-05-20

    The development of hybrid Monte Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10³ to 10⁵ times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

  11. Interlaboratory comparison of immunohistochemical testing for HER2: results of the 2004 and 2005 College of American Pathologists HER2 Immunohistochemistry Tissue Microarray Survey.

    PubMed

    Fitzgibbons, Patrick L; Murphy, Douglas A; Dorfman, David M; Roche, Patrick C; Tubbs, Raymond R

    2006-10-01

    Correct assessment of human epidermal growth factor receptor 2 (HER2) status is essential in managing patients with invasive breast carcinoma, but few data are available on the accuracy of laboratories performing HER2 testing by immunohistochemistry (IHC). To review the results of the 2004 and 2005 College of American Pathologists HER2 Immunohistochemistry Tissue Microarray Survey. The HER2 survey is designed for laboratories performing immunohistochemical staining and interpretation for HER2. The survey uses tissue microarrays, each consisting of ten 3-mm tissue cores obtained from different invasive breast carcinomas. All cases are also analyzed by fluorescence in situ hybridization. Participants receive 8 tissue microarrays (80 cases) with instructions to perform immunostaining for HER2 using the laboratory's standard procedures. The laboratory interprets the stained slides and returns results to the College of American Pathologists for analysis. In 2004 and 2005, a core was considered "graded" when at least 90% of laboratories agreed on the result--negative (0, 1+) versus positive (2+, 3+). This interlaboratory comparison survey included 102 laboratories in 2004 and 141 laboratories in 2005. Of the 160 cases in both surveys, 111 (69%) achieved 90% consensus (graded). All 43 graded cores scored as IHC-positive were fluorescence in situ hybridization-positive, whereas all but 3 of the 68 IHC-negative graded cores were fluorescence in situ hybridization-negative. Ninety-seven (95%) of 102 laboratories in 2004 and 129 (91%) of 141 laboratories in 2005 correctly scored at least 90% of the graded cores. Performance among laboratories performing HER2 IHC in this tissue microarray-based survey was excellent. Cores found to be IHC-positive or IHC-negative by participant consensus can be used as validated benchmarks for interlaboratory comparison, allowing laboratories to assess their performance and determine if improvements are needed.
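
    The grading rule itself is mechanical: a core counts as graded when at least 90% of participating laboratories make the same negative-versus-positive call. A minimal sketch of that rule follows; the example scores are invented for illustration.

```python
# The survey's consensus rule: a core is 'graded' when >= 90% of labs agree
# on negative (0, 1+) vs. positive (2+, 3+). Example scores are invented.
from collections import Counter

def is_graded(scores, threshold=0.90):
    calls = ["pos" if s in ("2+", "3+") else "neg" for s in scores]
    majority = Counter(calls).most_common(1)[0][1]
    return majority / len(calls) >= threshold

lab_scores = ["3+"] * 95 + ["1+"] * 5   # 95% of labs call the core positive
print(is_graded(lab_scores))            # True -> this core would be graded
```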

  12. Emergency Plan for the Locks and Dams at St. Anthony Falls Minneapolis, Minnesota

    DTIC Science & Technology

    1987-03-01

    Identification Subplan; APPENDIX B: Emergency Operations and Repair Subplan; APPENDIX C: Emergency Notification Subplan; APPENDIX D: Inundation Map Package (Inundation Maps and Hydraulic Data). Emergencies which were selected for planning include: a. Structural Damage; b. Sabotage; c. Extreme Storm; d. Excess Seepage; e. Failure Due to Scouring. A brief dis

  13. 34 CFR Appendix B to Subpart B of... - Appendix I, Standards for Audit of Governmental Organizations, Programs, Activities, and...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Organizations, Programs, Activities, and Functions (GAO) B Appendix B to Subpart B of Part 668 Education... Programs Pt. 668, Subpt. B, App. B Appendix B to Subpart B of Part 668—Appendix I, Standards for Audit of... required for the practice of public accountancy by the regulatory authorities of the States.” 1 1 Letter (B...

  14. 34 CFR Appendix B to Subpart B of... - Appendix I, Standards for Audit of Governmental Organizations, Programs, Activities, and...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Organizations, Programs, Activities, and Functions (GAO) B Appendix B to Subpart B of Part 668 Education... Programs Pt. 668, Subpt. B, App. B Appendix B to Subpart B of Part 668—Appendix I, Standards for Audit of... required for the practice of public accountancy by the regulatory authorities of the States.” 1 1 Letter (B...

  15. 34 CFR Appendix B to Subpart B of... - Appendix I, Standards for Audit of Governmental Organizations, Programs, Activities, and...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Organizations, Programs, Activities, and Functions (GAO) B Appendix B to Subpart B of Part 668 Education... Programs Pt. 668, Subpt. B, App. B Appendix B to Subpart B of Part 668—Appendix I, Standards for Audit of... required for the practice of public accountancy by the regulatory authorities of the States.” 1 1 Letter (B...

  16. Shielding Analysis of a Small Compact Space Nuclear Reactor

    DTIC Science & Technology

    1987-08-01

    RESPONSE); =4, MAXWELLIAN FISSION SPECTRUM (INTEGRAL RESPONSE); =5, LOS ALAMOS FISSION SPECTRUM, 1982 (INTEGRAL RESPONSE); =6, VITAMIN C NEUTRON SPECTRUM... Appendices: Appendix A: Calculations of Effective Radii; Appendix B: Atom Density Calculations for FEMP1D and FEMP2D; Appendix C: FEMP1D and FEMP2D Data; Appendix D: Energy Group Definition; Appendix E: Transport Equation, Legendre Polynomial

  17. 5 CFR Appendix C to Part 2634 - Privacy Act and Paperwork Reduction Act Notices for Appendixes A and B

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Notices for Appendixes A and B C Appendix C to Part 2634 Administrative Personnel OFFICE OF GOVERNMENT... DIVESTITURE Pt. 2634, App. C Appendix C to Part 2634—Privacy Act and Paperwork Reduction Act Notices for... (the “Ethics Act”) (5 U.S.C. App.) and subpart D of 5 CFR part 2634 of the regulations of the Office of...

  18. 5 CFR Appendix C to Part 2634 - Privacy Act and Paperwork Reduction Act Notices for Appendixes A and B

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Notices for Appendixes A and B C Appendix C to Part 2634 Administrative Personnel OFFICE OF GOVERNMENT... DIVESTITURE Pt. 2634, App. C Appendix C to Part 2634—Privacy Act and Paperwork Reduction Act Notices for... (the “Ethics Act”) (5 U.S.C. App.) and subpart D of 5 CFR part 2634 of the regulations of the Office of...

  19. 18 CFR Appendix B to Subpart H of... - Appendix B to Subpart H of Part 35

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Appendix B to Subpart H of Part 35 B Appendix B to Subpart H of Part 35 Conservation of Power and Water Resources FEDERAL... Rates Pt. 35, Subpt. H, App. B Appendix B to Subpart H of Part 35 This is an example of the required...

  20. TomoPhantom, a software package to generate 2D-4D analytical phantoms for CT image reconstruction algorithm benchmarks

    NASA Astrophysics Data System (ADS)

    Kazantsev, Daniil; Pickalov, Valery; Nagella, Srikanth; Pasca, Edoardo; Withers, Philip J.

    2018-01-01

    In the field of computerized tomographic imaging, many novel reconstruction techniques are routinely tested using simplistic numerical phantoms, e.g. the well-known Shepp-Logan phantom. These phantoms cannot sufficiently cover the broad spectrum of applications in CT imaging where, for instance, smooth or piecewise-smooth 3D objects are common. TomoPhantom provides quick access to an external library of modular analytical 2D/3D phantoms with temporal extensions. In TomoPhantom, quite complex phantoms can be built using additive combinations of geometrical objects, such as Gaussians, parabolas, cones, ellipses, rectangles, and volumetric extensions of them. Newly designed phantoms are better suited for benchmarking and testing of different image processing techniques. Specifically, tomographic reconstruction algorithms which employ 2D and 3D scanning geometries can be rigorously analyzed using the software. TomoPhantom also provides a capability of obtaining analytical tomographic projections, which further extends the applicability of the software towards more realistic testing, free from the "inverse crime". All core modules of the package are written in the C-OpenMP language and wrappers for Python and MATLAB are provided to enable easy access. Due to the C-based multi-threaded implementation, volumetric phantoms of high spatial resolution can be obtained with computational efficiency.
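
    The additive construction is easy to picture: a phantom is just a sum of analytical primitives evaluated on a grid. The NumPy sketch below mimics that idea generically; it is not TomoPhantom's actual API or model library.

```python
# Additive analytical phantom in the spirit of TomoPhantom (concept sketch
# only; TomoPhantom's real API and model library differ).
import numpy as np

N = 256
y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]   # unit-square pixel grid

phantom = np.zeros((N, N))
# Gaussian blob
phantom += 1.00 * np.exp(-((x - 0.2)**2 + (y + 0.1)**2) / (2 * 0.05**2))
# Uniform ellipse
phantom += 0.50 * ((((x + 0.3) / 0.4)**2 + (y / 0.25)**2) <= 1.0)
# Axis-aligned rectangle
phantom += 0.25 * ((np.abs(x) < 0.1) & (np.abs(y - 0.5) < 0.2))
```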

  1. OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE - A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon Tibbitts; Arnis Judzis

    2001-04-01

    This document details the progress to date on the OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE -- A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING contract for the quarter starting January 2001 through March 2001. Accomplishments to date include the following: (1) On January 9th of 2001, details of the Mud Hammer Drilling Performance Testing Project were presented at a ''kick-off'' meeting held in Morgantown. (2) A preliminary test program was formulated and prepared for presentation at a meeting of the advisory board in Houston on the 8th of February. (3) The meeting was held with the advisory board reviewing the test program in detail. (4) Consensus was achieved and the approved test program was initiated after thorough discussion. (5) This new program outlined the details of the drilling tests as well as scheduling the test program for the weeks of 14th and 21st of May 2001. (6) All the tasks were initiated for a completion to coincide with the test schedule. (7) By the end of March the hardware had been designed and the majority was either being fabricated or completed. (8) The rock was received and cored into cylinders.

  2. Scalable Metropolis Monte Carlo for simulation of hard shapes

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Eric Irrgang, M.; Glotzer, Sharon C.

    2016-07-01

    We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.
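
    The acceptance rule HPMC parallelizes is the classic hard-particle one: a trial move is accepted exactly when it creates no overlap. A serial toy version for disks in a periodic box is sketched below; all parameters are arbitrary and none of this reflects HPMC's internals.

```python
# Serial toy hard-disk Metropolis MC: accept a trial move iff no overlap.
# HPMC applies the same rule in parallel via checkerboard decomposition.
import numpy as np

rng = np.random.default_rng(0)
L, sigma = 20.0, 1.0                       # box edge, disk diameter
pos = np.mgrid[0:10, 0:10].reshape(2, -1).T * 2.0 + 1.0  # overlap-free lattice
n = len(pos)

def overlaps(i, trial):
    d = pos - trial
    d -= L * np.round(d / L)               # minimum-image convention
    r2 = np.einsum('ij,ij->i', d, d)
    r2[i] = np.inf                         # ignore self-distance
    return np.any(r2 < sigma**2)

for sweep in range(100):
    for i in rng.permutation(n):
        trial = (pos[i] + rng.uniform(-0.1, 0.1, 2)) % L
        if not overlaps(i, trial):         # hard shapes: reject on any overlap
            pos[i] = trial
```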

  3. Congenital absence of the vermiform appendix.

    PubMed

    Sarkar, Aniruddha

    2012-09-01

    Agenesis of the vermiform appendix is very rare. The incidence is estimated to be one in 100,000 laparotomies for suspected appendicitis. During a routine dissection of the abdomen in a 60-year-old donated male cadaver, the vermiform appendix was found to be absent. The ileocaecal junction and retrocaecal area were thoroughly searched, but the vermiform appendix was not found, nor was any structure resembling a tubercle. This is likely the first reported case of agenesis of the vermiform appendix in India. This suggests the possibility that the human vermiform appendix may ultimately become rudimentary or absent in the course of evolution.

  4. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts a foundation and infrastructure have been built which allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  5. An Experimental Study of Characteristic Combustion-Driven Flow for CFD Validation

    NASA Technical Reports Server (NTRS)

    Santoro, Robert J.

    1997-01-01

    A series of uni-element rocket injector studies were completed to provide benchmark quality data needed to validate computational fluid dynamic models. A shear coaxial injector geometry was selected as the primary injector for study using gaseous hydrogen/oxygen and gaseous hydrogen/liquid oxygen propellants. Emphasis was placed on the use of nonintrusive diagnostic techniques to characterize the flowfields inside an optically-accessible rocket chamber. Measurements of the velocity and species fields were obtained using laser velocimetry and Raman spectroscopy, respectively. Qualitative flame shape information was also obtained using laser-induced fluorescence excited from OH radicals and laser light scattering studies of aluminum oxide particle seeded combusting flows. The gaseous hydrogen/liquid oxygen propellant studies for the shear coaxial injector focused on breakup mechanisms associated with the liquid oxygen jet under subcritical pressure conditions. Laser sheet illumination techniques were used to visualize the core region of the jet and a Phase Doppler Particle Analyzer was utilized for drop velocity, size and size distribution characterization. The results of these studies indicated that the shear coaxial geometry configuration was a relatively poor injector in terms of mixing. The oxygen core was observed to extend well downstream of the injector and a significant fraction of the mixing occurred in the near nozzle region where measurements were not possible to obtain. Detailed velocity and species measurements were obtained to allow CFD model validation and this set of benchmark data represents the most comprehensive data set available to date. As an extension of the investigation, a series of gas/gas injector studies were conducted in support of the X-33 Reusable Launch Vehicle program. A Gas/Gas Injector Technology team was formed consisting of the Marshall Space Flight Center, the NASA Lewis Research Center, Rocketdyne and Penn State. Injector geometries studied under this task included shear and swirl coaxial configurations as well as an impinging jet injector.

  6. An Experimental Study of Characteristic Combustion-Driven Flow for CFD Validation

    NASA Technical Reports Server (NTRS)

    Santoro, Robert J.

    1997-01-01

    A series of uni-element rocket injector studies were completed to provide benchmark quality data needed to validate computational fluid dynamic models. A shear coaxial injector geometry was selected as the primary injector for study using gaseous hydrogen/oxygen and gaseous hydrogen/liquid oxygen propellants. Emphasis was placed on the use of non-intrusive diagnostic techniques to characterize the flowfields inside an optically-accessible rocket chamber. Measurements of the velocity and species fields were obtained using laser velocimetry and Raman spectroscopy, respectively. Qualitative flame shape information was also obtained using laser-induced fluorescence excited from OH radicals and laser light scattering studies of aluminum oxide particle seeded combusting flows. The gaseous hydrogen/liquid oxygen propellant studies for the shear coaxial injector focused on breakup mechanisms associated with the liquid oxygen jet under sub-critical pressure conditions. Laser sheet illumination techniques were used to visualize the core region of the jet and a Phase Doppler Particle Analyzer was utilized for drop velocity, size and size distribution characterization. The results of these studies indicated that the shear coaxial geometry configuration was a relatively poor injector in terms of mixing. The oxygen core was observed to extend well downstream of the injector and a significant fraction of the mixing occurred in the near nozzle region where measurements were not possible to obtain. Detailed velocity and species measurements were obtained to allow CFD model validation and this set of benchmark data represents the most comprehensive data set available to date. As an extension of the investigation, a series of gas/gas injector studies were conducted in support of the X-33 Reusable Launch Vehicle program. A Gas/Gas Injector Technology team was formed consisting of the Marshall Space Flight Center, the NASA Lewis Research Center, Rocketdyne and Penn State. Injector geometries studied under this task included shear and swirl coaxial configurations as well as an impinging jet injector.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This large document provides a catalog of the location of large numbers of reports pertaining to the charge of the Presidential Advisory Committee on Human Radiation Research and is arranged as a series of appendices. Titles of the appendices are Appendix A- Records at the Washington National Records Center Reviewed in Whole or Part by DoD Personnel or Advisory Committee Staff; Appendix B- Brief Descriptions of Records Accessions in the Advisory Committee on Human Radiation Experiments (ACHRE) Research Document Collection; Appendix C- Bibliography of Secondary Sources Used by ACHRE; Appendix D- Brief Descriptions of Human Radiation Experiments Identified by ACHRE, and Indexes; Appendix E- Documents Cited in the ACHRE Final Report and other Separately Described Materials from the ACHRE Document Collection; Appendix F- Schedule of Advisory Committee Meetings and Meeting Documentation; and Appendix G- Technology Note.

  8. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1996 revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Tsao, C.L.

    1996-06-01

    This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. This report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. Also included are updates of benchmark values where appropriate and new benchmark values; secondary sources are replaced by primary sources, and more complete documentation of the sources and derivation of all values is presented.

  9. Concurrent Performance of Gunner’s and Robotic Operator’s Tasks in a Simulated Mounted Combat System Environment

    DTIC Science & Technology

    2006-06-01

    Appendix A: Demographic Questionnaire; Appendix B: Attentional Control Survey; Appendix C: NASA-TLX Questionnaire; Appendix D: Simulator... the National Aeronautics and Space Administration task load index (NASA-TLX) questionnaire (appendix C) (Hart & Staveland, 1988). The NASA-TLX is a... There were 2-minute breaks between experimental sessions. Participants assessed their workload using the NASA-TLX after they completed each

  10. Benchmarking in emergency health systems.

    PubMed

    Kennedy, Marcus P; Allen, Jacqueline; Allen, Greg

    2002-12-01

    This paper discusses the role of benchmarking as a component of quality management. It describes the historical background of benchmarking, its competitive origin and the requirement in today's health environment for a more collaborative approach. The classical 'functional and generic' types of benchmarking are discussed with a suggestion to adopt a different terminology that describes the purpose and practicalities of benchmarking. Benchmarking is not without risks. The consequence of inappropriate focus and the need for a balanced overview of process is explored. The competition that is intrinsic to benchmarking is questioned and the negative impact it may have on improvement strategies in poorly performing organizations is recognized. The difficulty in achieving cross-organizational validity in benchmarking is emphasized, as is the need to scrutinize benchmarking measures. The cost effectiveness of benchmarking projects is questioned and the concept of 'best value, best practice' in an environment of fixed resources is examined.

  11. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  12. 49 CFR Appendix E to Part 229 - Performance Criteria for Locomotive Crashworthiness

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Crashworthiness E Appendix E to Part 229 Transportation Other Regulations Relating to Transportation (Continued..., App. E Appendix E to Part 229—Performance Criteria for Locomotive Crashworthiness This appendix provides performance criteria for the crashworthiness evaluation of alternative locomotive designs, and...

  13. 49 CFR Appendix E to Part 229 - Performance Criteria for Locomotive Crashworthiness

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Crashworthiness E Appendix E to Part 229 Transportation Other Regulations Relating to Transportation (Continued..., App. E Appendix E to Part 229—Performance Criteria for Locomotive Crashworthiness This appendix provides performance criteria for the crashworthiness evaluation of alternative locomotive designs, and...

  14. 49 CFR Appendix E to Part 229 - Performance Criteria for Locomotive Crashworthiness

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Crashworthiness E Appendix E to Part 229 Transportation Other Regulations Relating to Transportation (Continued..., App. E Appendix E to Part 229—Performance Criteria for Locomotive Crashworthiness This appendix provides performance criteria for the crashworthiness evaluation of alternative locomotive designs, and...

  15. 49 CFR Appendix E to Part 229 - Performance Criteria for Locomotive Crashworthiness

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Crashworthiness E Appendix E to Part 229 Transportation Other Regulations Relating to Transportation (Continued..., App. E Appendix E to Part 229—Performance Criteria for Locomotive Crashworthiness This appendix provides performance criteria for the crashworthiness evaluation of alternative locomotive designs, and...

  16. Functional Integration

    NASA Astrophysics Data System (ADS)

    Cartier, Pierre; DeWitt-Morette, Cecile

    2006-11-01

    Acknowledgements; List symbols, conventions, and formulary; Part I. The Physical and Mathematical Environment: 1. The physical and mathematical environment; Part II. Quantum Mechanics: 2. First lesson: gaussian integrals; 3. Selected examples; 4. Semiclassical expansion: WKB; 5. Semiclassical expansion: beyond WKB; 6. Quantum dynamics: path integrals and operator formalism; Part III. Methods from Differential Geometry: 7. Symmetries; 8. Homotopy; 9. Grassmann analysis: basics; 10. Grassmann analysis: applications; 11. Volume elements, divergences, gradients; Part IV. Non-Gaussian Applications: 12. Poisson processes in physics; 13. A mathematical theory of Poisson processes; 14. First exit time: energy problems; Part V. Problems in Quantum Field Theory: 15. Renormalization 1: an introduction; 16. Renormalization 2: scaling; 17. Renormalization 3: combinatorics; 18. Volume elements in quantum field theory Bryce DeWitt; Part VI. Projects: 19. Projects; Appendix A. Forward and backward integrals: spaces of pointed paths; Appendix B. Product integrals; Appendix C. A compendium of gaussian integrals; Appendix D. Wick calculus Alexander Wurm; Appendix E. The Jacobi operator; Appendix F. Change of variables of integration; Appendix G. Analytic properties of covariances; Appendix H. Feynman's checkerboard; Bibliography; Index.

  17. Functional Integration

    NASA Astrophysics Data System (ADS)

    Cartier, Pierre; DeWitt-Morette, Cecile

    2010-06-01

    Acknowledgements; List symbols, conventions, and formulary; Part I. The Physical and Mathematical Environment: 1. The physical and mathematical environment; Part II. Quantum Mechanics: 2. First lesson: gaussian integrals; 3. Selected examples; 4. Semiclassical expansion: WKB; 5. Semiclassical expansion: beyond WKB; 6. Quantum dynamics: path integrals and operator formalism; Part III. Methods from Differential Geometry: 7. Symmetries; 8. Homotopy; 9. Grassmann analysis: basics; 10. Grassmann analysis: applications; 11. Volume elements, divergences, gradients; Part IV. Non-Gaussian Applications: 12. Poisson processes in physics; 13. A mathematical theory of Poisson processes; 14. First exit time: energy problems; Part V. Problems in Quantum Field Theory: 15. Renormalization 1: an introduction; 16. Renormalization 2: scaling; 17. Renormalization 3: combinatorics; 18. Volume elements in quantum field theory Bryce DeWitt; Part VI. Projects: 19. Projects; Appendix A. Forward and backward integrals: spaces of pointed paths; Appendix B. Product integrals; Appendix C. A compendium of gaussian integrals; Appendix D. Wick calculus Alexander Wurm; Appendix E. The Jacobi operator; Appendix F. Change of variables of integration; Appendix G. Analytic properties of covariances; Appendix H. Feynman's checkerboard; Bibliography; Index.

  18. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  19. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We also describe benchmark data sets for evaluating uncertainty quantification and an approach for using our benchmark generator to produce such data sets.

  20. Installation restoration program. Site investigation report. Revision 4. Volume 2: Appendix B through Appendix E. 155th Air Refueling Group, Nebraska Air National Guard, Lincoln Municipal Airport, Lincoln, Nebraska. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-04-01

    This is the Site Investigation Report, Volume 2, Appendices B through E. A Site Investigation was performed at the 155th Air Refueling Group at Lincoln, NE, to evaluate six areas of suspected contamination identified during a Preliminary Assessment. The sites at which this investigation was conducted are: Site 1 - Fuel Farm, POL Storage Area; Site 2 - West End of Old Oak Creek; Site 3 - Former Tank Cleaning/Hazardous Waste Storage Area; Site 4 - Access Road, Dust Control Area; Site 5 - Army National Guard Oil Storage Area; and Site 6 - Hydraulic Pressure Check Unit Storage Area. The report recommended no further action for Sites 3 through 6 due to low levels of or no contamination being found. The report recommended that the portion of Site 2 located downstream of Site 1 be included in Site 1. Volume 2 consists of the following appendices: Well Data and Geologic Boring Logs (Appendix B), Survey Data (Appendix C), Quality Control (Appendix D), and Analytical Results (Appendix E).

  1. 14 CFR Appendix L to Part 121 - Type Certification Regulations Made Previously Effective

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Previously Effective L Appendix L to Part 121 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION... AND OPERATIONS OPERATING REQUIREMENTS: DOMESTIC, FLAG, AND SUPPLEMENTAL OPERATIONS Pt. 121, App. L Appendix L to Part 121—Type Certification Regulations Made Previously Effective Appendix L lists...

  2. 10 CFR 140.109 - Appendix I.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Appendix I. 140.109 Section 140.109 Energy NUCLEAR... Appendixes to Part 140 § 140.109 Appendix I. Nuclear Energy Liability Insurance Association master policy no. __ Nuclear Energy Liability Insurance (Secondary Financial Protection) Named Insured: Each person or...

  3. 10 CFR 140.109 - Appendix I.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 2 2013-01-01 2013-01-01 false Appendix I. 140.109 Section 140.109 Energy NUCLEAR... Appendixes to Part 140 § 140.109 Appendix I. Nuclear Energy Liability Insurance Association master policy no. __ Nuclear Energy Liability Insurance (Secondary Financial Protection) Named Insured: Each person or...

  4. 10 CFR 140.109 - Appendix I.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Appendix I. 140.109 Section 140.109 Energy NUCLEAR... Appendixes to Part 140 § 140.109 Appendix I. Nuclear Energy Liability Insurance Association master policy no. __ Nuclear Energy Liability Insurance (Secondary Financial Protection) Named Insured: Each person or...

  5. 49 CFR Appendix E to Part 229 - Performance Criteria for Locomotive Crashworthiness

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Performance Criteria for Locomotive Crashworthiness E Appendix E to Part 229 Transportation Other Regulations Relating to Transportation (Continued..., App. E Appendix E to Part 229—Performance Criteria for Locomotive Crashworthiness This appendix...

  6. 10 CFR 140.109 - Appendix I.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Appendix I. 140.109 Section 140.109 Energy NUCLEAR... Appendixes to Part 140 § 140.109 Appendix I. Nuclear Energy Liability Insurance Association master policy no. __ Nuclear Energy Liability Insurance (Secondary Financial Protection) Named Insured: Each person or...

  7. 10 CFR 140.109 - Appendix I.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Appendix I. 140.109 Section 140.109 Energy NUCLEAR... Appendixes to Part 140 § 140.109 Appendix I. Nuclear Energy Liability Insurance Association master policy no. __ Nuclear Energy Liability Insurance (Secondary Financial Protection) Named Insured: Each person or...

  8. Investigation on the Core Bypass Flow in a Very High Temperature Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hassan, Yassin

    2013-10-22

    Uncertainties associated with the core bypass flow are some of the key issues that directly influence the coolant mass flow distribution and magnitude, and thus the operational core temperature profiles, in the very high-temperature reactor (VHTR). Designers will attempt to configure the core geometry so the core cooling flow rate magnitude and distribution conform to the design values. The objective of this project is to study the bypass flow both experimentally and computationally. Researchers will develop experimental data using state-of-the-art particle image velocimetry in a small test facility. The team will attempt to obtain full field temperature distribution using racks of thermocouples. The experimental data are intended to benchmark computational fluid dynamics (CFD) codes by providing detailed information. These experimental data are urgently needed for validation of the CFD codes. The following are the project tasks: • Construct a small-scale bench-top experiment to resemble the bypass flow between the graphite blocks, varying parameters to address their impact on bypass flow. Wall roughness of the graphite block walls, spacing between the blocks, and temperature of the blocks are some of the parameters to be tested. • Perform CFD to evaluate pre- and post-test calculations and turbulence models, including sensitivity studies to achieve high accuracy. • Develop the state-of-the-art large eddy simulation (LES) using appropriate subgrid modeling. • Develop models to be used in systems thermal hydraulics codes to account for and estimate the bypass flows. These computer programs include, among others, RELAP3D, MELCOR, GAMMA, and GAS-NET. Actual core bypass flow rate may vary considerably from the design value. Although the uncertainty of the bypass flow rate is not known, some sources have stated that the bypass flow rates in the Fort St. Vrain reactor were between 8 and 25 percent of the total reactor mass flow rate. If bypass flow rates are on the high side, the quantity of cooling flow through the core may be considerably less than the nominal design value, causing some regions of the core to operate at temperatures in excess of the design values. These effects are postulated to lead to localized hot regions in the core that must be considered when evaluating the VHTR operational and accident scenarios.

  9. Evaluation of a Hospital-Based Pneumonia Nurse Navigator Program.

    PubMed

    Seldon, Lisa E; McDonough, Kelly; Turner, Barbara; Simmons, Leigh Ann

    2016-12-01

    The aim of this study is to evaluate the effectiveness of a hospital-based pneumonia nurse navigator program. This study used a retrospective, formative evaluation. Data of patients admitted from January 2012 through December 2014 to a large community hospital with a primary or secondary diagnosis of pneumonia, excluding aspiration pneumonia, were used. Data collected included patient demographics, diagnoses, insurance coverage, core measures, average length of stay (ALOS), disposition, readmission rate, financial outcomes, and patient barriers to care. Descriptive statistics and parametric testing were used to analyze the data. Core measure performance was sustained at the 90th percentile 2 years after implementation of the navigator program. The ALOS did not decrease to established benchmarks; however, the SD for ALOS decreased by nearly half after implementation of the navigator program, suggesting the program decreased the number and length of extended stays. Charges per case decreased by 21% from 2012 to 2014. Variable costs decreased by 4% over the 2-year period, which increased net profit per case by 5%. Average readmission payments increased by 8% from 2012 to 2014, and net revenue per case increased by 8.3%. The pneumonia nurse navigator program may improve core measures, reduce ALOS, and increase net revenue. Future evaluations are necessary to substantiate these findings and optimize the cost and quality performance of navigator programs.

  10. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    PubMed

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

    One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), Athlon MP, and Athlon XP (with the "Palomino" core) systems as well as Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are each employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked into the binary executables to improve performance. Various Hartree-Fock, density functional theory, and MP2 calculations are run for benchmarking purposes. The combination of ifc with the ATLAS library gives the best GAUSSIAN 98 performance on all of these PC-Linux computers, for both AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance of a single CPU is potentially as good as that of an Alpha 21264A workstation or an SGI supercomputer. The SPECfp2000 floating-point scores show trends similar to the GAUSSIAN 98 results.
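
    The entry's central finding is that a tuned BLAS dominates the cost of post-Hartree-Fock work. A quick way to see the effect of whichever BLAS a local installation links (ATLAS, OpenBLAS, MKL, ...) is a DGEMM throughput probe; the sketch below uses NumPy purely as an illustration and is not part of the GAUSSIAN 98 build procedure.

```python
# DGEMM throughput probe for whichever BLAS the local NumPy links against.
# Illustrative only; mirrors the entry's point that tuned BLAS dominates
# post-Hartree-Fock workloads.

import time
import numpy as np

def dgemm_gflops(n=2000, repeats=3):
    """Best-of-N throughput of an n x n double-precision matrix product."""
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal((n, n))
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a @ b                       # dispatched to the linked BLAS dgemm
        best = min(best, time.perf_counter() - t0)
    return 2.0 * n**3 / best / 1e9  # a matmul costs ~2*n^3 flops

print(f"DGEMM throughput: {dgemm_gflops():.1f} GFLOP/s")
# np.show_config() reports which BLAS implementation is linked.
```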

  11. Tiled architecture of a CNN-mostly IP system

    NASA Astrophysics Data System (ADS)

    Spaanenburg, Lambert; Malki, Suleyman

    2009-05-01

    Multi-core architectures have been popularized with the advent of the IBM CELL. On a finer grain, the scheduling problems of multi-cores have already appeared in tiled architectures such as the EPIC and Da Vinci. It is not easy to evaluate the performance of a schedule on such an architecture because historical data are not available. One solution is to compile algorithms for which an optimal schedule is known by analysis. A typical example is an algorithm that is already defined in terms of many collaborating simple nodes, such as a Cellular Neural Network (CNN). A simple node with a local register stack, together with a 'rotating wheel' internal communication mechanism, has been proposed. Though the basic CNN allows a tiled implementation of a tiled algorithm on a tiled structure, a practical CNN system has to disturb this regularity because it additionally needs arithmetic and logical operations. Arithmetic operations are needed, for instance, to accommodate low-level image processing, while logical operations are needed to fork and merge different data streams without use of the external memory. It is found that the 'rotating wheel' internal communication mechanism still handles such operations without the need for global control. Overall, the CNN system provides a practical network size as implemented on an FPGA, can easily be used as embedded IP, and provides a clear benchmark for a multi-core compiler.
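
    For readers unfamiliar with why a CNN maps so naturally onto tiles: each cell updates from a small neighborhood via fixed templates, so every node runs the same local computation. The sketch below shows one forward-Euler step of the standard Chua-Yang CNN state equation on a 2-D grid; the edge-detection-style template values are generic textbook choices, not the paper's hardware parameters.

```python
# Minimal discrete-time Cellular Neural Network (CNN) step on a 2-D grid.
# Standard state equation: x' = -x + A*y + B*u + z,
# with output y = 0.5*(|x+1| - |x-1|). Templates are illustrative.

import numpy as np

def conv3x3(field, kernel):
    """Same-size 3x3 correlation with zero padding (templates here are
    symmetric, so correlation equals convolution)."""
    p = np.pad(field, 1)
    h, w = field.shape
    out = np.zeros_like(field)
    for di in range(3):
        for dj in range(3):
            out += kernel[di, dj] * p[di:di + h, dj:dj + w]
    return out

def cnn_step(x, u, A, B, z, dt=0.1):
    """One forward-Euler step of the CNN state equation."""
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))  # piecewise-linear output
    return x + dt * (-x + conv3x3(y, A) + conv3x3(u, B) + z)

# Edge-detection-style templates (textbook example values).
A = np.array([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
B = np.array([[-1., -1., -1.], [-1., 8., -1.], [-1., -1., -1.]])
u = (np.random.default_rng(1).random((16, 16)) > 0.5).astype(float) * 2 - 1
x = u.copy()
for _ in range(50):
    x = cnn_step(x, u, A, B, z=-1.0)
```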

  12. Gravitational wave sources from Pop III stars are preferentially located within the cores of their host Galaxies

    NASA Astrophysics Data System (ADS)

    Pacucci, Fabio; Loeb, Abraham; Salvadori, Stefania

    2017-10-01

    The detection of gravitational waves (GWs) generated by merging black holes has recently opened up a new observational window into the Universe. The masses of the black holes in the first and third Laser Interferometer Gravitational Wave Observatory (LIGO) detections (36-29 M⊙ and 32-19 M⊙) suggest low-metallicity stars as their most likely progenitors. Based on high-resolution N-body simulations, coupled with state-of-the-art metal enrichment models, we find that the remnants of Pop III stars are preferentially located within the cores of galaxies. The probability of a GW signal being generated by Pop III stars reaches ∼90 per cent at ∼0.5 kpc from the galaxy centre, compared to a benchmark value of ∼5 per cent outside the core. The predicted merger rate inside bulges is ∼60 × βIII Gpc⁻³ yr⁻¹ (βIII is the Pop III binarity fraction). To match the 90 per cent credible range of LIGO merger rates, we obtain 0.03 < βIII < 0.88. Future advances in GW observatories and the discovery of possible electromagnetic counterparts could allow the localization of such sources within their host galaxies. The preferential concentration of GW events within the bulges of galaxies would then provide indirect proof for the existence of Pop III stars.
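
    The quoted binarity constraint follows from inverting the linear rate relation R = 60 × βIII Gpc⁻³ yr⁻¹. The sketch below does exactly that; the LIGO rate interval used is back-computed from the paper's quoted βIII range (0.03 × 60 ≈ 2, 0.88 × 60 ≈ 53), purely for illustration.

```python
# Back-of-envelope inversion of the quoted bulge merger-rate relation
# R = 60 * beta_III [Gpc^-3 yr^-1] to bound the Pop III binarity fraction.

RATE_COEFF = 60.0  # Gpc^-3 yr^-1 per unit binarity fraction (from abstract)

def binarity_fraction(merger_rate):
    return merger_rate / RATE_COEFF

# Assumed LIGO 90% credible rate interval, inferred from the quoted bounds.
ligo_rate_lo, ligo_rate_hi = 2.0, 53.0  # Gpc^-3 yr^-1

print(f"{binarity_fraction(ligo_rate_lo):.2f} < beta_III < "
      f"{binarity_fraction(ligo_rate_hi):.2f}")   # ~0.03 < beta_III < 0.88
```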

  13. Static and Dynamic Frequency Scaling on Multicore CPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Wenlei; Hong, Changwan; Chunduri, Sudheer

    2016-12-28

    Dynamic voltage and frequency scaling (DVFS) adapts CPU power consumption by modifying a processor's operating frequency (and the associated voltage). Typical approaches employing DVFS involve default strategies such as running at the lowest or the highest frequency, or observing the CPU's runtime behavior and dynamically adapting the voltage/frequency configuration based on CPU usage. In this paper, we argue that many previous approaches suffer from inherent limitations, such as not accounting for the processor-specific impact of frequency changes on energy for different workload types. We first propose a lightweight runtime-based approach that automatically adapts the frequency based on the CPU workload and is agnostic of the processor characteristics. We then show that further improvements can be achieved for affine kernels in the application, using a compile-time characterization instead of run-time monitoring to select the frequency and the number of CPU cores to use. Our framework relies on a one-time energy characterization of CPU-specific DVFS profiles, followed by a compile-time categorization of loop-based code segments in the application. These are combined to determine, a priori, the frequency and the number of cores to use to execute the application so as to optimize energy or energy-delay product, outperforming the runtime approach. Extensive evaluation on 60 benchmarks and five multi-core CPUs shows that our approach systematically outperforms the powersave Linux governor while improving overall performance.
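
    The a-priori selection step reduces to a table lookup: given a one-time characterization mapping (frequency, cores) to measured (energy, runtime) for a kernel class, pick the configuration minimizing energy or energy-delay product (EDP). A minimal sketch follows; the table values are invented placeholders, not the paper's measurements.

```python
# Sketch of a-priori DVFS configuration selection from a one-time
# characterization table. Entries map (frequency Hz, cores) -> (energy J,
# runtime s) for a kernel class; values below are invented placeholders.

CHARACTERIZATION = {
    (1.2e9, 2): (42.0, 9.1),
    (1.2e9, 4): (45.5, 5.0),
    (2.4e9, 2): (55.0, 4.9),
    (2.4e9, 4): (61.0, 2.8),
}

def pick_config(table, objective="edp"):
    """Choose the (frequency, cores) pair minimizing energy or EDP."""
    def cost(item):
        energy, runtime = item[1]
        return energy * runtime if objective == "edp" else energy
    (freq, cores), _ = min(table.items(), key=cost)
    return freq, cores

freq, cores = pick_config(CHARACTERIZATION, objective="edp")
print(f"run at {freq / 1e9:.1f} GHz on {cores} cores")  # picks 2.4 GHz, 4
```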

  14. Comparative analysis of thorium and uranium fuel for transuranic recycle in a sodium cooled Fast Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiorina, C.; Stauff, N. E.; Franceschini, F.

    2013-12-01

    The present paper compares the reactor physics and transmutation performance of sodium-cooled Fast Reactors (FRs) for TRansUranic (TRU) burning with thorium (Th) or uranium (U) as the fertile material. The 1000 MWt Toshiba-Westinghouse Advanced Recycling Reactor (ARR) conceptual core has been used as the benchmark for the comparison. Both burner and breakeven configurations, sustained or started with a TRU supply and assuming a full-actinide homogeneous recycle strategy, have been developed. State-of-the-art core physics tools have been employed to establish fuel inventory and reactor physics performance for equilibrium and transition cycles. Results show that Th fosters large improvements in the reactivity coefficients associated with coolant expansion and voiding, which enhances safety margins and, for a burner design, can be traded for maximizing the TRU burning rate. A trade-off of Th compared to U is the significantly larger fuel inventory required to achieve a breakeven design, which entails additional blankets to the detriment of core compactness as well as fuel manufacturing and separation requirements. The gamma field generated by the progeny of U-232 in the U bred from Th challenges fuel handling and manufacturing, but in the case of full recycle, the high contents of Am and Cm in the transmutation fuel impose remote fuel operations regardless of the presence of U-232.

  15. Zn Coordination Chemistry:  Development of Benchmark Suites for Geometries, Dipole Moments, and Bond Dissociation Energies and Their Use To Test and Validate Density Functionals and Molecular Orbital Theory.

    PubMed

    Amin, Elizabeth A; Truhlar, Donald G

    2008-01-01

    We present nonrelativistic and relativistic benchmark databases (obtained by coupled cluster calculations) of 10 Zn-ligand bond distances, 8 dipole moments, and 12 bond dissociation energies in Zn coordination compounds with O, S, NH3, H2O, OH, SCH3, and H ligands. These are used to test the predictions of 39 density functionals, Hartree-Fock theory, and seven more approximate molecular orbital theories. In the nonrelativistic case, the M05-2X, B97-2, and mPW1PW functionals emerge as the most accurate ones for this test data, with unitless balanced mean unsigned errors (BMUEs) of 0.33, 0.38, and 0.43, respectively. The best local functionals (i.e., functionals with no Hartree-Fock exchange) are M06-L and τ-HCTH with BMUEs of 0.54 and 0.60, respectively. The popular B3LYP functional has a BMUE of 0.51, only slightly better than the value of 0.54 for the best local functional, which is less expensive. Hartree-Fock theory itself has a BMUE of 1.22. The M05-2X functional has a mean unsigned error of 0.008 Å for bond lengths, 0.19 D for dipole moments, and 4.30 kcal/mol for bond energies. The X3LYP functional has a smaller mean unsigned error (0.007 Å) for bond lengths but has mean unsigned errors of 0.43 D for dipole moments and 5.6 kcal/mol for bond energies. The M06-2X functional has a smaller mean unsigned error (3.3 kcal/mol) for bond energies but has mean unsigned errors of 0.017 Å for bond lengths and 0.37 D for dipole moments. The best of the semiempirical molecular orbital theories are PM3 and PM6, with BMUEs of 1.96 and 2.02, respectively. The ten most accurate functionals from the nonrelativistic benchmark analysis are then tested in relativistic calculations against new benchmarks obtained with coupled-cluster calculations and a relativistic effective core potential, resulting in M05-2X (BMUE = 0.895), PW6B95 (BMUE = 0.90), and B97-2 (BMUE = 0.93) as the top three functionals. We find significant relativistic effects (∼0.01 Å in bond lengths, ∼0.2 D in dipole moments, and ∼4 kcal/mol in Zn-ligand bond energies) that cannot be neglected for accurate modeling, but the same density functionals that do well in all-electron nonrelativistic calculations do well with relativistic effective core potentials. Although most tests are carried out with augmented polarized triple-ζ basis sets, we also carried out some tests with an augmented polarized double-ζ basis set, and we found, on average, that with the smaller basis set DFT loses no accuracy for dipole moments and only ∼10% accuracy for bond lengths.
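
    A "balanced" mean unsigned error combines MUEs measured in incommensurate units (Å, D, kcal/mol) into one unitless score. One plausible reading, sketched below, divides each property's MUE by a property-specific scale before averaging. The scale factors here are assumptions for illustration, not the paper's actual normalization, so the output does not reproduce the paper's BMUE of 0.33 for M05-2X.

```python
# Illustrative "balanced MUE": average per-property MUEs after dividing
# each by an assumed property-specific scale so bond lengths (Angstrom),
# dipoles (debye), and bond energies (kcal/mol) contribute comparably.

SCALES = {"bond_length": 0.01, "dipole": 0.5, "bond_energy": 10.0}  # assumed

def balanced_mue(mue_by_property, scales=SCALES):
    return sum(mue / scales[p] for p, mue in mue_by_property.items()) \
        / len(mue_by_property)

# M05-2X per-property MUEs quoted in the abstract.
m05_2x = {"bond_length": 0.008, "dipole": 0.19, "bond_energy": 4.30}
print(f"balanced MUE (illustrative scales): {balanced_mue(m05_2x):.2f}")
```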

  16. A Strategy for DoD Manufacturing Science and Technology R and D in Precision Fabrication

    DTIC Science & Technology

    1994-01-01

    Bibliography; Appendix A, Progress Since the 1991 Plan; Appendix B, Why "Precision"; Appendix C... precision fabrication R&D. Appendix A summarizes progress in precision fabrication R&D since the previous plan was prepared in 1991. Appendix B... a lathe's power consumption may indicate worn bearings. Detecting and acting on this condition can prevent costly spindle damage and associated machine downtime.

  17. Toxicological Benchmarks for Screening of Potential Contaminants of Concern for Effects on Aquatic Biota on the Oak Ridge Reservation, Oak Ridge, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W., II

    1993-01-01

    One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
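
    The screening rule described above is a simple two-threshold decision procedure. The sketch below encodes it directly; the function and argument names are illustrative, not from the report, and real screening would also apply the ARAR rule for NAWQC exceedances as described in the text.

```python
# Sketch of the two-threshold contaminant-screening rule: compare an
# ambient concentration against upper and lower screening benchmarks.
# Names and example thresholds are illustrative, not from the report.

def screen_contaminant(conc, upper_benchmark=None, lower_benchmark=None):
    """Classify one chemical per the upper/lower screening-benchmark rule."""
    if upper_benchmark is not None and conc > upper_benchmark:
        return "of concern: exceeds upper benchmark (e.g., acute NAWQC/ARAR)"
    if lower_benchmark is not None and conc > lower_benchmark:
        return "of concern unless data unreliable or comparison inappropriate"
    return "not of concern (given adequate ambient data)"

print(screen_contaminant(conc=12.0, upper_benchmark=10.0, lower_benchmark=1.0))
print(screen_contaminant(conc=5.0,  upper_benchmark=10.0, lower_benchmark=1.0))
print(screen_contaminant(conc=0.5,  upper_benchmark=10.0, lower_benchmark=1.0))
```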

  18. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  19. Calculus of Elementary Functions, Part IV. Teacher's Commentary. Preliminary Edition.

    ERIC Educational Resources Information Center

    Herriot, Sarah T.; And Others

    This teacher's guide is designed for use with the SMSG textbook "Calculus of Elementary Functions." It contains solutions to exercises found in Chapter 9, Integration Theory and Technique; Chapter 10, Simple Differential Equations; Appendix 5, Area and Integral; Appendix 6; Appendix 7, Continuity Theory; and Appendix 8, More About…

  20. 49 CFR Appendix A to Part 227 - Noise Exposure Computation

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 (2011-10-01). Pt. 227, App. A: Appendix A to Part 227—Noise Exposure Computation (Occupational Noise Exposure; ...Administration, Department of Transportation). This appendix is mandatory. I. Computation of Employee Noise Exposure A...
