Sample records for core benchmark analyses

  1. Three-dimensional pin-to-pin analyses of VVER-440 cores by the MOBY-DICK code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehmann, M.; Mikolas, P.

    1994-12-31

    Nuclear design for the VVER-440 units at the Dukovany (EDU) nuclear power plant is routinely performed with the MOBY-DICK system. Since its implementation on Hewlett-Packard series 700 workstations, the system can routinely perform three-dimensional pin-to-pin core analyses. For code validation, a benchmark prepared from EDU operational data was solved.

  2. TREAT Transient Analysis Benchmarking for the HEU Core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kontogeorgakos, D. C.; Connaway, H. M.; Wright, A. E.

    2014-05-01

    This work was performed to support the feasibility study on the potential conversion of the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory from the use of high enriched uranium (HEU) fuel to the use of low enriched uranium (LEU) fuel. The analyses were performed by the GTRI Reactor Conversion staff at Argonne National Laboratory (ANL). The objective of this study was to benchmark the transient calculations against temperature-limited transients performed in the final operating HEU TREAT core configuration. The MCNP code was used to evaluate steady-state neutronics behavior, and the point kinetics code TREKIN was used to determine core power and energy during transients. The first part of the benchmarking process was to calculate with MCNP all the neutronic parameters required by TREKIN to simulate the transients: the transient rod-bank worth, the prompt neutron generation lifetime, the temperature reactivity feedback as a function of total core energy, and the core-average and peak temperatures as functions of total core energy. The results of these calculations were compared against measurements or against reported values as documented in the available TREAT reports. The heating of the fuel was simulated as an adiabatic process. The reported values were extracted from ANL reports, intra-laboratory memos, and experiment logsheets, and in some cases it was not clear whether the values were based on measurements, on calculations, or a combination of both. Therefore, it was decided to use the term “reported” values when referring to such data. The methods and results from the HEU core transient analyses will be used for the potential LEU core configurations to predict the converted (LEU) core’s performance.

  3. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    DOE PAGES

    Bess, John D.; Montierth, Leland; Köberl, Oliver; ...

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently, only 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ), except for Cores 5 and 9, which are lower than the benchmark eigenvalues but within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
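
    The comparisons quoted above boil down to a bias-versus-uncertainty check: the calculated keff minus the benchmark keff, judged against the combined experimental and statistical uncertainty. The Python sketch below illustrates that check with placeholder numbers; it is not drawn from the HTR-PROTEUS evaluation itself.

```python
# Illustrative check of a calculated k_eff against a benchmark value, in the
# spirit of the comparisons quoted above. All numbers are placeholders.

def keff_bias(k_calc, k_bench, sigma_calc, sigma_bench):
    """Return the bias (k_calc - k_bench) and its combined 1-sigma uncertainty."""
    bias = k_calc - k_bench
    sigma = (sigma_calc**2 + sigma_bench**2) ** 0.5
    return bias, sigma

bias, sigma = keff_bias(k_calc=1.0045, k_bench=1.0000,
                        sigma_calc=0.0001, sigma_bench=0.0038)
print(f"bias = {bias*1e5:.0f} pcm ({bias/sigma:.1f} sigma)")
print("within 1%:", abs(bias) < 0.01, " within 3 sigma:", abs(bias) < 3 * sigma)
```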

  4. Sequoia Messaging Rate Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedley, Andrew

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which the message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
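
    The rank layout described above can be written down directly. The helper below is a hypothetical illustration of the counting rule only (total ranks = num_cores + num_cores * num_nbors); it is not part of the benchmark's source code.

```python
# Sketch of the rank layout described above: core ranks 0..num_cores-1 live on
# the node under test, and each core rank i gets num_nbors neighbor ranks that
# are expected to live on other nodes.

def rank_layout(num_cores=8, num_nbors=4):
    total = num_cores + num_cores * num_nbors   # 8 + 8*4 = 40 ranks
    core_ranks = list(range(num_cores))
    neighbors = {
        i: list(range(num_cores + i * num_nbors,
                      num_cores + (i + 1) * num_nbors))
        for i in core_ranks
    }
    return total, core_ranks, neighbors

total, cores, nbrs = rank_layout()
print(total)     # 40
print(nbrs[0])   # [8, 9, 10, 11]  -> neighbor ranks of core rank 0
print(nbrs[7])   # [36, 37, 38, 39]
```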

  5. Core-core and core-valence correlation energy atomic and molecular benchmarks for Li through Ar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ranasinghe, Duminda S.; Frisch, Michael J.; Petersson, George A., E-mail: gpetersson@wesleyan.edu

    2015-12-07

    We have established benchmark core-core, core-valence, and valence-valence absolute coupled-cluster single double (triple) correlation energies (±0.1%) for 210 species covering the first and second rows of the periodic table. These species provide 194 energy differences (±0.03 mEₕ) including ionization potentials, electron affinities, and total atomization energies. These results can be used for calibration of less expensive methodologies for practical routine determination of core-core and core-valence correlation energies.

  6. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    NASA Technical Reports Server (NTRS)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
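
    A common way to expose the memory subsystem contention described above is to run increasing numbers of copies of a memory-bound kernel on one node and watch per-copy throughput drop. The sketch below is a generic illustration under that assumption; it is not one of the synthetic kernels or natural benchmarks used in the study.

```python
# Minimal sketch of a contention probe: measure per-process throughput of a
# memory-bound copy/scale kernel when 1, 2, 4 copies run concurrently.
import time
import numpy as np
from multiprocessing import Process, Queue

def stream_like(n_mb, q):
    a = np.ones(n_mb * 131072)           # ~n_mb MB of float64
    b = np.empty_like(a)
    t0 = time.perf_counter()
    for _ in range(20):
        np.multiply(a, 2.0, out=b)       # memory-bound: read a, write b
    dt = time.perf_counter() - t0
    q.put(20 * 2 * a.nbytes / dt / 1e9)  # GB/s of traffic for this copy

if __name__ == "__main__":
    for ncopies in (1, 2, 4):
        q = Queue()
        procs = [Process(target=stream_like, args=(256, q)) for _ in range(ncopies)]
        for p in procs: p.start()
        for p in procs: p.join()
        rates = [q.get() for _ in procs]
        print(f"{ncopies} copies: {sum(rates):.1f} GB/s aggregate, "
              f"{sum(rates)/ncopies:.1f} GB/s per copy")
```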

  7. CLEAR: Cross-Layer Exploration for Architecting Resilience

    DTIC Science & Technology

    2017-03-01

    benchmark analysis, also provides cost-effective solutions (~1% additional energy cost for the same 50× improvement). This paper addresses the...core (OoO-core) [Wang 04], across 18 benchmarks. Such extensive exploration enables us to conclusively answer the above cross-layer resilience...analysis of the effects of soft errors on application benchmarks, provides a highly effective soft error resilience approach. 3. The above

  8. Integral Full Core Multi-Physics PWR Benchmark with Measured Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forget, Benoit; Smith, Kord; Kumar, Shikhar

    In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio, with concrete examples in nuclear engineering in the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. As with all analysis tools, verification and validation are essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g., critical experiments, flow loops, etc.), and there is a lack of relevant multi-physics benchmark measurements that are necessary to validate the high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, and core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.

  9. Shift Verification and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G

    2016-09-07

    This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and to results from other Monte Carlo radiation transport codes, and found very good agreement across a variety of comparison measures. These include prediction of the critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation, we are confident that Shift can provide reference results for CASL benchmarking.

  10. Evaluation of Neutron Radiography Reactor LEU-Core Start-Up Measurements

    DOE PAGES

    Bess, John D.; Maddock, Thomas L.; Smolinski, Andrew T.; ...

    2014-11-04

    Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. Experiments include criticality, control-rod worth measurements, shutdown margin, and excess reactivity for four core loadings with 56, 60, 62, and 64 fuel elements. The worth of four graphite reflector block assemblies and an empty dry tube used for experiment irradiations were also measured and evaluated for the 60-fuel-element core configuration. Dominant uncertainties in the experimental keff come from uncertainties in the manganese content and impurities in the stainless steel fuel cladding as well as the ²³⁶U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 neutron nuclear data are approximately 1.4% (9σ) greater than the benchmark model eigenvalues, which is commonly seen in Monte Carlo simulations of other TRIGA reactors. Simulations of the worth measurements are within the 2σ uncertainty for most of the benchmark experiment worth values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  11. Evaluation of Neutron Radiography Reactor LEU-Core Start-Up Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Maddock, Thomas L.; Smolinski, Andrew T.

    Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. Experiments include criticality, control-rod worth measurements, shutdown margin, and excess reactivity for four core loadings with 56, 60, 62, and 64 fuel elements. The worth of four graphite reflector block assemblies and an empty dry tube used for experiment irradiations were also measured and evaluated for the 60-fuel-element core configuration. Dominant uncertainties in the experimental keff come from uncertainties in the manganese content and impurities in the stainless steel fuel cladding as well as the ²³⁶U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 neutron nuclear data are approximately 1.4% (9σ) greater than the benchmark model eigenvalues, which is commonly seen in Monte Carlo simulations of other TRIGA reactors. Simulations of the worth measurements are within the 2σ uncertainty for most of the benchmark experiment worth values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  12. ZPR-6 Assembly 7 High ²⁴⁰Pu Core: A cylindrical assembly with mixed (Pu,U)-oxide fuel and a central high ²⁴⁰Pu zone.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lell, R. M.; Schaefer, R. W.; McKnight, R. D.

    Over a period of 30 years, more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9, and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited to form the basis for criticality safety benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks, and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was ²³⁵U or ²³⁹Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium, or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead, or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. The term 'benchmark' in a ZPR program connotes a particularly simple loading aimed at gaining basic reactor physics insight, as opposed to studying a reactor design. In fact, the ZPR-6/7 Benchmark Assembly (Reference 1) had a very simple core unit cell assembled from plates of depleted uranium, sodium, iron oxide, U₃O₈, and plutonium. The ZPR-6/7 core cell-average composition is typical of the interior region of liquid-metal fast breeder reactors (LMFBRs) of the era. It was one part of the Demonstration Reactor Benchmark Program, which provided integral experiments characterizing the important features of demonstration-size LMFBRs. As a benchmark, ZPR-6/7 was devoid of many 'real' reactor features, such as simulated control rods and multiple enrichment zones, in its reference form. Those kinds of features were investigated experimentally in variants of the reference ZPR-6/7 or in other critical assemblies in the Demonstration Reactor Benchmark Program.

  13. Fukushima Daiichi Radionuclide Inventories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardoni, Jeffrey N.; Jankovsky, Zachary Kyle

    Radionuclide inventories are generated to permit detailed analyses of the Fukushima Daiichi meltdowns. This is necessary information for severe accident calculations, dose calculations, and source term and consequence analyses. Inventories are calculated using SCALE6 and compared to values predicted by international researchers supporting the OECD/NEA's Benchmark Study on the Accident at Fukushima Daiichi Nuclear Power Station (BSAF). Both sets of inventory information are acceptable for best-estimate analyses of the Fukushima reactors. Consistent nuclear information for severe accident codes, including radionuclide class masses and core decay powers, is also derived from the SCALE6 analyses. Key nuclide activity ratios are calculated as functions of burnup and nuclear data in order to explore their utility for nuclear forensics and support future decommissioning efforts.

  14. Benchmark Evaluation of the HTR-PROTEUS Absorber Rod Worths (Core 4)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; Leland M. Montierth

    2014-06-01

    PROTEUS was a zero-power research reactor at the Paul Scherrer Institute (PSI) in Switzerland. The critical assembly was constructed from a large graphite annulus surrounding a central cylindrical cavity. Various experimental programs were investigated in PROTEUS; during the years 1992 through 1996, it was configured as a pebble-bed reactor and designated HTR-PROTEUS. Various critical configurations were assembled, each accompanied by an assortment of reactor physics experiments including differential and integral absorber rod measurements, kinetics, reaction rate distributions, water ingress effects, and small sample reactivity effects [1]. Four benchmark reports were previously prepared and included in the March 2013 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [2], evaluating eleven critical configurations. A summary of that effort was previously provided [3], and an analysis of absorber rod worth measurements for Cores 9 and 10 was performed prior to this analysis and included in PROTEUS-GCR-EXP-004 [4]. In the current benchmark effort, absorber rod worths measured for Core Configuration 4, which was the only core with a randomly-packed pebble loading, have been evaluated for inclusion as a revision to the HTR-PROTEUS benchmark report PROTEUS-GCR-EXP-002.

  15. Defining core elements and outstanding practice in Nutritional Science through collaborative benchmarking.

    PubMed

    Samman, Samir; McCarthur, Jennifer O; Peat, Mary

    2006-01-01

    Benchmarking has been adopted by educational institutions as a potentially sensitive tool for improving learning and teaching. To date there has been limited application of benchmarking methodology in the Discipline of Nutritional Science. The aim of this survey was to define core elements and outstanding practice in Nutritional Science through collaborative benchmarking. Questionnaires that aimed to establish proposed core elements for Nutritional Science, and inquired about definitions of "good" and "outstanding" practice, were posted to named representatives at eight Australian universities. Seven respondents identified core elements that included knowledge of nutrient metabolism and requirement, food production and processing, modern biomedical techniques that could be applied to understanding nutrition, and social and environmental issues as related to Nutritional Science. Four of the eight institutions who agreed to participate in the present survey identified the integration of teaching with research as an indicator of outstanding practice. Nutritional Science is a rapidly evolving discipline. Further and more comprehensive surveys are required to consolidate and update the definition of the discipline, and to identify the optimal way of teaching it. Global ideas and specific regional requirements also need to be considered.

  16. Present Status and Extensions of the Monte Carlo Performance Benchmark

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

    The NEA Monte Carlo Performance benchmark started in 2011 with the aim of monitoring, over the years, the ability to perform a full-size Monte Carlo reactor core calculation with detailed power production for each fuel pin, including axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with commodity compute nodes. However, on true supercomputers the speedup of parallel calculations continues to increase up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt to test issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems: for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and for studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.
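
    The statement that roughly 100 billion histories are needed for 1% accuracy in small fuel zones follows from counting statistics: the relative standard deviation of a tally falls roughly as one over the square root of the number of scores it collects, and a small pin-axial zone collects only a tiny fraction of all histories. The sketch below uses an assumed scoring fraction for illustration; it is not a value taken from the benchmark.

```python
# Back-of-envelope scaling behind the "~1e11 histories for 1%" statement:
# relative standard deviation ~ 1/sqrt(N * fraction), where `fraction` is the
# (assumed) share of histories that score in one small fuel zone.

def histories_needed(target_rel_sd, fraction_scoring_in_zone):
    """Histories N such that 1/sqrt(N * fraction) ~ target relative std. dev."""
    return 1.0 / (target_rel_sd**2 * fraction_scoring_in_zone)

# e.g. a zone that collects ~1 in 10^7 of all history scores
print(f"{histories_needed(0.01, 1e-7):.1e} histories")   # ~1e11
```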

  17. Excore Modeling with VERAShift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.

    It is important to be able to accurately predict the neutron flux outside the immediate reactor core for a variety of safety and material analyses. Monte Carlo radiation transport calculations are required to produce high-fidelity excore responses. Under this milestone, VERA (specifically the VERAShift package) has been extended to perform excore calculations by running radiation transport calculations with Shift. This package couples VERA-CS with Shift to perform excore tallies for multiple state points concurrently, with each component capable of parallel execution on independent domains. Specifically, this package performs fluence calculations in the core barrel and vessel, or performs the requested tallies in any user-defined excore regions. VERAShift takes advantage of the general geometry package in Shift. This gives VERAShift the flexibility to explicitly model features outside the core barrel, including detailed vessel models, detectors, and power plant details. A very limited set of experimental and numerical benchmarks is available for excore simulation comparison. The Consortium for Advanced Simulation of Light Water Reactors (CASL) has developed a set of excore benchmark problems to include as part of the VERA-CS verification and validation (V&V) problems. The excore capability in VERAShift has been tested on small representative assembly problems, multi-assembly problems, and quarter-core problems. VERAView has also been extended to visualize vessel fluence results from VERAShift. Preliminary vessel fluence results for quarter-core multistate calculations look very promising. Further development is needed to determine the details relevant to excore simulations. Validation of VERA for fluence and excore detectors still needs to be performed against experimental and numerical results.

  18. Results of a Neutronic Simulation of HTR-Proteus Core 4.2 using PEBBED and other INL Reactor Physics Tools: FY-09 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hans D. Gougar

    The Idaho National Laboratory's deterministic neutronics analysis codes and methods were applied to the computation of the core multiplication factor of the HTR-Proteus pebble bed reactor critical facility. A combination of unit cell calculations (COMBINE-PEBDAN), 1-D discrete ordinates transport (SCAMP), and nodal diffusion calculations (PEBBED) was employed to yield keff and flux profiles. Preliminary results indicate that these tools, as currently configured and used, do not yield satisfactory estimates of keff. If control rods are not modeled, these methods can deliver much better agreement with experimental core eigenvalues, which suggests that development efforts should focus on modeling control rod and other absorber regions. Under some assumptions and in 1-D subcore analyses, diffusion theory agrees well with transport. This suggests that developments in specific areas can produce a viable core simulation approach. Some corrections have been identified and can be further developed, specifically: treatment of the upper void region, treatment of inter-pebble streaming, and explicit (multiscale) transport modeling of TRISO fuel particles as a first step in cross section generation. Until corrections are made that yield better agreement with experiment, conclusions from core design and burnup analyses should be regarded as qualitative and not benchmark quality.

  19. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as the operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  20. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    DOE PAGES

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  1. The Learning Organisation: Results of a Benchmarking Study.

    ERIC Educational Resources Information Center

    Zairi, Mohamed

    1999-01-01

    Learning in corporations was assessed using these benchmarks: core qualities of creative organizations, characteristics of organizational creativity, attributes of flexible organizations, use of diversity and conflict, creative human resource management systems, and effective and successful teams. These benchmarks are key elements of the learning…

  2. EBR-II Reactor Physics Benchmark Evaluation Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, Chad L.; Lum, Edward S; Stewart, Ryan

    This report provides a reactor physics benchmark evaluation with associated uncertainty quantification for the critical configuration of the April 1986 Experimental Breeder Reactor II Run 138B core configuration.

  3. A solid reactor core thermal model for nuclear thermal rockets

    NASA Astrophysics Data System (ADS)

    Rider, William J.; Cappiello, Michael W.; Liles, Dennis R.

    1991-01-01

    A Helium/Hydrogen Cooled Reactor Analysis (HERA) computer code has been developed. HERA has the ability to model arbitrary geometries in three dimensions, which allows the user to easily analyze reactor cores constructed of prismatic graphite elements. The code accounts for heat generation in the fuel, control rods, and other structures; conduction and radiation across gaps; convection to the coolant; and a variety of boundary conditions. The numerical solution scheme has been optimized for vector computers, making long transient analyses economical. Time integration is either explicit or implicit, which allows the model to accurately calculate both short- and long-term transients with efficient use of computer time. Both the basic spatial and temporal integration schemes have been benchmarked against analytical solutions.
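
    The explicit/implicit choice mentioned above is the familiar trade-off for conduction problems: explicit steps are cheap but stability-limited, while implicit steps permit long time steps for slow transients. The toy one-dimensional sketch below illustrates that trade-off generically; it is not the HERA discretization.

```python
# Toy 1-D conduction step illustrating the explicit/implicit choice
# (a generic sketch, not the HERA code).
import numpy as np

def step_explicit(T, alpha, dx, dt):
    """Forward-Euler update; stable only if dt <= dx**2 / (2*alpha)."""
    Tn = T.copy()
    Tn[1:-1] += alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    return Tn

def step_implicit(T, alpha, dx, dt):
    """Backward-Euler update; unconditionally stable, suited to long transients."""
    n = len(T)
    r = alpha * dt / dx**2
    A = np.eye(n) * (1 + 2*r) - np.eye(n, k=1) * r - np.eye(n, k=-1) * r
    A[0, :] = 0.0
    A[-1, :] = 0.0
    A[0, 0] = A[-1, -1] = 1.0   # hold boundary temperatures fixed
    return np.linalg.solve(A, T)

T = np.linspace(300.0, 600.0, 21)   # initial temperature profile [K]
print(step_explicit(T, alpha=1e-5, dx=0.01, dt=1.0)[:3])
print(step_implicit(T, alpha=1e-5, dx=0.01, dt=50.0)[:3])
```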

  4. A benchmarking tool to evaluate computer tomography perfusion infarct core predictions against a DWI standard.

    PubMed

    Cereda, Carlo W; Christensen, Søren; Campbell, Bruce Cv; Mishra, Nishant K; Mlynash, Michael; Levi, Christopher; Straka, Matus; Wintermark, Max; Bammer, Roland; Albers, Gregory W; Parsons, Mark W; Lansberg, Maarten G

    2016-10-01

    Differences in research methodology have hampered the optimization of Computer Tomography Perfusion (CTP) for identification of the ischemic core. We aim to optimize CTP core identification using a novel benchmarking tool. The benchmarking tool consists of an imaging library and a statistical analysis algorithm to evaluate the performance of CTP. The tool was used to optimize and evaluate an in-house developed CTP-software algorithm. Imaging data of 103 acute stroke patients were included in the benchmarking tool. Median time from stroke onset to CT was 185 min (IQR 180-238), and the median time between completion of CT and start of MRI was 36 min (IQR 25-79). Volumetric accuracy of the CTP-ROIs was optimal at an rCBF threshold of <38%; at this threshold, the mean difference was 0.3 ml (SD 19.8 ml), the mean absolute difference was 14.3 (SD 13.7) ml, and CTP was 67% sensitive and 87% specific for identification of DWI positive tissue voxels. The benchmarking tool can play an important role in optimizing CTP software as it provides investigators with a novel method to directly compare the performance of alternative CTP software packages.
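
    The voxel-level comparison described above can be expressed compactly: given a binary CTP core prediction (e.g., rCBF below a threshold) and a co-registered binary DWI lesion mask, compute the volume difference, sensitivity, and specificity. The sketch below is an illustration with assumed array names and voxel size; it is not the authors' software.

```python
# Hedged sketch of comparing a CTP core mask against a DWI lesion mask.
import numpy as np

def compare_core_masks(ctp_mask, dwi_mask, voxel_ml=0.008):
    ctp, dwi = ctp_mask.astype(bool), dwi_mask.astype(bool)
    vol_diff = (ctp.sum() - dwi.sum()) * voxel_ml        # signed difference, ml
    abs_diff = abs(vol_diff)
    sens = (ctp & dwi).sum() / max(dwi.sum(), 1)         # DWI-positive voxels found
    spec = (~ctp & ~dwi).sum() / max((~dwi).sum(), 1)    # DWI-negative voxels spared
    return vol_diff, abs_diff, sens, spec

# toy 3-D masks standing in for one patient's co-registered images
rng = np.random.default_rng(0)
dwi = rng.random((32, 32, 16)) < 0.05
ctp = dwi ^ (rng.random(dwi.shape) < 0.02)               # imperfect prediction
print(compare_core_masks(ctp, dwi))
```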

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dokhane, A.; Canepa, S.; Ferroukhi, H.

    For stability analyses of the Swiss operating Boiling Water Reactors (BWRs), the methodology employed and validated so far at the Paul Scherrer Institute (PSI) was based on the RAMONA-3 code with a hybrid upstream static lattice/core analysis approach using CASMO-4 and PRESTO-2. More recently, steps were undertaken towards a new methodology based on the SIMULATE-3K (S3K) code for the dynamical analyses, combined with the CMSYS system relying on the CASMO/SIMULATE-3 suite of codes, which was established at PSI to serve as a framework for the development and validation of reference core models of all the Swiss reactors and operated cycles. This paper presents a first validation of the new methodology on the basis of a benchmark recently organised by a Swiss utility and including the participation of several international organisations with various codes/methods. In parallel, a transition from CASMO-4E (C4E) to CASMO-5M (C5M) as the basis for the CMSYS core models was also recently initiated at PSI. Consequently, it was considered adequate to address the impact of this transition both for the steady-state core analyses and for the stability calculations, and to achieve thereby an integral approach for the validation of the new S3K methodology. Therefore, a comparative assessment of C4E versus C5M is also presented in this paper, with particular emphasis on the void coefficients and their impact on the downstream stability analysis results.

  6. Rubus: A compiler for seamless and extensible parallelism.

    PubMed

    Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special-purpose processing unit called the Graphics Processing Unit (GPU), originally designed for 2D/3D games, is now available for general-purpose use in computers and mobile devices. However, traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent on code optimizations. This paper proposes a new open-source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores, while for a matrix multiplication benchmark an average execution speedup of 84 times has been achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program.

  7. Rubus: A compiler for seamless and extensible parallelism

    PubMed Central

    Adnan, Muhammad; Aslam, Faisal; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special-purpose processing unit called the Graphics Processing Unit (GPU), originally designed for 2D/3D games, is now available for general-purpose use in computers and mobile devices. However, traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent on code optimizations. This paper proposes a new open-source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores, while for a matrix multiplication benchmark an average execution speedup of 84 times has been achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program. PMID:29211758

  8. HTR-PROTEUS pebble bed experimental program cores 9 & 10: columnar hexagonal point-on-point packing with a 1:1 moderator-to-fuel pebble ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.

    2014-03-01

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  9. HTR-PROTEUS PEBBLE BED EXPERIMENTAL PROGRAM CORES 5, 6, 7, & 8: COLUMNAR HEXAGONAL POINT-ON-POINT PACKING WITH A 1:2 MODERATOR-TO-FUEL PEBBLE RATIO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess

    2013-03-01

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  10. HTR-PROTEUS PEBBLE BED EXPERIMENTAL PROGRAM CORES 9 & 10: COLUMNAR HEXAGONAL POINT-ON-POINT PACKING WITH A 1:1 MODERATOR-TO-FUEL PEBBLE RATIO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess

    2013-03-01

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  11. Implementation and validation of a conceptual benchmarking framework for patient blood management.

    PubMed

    Kastner, Peter; Breznik, Nada; Gombotz, Hans; Hofmann, Axel; Schreier, Günter

    2015-01-01

    Public health authorities and healthcare professionals are obliged to ensure high quality health service. Because of the high variability in the utilisation of blood and blood components, benchmarking is indicated in transfusion medicine. This work covers the implementation and validation of a benchmarking framework for Patient Blood Management (PBM) based on the report from the second Austrian Benchmark trial. Core modules for automatic report generation have been implemented with KNIME (Konstanz Information Miner) and validated by comparing the output with the results of the second Austrian benchmark trial. Delta analysis shows a deviation <0.1% for 95% (max. 1.4%). The framework provides a reliable tool for PBM benchmarking. The next step is technical integration with hospital information systems.
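
    The delta analysis described above amounts to comparing automatically generated indicator values against the reference values from the trial report and checking their relative deviation. The sketch below illustrates that step with made-up indicator names and numbers; it is not the KNIME workflow itself.

```python
# Minimal sketch of a delta analysis between generated and reference indicators.

def delta_analysis(generated, reference):
    """Relative deviation per indicator, in percent."""
    return {k: abs(generated[k] - reference[k]) / abs(reference[k]) * 100.0
            for k in reference}

generated = {"transfusion_rate": 0.3401, "mean_rbc_units": 2.151}   # placeholder values
reference = {"transfusion_rate": 0.3400, "mean_rbc_units": 2.150}   # placeholder values
deltas = delta_analysis(generated, reference)
print(deltas)
print("all within 0.1 %:", all(d < 0.1 for d in deltas.values()))
```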

  12. Assessment of Static Delamination Propagation Capabilities in Commercial Finite Element Codes Using Benchmark Analysis

    NASA Technical Reports Server (NTRS)

    Orifici, Adrian C.; Krueger, Ronald

    2010-01-01

    With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc™ and MD Nastran™. Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus. The results demonstrated that the VCCT implementation in Marc™ and MD Nastran™ was capable of accurately replicating the benchmark delamination growth results and that the use of the numerical benchmarks offers advantages over benchmarking using experimental and analytical results.

  13. ZPR-3 Assembly 11: A cylindrical assembly of highly enriched uranium and depleted uranium with an average ²³⁵U enrichment of 12 atom % and a depleted uranium reflector.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lell, R. M.; McKnight, R. D.; Tsiboulia, A.

    2010-09-30

    Over a period of 30 years, more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited for nuclear data validation and to form the basis for criticality safety benchmarks. A number of the Argonne ZPR/ZPPR critical assemblies have been evaluated as ICSBEP and IRPhEP benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was ²³⁵U or ²³⁹Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. ZPR-3 Assembly 11 (ZPR-3/11) was designed as a fast reactor physics benchmark experiment with an average core ²³⁵U enrichment of approximately 12 at.% and a depleted uranium reflector. Approximately 79.7% of the total fissions in this assembly occur above 100 keV, approximately 20.3% occur below 100 keV, and essentially none below 0.625 eV - thus the classification as a 'fast' assembly. This assembly is Fast Reactor Benchmark No. 8 in the Cross Section Evaluation Working Group (CSEWG) Benchmark Specifications and has historically been used as a data validation benchmark assembly. Loading of ZPR-3 Assembly 11 began in early January 1958, and the Assembly 11 program ended in late January 1958. The core consisted of highly enriched uranium (HEU) plates and depleted uranium plates loaded into stainless steel drawers, which were inserted into the central square stainless steel tubes of a 31 x 31 matrix on a split table machine. The core unit cell consisted of two columns of 0.125 in.-wide (3.175 mm) HEU plates, six columns of 0.125 in.-wide (3.175 mm) depleted uranium plates and one column of 1.0 in.-wide (25.4 mm) depleted uranium plates. The length of each column was 10 in. (254.0 mm) in each half of the core. The axial blanket consisted of 12 in. (304.8 mm) of depleted uranium behind the core. The thickness of the depleted uranium radial blanket was approximately 14 in. (355.6 mm), and the length of the radial blanket in each half of the matrix was 22 in. (558.8 mm). The assembly geometry approximated a right circular cylinder as closely as the square matrix tubes allowed. According to the logbook and loading records for ZPR-3/11, the reference critical configuration was loading 10, which was critical on January 21, 1958. Subsequent loadings were very similar but less clean for criticality because there were modifications made to accommodate reactor physics measurements other than criticality. Accordingly, ZPR-3/11 loading 10 was selected as the only configuration for this benchmark.
    As documented below, it was determined to be acceptable as a criticality safety benchmark experiment. A very accurate transformation to a simplified model is needed to make any ZPR assembly a practical criticality-safety benchmark. There is simply too much geometric detail in an exact (as-built) model of a ZPR assembly, even a clean core such as ZPR-3/11 loading 10. The transformation must reduce the detail to a practical level without masking any of the important features of the critical experiment. And it must do this without increasing the total uncertainty far beyond that of the original experiment. Such a transformation is described in Section 3. It was obtained using a pair of continuous-energy Monte Carlo calculations. First, the critical configuration was modeled in full detail - every plate, drawer, matrix tube, and air gap was modeled explicitly. Then the regionwise compositions and volumes from the detailed as-built model were used to construct a homogeneous, two-dimensional (RZ) model of ZPR-3/11 that conserved the mass of each nuclide and the volume of each region. The simple cylindrical model is the criticality-safety benchmark model. The difference in the calculated keff values between the as-built three-dimensional model and the homogeneous two-dimensional benchmark model was used to adjust the measured excess reactivity of ZPR-3/11 loading 10 to obtain the keff for the benchmark model.
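
    The final adjustment described above can be written as a one-line correction: the benchmark-model keff is the experimental keff (inferred from the measured excess reactivity) shifted by the calculated difference between the simplified two-dimensional model and the as-built three-dimensional model. The sketch below uses placeholder numbers; it is not the evaluated ZPR-3/11 result.

```python
# Worked sketch of the model-simplification adjustment described above.
# All numbers are placeholders for illustration only.

def benchmark_keff(rho_excess_pcm, k_calc_2d, k_calc_3d):
    k_experiment = 1.0 / (1.0 - rho_excess_pcm * 1e-5)   # from measured excess reactivity
    return k_experiment + (k_calc_2d - k_calc_3d)        # apply calculated simplification bias

print(f"{benchmark_keff(rho_excess_pcm=20.0, k_calc_2d=0.9985, k_calc_3d=0.9992):.5f}")
```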

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rouxelin, Pascal Nicolas; Strydom, Gerhard

    Best-estimate plus uncertainty analysis of reactors is replacing the traditional conservative (stacked uncertainty) method for safety and licensing analysis. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied. High temperature gas cooled reactors (HTGRs) have several features that require techniques not used in light-water reactor analysis (e.g., coated-particle design and large graphite quantities at high temperatures). The International Atomic Energy Agency has therefore launched the Coordinated Research Project on HTGR Uncertainty Analysis in Modeling to study uncertainty propagation in the HTGR analysis chain. The benchmark problem defined for the prismatic design is represented by the General Atomics Modular HTGR 350. The main focus of this report is the compilation and discussion of the results obtained for various permutations of Exercise I-2c and the use of the cross section data in Exercise II-1a of the prismatic benchmark, which are defined as the last and first steps of the lattice and core simulation phases, respectively. The report summarizes the Idaho National Laboratory (INL) best-estimate results obtained for Exercise I-2a (fresh single-fuel block), Exercise I-2b (depleted single-fuel block), and Exercise I-2c (super cell), in addition to the first results of an investigation into cross section generation effects for the super-cell problem. The two-dimensional deterministic code known as the New ESC-based Weighting Transport (NEWT), included in the Standardized Computer Analyses for Licensing Evaluation (SCALE) 6.1.2 package, was used for the cross section evaluation, and the results obtained were compared to the three-dimensional stochastic SCALE module KENO-VI. The NEWT cross section libraries were generated for several permutations of the current benchmark super-cell geometry and were then provided as input to the Phase II core calculation of the stand-alone neutronics Exercise II-1a. The steady-state core calculations were simulated with the INL coupled-code system known as the Parallel and Highly Innovative Simulation for INL Code System (PHISICS) and the system thermal-hydraulics code known as the Reactor Excursion and Leak Analysis Program (RELAP5-3D), using the nuclear data libraries previously generated with NEWT. It was observed that significant differences in terms of multiplication factor and neutron flux exist between the various permutations of the Phase I super-cell lattice calculations. The use of these cross section libraries leads to only minor changes in the Phase II core simulation results for fresh fuel but shows significantly larger discrepancies for spent fuel cores. Furthermore, large incongruities were found between the SCALE NEWT and KENO-VI results for the super cells, and while some trends could be identified, a final conclusion on this issue could not yet be reached. This report will be revised in mid-2016 with more detailed analyses of the super-cell problems and their effects on the core models, using the latest version of SCALE (6.2). The super-cell models seem to show substantial improvements in terms of neutron flux as compared to single-block models, particularly at thermal energies.

  15. Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects

    PubMed Central

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed. PMID:22666140

  16. Benchmarking strategies for measuring the quality of healthcare: problems and prospects.

    PubMed

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Sterbentz, James W.; Snoj, Luka

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  18. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    NASA Astrophysics Data System (ADS)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes; in doing so, it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented/tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)
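    To make the square-root scaling concrete, the short Python sketch below runs a toy version of the nearest-neighbor idea: if each rank's fission-site production fluctuates like a Poisson variable around its target, the surplus or deficit it must exchange with neighbors grows only like the square root of the local bank size. The model and the numbers are illustrative assumptions, not the thesis's actual derivation.

        # Toy model (assumptions mine): mean |produced - target| for a Poisson count
        # with mean n grows like sqrt(2*n/pi), i.e. sublinearly in the bank size.
        import numpy as np

        rng = np.random.default_rng(42)

        def mean_surplus(sites_per_rank, trials=20000):
            """Mean absolute surplus when production is Poisson-distributed."""
            produced = rng.poisson(sites_per_rank, size=trials)
            return np.mean(np.abs(produced - sites_per_rank))

        for n in (1_000, 10_000, 100_000, 1_000_000):
            s = mean_surplus(n)
            print(f"sites/rank={n:>9,d}  mean surplus={s:10.1f}  surplus/sqrt(n)={s / np.sqrt(n):.3f}")
        # The last column is roughly constant (about 0.8), so the data exchanged per
        # rank scales with the square root of the local bank size, not its full size.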

  19. Highly Enriched Uranium Metal Cylinders Surrounded by Various Reflector Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernard Jones; J. Blair Briggs; Leland Monteirth

    A series of experiments was performed at Los Alamos Scientific Laboratory in 1958 to determine critical masses of cylinders of Oralloy (Oy) reflected by a number of materials. The experiments were all performed on the Comet Universal Critical Assembly Machine, and consisted of discs of highly enriched uranium (93.3 wt.% 235U) reflected by half-inch and one-inch-thick cylindrical shells of various reflector materials. The experiments were performed by members of Group N-2, particularly K. W. Gallup, G. E. Hansen, H. C. Paxton, and R. H. White. This experiment was intended to ascertain critical masses for criticality safety purposes, as well as to compare neutron transport cross sections to those obtained from danger coefficient measurements with the Topsy Oralloy-Tuballoy reflected and Godiva unreflected critical assemblies. The reflector materials examined in this series of experiments are as follows: magnesium, titanium, aluminum, graphite, mild steel, nickel, copper, cobalt, molybdenum, natural uranium, tungsten, beryllium, aluminum oxide, molybdenum carbide, and polythene (polyethylene). Also included are two special configurations of composite beryllium and iron reflectors. Analyses were performed in which the uncertainty associated with six different parameters was evaluated; namely, extrapolation to the uranium critical mass, uranium density, 235U enrichment, reflector density, reflector thickness, and reflector impurities. In addition to the idealizations made by the experimenters (removal of the platen and diaphragm), two simplifications were also made to the benchmark models that resulted in a small bias and additional uncertainty. First of all, since impurities in core and reflector materials are only estimated, they are not included in the benchmark models. Secondly, the room, support structure, and other possible surrounding equipment were not included in the model. Bias values that result from these two simplifications were determined, and the associated uncertainty in the bias values was included in the overall uncertainty in benchmark keff values. Bias values were very small, ranging from 0.0004 Δk low to 0.0007 Δk low. Overall uncertainties range from ±0.0018 to ±0.0030. Major contributors to the overall uncertainty include uncertainty in the extrapolation to the uranium critical mass and the uranium density. Results are summarized in Figure 1 (Experimental, Benchmark-Model, and MCNP/KENO Calculated Results). The 32 configurations described and evaluated under ICSBEP Identifier HEU-MET-FAST-084 are judged to be acceptable for use as criticality safety benchmark experiments and should be valuable integral benchmarks for nuclear data testing of the various reflector materials. Details of the benchmark models, uncertainty analyses, and final results are given in this paper.
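    For reference, the quadrature combination of independent uncertainty components into an overall benchmark k-eff uncertainty can be sketched in a few lines of Python; the component values below are placeholders, not the evaluation's actual numbers.

        # Minimal sketch: combine independent 1-sigma Delta-k components in quadrature.
        # All component values are hypothetical placeholders.
        import math

        components = {
            "critical-mass extrapolation": 0.0015,
            "uranium density":             0.0012,
            "235U enrichment":             0.0006,
            "reflector density":           0.0008,
            "reflector thickness":         0.0005,
            "reflector impurities":        0.0004,
            "model-simplification bias":   0.0005,
        }

        total = math.sqrt(sum(dk ** 2 for dk in components.values()))
        print(f"combined 1-sigma uncertainty on k-eff: +/- {total:.4f}")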

  20. Multigroup cross section library for GFR2400

    NASA Astrophysics Data System (ADS)

    Čerba, Štefan; Vrban, Branislav; Lüley, Jakub; Haščík, Ján; Nečas, Vladimír

    2017-09-01

    In this paper the development and optimization of the SBJ_E71 multigroup cross section library for GFR2400 applications is discussed. A cross section processing scheme, merging Monte Carlo and deterministic codes, was developed. Several fine and coarse group structures and two weighting flux options were analysed through 18 benchmark experiments selected from the ICSBEP handbook on the basis of similarity assessments. The performance of the collapsed version of the SBJ_E71 library was compared with continuous-energy MCNP5 (ENDF/B-VII.1) and the Korean KAFAX-E70 library. The comparison was based on integral parameters from calculations performed on full-core homogeneous models.
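    The basic operation behind producing such a broad-group library is flux-weighted collapsing of fine-group cross sections onto the coarse structure; a minimal Python sketch with invented group boundaries and values:

        # Sketch of flux-weighted group collapsing (numbers are invented).
        import numpy as np

        sigma_fine = np.array([1.8, 2.1, 2.6, 3.4, 5.0, 9.2])           # barns, fine groups
        flux_fine  = np.array([0.30, 0.25, 0.18, 0.12, 0.10, 0.05])     # weighting flux
        coarse_map = [(0, 3), (3, 6)]                                   # fine-group slices per coarse group

        sigma_coarse = [
            np.sum(sigma_fine[a:b] * flux_fine[a:b]) / np.sum(flux_fine[a:b])
            for a, b in coarse_map
        ]
        print(["%.3f" % s for s in sigma_coarse])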

  1. 76 FR 54209 - Corrosion-Resistant Carbon Steel Flat Products From the Republic of Korea: Preliminary Results of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-31

    ... description of the merchandise is dispositive. Subsidies Valuation Information A. Benchmarks for Short-Term Financing For those programs requiring the application of a won-denominated, short-term interest rate... Issues and Decision Memorandum (CORE from Korea 2006 Decision Memorandum) at ``Benchmarks for Short-Term...

  2. A Programming Model Performance Study Using the NAS Parallel Benchmarks

    DOE PAGES

    Shan, Hongzhang; Blagojević, Filip; Min, Seung-Jai; ...

    2010-01-01

    Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models (MPI, OpenMP, and PGAS) to understand their performance and memory usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool and other ways to measure communication versus computation time, as well as the fraction of the run time spent in OpenMP. The benchmarks are run on two different Cray XT5 systems and an Infiniband cluster. Our results show that in general the three programming models exhibit very similar performance characteristics. In a few cases, OpenMP is significantly faster because it explicitly avoids communication. For these particular cases, we were able to re-write the UPC versions and achieve equal performance to OpenMP. Using OpenMP was also the most advantageous in terms of memory usage. Also we compare performance differences between the two Cray systems, which have quad-core and hex-core processors. We show that at scale the performance is almost always slower on the hex-core system because of increased contention for network resources.

  3. Modal analysis and acoustic transmission through offset-core honeycomb sandwich panels

    NASA Astrophysics Data System (ADS)

    Mathias, Adam Dustin

    The work presented in this thesis is motivated by earlier research that showed that double, offset-core honeycomb sandwich panels increased thermal resistance and, hence, decreased heat transfer through the panels. This result led to the hypothesis that these panels could be used for acoustic insulation. Using commercial finite element modeling software, COMSOL Multiphysics, the acoustical properties, specifically the transmission loss across a variety of offset-core honeycomb sandwich panels, are studied for the case of a plane acoustic wave impacting the panel at normal incidence. The transmission loss results are compared with those of single-core honeycomb panels with the same cell sizes. The fundamental frequencies of the panels are also computed in an attempt to better understand the vibrational modes of these particular sandwich-structured panels. To ensure that the finite element analysis software is adequate for the task at hand, two relevant benchmark problems are solved and compared with theory. Results from these benchmark problems compared well with theory. Transmission loss results from the offset-core honeycomb sandwich panels show increased transmission loss, especially for large-cell honeycombs, when compared to single-core honeycomb panels.

  4. Service profiling and outcomes benchmarking using the CORE-OM: toward practice-based evidence in the psychological therapies. Clinical Outcomes in Routine Evaluation-Outcome Measures.

    PubMed

    Barkham, M; Margison, F; Leach, C; Lucock, M; Mellor-Clark, J; Evans, C; Benson, L; Connell, J; Audin, K; McGrath, G

    2001-04-01

    To complement the evidence-based practice paradigm, the authors argued for a core outcome measure to provide practice-based evidence for the psychological therapies. Utility requires instruments that are acceptable scientifically, as well as to service users, and a coordinated implementation of the measure at a national level. The development of the Clinical Outcomes in Routine Evaluation-Outcome Measure (CORE-OM) is summarized. Data are presented across 39 secondary-care services (n = 2,710) and within an intensively evaluated single service (n = 1,455). Results suggest that the CORE-OM is a valid and reliable measure for multiple settings and is acceptable to users and clinicians as well as policy makers. Baseline data levels of patient presenting problem severity, including risk, are reported in addition to outcome benchmarks that use the concept of reliable and clinically significant change. Basic quality improvement in outcomes for a single service is considered.

  5. Heat deposition analysis for the High Flux Isotope Reactor’s HEU and LEU core models

    DOE PAGES

    Davidson, Eva E.; Betzler, Benjamin R.; Chandler, David; ...

    2017-08-01

    The High Flux Isotope Reactor at Oak Ridge National Laboratory is an 85 MW(th) pressurized light-water-cooled and -moderated flux-trap-type research reactor. The reactor is used to conduct numerous experiments, advancing various scientific and engineering disciplines. As part of an ongoing program sponsored by the US Department of Energy National Nuclear Security Administration Office of Material Management and Minimization, studies are being performed to assess the feasibility of converting the reactor’s highly enriched uranium fuel to low-enriched uranium fuel. To support this conversion project, reference models with representative experiment target loading and explicit fuel plate representation were developed and benchmarked for both fuels to (1) allow for consistent comparison between designs for both fuel types and (2) assess the potential impact of low-enriched uranium conversion. These high-fidelity models were used to conduct heat deposition analyses at the beginning and end of the reactor cycle and are presented herein. This article (1) discusses the High Flux Isotope Reactor models developed to facilitate detailed heat deposition analyses of the reactor’s highly enriched and low-enriched uranium cores, (2) examines the computational approach for performing heat deposition analysis, which includes a discussion on the methodology for calculating the amount of energy released per fission, heating rates, power and volumetric heating rates, and (3) provides results calculated throughout various regions of the highly enriched and low-enriched uranium core at the beginning and end of the reactor cycle. These are the first detailed high-fidelity heat deposition analyses for the High Flux Isotope Reactor’s highly enriched and low-enriched core models with explicit fuel plate representation. Lastly, these analyses are used to compare heat distributions obtained for both fuel designs at the beginning and end of the reactor cycle, and they are essential for enabling comprehensive thermal hydraulics and safety analyses that require detailed estimates of the heat source within all of the reactor’s fuel element regions.

  6. Validation of the BUGJEFF311.BOLIB, BUGENDF70.BOLIB and BUGLE-B7 broad-group libraries on the PCA-Replica (H2O/Fe) neutron shielding benchmark experiment

    NASA Astrophysics Data System (ADS)

    Pescarini, Massimo; Orsi, Roberto; Frisoni, Manuela

    2016-03-01

    The PCA-Replica 12/13 (H2O/Fe) neutron shielding benchmark experiment was analysed using the TORT-3.2 3D SN code. PCA-Replica reproduces a PWR ex-core radial geometry with alternate layers of water and steel including a pressure vessel simulator. Three broad-group coupled neutron/photon working cross section libraries in FIDO-ANISN format with the same energy group structure (47 n + 20 γ) and based on different nuclear data were alternately used: the ENEA BUGJEFF311.BOLIB (JEFF-3.1.1) and BUGENDF70.BOLIB (ENDF/B-VII.0) libraries and the ORNL BUGLE-B7 (ENDF/B-VII.0) library. Dosimeter cross sections derived from the IAEA IRDF-2002 dosimetry file were employed. The calculated reaction rates for the Rh-103(n,n')Rh-103m, In-115(n,n')In-115m and S-32(n,p)P-32 threshold activation dosimeters and the calculated neutron spectra are compared with the corresponding experimental results.
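    Such comparisons reduce to folding the calculated multigroup flux with the dosimeter cross section and forming a calculated-to-experimental (C/E) ratio; a minimal Python sketch with placeholder values:

        # Sketch of a dosimeter reaction-rate C/E comparison (all values are placeholders).
        import numpy as np

        flux  = np.array([2.0e9, 1.1e9, 4.0e8])             # group fluxes, n/cm^2/s
        sigma = np.array([0.10e-24, 0.35e-24, 0.02e-24])    # dosimeter cross section, cm^2

        calculated = float(np.sum(sigma * flux))            # reactions per atom per second
        measured = 6.1e-16                                  # placeholder experimental value

        print(f"C = {calculated:.3e} s^-1 per atom, C/E = {calculated / measured:.3f}")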

  7. Summary of comparison and analysis of results from exercises 1 and 2 of the OECD PBMR coupled neutronics/thermal hydraulics transient benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkhabela, P.; Han, J.; Tyobeka, B.

    2006-07-01

    The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of their official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events through a set of multi-dimensional computational test problems. The benchmark includes three steady-state exercises and six transient exercises. This paper describes the first two steady-state exercises, their objectives, and the international participation in terms of organization, country, and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help to focus the development efforts on the most critical areas. The first two exercises also allow for the removal of user-related modeling errors and prepare the core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark. (authors)

  8. How to Advance TPC Benchmarks with Dependability Aspects

    NASA Astrophysics Data System (ADS)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performances. While TPC-E measures the recovery time of some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  9. Root-cause analysis of the better performance of the coarse-mesh finite-difference method for CANDU-type reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, W.

    2012-07-01

    Recent assessment results indicate that the coarse-mesh finite-difference method (FDM) gives consistently smaller percent differences in channel powers than the fine-mesh FDM when compared to the reference MCNP solution for CANDU-type reactors. However, there is an impression that the fine-mesh FDM should always give more accurate results than the coarse-mesh FDM in theory. To answer the question of whether the better performance of the coarse-mesh FDM for CANDU-type reactors was just a coincidence (cancellation of errors) or caused by the use of heavy water or the use of lattice-homogenized cross sections for the cluster fuel geometry in the diffusion calculation, three benchmark problems were set up with three different fuel lattices: CANDU, HWR and PWR. These benchmark problems were then used to analyze the root cause of the better performance of the coarse-mesh FDM for CANDU-type reactors. The analyses confirm that the better performance of the coarse-mesh FDM for CANDU-type reactors is mainly caused by the use of lattice-homogenized cross sections for the sub-meshes of the cluster fuel geometry in the diffusion calculation. Based on the analyses, it is recommended to use 2 × 2 coarse-mesh FDM to analyze CANDU-type reactors when lattice-homogenized cross sections are used in the core analysis. (authors)
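    To illustrate the kind of mesh sensitivity at issue, the sketch below solves a one-group, one-dimensional finite-difference diffusion eigenvalue problem on progressively finer meshes; the cross sections are invented and not related to the CANDU, HWR, or PWR lattices of the benchmark.

        # Illustrative 1-D, one-group finite-difference diffusion solver, run at several
        # mesh sizes to show how k-eff shifts with mesh refinement. Data are invented.
        import numpy as np

        def k_eff(n_cells, width=100.0, D=1.2, sig_a=0.010, nu_sig_f=0.012):
            h = width / n_cells
            # diffusion operator with an approximate zero-flux boundary condition
            main = np.full(n_cells, 2.0 * D / h**2 + sig_a)
            off = np.full(n_cells - 1, -D / h**2)
            A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

            phi, k = np.ones(n_cells), 1.0
            for _ in range(500):                      # fission source (power) iteration
                phi_new = np.linalg.solve(A, nu_sig_f * phi / k)
                k_new = k * phi_new.sum() / phi.sum()
                if abs(k_new - k) < 1e-8:
                    return k_new
                phi, k = phi_new, k_new
            return k

        for n in (10, 40, 160):
            print(f"{n:4d} cells: k-eff = {k_eff(n):.5f}")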

  10. Experimental Criticality Benchmarks for SNAP 10A/2 Reactor Cores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krass, A.W.

    2005-12-19

    This report describes computational benchmark models for nuclear criticality derived from descriptions of the Systems for Nuclear Auxiliary Power (SNAP) Critical Assembly (SCA)-4B experimental criticality program conducted by Atomics International during the early 1960s. The selected experimental configurations consist of fueled SNAP 10A/2-type reactor cores subject to varied conditions of water immersion and reflection under experimental control to measure neutron multiplication. SNAP 10A/2-type reactor cores are compact volumes fueled and moderated with the hydride of highly enriched uranium-zirconium alloy. Specifications for the materials and geometry needed to describe a given experimental configuration for a model using MCNP5 are provided. The material and geometry specifications are adequate to permit user development of input for alternative nuclear safety codes, such as KENO. A total of 73 distinct experimental configurations are described.

  11. The Role of Institutional Research in Conducting Comparative Analysis of Peers

    ERIC Educational Resources Information Center

    Trainer, James F.

    2008-01-01

    In this age of accountability, transparency, and accreditation, colleges and universities increasingly conduct comparative analyses and engage in benchmarking activities. Meant to inform institutional planning and decision making, comparative analyses and benchmarking are employed to let stakeholders know how an institution stacks up against its…

  12. Deterministic Modeling of the High Temperature Test Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortensi, J.; Cogliati, J. J.; Pope, M. A.

    2010-06-01

    Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL’s current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine-group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green’s function solution of the transverse-integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and the nodal diffusion solver codes. The results from this study show a consistent bias of 2–3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and 235U cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model, the rod positions were fixed. In addition, this work includes a brief study of a cross section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.
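    The isothermal temperature coefficient quoted above is obtained from multiplication factors calculated at two uniform core temperatures; a minimal Python sketch with placeholder values (not HTTR results):

        # Sketch: isothermal temperature coefficient from two k-eff values (placeholders).
        def reactivity(k):
            return (k - 1.0) / k          # dk/k

        k_cold, t_cold = 1.1363, 300.0    # placeholder eigenvalue at 300 K
        k_hot,  t_hot  = 1.1253, 400.0    # placeholder eigenvalue at 400 K

        alpha = (reactivity(k_hot) - reactivity(k_cold)) / (t_hot - t_cold)
        print(f"isothermal temperature coefficient = {alpha * 1e5:.2f} pcm/K")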

  13. Benchmark gas core critical experiment.

    NASA Technical Reports Server (NTRS)

    Kunze, J. F.; Lofthouse, J. H.; Cooper, C. G.; Hyland, R. E.

    1972-01-01

    A critical experiment with spherical symmetry has been conducted on the gas core nuclear reactor concept. The nonspherical perturbations in the experiment were evaluated experimentally and produce corrections to the observed eigenvalue of approximately 1% delta k. The reactor consisted of a low density, central uranium hexafluoride gaseous core, surrounded by an annulus of void or low density hydrocarbon, which in turn was surrounded with a 97-cm-thick heavy water reflector.

  14. User and Performance Impacts from Franklin Upgrades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Yun

    2009-05-10

    The NERSC flagship computer, the Cray XT4 system "Franklin", has gone through three major upgrades during the past year: a quad-core upgrade, a CLE 2.1 upgrade, and an I/O upgrade. In this paper, we will discuss various aspects of the user impact of these upgrades, such as user access, user environment, and user issues. The performance impacts on the kernel benchmarks and selected application benchmarks will also be presented.

  15. New Multi-group Transport Neutronics (PHISICS) Capabilities for RELAP5-3D and its Application to Phase I of the OECD/NEA MHTGR-350 MW Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Cristian Rabiti; Andrea Alfonsi

    2012-10-01

    PHISICS is a neutronics code system currently under development at the Idaho National Laboratory (INL). Its goal is to provide state-of-the-art simulation capability to reactor designers. The different modules of PHISICS currently under development are a nodal and semi-structured transport core solver (INSTANT), a depletion module (MRTAU), and a cross section interpolation (MIXER) module. The INSTANT module is the most developed of the modules mentioned above. Basic functionalities are ready to use, but the code is still in continuous development to extend its capabilities. This paper reports on the effort of coupling the nodal kinetics code package PHISICS (INSTANT/MRTAU/MIXER) to the thermal hydraulics system code RELAP5-3D, to enable full core and system modeling. This will make it possible to model coupled (thermal-hydraulics and neutronics) problems with more options for 3D neutron kinetics, compared to the existing diffusion theory neutron kinetics module in RELAP5-3D (NESTLE). In the second part of the paper, an overview of the OECD/NEA MHTGR-350 MW benchmark is given. This benchmark has been approved by the OECD, and is based on the General Atomics 350 MW Modular High Temperature Gas Reactor (MHTGR) design. The benchmark includes coupled neutronics/thermal-hydraulics exercises that require more capabilities than RELAP5-3D with NESTLE offers. Therefore, the MHTGR benchmark makes extensive use of the new PHISICS/RELAP5-3D coupling capabilities. The paper presents the preliminary results of the three steady-state exercises specified in Phase I of the benchmark using PHISICS/RELAP5-3D.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marck, Steven C. van der, E-mail: vandermarck@nrg.eu

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for ⁶Li, ⁷Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such instances can often be related to nuclear data for specific non-fissile elements, such as C, Fe, or Gd. Indications are that the intermediate and mixed spectrum cases are less well described. The results for the shielding benchmarks are generally good, with very similar results for the three libraries in the majority of cases. Nevertheless there are, in certain cases, strong deviations between calculated and benchmark values, such as for Co and Mg. Also, the results show discrepancies at certain energies or angles for e.g. C, N, O, Mo, and W. The functionality of MCNP6 to calculate the effective delayed neutron fraction yields very good results for all three libraries.
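    The effective delayed neutron fraction comparisons mentioned above are commonly made with the prompt-k estimate, beta_eff being approximately 1 - k_prompt/k_total; a minimal Python sketch with placeholder eigenvalues (a generic illustration, not the MCNP6 implementation itself):

        # Prompt-k estimate of the effective delayed neutron fraction (placeholder values).
        k_total  = 1.00042     # eigenvalue with delayed neutrons included
        k_prompt = 0.99333     # eigenvalue with delayed neutrons turned off

        beta_eff = 1.0 - k_prompt / k_total
        print(f"beta_eff ~ {beta_eff:.5f} ({beta_eff * 1e5:.0f} pcm)")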

  17. MC21 analysis of the MIT PWR benchmark: Hot zero power results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly III, D. J.; Aviles, B. N.; Herman, B. R.

    2013-07-01

    MC21 Monte Carlo results have been compared with hot zero power measurements from an operating pressurized water reactor (PWR), as specified in a new full core PWR performance benchmark from the MIT Computational Reactor Physics Group. Included in the comparisons are axially integrated full core detector measurements, axial detector profiles, control rod bank worths, and temperature coefficients. Power depressions from grid spacers are seen clearly in the MC21 results. Application of Coarse Mesh Finite Difference (CMFD) acceleration within MC21 has been accomplished, resulting in a significant reduction of inactive batches necessary to converge the fission source. CMFD acceleration has also been shown to work seamlessly with the Uniform Fission Site (UFS) variance reduction method. (authors)

  18. Benchmark Evaluation of Fuel Effect and Material Worth Measurements for a Beryllium-Reflected Space Reactor Mockup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, Margaret A.; Bess, John D.

    2015-02-01

    The critical configurations of the small, compact critical assembly (SCCA) experiments performed at the Oak Ridge Critical Experiments Facility (ORCEF) in 1962-1965 have been evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The initial intent of these experiments was to support the design of the Medium Power Reactor Experiment (MPRE) program, whose purpose was to study “power plants for the production of electrical power in space vehicles.” The third configuration in this series of experiments was a beryllium-reflected assembly of stainless-steel-clad, highly enriched uranium (HEU)-O2 fuel, a mockup of a potassium-cooled space power reactor. Reactivity measurements, cadmium ratio spectral measurements, and fission rate measurements were performed through the core and top reflector. Fuel effect worth measurements and neutron moderating and absorbing material worths were also measured in the assembly fuel region. The cadmium ratios, fission rate, and worth measurements were evaluated for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The fuel tube effect and neutron moderating and absorbing material worth measurements are the focus of this paper. Additionally, a measurement of the worth of potassium filling the core region was performed but has not yet been evaluated. Pellets of 93.15 wt.% enriched uranium dioxide (UO2) were stacked in 30.48-cm-tall stainless steel fuel tubes (0.3-cm-tall end caps). Each fuel tube held 26 pellets with a total mass of 295.8 g UO2 per tube. A total of 253 tubes were arranged in a 1.506-cm triangular lattice. An additional 7-tube cluster critical configuration was also measured but not used for any physics measurements. The core was surrounded on all sides by a beryllium reflector. The fuel effect worths were measured by removing fuel tubes at various radii. An accident scenario was also simulated by moving twenty fuel rods outward from the periphery of the core so that they were touching the core tank. The change in the system reactivity when the fuel tube(s) were removed or moved, compared with the base configuration, was the worth of the fuel tubes or the accident scenario. The worth of neutron absorbing and moderating materials was measured by inserting material rods into the core at regular intervals or placing lids at the top of the core tank. Stainless steel 347, tungsten, niobium, polyethylene, graphite, boron carbide, aluminum, and cadmium rod and/or lid worths were all measured. The change in the system reactivity when a material was inserted into the core is the worth of the material.
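    Worth measurements of this kind reduce to the reactivity change between the base and perturbed configurations; the Python sketch below converts a pair of multiplication factors into a worth in pcm and in cents, using placeholder values and an assumed beta_eff.

        # Sketch: material or fuel-tube worth from two k-eff values (placeholders).
        def rho(k):
            return (k - 1.0) / k

        k_base      = 1.00015
        k_perturbed = 0.99880      # e.g. after inserting an absorber rod
        beta_eff    = 0.0070       # assumed effective delayed neutron fraction

        worth = rho(k_perturbed) - rho(k_base)
        print(f"worth = {worth * 1e5:.1f} pcm = {worth / beta_eff * 100:.1f} cents")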

  19. Benchmarking high performance computing architectures with CMS’ skeleton framework

    NASA Astrophysics Data System (ADS)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Thread Building Block library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  20. Benchmarking NWP Kernels on Multi- and Many-core Processors

    NASA Astrophysics Data System (ADS)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
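    The computational-intensity characterization in goal (1) is essentially roofline reasoning: attainable performance is bounded by the lesser of the peak floating-point rate and the product of arithmetic intensity and memory bandwidth. A small Python sketch with made-up hardware and kernel numbers:

        # Roofline-style bound: min(peak GFLOP/s, intensity * bandwidth). Numbers invented.
        def roofline(flops, bytes_moved, peak_gflops, bw_gbs):
            intensity = flops / bytes_moved               # FLOPs per byte
            return intensity, min(peak_gflops, intensity * bw_gbs)

        kernels = {                                        # (FLOPs, bytes) per call, hypothetical
            "advection":    (2.0e9, 4.0e9),
            "microphysics": (6.0e9, 1.5e9),
        }
        for name, (f, b) in kernels.items():
            ai, bound = roofline(f, b, peak_gflops=80.0, bw_gbs=25.0)
            print(f"{name:12s} intensity={ai:5.2f} flop/byte  bound={bound:6.1f} GFLOP/s")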

  1. featsel: A framework for benchmarking of feature selection algorithms and cost functions

    NASA Astrophysics Data System (ADS)

    Reis, Marcelo S.; Estrela, Gustavo; Ferreira, Carlos Eduardo; Barrera, Junior

    In this paper, we introduce featsel, a framework for benchmarking of feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency purposes. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. Besides, this framework already comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples, in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.
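    A toy version of the search problem featsel addresses is shown below: exhaustively minimizing a cost function over the Boolean lattice of feature subsets. The data set and the cost function (nearest-centroid error plus a per-feature penalty) are invented for illustration and are not part of the framework.

        # Toy exhaustive search over the Boolean lattice of feature subsets.
        from itertools import combinations
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 4))
        y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)      # only features 0 and 2 matter

        def cost(subset):
            if not subset:
                return 1.0
            Xs = X[:, list(subset)]
            mu0, mu1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
            pred = np.linalg.norm(Xs - mu1, axis=1) < np.linalg.norm(Xs - mu0, axis=1)
            return np.mean(pred != y) + 0.01 * len(subset)  # error + per-feature penalty

        n = X.shape[1]
        subsets = (tuple(c) for r in range(n + 1) for c in combinations(range(n), r))
        best = min(subsets, key=cost)
        print("best subset:", best, "cost:", round(cost(best), 3))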

  2. Knowledge and Practices of Faculty at NASM Accredited Institutions in the Southeast Region Regarding Standards-Based Instruction

    ERIC Educational Resources Information Center

    Nelson, Jonathan Leon

    2017-01-01

    In 1993, Congress passed the mandate "Goals 2000: Educate America Act," which established standards for K-12 education that outlined the core benchmarks of student achievement for individuals who have mastered the core curricula required to earn a high school diploma (Mark, 1995). Unfortunately, these curricular requirements did not…

  3. Benchmarking and Accreditation Goals Support the Value of an Undergraduate Business Law Core Course

    ERIC Educational Resources Information Center

    O'Brien, Christine Neylon; Powers, Richard E.; Wesner, Thomas L.

    2018-01-01

    This article provides information about the value of a core course in business law and why it remains essential to business education. It goes on to identify highly ranked undergraduate business programs that require one or more business law courses. Using "Business Week" and "US News and World Report" to identify top…

  4. Research-Based Writing Practices and the Common Core: Meta-Analysis and Meta-Synthesis

    ERIC Educational Resources Information Center

    Graham, Steve; Harris, Karen R.; Santangelo, Tanya

    2015-01-01

    In order to meet writing objectives specified in the Common Core State Standards (CCSS), many teachers need to make significant changes in how writing is taught. While CCSS identified what students need to master, it did not provide guidance on how teachers are to meet these writing benchmarks. The current article presents research-supported…

  5. Kohn-Sham Band Structure Benchmark Including Spin-Orbit Coupling for 2D and 3D Solids

    NASA Astrophysics Data System (ADS)

    Huhn, William; Blum, Volker

    2015-03-01

    Accurate electronic band structures serve as a primary indicator of the suitability of a material for a given application, e.g., as electronic or catalytic materials. Computed band structures, however, are subject to a host of approximations, some of which are more obvious (e.g., the treatment of exchange-correlation or the self-energy) and others less obvious (e.g., the treatment of core, semicore, or valence electrons, handling of relativistic effects, or the accuracy of the underlying basis set used). We here provide a set of accurate Kohn-Sham band structure benchmarks, using the numeric atom-centered all-electron electronic structure code FHI-aims combined with the "traditional" PBE functional and the hybrid HSE functional, to calculate core, valence, and low-lying conduction bands of a set of 2D and 3D materials. Benchmarks are provided with and without effects of spin-orbit coupling, using quasi-degenerate perturbation theory to predict spin-orbit splittings. This work is funded by Fritz-Haber-Institut der Max-Planck-Gesellschaft.
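    As a generic illustration of the size of such splittings (not the quasi-degenerate perturbation treatment used in the benchmark itself), the sketch below diagonalizes an atomic spin-orbit term lambda*L.S in a p manifold, which separates the six states into a j = 3/2 quartet and a j = 1/2 doublet split by 1.5*lambda:

        # Toy spin-orbit splitting of an atomic p manifold (generic illustration).
        import numpy as np

        # l = 1 angular momentum matrices in the |m = +1, 0, -1> basis
        lp = np.sqrt(2) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
        lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
        lx, ly = (lp + lp.conj().T) / 2, (lp - lp.conj().T) / 2j

        # spin-1/2 matrices
        sx, sy, sz = (0.5 * np.array(m, dtype=complex) for m in
                      ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]))

        lam = 0.1   # hypothetical coupling strength, eV
        h_soc = lam * (np.kron(lx, sx) + np.kron(ly, sy) + np.kron(lz, sz))
        print(np.round(np.linalg.eigvalsh(h_soc), 4))   # two states at -lam, four at +lam/2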

  6. Tracking the emergence of synthetic biology.

    PubMed

    Shapira, Philip; Kwon, Seokbeom; Youtie, Jan

    2017-01-01

    Synthetic biology is an emerging domain that combines biological and engineering concepts and which has seen rapid growth in research, innovation, and policy interest in recent years. This paper contributes to efforts to delineate this emerging domain by presenting a newly constructed bibliometric definition of synthetic biology. Our approach is dimensioned from a core set of papers in synthetic biology, using procedures to obtain benchmark synthetic biology publication records, extract keywords from these benchmark records, and refine the keywords, supplemented with articles published in dedicated synthetic biology journals. We compare our search strategy with other recent bibliometric approaches to define synthetic biology, using a common source of publication data for the period from 2000 to 2015. The paper details the rapid growth and international spread of research in synthetic biology in recent years, demonstrates that diverse research disciplines are contributing to the multidisciplinary development of synthetic biology research, and visualizes this by profiling synthetic biology research on the map of science. We further show the roles of a relatively concentrated set of research sponsors in funding the growth and trajectories of synthetic biology. In addition to discussing these analyses, the paper notes limitations and suggests lines for further work.

  7. The X40×10 Halogen Bonding Benchmark Revisited: Surprising Importance of (n-1)d Subvalence Correlation.

    PubMed

    Kesharwani, Manoj K; Manna, Debashree; Sylvetsky, Nitai; Martin, Jan M L

    2018-03-01

    We have re-evaluated the X40×10 benchmark for halogen bonding using conventional and explicitly correlated coupled cluster methods. For the aromatic dimers at small separation, improved CCSD(T)-MP2 "high-level corrections" (HLCs) cause substantial reductions in the dissociation energy. For the bromine and iodine species, (n-1)d subvalence correlation increases dissociation energies and turns out to be more important for noncovalent interactions than is generally realized; (n-1)sp subvalence correlation is much less important. The (n-1)d subvalence term is dominated by core-valence correlation; with the smaller cc-pVDZ-F12-PP and cc-pVTZ-F12-PP basis sets, basis set convergence for the core-core contribution becomes sufficiently erratic that it may compromise results overall. The two factors conspire to generate discrepancies of up to 0.9 kcal/mol (0.16 kcal/mol RMS) between the original X40×10 data and the present revision.
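    The composite scheme implied above adds a CCSD(T)-MP2 high-level correction and an (n-1)d core-valence term on top of an MP2 basis-set-limit interaction energy; a minimal Python sketch with placeholder energies (not values from the revised X40x10 set):

        # Additive composite scheme: MP2/CBS + HLC + core-valence correction.
        # All values are placeholders in kcal/mol (negative = bound).
        e_int_mp2_cbs  = -5.90   # MP2 interaction energy at the basis-set limit
        e_int_ccsdt    = -5.55   # CCSD(T) in a modest basis
        e_int_mp2      = -5.80   # MP2 in the same modest basis
        delta_core_val = -0.12   # (n-1)d subvalence correlation contribution

        hlc = e_int_ccsdt - e_int_mp2
        e_best = e_int_mp2_cbs + hlc + delta_core_val
        print(f"HLC = {hlc:+.2f} kcal/mol, best estimate = {e_best:.2f} kcal/mol")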

  8. HTR-PROTEUS Pebble Bed Experimental Program Cores 1, 1A, 2, and 3: Hexagonal Close Packing with a 1:2 Moderator-to-Fuel Pebble Ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; Barbara H. Dolphin; James W. Sterbentz

    2013-03-01

    In its deployment as a pebble bed reactor (PBR) critical facility from 1992 to 1996, the PROTEUS facility was designated as HTR-PROTEUS. This experimental program was performed as part of an International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) on the Validation of Safety Related Physics Calculations for Low Enriched HTGRs. Within this project, critical experiments were conducted for graphite moderated LEU systems to determine core reactivity, flux and power profiles, reaction-rate ratios, the worth of control rods, both in-core and reflector based, the worth of burnable poisons, kinetic parameters, and the effects of moisture ingress on these parameters. Four benchmark experiments were evaluated in this report: Cores 1, 1A, 2, and 3. These core configurations represent the hexagonal close packing (HCP) configurations of the HTR-PROTEUS experiment with a moderator-to-fuel pebble ratio of 1:2. Core 1 represents the only configuration utilizing ZEBRA control rods. Cores 1A, 2, and 3 use withdrawable, hollow, stainless steel control rods. Cores 1 and 1A are similar except for the use of different control rods; Core 1A also has one less layer of pebbles (21 layers instead of 22). Core 2 retains the first 16 layers of pebbles from Cores 1 and 1A and has 16 layers of moderator pebbles stacked above the fueled layers. Core 3 retains the first 17 layers of pebbles but has polyethylene rods inserted between pebbles to simulate water ingress. The additional partial pebble layer (layer 18) for Core 3 was not included as it was used for core operations and not the reported critical configuration. Cores 1, 1A, 2, and 3 were determined to be acceptable benchmark experiments.

  9. HTR-PROTEUS Pebble Bed Experimental Program Cores 1, 1A, 2, and 3: Hexagonal Close Packing with a 1:2 Moderator-to-Fuel Pebble Ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; Barbara H. Dolphin; James W. Sterbentz

    2012-03-01

    In its deployment as a pebble bed reactor (PBR) critical facility from 1992 to 1996, the PROTEUS facility was designated as HTR-PROTEUS. This experimental program was performed as part of an International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) on the Validation of Safety Related Physics Calculations for Low Enriched HTGRs. Within this project, critical experiments were conducted for graphite moderated LEU systems to determine core reactivity, flux and power profiles, reaction-rate ratios, the worth of control rods, both in-core and reflector based, the worth of burnable poisons, kinetic parameters, and the effects of moisture ingress on these parameters. Four benchmark experiments were evaluated in this report: Cores 1, 1A, 2, and 3. These core configurations represent the hexagonal close packing (HCP) configurations of the HTR-PROTEUS experiment with a moderator-to-fuel pebble ratio of 1:2. Core 1 represents the only configuration utilizing ZEBRA control rods. Cores 1A, 2, and 3 use withdrawable, hollow, stainless steel control rods. Cores 1 and 1A are similar except for the use of different control rods; Core 1A also has one less layer of pebbles (21 layers instead of 22). Core 2 retains the first 16 layers of pebbles from Cores 1 and 1A and has 16 layers of moderator pebbles stacked above the fueled layers. Core 3 retains the first 17 layers of pebbles but has polyethylene rods inserted between pebbles to simulate water ingress. The additional partial pebble layer (layer 18) for Core 3 was not included as it was used for core operations and not the reported critical configuration. Cores 1, 1A, 2, and 3 were determined to be acceptable benchmark experiments.

  10. Benchmarking high performance computing architectures with CMS’ skeleton framework

    DOE PAGES

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-11-23

    Here, in 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Thread Building Block library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  11. Benchmarking high performance computing architectures with CMS’ skeleton framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    Here, in 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Thread Building Block library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  12. Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems

    DTIC Science & Technology

    2017-04-13

    modelling code, a parallel benchmark, and a communication-avoiding version of the QR algorithm. Further, several improvements to the OmpSs model were...movement; and a port of the dynamic load balancing library to OmpSs. Finally, several updates to the tools infrastructure were accomplished, including: an...OmpSs: a basic algorithm on image processing applications, a mini application representative of an ocean modelling code, a parallel benchmark, and a

  13. Let History Not Repeat Itself: Overcoming Obstacles to the Common Core's Success. ES Select

    ERIC Educational Resources Information Center

    Chubb, John

    2012-01-01

    The Common Core State Standards project is the latest in a series of efforts to improve the academic success of American students. Forty-five states and the District of Columbia have endorsed new academic benchmarks that substantially raise the bar for achievement in English and mathematics. Aiming at a deeper form of learning, the initiative is a…

  14. Performance implications from sizing a VM on multi-core systems: A data analytic application's view

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Horey, James L; Begoli, Edmon

    In this paper, we present a quantitative performance analysis of data analytics applications running on multi-core virtual machines. Such environments form the core of cloud computing. In addition, data analytics applications, such as Cassandra and Hadoop, are becoming increasingly popular on cloud computing platforms. This convergence necessitates a better understanding of the performance and cost implications of such hybrid systems. For example, the very first step in hosting applications in virtualized environments requires the user to configure the number of virtual processors and the size of memory. To understand performance implications of this step, we benchmarked three Yahoo Cloud Serving Benchmark (YCSB) workloads in a virtualized multi-core environment. Our measurements indicate that the performance of Cassandra for YCSB workloads does not heavily depend on the processing capacity of a system, while the size of the data set is critical to performance relative to allocated memory. We also identified a strong relationship between the running time of workloads and various hardware events (last level cache loads, misses, and CPU migrations). From this analysis, we provide several suggestions to improve the performance of data analytics applications running on cloud computing environments.

  15. Assessing fidelity to evidence-based practices in usual care: the example of family therapy for adolescent behavior problems.

    PubMed

    Hogue, Aaron; Dauber, Sarah

    2013-04-01

    This study describes a multimethod evaluation of treatment fidelity to the family therapy (FT) approach demonstrated by front-line therapists in a community behavioral health clinic that utilized FT as its routine standard of care. Study cases (N=50) were adolescents with conduct and/or substance use problems randomly assigned to routine family therapy (RFT) or to a treatment-as-usual clinic not aligned with the FT approach (TAU). Observational analyses showed that RFT therapists consistently achieved a level of adherence to core FT techniques comparable to the adherence benchmark established during an efficacy trial of a research-based FT. Analyses of therapist-report measures found that compared to TAU, RFT demonstrated strong adherence to FT and differentiation from three other evidence-based practices: cognitive-behavioral therapy, motivational interviewing, and drug counseling. Implications for rigorous fidelity assessments of evidence-based practices in usual care settings are discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Comparing Hospital Processes and Outcomes in California Medicare Beneficiaries: Simulation Prompts Reconsideration.

    PubMed

    Escobar, Gabriel J; Baker, Jennifer M; Turk, Benjamin J; Draper, David; Liu, Vincent; Kipnis, Patricia

    2017-01-01

    This article is not a traditional research report. It describes how conducting a specific set of benchmarking analyses led us to broader reflections on hospital benchmarking. We reexamined an issue that has received far less attention from researchers than in the past: How variations in the hospital admission threshold might affect hospital rankings. Considering this threshold made us reconsider what benchmarking is and what future benchmarking studies might be like. Although we recognize that some of our assertions are speculative, they are based on our reading of the literature and previous and ongoing data analyses being conducted in our research unit. We describe the benchmarking analyses that led to these reflections. The Centers for Medicare and Medicaid Services' Hospital Compare Web site includes data on fee-for-service Medicare beneficiaries but does not control for severity of illness, which requires physiologic data now available in most electronic medical records. To address this limitation, we compared hospital processes and outcomes among Kaiser Permanente Northern California's (KPNC) Medicare Advantage beneficiaries and non-KPNC California Medicare beneficiaries between 2009 and 2010. We assigned a simulated severity of illness measure to each record and explored the effect of having the additional information on outcomes. We found that if the admission severity of illness in non-KPNC hospitals increased, KPNC hospitals' mortality performance would appear worse; conversely, if admission severity at non-KPNC hospitals decreased, KPNC hospitals' performance would appear better. Future hospital benchmarking should consider the impact of variation in admission thresholds.

  17. Student Satisfaction Surveys: The Value in Taking an Historical Perspective

    ERIC Educational Resources Information Center

    Kane, David; Williams, James; Cappuccini-Ansfield, Gillian

    2008-01-01

    Benchmarking satisfaction over time can be extremely valuable where a consistent feedback cycle is employed. However, the value of benchmarking over a long period of time has not been analysed in depth. What is the value of benchmarking this type of data over time? What does it tell us about a feedback and action cycle? What impact does a study of…

  18. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part I: Benchmark comparisons of WIMS-D5 and DRAGON cell and control rod parameters with MCNP5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mollerach, R.; Leszczynski, F.; Fink, J.

    2006-07-01

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which has been progressing slowly during the last ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. For the reactor physics area, a revision and update of calculation methods and models (cell, supercell and reactor) was recently carried out covering cell, supercell (control rod) and core calculations. As a validation of the new models, some benchmark comparisons were done with Monte Carlo calculations with MCNP5. This paper presents comparisons of cell and supercell benchmark problems, based on a slightly idealized model of the Atucha-I core, obtained with the WIMS-D5 and DRAGON codes against MCNP5 results. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II. Cell parameters compared include cell k-infinity, relative power levels of the different rings of fuel rods, and some two-group macroscopic cross sections. Supercell comparisons include supercell k-infinity changes due to the control rods (tubes) of steel and hafnium. (authors)

  19. It's Not Education by Zip Code Anymore--But What is It? Conceptions of Equity under the Common Core

    ERIC Educational Resources Information Center

    Kornhaber, Mindy L.; Griffith, Kelly; Tyler, Alison

    2014-01-01

    The Common Core State Standards Initiative is a standards-based reform in which 45 U.S. states and the District of Columbia have agreed to participate. The reform seeks to anchor primary and secondary education across these states in one set of demanding, internationally benchmarked standards. Thereby, all students will be prepared for further…

  20. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
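    A minimal sketch of the metrics-via-SPARQL idea described above, using rdflib on a toy RDF graph; the ex:goldMutation / ex:predictedMutation vocabulary is invented for illustration and is not the project's actual ontology.

      # Sketch: precision/recall computed by SPARQL over RDF annotations, in the
      # spirit of the infrastructure above. The vocabulary here is hypothetical.
      from rdflib import Graph

      g = Graph()
      g.parse(data="""
          @prefix ex: <http://example.org/> .
          ex:doc1 ex:goldMutation "V600E", "R175H" .
          ex:doc1 ex:predictedMutation "V600E", "G12D" .
      """, format="turtle")

      def count(query):
          # each query returns one row holding a single COUNT value
          return next(iter(g.query(query)))[0].toPython()

      PRE = "PREFIX ex: <http://example.org/> "
      tp = count(PRE + "SELECT (COUNT(*) AS ?n) WHERE { ?d ex:goldMutation ?m ; ex:predictedMutation ?m . }")
      predicted = count(PRE + "SELECT (COUNT(*) AS ?n) WHERE { ?d ex:predictedMutation ?m . }")
      gold = count(PRE + "SELECT (COUNT(*) AS ?n) WHERE { ?d ex:goldMutation ?m . }")
      print("precision =", tp / predicted, "recall =", tp / gold)   # 0.5 and 0.5 on this toy graph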

  1. Benchmarking infrastructure for mutation text mining.

    PubMed

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.

  2. Update and evaluation of decay data for spent nuclear fuel analyses

    NASA Astrophysics Data System (ADS)

    Simeonov, Teodosi; Wemple, Charles

    2017-09-01

    Studsvik's approach to spent nuclear fuel analyses combines isotopic concentrations and multi-group cross-sections, calculated by the CASMO5 or HELIOS2 lattice transport codes, with core irradiation history data from the SIMULATE5 reactor core simulator and tabulated isotopic decay data. These data sources are used and processed by the code SNF to predict spent nuclear fuel characteristics. Recent advances in the generation procedure for the SNF decay data are presented. The SNF decay data includes basic data, such as decay constants, atomic masses and nuclide transmutation chains; radiation emission spectra for photons from radioactive decay, alpha-n reactions, bremsstrahlung, and spontaneous fission, electrons and alpha particles from radioactive decay, and neutrons from radioactive decay, spontaneous fission, and alpha-n reactions; decay heat production; and electro-atomic interaction data for bremsstrahlung production. These data are compiled from fundamental (ENDF, ENSDF, TENDL) and processed (ESTAR) sources for nearly 3700 nuclides. A rigorous evaluation procedure of internal consistency checks and comparisons to measurements and benchmarks, and code-to-code verifications is performed at the individual isotope level and using integral characteristics on a fuel assembly level (e.g., decay heat, radioactivity, neutron and gamma sources). Significant challenges are presented by the scope and complexity of the data processing, a dearth of relevant detailed measurements, and reliance on theoretical models for some data.

  3. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Javier Ortensi; Sonat Sen

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results of all other international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.

  4. Initial Performance Results on IBM POWER6

    NASA Technical Reports Server (NTRS)

    Saini, Subbash; Talcott, Dale; Jespersen, Dennis; Djomehri, Jahed; Jin, Haoqiang; Mehrotra, Piysuh

    2008-01-01

    The POWER5+ processor has a faster memory bus than that of the previous generation POWER5 processor (533 MHz vs. 400 MHz), but the measured per-core memory bandwidth of the latter is better than that of the former (5.7 GB/s vs. 4.3 GB/s). The reason for this is that in the POWER5+, the two cores on the chip share the L2 cache, L3 cache and memory bus. The memory controller is also on the chip and is shared by the two cores. This serializes the path to memory. For consistently good performance on a wide range of applications, the performance of the processor, the memory subsystem, and the interconnects (both latency and bandwidth) should be balanced. Recognizing this, IBM has designed the POWER6 processor so as to avoid the bottlenecks due to the L2 cache, memory controller and buffer chips of the POWER5+. Unlike the POWER5+, each core in the POWER6 has its own L2 cache (4 MB - double that of the POWER5+), memory controller and buffer chips. Each core in the POWER6 runs at 4.7 GHz instead of 1.9 GHz in the POWER5+. In this paper, we evaluate the performance of a dual-core POWER6-based IBM p6-570 system, and we compare its performance with that of a dual-core POWER5+-based IBM p575+ system. In this evaluation, we have used the High-Performance Computing Challenge (HPCC) benchmarks, NAS Parallel Benchmarks (NPB), and four real-world applications--three from computational fluid dynamics and one from climate modeling.

  5. Sedimentary and geochemical signature of the 2016 Kaikōura Tsunami at Little Pigeon Bay: A depositional benchmark for the Banks Peninsula region, New Zealand

    NASA Astrophysics Data System (ADS)

    Williams, Shaun; Zhang, Tianran; Chagué, Catherine; Williams, James; Goff, James; Lane, Emily M.; Bind, Jochen; Qasim, Ilyas; Thomas, Kristie-Lee; Mueller, Christof; Hampton, Sam; Borella, Josh

    2018-07-01

    The 14 November 2016 Kaikōura Tsunami inundated Little Pigeon Bay in Banks Peninsula, New Zealand, and left a distinct sedimentary deposit on the ground and within the cottage near the shore. Sedimentary (grain size) and geochemical (electrical conductivity and X-Ray Fluorescence) analyses on samples collected over successive field campaigns are used to characterize the deposits. Sediment distribution observed in the cottage in combination with flow direction indicators suggests that sediment and debris laid down within the building were predominantly the result of a single wave that had been channeled up the stream bed rather than from offshore. Salinity data indicated that the maximum tsunami-wetted and/or seawater-sprayed area extended 12.5 m farther inland than the maximum inundation distance inferred from the debris line observed a few days after the event. In addition, the salinity signature was short-lived. An overall inland waning of tsunami energy was indicated by the mean grain size and portable X-Ray Fluorescence elemental results. ITRAX data collected from three cores along an inland transect indicated a distinct elevated elemental signature at the surfaces of the cores, with an associated increase in magnetic susceptibility. Comparable signatures were also identified within subsurface stratigraphic sequences, and likely represent older tsunamis known to have inundated this bay as well as adjacent bays in Banks Peninsula. The sedimentary and geochemical signatures of the 2016 Kaikōura Tsunami at Little Pigeon Bay provide a modern benchmark that can be used to identify older tsunami deposits in the Banks Peninsula region.

  6. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10³ cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
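    The weak-scaling analysis mentioned above reduces to comparing wall-clock times at fixed work per core; the sketch below shows that bookkeeping with invented timings rather than actual MFiX measurements.

      # Sketch: weak-scaling efficiency as typically reported for runs like those above.
      # Work per core is held fixed, so ideal behaviour is constant wall-clock time.
      # The timings below are invented placeholders, not MFiX data.
      timings = {1: 100.0, 8: 104.0, 64: 113.0, 512: 131.0, 1024: 162.0}   # cores -> seconds

      t_ref = timings[1]
      for cores, t in sorted(timings.items()):
          efficiency = t_ref / t        # weak-scaling efficiency relative to one core
          print(f"{cores:5d} cores: {t:7.1f} s   efficiency = {efficiency:.2f}")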

  7. DE-NE0008277_PROTEUS final technical report 2018

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enqvist, Andreas

    This project details the re-evaluation of gas-cooled fast reactor (GCFR) core design experiments performed in the 1970s at the PROTEUS reactor and the creation of a series of International Reactor Physics Experiment Evaluation Project (IRPhEP) benchmarks. Currently there are no gas-cooled fast reactor experiments available in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). These experiments are excellent candidates for reanalysis and development of multiple benchmarks because they provide high-quality integral nuclear data relevant to the validation and refinement of thorium, neptunium, uranium, plutonium, iron, and graphite cross sections. It would be cost-prohibitive to reproduce such a comprehensive suite of experimental data to support any future GCFR endeavors.

  8. Comparing Hospital Processes and Outcomes in California Medicare Beneficiaries: Simulation Prompts Reconsideration

    PubMed Central

    Escobar, Gabriel J; Baker, Jennifer M; Turk, Benjamin J; Draper, David; Liu, Vincent; Kipnis, Patricia

    2017-01-01

    Introduction This article is not a traditional research report. It describes how conducting a specific set of benchmarking analyses led us to broader reflections on hospital benchmarking. We reexamined an issue that has received far less attention from researchers than in the past: How variations in the hospital admission threshold might affect hospital rankings. Considering this threshold made us reconsider what benchmarking is and what future benchmarking studies might be like. Although we recognize that some of our assertions are speculative, they are based on our reading of the literature and previous and ongoing data analyses being conducted in our research unit. We describe the benchmarking analyses that led to these reflections. Objectives The Centers for Medicare and Medicaid Services’ Hospital Compare Web site includes data on fee-for-service Medicare beneficiaries but does not control for severity of illness, which requires physiologic data now available in most electronic medical records. To address this limitation, we compared hospital processes and outcomes among Kaiser Permanente Northern California’s (KPNC) Medicare Advantage beneficiaries and non-KPNC California Medicare beneficiaries between 2009 and 2010. Methods We assigned a simulated severity of illness measure to each record and explored the effect of having the additional information on outcomes. Results We found that if the admission severity of illness in non-KPNC hospitals increased, KPNC hospitals’ mortality performance would appear worse; conversely, if admission severity at non-KPNC hospitals decreased, KPNC hospitals’ performance would appear better. Conclusion Future hospital benchmarking should consider the impact of variation in admission thresholds. PMID:29035176

  9. Convergence studies of deterministic methods for LWR explicit reflector methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canepa, S.; Hursin, M.; Ferroukhi, H.

    2013-07-01

    The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are a priori produced with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially constituting one of the main sources of error for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)

  10. Theoretical Background and Prognostic Modeling for Benchmarking SHM Sensors for Composite Structures

    DTIC Science & Technology

    2010-10-01

    minimum flaw size can be detected by the existing SHM-based monitoring methods. Sandwich panels with foam, WebCore and honeycomb structures were considered for use in this study. Eigenmode frequency...Whether it be hat stiffened, corrugated sandwich, honeycomb sandwich, or foam filled sandwich, all composite structures have one basic handicap in...

  11. Benchmark tests of JENDL-3.2 for thermal and fast reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takano, Hideki; Akie, Hiroshi; Kikuchi, Yasuyuki

    1994-12-31

    Benchmark calculations for a variety of thermal and fast reactors have been performed by using the newly evaluated JENDL-3 Version-2 (JENDL-3.2) file. In the thermal reactor calculations for the uranium and plutonium fueled cores of TRX and TCA, the keff and lattice parameters were well predicted. The fast reactor calculations for ZPPR-9 and FCA assemblies showed that the keff, the reactivity worths of Doppler, sodium void and control rod, and the reaction rate distribution were in very good agreement with the experiments.

  12. Effectiveness of Social Marketing Interventions to Promote Physical Activity Among Adults: A Systematic Review.

    PubMed

    Xia, Yuan; Deshpande, Sameer; Bonates, Tiberius

    2016-11-01

    Social marketing managers promote desired behaviors to an audience by making them tangible in the form of environmental opportunities to enhance benefits and reduce barriers. This study proposed "benchmarks," modified from those found in the past literature, that would match important concepts of the social marketing framework and the inclusion of which would ensure behavior change effectiveness. In addition, we analyzed behavior change interventions on a "social marketing continuum" to assess whether the number of benchmarks and the role of specific benchmarks influence the effectiveness of physical activity promotion efforts. A systematic review of social marketing interventions available in academic studies published between 1997 and 2013 revealed 173 conditions in 92 interventions. Findings based on χ², Mallows' Cp, and Logical Analysis of Data tests revealed that the presence of more benchmarks in interventions increased the likelihood of success in promoting physical activity. The presence of more than 3 benchmarks improved the success of the interventions; specifically, all interventions were successful when more than 7.5 benchmarks were present. Further, primary formative research, core product, actual product, augmented product, promotion, and behavioral competition all had a significant influence on the effectiveness of interventions. Social marketing is an effective approach in promoting physical activity among adults when a substantial number of benchmarks are used and when managers understand the audience, make the desired behavior tangible, and promote the desired behavior persuasively.
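    The association reported above between the number of benchmarks present and intervention success is the kind of relationship a χ² test on a contingency table probes; the sketch below runs such a test on invented counts, not the review's data.

      # Sketch: chi-squared test relating how many social marketing benchmarks an
      # intervention used to whether it was judged successful. Counts are invented.
      from scipy.stats import chi2_contingency

      #         successful  unsuccessful
      table = [[12, 20],    # three or fewer benchmarks present
               [45, 15]]    # more than three benchmarks present

      chi2, p_value, dof, expected = chi2_contingency(table)
      print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")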

  13. HTR-PROTEUS PEBBLE BED EXPERIMENTAL PROGRAM CORE 4: RANDOM PACKING WITH A 1:1 MODERATOR-TO-FUEL PEBBLE RATIO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; Leland M. Montierth

    2013-03-01

    In its deployment as a pebble bed reactor (PBR) critical facility from 1992 to 1996, the PROTEUS facility was designated as HTR-PROTEUS. This experimental program was performed as part of an International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) on the Validation of Safety Related Physics Calculations for Low Enriched HTGRs. Within this project, critical experiments were conducted for graphite moderated LEU systems to determine core reactivity, flux and power profiles, reaction-rate ratios, the worth of control rods, both in-core and reflector based, the worth of burnable poisons, kinetic parameters, and the effects of moisture ingress on these parameters. One benchmark experiment was evaluated in this report: Core 4. Core 4 represents the only configuration with random pebble packing in the HTR-PROTEUS series of experiments, and has a moderator-to-fuel pebble ratio of 1:1. Three random configurations were performed. The initial configuration, Core 4.1, was rejected by the experimenters because the method for pebble loading, separate delivery tubes for the moderator and fuel pebbles, may not have been completely random. Cores 4.2 and 4.3 were loaded using a single delivery tube, eliminating the possibility for systematic ordering effects. The second and third cores differed slightly in the quantity of pebbles loaded (40 each of moderator and fuel pebbles), stacked height of the pebbles in the core cavity (0.02 m), withdrawn distance of the stainless steel control rods (20 mm), and withdrawn distance of the autorod (30 mm). The 34 coolant channels in the upper axial reflector and the 33 coolant channels in the lower axial reflector were open. Additionally, the axial graphite fillers used in all other HTR-PROTEUS configurations to create a 12-sided core cavity were not used in the randomly packed cores. Instead, graphite fillers were placed on the cavity floor, creating a funnel-like base, to discourage ordering effects during pebble loading. Core 4 was determined to be an acceptable benchmark experiment.

  14. HTR-proteus pebble bed experimental program core 4: random packing with a 1:1 moderator-to-fuel pebble ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Montierth, Leland M.; Sterbentz, James W.

    2014-03-01

    In its deployment as a pebble bed reactor (PBR) critical facility from 1992 to 1996, the PROTEUS facility was designated as HTR-PROTEUS. This experimental program was performed as part of an International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) on the Validation of Safety Related Physics Calculations for Low Enriched HTGRs. Within this project, critical experiments were conducted for graphite moderated LEU systems to determine core reactivity, flux and power profiles, reaction-rate ratios, the worth of control rods, both in-core and reflector based, the worth of burnable poisons, kinetic parameters, and the effects of moisture ingress on these parameters. One benchmark experiment was evaluated in this report: Core 4. Core 4 represents the only configuration with random pebble packing in the HTR-PROTEUS series of experiments, and has a moderator-to-fuel pebble ratio of 1:1. Three random configurations were performed. The initial configuration, Core 4.1, was rejected by the experimenters because the method for pebble loading, separate delivery tubes for the moderator and fuel pebbles, may not have been completely random. Cores 4.2 and 4.3 were loaded using a single delivery tube, eliminating the possibility for systematic ordering effects. The second and third cores differed slightly in the quantity of pebbles loaded (40 each of moderator and fuel pebbles), stacked height of the pebbles in the core cavity (0.02 m), withdrawn distance of the stainless steel control rods (20 mm), and withdrawn distance of the autorod (30 mm). The 34 coolant channels in the upper axial reflector and the 33 coolant channels in the lower axial reflector were open. Additionally, the axial graphite fillers used in all other HTR-PROTEUS configurations to create a 12-sided core cavity were not used in the randomly packed cores. Instead, graphite fillers were placed on the cavity floor, creating a funnel-like base, to discourage ordering effects during pebble loading. Core 4 was determined to be an acceptable benchmark experiment.

  15. Next Generation School Districts: What Capacities Do Districts Need to Create and Sustain Schools That Are Ready to Deliver on Common Core?

    ERIC Educational Resources Information Center

    Lake, Robin; Hill, Paul T.; Maas, Tricia

    2015-01-01

    Every sector of the U.S. economy is working on ways to deliver services in a more customized manner. If all goes well, education is headed in the same direction. Personalized learning and globally benchmarked academic standards (a.k.a. Common Core) are the focus of most major school districts and charter school networks. Educators and parents know…

  16. Mean velocity and turbulence measurements in a 90 deg curved duct with thin inlet boundary layer

    NASA Technical Reports Server (NTRS)

    Crawford, R. A.; Peters, C. E.; Steinhoff, J.; Hornkohl, J. O.; Nourinejad, J.; Ramachandran, K.

    1985-01-01

    The experimental database established by this investigation of the flow in a large rectangular turning duct is of benchmark quality. The experimental Reynolds numbers, Dean numbers and boundary layer characteristics are significantly different from previous benchmark curved-duct experimental parameters. This investigation extends the experimental database to higher Reynolds number and thinner entrance boundary layers. The 5% to 10% thick boundary layers, based on duct half-width, result in a large region of near-potential flow in the duct core surrounded by developing boundary layers with large crossflows. The turbulent entrance boundary layer case at Re_d = 328,000 provides an incompressible flowfield which approaches real turbine blade cascade characteristics. The results of this investigation provide a challenging benchmark database for computational fluid dynamics code development.
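    The Reynolds and Dean numbers used above to characterize the flow follow standard definitions; the sketch below evaluates them for placeholder geometry and flow values, not the facility's actual dimensions.

      # Sketch: Reynolds and Dean numbers for a curved rectangular duct.
      # All values are placeholders, not the experiment's reported conditions.
      import math

      rho = 1.2       # air density, kg/m^3
      mu = 1.8e-5     # dynamic viscosity, Pa*s
      U = 10.0        # bulk velocity, m/s
      d = 0.5         # duct half-width / hydraulic length scale, m (placeholder)
      R = 1.5         # radius of curvature of the 90-degree bend, m (placeholder)

      Re = rho * U * d / mu
      De = Re * math.sqrt(d / (2.0 * R))    # a common Dean number definition
      print(f"Re = {Re:.3e}, De = {De:.3e}")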

  17. Simulation of Benchmark Cases with the Terminal Area Simulation System (TASS)

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.; Proctor, Fred H.

    2011-01-01

    The hydrodynamic core of the Terminal Area Simulation System (TASS) is evaluated against different benchmark cases. In the absence of closed form solutions for the equations governing atmospheric flows, the models are usually evaluated against idealized test cases. Over the years, various authors have suggested a suite of these idealized cases which have become standards for testing and evaluating the dynamics and thermodynamics of atmospheric flow models. In this paper, simulations of three such cases are described. In addition, the TASS model is evaluated against a test case that uses an exact solution of the Navier-Stokes equations. The TASS results are compared against previously reported simulations of these benchmark cases in the literature. It is demonstrated that the TASS model is highly accurate, stable and robust.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, J; Dossa, D; Gokhale, M

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared software-only performance to GPU-accelerated performance. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order of magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid-state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.

  19. Time and frequency structure of causal correlation networks in the China bond market

    NASA Astrophysics Data System (ADS)

    Wang, Zhongxing; Yan, Yan; Chen, Xiaosong

    2017-07-01

    There are more than eight hundred interest rates published in the China bond market every day. Identifying the benchmark interest rates that have broad influences on most other interest rates is a major concern for economists. In this paper, a multi-variable Granger causality test is developed and applied to construct a directed network of interest rates, whose important nodes, regarded as key interest rates, are evaluated with CheiRank scores. The results indicate that repo rates are the benchmark of short-term rates, the central bank bill rates are in the core position of mid-term interest rates network, and treasury bond rates lead the long-term bond rates. The evolution of benchmark interest rates from 2008 to 2014 is also studied, and it is found that SHIBOR has generally become the benchmark interest rate in China. In the frequency domain we identify the properties of information flows between interest rates, and the result confirms the existence of market segmentation in the China bond market.
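    A simplified sketch of the pipeline described above: pairwise (rather than multi-variable) Granger tests build a directed graph on synthetic series, and CheiRank-style scores are obtained as PageRank on the reversed graph; none of the data, lags, or thresholds come from the paper.

      # Sketch: pairwise Granger-causality network plus CheiRank-style scoring.
      # Synthetic series are used; y is built to lag x so one true edge exists.
      import numpy as np
      import networkx as nx
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(0)
      T = 300
      x = rng.standard_normal(T)
      y = np.roll(x, 1) + 0.5 * rng.standard_normal(T)   # y follows x with a one-step lag
      z = rng.standard_normal(T)                         # unrelated series
      series = np.column_stack([x, y, z])
      n = series.shape[1]

      G = nx.DiGraph()
      for i in range(n):
          for j in range(n):
              if i == j:
                  continue
              data = np.column_stack([series[:, j], series[:, i]])   # does i cause j?
              p = grangercausalitytests(data, maxlag=2, verbose=False)[2][0]["ssr_ftest"][1]
              if p < 0.05:
                  G.add_edge(i, j)     # edge i -> j: series i Granger-causes series j

      chei = nx.pagerank(G.reverse()) if G.number_of_edges() else {}
      print("CheiRank-style scores:", chei)   # a high score marks an influential ("benchmark") series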

  20. Graph 500 on OpenSHMEM: Using a Practical Survey of Past Work to Motivate Novel Algorithmic Developments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grossman, Max; Pritchard Jr., Howard Porter; Budimlic, Zoran

    2016-12-22

    Graph500 [14] is an effort to offer a standardized benchmark across large-scale distributed platforms which captures the behavior of common communication-bound graph algorithms. Graph500 differs from other large-scale benchmarking efforts (such as HPL [6] or HPGMG [7]) primarily in the irregularity of its computation and data access patterns. The core computational kernel of Graph500 is a breadth-first search (BFS) implemented on an undirected graph. The output of Graph500 is a spanning tree of the input graph, usually represented by a predecessor mapping for every node in the graph. The Graph500 benchmark defines several pre-defined input sizes for implementers to test against. This report summarizes investigation into implementing the Graph500 benchmark on OpenSHMEM, and focuses on first building a strong and practical understanding of the strengths and limitations of past work before proposing and developing novel extensions.
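    The core Graph500 kernel described above, reduced to a serial, in-memory sketch: a BFS that returns a predecessor map (spanning tree) for a toy undirected graph; this is not the distributed OpenSHMEM implementation the report investigates.

      # Sketch: BFS producing a predecessor map, the output Graph500 validates.
      from collections import deque

      def bfs_predecessors(adj, root):
          parent = {root: root}          # convention: the root is its own parent
          queue = deque([root])
          while queue:
              u = queue.popleft()
              for v in adj[u]:
                  if v not in parent:    # first visit fixes v's predecessor
                      parent[v] = u
                      queue.append(v)
          return parent

      adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}   # toy undirected graph
      print(bfs_predecessors(adj, 0))    # e.g. {0: 0, 1: 0, 2: 0, 3: 1, 4: 3}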

  1. The change of radial power factor distribution due to RCCA insertion at the first cycle core of AP1000

    NASA Astrophysics Data System (ADS)

    Susilo, J.; Suparlina, L.; Deswandri; Sunaryo, G. R.

    2018-02-01

    The use of computer programs for analyzing PWR-type core neutronic design parameters has been addressed in several previous studies, including validation of computer codes against neutronic parameter values obtained from measurements and benchmark calculations. In this study, validation and analysis of the AP1000 first-cycle core radial power peaking factor were performed using the CITATION module of the SRAC2006 computer code. The code had also been validated, with good results, against the criticality values of the VERA benchmark core. The AP1000 core power distribution calculation was done in two-dimensional X-Y geometry through ¼-section modeling. The purpose of this research is to determine the accuracy of the SRAC2006 code and the safety performance of the AP1000 core during its first operating cycle. The core calculations were carried out for several conditions: without Rod Cluster Control Assembly (RCCA) insertion, with insertion of a single RCCA bank (AO, M1, M2, MA, MB, MC, MD), and with insertion of multiple RCCA banks (MA + MB, MA + MB + MC, MA + MB + MC + MD, and MA + MB + MC + MD + M1). The maximum fuel rod power factor within a fuel assembly was assumed to be approximately 1.406. The results show that the 2-dimensional CITATION module of the SRAC2006 code is accurate for AP1000 power distribution calculations without RCCA and with MA + MB RCCA insertion. The power peaking factors for the first operating cycle of the AP1000 core without RCCA, as well as with single and multiple RCCA insertion, remain below the safety limit value (less than about 1.798). In terms of the thermal power generated by the fuel assemblies, the AP1000 core in its first operating cycle can therefore be considered safe.

  2. Benchmarking of calculation schemes in APOLLO2 and COBAYA3 for VVER lattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheleva, N.; Ivanov, P.; Todorova, G.

    This paper presents solutions of the NURISP VVER lattice benchmark using APOLLO2, TRIPOLI4 and COBAYA3 pin-by-pin. The main objective is to validate MOC based calculation schemes for pin-by-pin cross-section generation with APOLLO2 against TRIPOLI4 reference results. A specific objective is to test the APOLLO2 generated cross-sections and interface discontinuity factors in COBAYA3 pin-by-pin calculations with unstructured mesh. The VVER-1000 core consists of large hexagonal assemblies with 2 mm inter-assembly water gaps which require the use of unstructured meshes in the pin-by-pin core simulators. The considered 2D benchmark problems include 19-pin clusters, fuel assemblies and 7-assembly clusters. APOLLO2 calculation schemes with the step characteristic method (MOC) and the higher-order Linear Surface MOC have been tested. The comparison of APOLLO2 vs. TRIPOLI4 results shows a very close agreement. The 3D lattice solver in COBAYA3 uses transport corrected multi-group diffusion approximation with interface discontinuity factors of Generalized Equivalence Theory (GET) or Black Box Homogenization (BBH) type. The COBAYA3 pin-by-pin results in 2, 4 and 8 energy groups are close to the reference solutions when using side-dependent interface discontinuity factors. (authors)

  3. Benchmarking road safety performance: Identifying a meaningful reference (best-in-class).

    PubMed

    Chen, Faan; Wu, Jiaorong; Chen, Xiaohong; Wang, Jianjun; Wang, Di

    2016-01-01

    For road safety improvement, comparing and benchmarking performance are widely advocated as the emerging and preferred approaches. However, there is currently no universally agreed upon approach for the process of road safety benchmarking, and performing the practice successfully is by no means easy. This is especially true for the two core activities: (1) developing a set of road safety performance indicators (SPIs) and combining them into a composite index; and (2) identifying a meaningful reference (best-in-class), one which has already achieved outstanding road safety practices. To this end, a scientific technique that can combine the multi-dimensional safety performance indicators (SPIs) into an overall index, and subsequently can identify the 'best-in-class', is urgently required. In this paper, the Entropy-embedded RSR (Rank-sum ratio), an innovative, scientific and systematic methodology, is investigated with the aim of conducting the above two core tasks in an integrative and concise procedure, more specifically in a 'one-stop' way. Using a combination of results from other methods (e.g. the SUNflower approach) and other measures (e.g. the Human Development Index) as a relevant reference, a given set of European countries is robustly ranked and grouped into several classes based on the composite Road Safety Index. Within each class the 'best-in-class' is then identified. By benchmarking road safety performance, the results serve to promote best practice, encourage the adoption of successful road safety strategies and measures and, more importantly, inspire the kind of political leadership needed to create a road transport system that maximizes safety. Copyright © 2015 Elsevier Ltd. All rights reserved.
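    A minimal sketch of the Entropy-embedded RSR idea described above, assuming a small invented matrix of safety performance indicators; the study's actual indicator set, normalization and grouping step may differ.

      # Sketch: entropy weights followed by a weighted rank-sum ratio (RSR).
      # Rows are countries, columns are SPIs (higher = safer); values are invented.
      import numpy as np

      X = np.array([[0.8, 0.6, 0.9],
                    [0.5, 0.7, 0.4],
                    [0.9, 0.8, 0.7],
                    [0.3, 0.4, 0.5]], dtype=float)
      m, k = X.shape

      P = X / X.sum(axis=0)                            # column-wise proportions
      E = -(P * np.log(P)).sum(axis=0) / np.log(m)     # entropy of each indicator
      w = (1 - E) / (1 - E).sum()                      # entropy weights

      ranks = X.argsort(axis=0).argsort(axis=0) + 1    # rank 1 = worst, m = best
      rsr = (ranks * w).sum(axis=1) / m                # weighted rank-sum ratio
      print("RSR:", rsr.round(3), "best-in-class row:", int(rsr.argmax()))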

  4. Monte Carlo analyses of TRX slightly enriched uranium-H2O critical experiments with ENDF/B-IV and related data sets (AWBA Development Program)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardy, J. Jr.

    1977-12-01

    Four H2O-moderated, slightly-enriched-uranium critical experiments were analyzed by Monte Carlo methods with ENDF/B-IV data. These were simple metal-rod lattices comprising Cross Section Evaluation Working Group thermal reactor benchmarks TRX-1 through TRX-4. Generally good agreement with experiment was obtained for calculated integral parameters: the epithermal/thermal ratio of U238 capture (ρ²⁸) and of U235 fission (δ²⁵), the ratio of U238 capture to U235 fission (CR*), and the ratio of U238 fission to U235 fission (δ²⁸). Full-core Monte Carlo calculations for two lattices showed good agreement with cell Monte Carlo-plus-multigroup P_l leakage corrections. Newly measured parameters for the low energy resonances of U238 significantly improved ρ²⁸. In comparison with other CSEWG analyses, the strong correlation between keff and ρ²⁸ suggests that U238 resonance capture is the major problem encountered in analyzing these lattices.

  5. Properties of 5052 Aluminum For Use as Honeycomb Core in Manned Spaceflight

    NASA Technical Reports Server (NTRS)

    Lerch, Bradley A.

    2018-01-01

    This work explains that the properties of the Al 5052 material commonly used for honeycomb cores in sandwich panels are highly dependent on the tempering condition. It has not been common to specify the temper when ordering honeycomb material, nor is it common for the supplier to state what the temper is. For aerospace uses, a temper of H38 or H39 is probably recommended. This temper should be stated in the bill of material and should be verified upon receipt of the core. To this end, some properties provided herein can serve as benchmark values.

  6. Investigation of Abnormal Heat Transfer and Flow in a VHTR Reactor Core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawaji, Masahiro; Valentin, Francisco I.; Artoun, Narbeh

    2015-12-21

    The main objective of this project was to identify and characterize the conditions under which abnormal heat transfer phenomena would occur in a Very High Temperature Reactor (VHTR) with a prismatic core. High pressure/high temperature experiments have been conducted to obtain data that could be used for validation of VHTR design and safety analysis codes. The focus of these experiments was on the generation of benchmark data for design and off-design heat transfer for forced, mixed and natural circulation in a VHTR core. In particular, a flow laminarization phenomenon was intensely investigated since it could give rise to hot spots in the VHTR core.

  7. Proposed biopsy performance benchmarks for MRI based on an audit of a large academic center.

    PubMed

    Sedora Román, Neda I; Mehta, Tejas S; Sharpe, Richard E; Slanetz, Priscilla J; Venkataraman, Shambhavi; Fein-Zachary, Valerie; Dialani, Vandana

    2018-05-01

    Performance benchmarks exist for mammography (MG); however, performance benchmarks for magnetic resonance imaging (MRI) are not yet fully developed. The purpose of our study was to perform an MRI audit based on established MG and screening MRI benchmarks and to review whether these benchmarks can be applied to an MRI practice. An IRB-approved retrospective review of breast MRIs was performed at our center from 1/1/2011 through 12/31/13. For patients with a biopsy recommendation, core biopsy and surgical pathology results were reviewed. The data were used to derive mean performance parameter values, including abnormal interpretation rate (AIR), positive predictive value (PPV), cancer detection rate (CDR), and percentages of minimal cancers and axillary node-negative cancers, and were compared with MG and screening MRI benchmarks. MRIs were also divided by screening and diagnostic indications to assess for differences in performance benchmarks between these two groups. Of the 2455 MRIs performed over 3 years, 1563 were performed for screening indications and 892 for diagnostic indications. With the exception of PPV2 for screening breast MRIs from 2011 to 2013, PPVs were met for our screening and diagnostic populations when compared to the MRI screening benchmarks established by the Breast Imaging Reporting and Data System (BI-RADS) 5 Atlas®. AIR and CDR were lower for screening indications than for diagnostic indications. New MRI screening benchmarks can be used for screening MRI audits, while the American College of Radiology (ACR) desirable goals for diagnostic MG can be used for diagnostic MRI audits. Our study corroborates established findings regarding differences in AIR and CDR between screening and diagnostic indications. © 2017 Wiley Periodicals, Inc.
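    The audit metrics discussed above follow standard definitions; the sketch below computes AIR, PPV2 and CDR from invented counts, not the study's data.

      # Sketch: standard breast-imaging audit metrics from invented counts.
      exams = 1563                # screening MRIs read (invented)
      abnormal = 190              # abnormal interpretations (invented)
      biopsy_recommended = 120    # exams with a biopsy recommendation (invented)
      cancers = 28                # cancers ultimately diagnosed (invented)

      air = abnormal / exams                   # abnormal interpretation rate
      ppv2 = cancers / biopsy_recommended      # PPV among biopsy recommendations
      cdr = 1000.0 * cancers / exams           # cancer detection rate per 1000 exams
      print(f"AIR = {air:.1%}, PPV2 = {ppv2:.1%}, CDR = {cdr:.1f} per 1000")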

  8. Analysing the performance of personal computers based on Intel microprocessors for sequence aligning bioinformatics applications.

    PubMed

    Nair, Pradeep S; John, Eugene B

    2007-01-01

    Aligning specific sequences against a very large number of other sequences is a central aspect of bioinformatics. With the widespread availability of personal computers in biology laboratories, sequence alignment is now often performed locally. This makes it necessary to analyse the performance of personal computers for sequence aligning bioinformatics benchmarks. In this paper, we analyse the performance of a personal computer for the popular BLAST and FASTA sequence alignment suites. Results indicate that these benchmarks have a large number of recurring operations and use memory operations extensively. It seems that the performance can be improved with a bigger L1-cache.

  9. Under Construction: Benchmark Assessments and Common Core Math Implementation in Grades K-8. Formative Evaluation Cycle Report for the Math in Common Initiative, Volume 1

    ERIC Educational Resources Information Center

    Flaherty, John, Jr.; Sobolew-Shubin, Alexandria; Heredia, Alberto; Chen-Gaddini, Min; Klarin, Becca; Finkelstein, Neal D.

    2014-01-01

    Math in Common® (MiC) is a five-year initiative that supports a formal network of 10 California school districts as they implement the Common Core State Standards in mathematics (CCSS-M) across grades K-8. As the MiC initiative moves into its second year, one of the central activities that each of the districts is undergoing to support CCSS…

  10. Benchmarking to improve the quality of cystic fibrosis care.

    PubMed

    Schechter, Michael S

    2012-11-01

    Benchmarking involves the ascertainment of healthcare programs with most favorable outcomes as a means to identify and spread effective strategies for delivery of care. The recent interest in the development of patient registries for patients with cystic fibrosis (CF) has been fueled in part by an interest in using them to facilitate benchmarking. This review summarizes reports of how benchmarking has been operationalized in attempts to improve CF care. Although certain goals of benchmarking can be accomplished with an exclusive focus on registry data analysis, benchmarking programs in Germany and the United States have supplemented these data analyses with exploratory interactions and discussions to better understand successful approaches to care and encourage their spread throughout the care network. Benchmarking allows the discovery and facilitates the spread of effective approaches to care. It provides a pragmatic alternative to traditional research methods such as randomized controlled trials, providing insights into methods that optimize delivery of care and allowing judgments about the relative effectiveness of different therapeutic approaches.

  11. Benchmark Comparison of Dual- and Quad-Core Processor Linux Clusters with Two Global Climate Modeling Workloads

    NASA Technical Reports Server (NTRS)

    McGalliard, James

    2008-01-01

    This viewgraph presentation details the science and systems environments that NASA's High End Computing program serves. Included is a discussion of the workload involved in the processing for global climate modeling. The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models integrated using the Earth System Modeling Framework (ESMF). The GEOS-5 system was used for the benchmark tests, and the results of the tests are shown and discussed. Tests were also run for the Cubed Sphere system, and the results of these tests are also shown.

  12. Seismic analysis of the Mirror Fusion Test Facility: soil structure interaction analyses of the Axicell vacuum vessel. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maslenikov, O.R.; Mraz, M.J.; Johnson, J.J.

    1986-03-01

    This report documents the seismic analyses performed by SMA for the MFTF-B Axicell vacuum vessel. In the course of this study we performed response spectrum analyses, CLASSI fixed-base analyses, and SSI analyses that included interaction effects between the vessel and vault. The response spectrum analysis served to benchmark certain modeling differences between the LLNL and SMA versions of the vessel model. The fixed-base analysis benchmarked the differences between analysis techniques. The SSI analyses provided our best estimate of vessel response to the postulated seismic excitation for the MFTF-B facility, and included consideration of uncertainties in soil properties by calculating response for a range of soil shear moduli. Our results are presented in this report as tables of comparisons of specific member forces from our analyses and the analyses performed by LLNL. Also presented are tables of maximum accelerations and relative displacements and plots of response spectra at various selected locations.

  13. Qualification of CASMO5 / SIMULATE-3K against the SPERT-III E-core cold start-up experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grandi, G.; Moberg, L.

    SIMULATE-3K is a three-dimensional kinetic code applicable to LWR Reactivity Initiated Accidents. S3K has been used to calculate several internationally recognized benchmarks. However, the feedback models in the benchmark exercises are different from the feedback models that SIMULATE-3K uses for LWR reactors. For this reason, it is worth comparing the SIMULATE-3K capabilities for Reactivity Initiated Accidents against kinetic experiments. The Special Power Excursion Reactor Test III was a pressurized-water nuclear research facility constructed to analyze reactor kinetic behavior under initial conditions similar to those of commercial LWRs. The SPERT III E-core resembles a PWR in terms of fuel type, moderator, coolant flow rate, and system pressure. The initial test conditions (power, core flow, system pressure, core inlet temperature) are representative of cold start-up, hot start-up, hot standby, and hot full power. The qualification of S3K against the SPERT III E-core measurements is an ongoing work at Studsvik. In this paper, the results for the 30 cold start-up tests are presented. The results show good agreement with the experiments for the reactivity initiated accident main parameters: peak power, energy release and compensated reactivity. Predicted and measured peak powers differ at most by 13%. Measured and predicted reactivity compensations at the time of the peak power differ by less than 0.01 $. Predicted and measured energy release differ at most by 13%. All differences are within the experimental uncertainty. (authors)

  14. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas

    2009-01-01

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to 30k cores, producing 30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.

  15. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer.

    PubMed

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas; Smith, Jeremy C

    2009-10-13

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to ∼30k cores, producing ∼30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.
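
    The reaction-field (RF) treatment referred to above replaces the long-range part of the Coulomb sum with a continuum correction inside a finite cutoff. As a hedged illustration only (the published simulations used a production MD engine, not this snippet), a minimal sketch of the standard RF pair energy, with the cutoff, dielectric constant, and charges as assumed inputs:

        def reaction_field_energy(q_i, q_j, r, eps_rf=78.0, r_cut=1.2):
            """Pairwise Coulomb energy with a reaction-field correction.

            q_i, q_j: charges (e); r: separation (nm); eps_rf: dielectric constant
            of the continuum beyond the cutoff; r_cut: cutoff radius (nm).
            Returns energy in kJ/mol (f is the electric conversion factor in MD units).
            """
            f = 138.935458  # kJ mol^-1 nm e^-2
            k_rf = (eps_rf - 1.0) / ((2.0 * eps_rf + 1.0) * r_cut**3)
            c_rf = 1.0 / r_cut + k_rf * r_cut**2
            if r >= r_cut:
                return 0.0  # beyond the cutoff the interaction is carried by the continuum
            return f * q_i * q_j * (1.0 / r + k_rf * r**2 - c_rf)

        # example: opposite unit charges 0.5 nm apart
        print(reaction_field_energy(1.0, -1.0, 0.5))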

  16. Designing a Supply Chain Management Academic Curriculum Using QFD and Benchmarking

    ERIC Educational Resources Information Center

    Gonzalez, Marvin E.; Quesada, Gioconda; Gourdin, Kent; Hartley, Mark

    2008-01-01

    Purpose: The purpose of this paper is to utilize quality function deployment (QFD), Benchmarking analyses and other innovative quality tools to develop a new customer-centered undergraduate curriculum in supply chain management (SCM). Design/methodology/approach: The researchers used potential employers as the source for data collection. Then,…

  17. 75 FR 51982 - Fisheries of the Gulf of Mexico; Southeast Data, Assessment, and Review (SEDAR) Update; Greater...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-24

    ... evaluates potential datasets and recommends which datasets are appropriate for assessment analyses. The... points to datasets incorporated in the original SEDAR benchmark assessment and run the benchmark... Webinar II November 22, 2010; 10 a.m. - 1 p.m.; SEDAR Update Assessment Webinar III Using updated datasets...

  18. Allocating scarce financial resources for HIV treatment: benchmarking prices of antiretroviral medicines in Latin America.

    PubMed

    Wirtz, Veronika J; Santa-Ana-Tellez, Yared; Trout, Clinton H; Kaplan, Warren A

    2012-12-01

    Public sector price analyses of antiretroviral (ARV) medicines can provide relevant information to detect ARV procurement procedures that do not obtain competitive market prices. Price benchmarks provide a useful tool for programme managers and policy makers to support such planning and policy measures. The aim of the study was to develop regional and global price benchmarks which can be used to analyse public-sector price variability of ARVs in low- and middle-income countries, using the procurement prices of Latin America and the Caribbean (LAC) countries in 2008 as an example. We used the Global Price Reporting Mechanism (GPRM) database, provided by the World Health Organization (WHO), for 13 LAC countries' ARV procurements to analyse the procurement prices of four first-line and three second-line ARV combinations in 2008. First, a cross-sectional analysis was conducted to compare ARV combination prices. Second, four different price 'benchmarks' were created and we estimated the additional number of patients who could have been treated in each country if the ARV combinations studied were purchased at the various reference ('benchmark') prices. Large price variations exist for first- and second-line ARV combinations between countries in the LAC region. Most countries in the LAC region could be treating between 1.17 and 3.8 times more patients if procurement prices were closer to the lowest regional generic price. For all second-line combinations, a price closer to the lowest regional innovator prices or to the global median transaction price for lower-middle-income countries would also result in treating up to nearly five times more patients. Rational allocation of financial resources, supported in part by price benchmarking and careful planning by policy makers and programme managers, can assist a country in negotiating lower ARV procurement prices and should form part of a sustainable procurement policy.
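
    The "times more patients" figures above follow from simple ratio arithmetic: for a fixed procurement budget, the number of treatable patients scales inversely with the per-patient regimen price. A minimal sketch of that calculation, using made-up prices rather than values from the GPRM data:

        def additional_patients(annual_budget, actual_price, benchmark_price):
            """Extra patient-years of treatment affordable if the ARV combination
            were procured at the benchmark price instead of the actual price."""
            treated_now = annual_budget / actual_price
            treated_at_benchmark = annual_budget / benchmark_price
            return treated_at_benchmark - treated_now

        # illustrative numbers only: a 1,000,000 USD budget, 400 vs. 150 USD per patient-year
        budget = 1_000_000
        print(additional_patients(budget, actual_price=400.0, benchmark_price=150.0))
        print("treatable-patient ratio:", 400.0 / 150.0)  # the "X times more patients" figure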

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savi, Daniel, E-mail: d.savi@umweltchemie.ch; Kasser, Ueli; Ott, Thomas

    Highlights: • We’ve analysed data on the dismantling of electronic and electrical appliances. • Ten years of mass balance data of more than 30 recycling companies have been considered. • Percentages of dismantled batteries, capacitors and PWB have been studied. • Threshold values and benchmarks for batteries and capacitors have been identified. • No benchmark for the dismantling of printed wiring boards should be set. - Abstract: The article compiles and analyses sample data for toxic components removed from waste electronic and electrical equipment (WEEE) from more than 30 recycling companies in Switzerland over the past ten years. According to European and Swiss legislation, toxic components like batteries, capacitors and printed wiring boards have to be removed from WEEE. The control bodies of the Swiss take back schemes have been monitoring the activities of WEEE recyclers in Switzerland for about 15 years. All recyclers have to provide annual mass balance data for every year of operation. From this data, percentage shares of removed batteries and capacitors are calculated in relation to the amount of each respective WEEE category treated. A rationale is developed for why such an indicator should not be calculated for printed wiring boards. The distributions of these de-pollution indicators are analysed and their suitability for defining lower threshold values and benchmarks for the depollution of WEEE is discussed. Recommendations for benchmarks and threshold values for the removal of capacitors and batteries are given.

  20. Simulation-based comprehensive benchmarking of RNA-seq aligners

    PubMed Central

    Baruzzo, Giacomo; Hayer, Katharina E; Kim, Eun Ji; Di Camillo, Barbara; FitzGerald, Garret A; Grant, Gregory R

    2018-01-01

    Alignment is the first step in most RNA-seq analysis pipelines, and the accuracy of downstream analyses depends heavily on it. Unlike most steps in the pipeline, alignment is particularly amenable to benchmarking with simulated data. We performed a comprehensive benchmarking of 14 common splice-aware aligners for base, read, and exon junction-level accuracy and compared default with optimized parameters. We found that performance varied by genome complexity, and accuracy and popularity were poorly correlated. The most widely cited tool underperforms for most metrics, particularly when using default settings. PMID:27941783
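
    Read-level accuracy in a simulation-based benchmark of this kind reduces to comparing each reported alignment against the position recorded by the read simulator. A minimal sketch under assumed inputs (dictionaries keyed by read ID; the tolerance and names are placeholders, not the paper's evaluation code):

        def read_level_accuracy(truth, aligned, tolerance=5):
            """truth, aligned: dicts mapping read_id -> (chromosome, position).
            A read counts as correctly placed if it is reported on the true
            chromosome within `tolerance` bases. Returns (precision, recall)."""
            correct = 0
            for read_id, (chrom, pos) in aligned.items():
                true_chrom, true_pos = truth.get(read_id, (None, None))
                if chrom == true_chrom and abs(pos - true_pos) <= tolerance:
                    correct += 1
            precision = correct / len(aligned) if aligned else 0.0
            recall = correct / len(truth) if truth else 0.0
            return precision, recall

        truth = {"r1": ("chr1", 1000), "r2": ("chr2", 500), "r3": ("chr1", 2000)}
        aligned = {"r1": ("chr1", 1002), "r2": ("chr3", 500)}
        print(read_level_accuracy(truth, aligned))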

  1. Benchmarking health IT among OECD countries: better data for better policy

    PubMed Central

    Adler-Milstein, Julia; Ronchi, Elettra; Cohen, Genna R; Winn, Laura A Pannella; Jha, Ashish K

    2014-01-01

    Objective To develop benchmark measures of health information and communication technology (ICT) use to facilitate cross-country comparisons and learning. Materials and methods The effort is led by the Organisation for Economic Co-operation and Development (OECD). Approaches to definition and measurement within four ICT domains were compared across seven OECD countries in order to identify functionalities in each domain. These informed a set of functionality-based benchmark measures, which were refined in collaboration with representatives from more than 20 OECD and non-OECD countries. We report on progress to date and remaining work to enable countries to begin to collect benchmark data. Results The four benchmarking domains include provider-centric electronic record, patient-centric electronic record, health information exchange, and tele-health. There was broad agreement on functionalities in the provider-centric electronic record domain (eg, entry of core patient data, decision support), and less agreement in the other three domains in which country representatives worked to select benchmark functionalities. Discussion Many countries are working to implement ICTs to improve healthcare system performance. Although many countries are looking to others as potential models, the lack of consistent terminology and approach has made cross-national comparisons and learning difficult. Conclusions As countries develop and implement strategies to increase the use of ICTs to promote health goals, there is a historic opportunity to enable cross-country learning. To facilitate this learning and reduce the chances that individual countries flounder, a common understanding of health ICT adoption and use is needed. The OECD-led benchmarking process is a crucial step towards achieving this. PMID:23721983

  2. Benchmarking health IT among OECD countries: better data for better policy.

    PubMed

    Adler-Milstein, Julia; Ronchi, Elettra; Cohen, Genna R; Winn, Laura A Pannella; Jha, Ashish K

    2014-01-01

    To develop benchmark measures of health information and communication technology (ICT) use to facilitate cross-country comparisons and learning. The effort is led by the Organisation for Economic Co-operation and Development (OECD). Approaches to definition and measurement within four ICT domains were compared across seven OECD countries in order to identify functionalities in each domain. These informed a set of functionality-based benchmark measures, which were refined in collaboration with representatives from more than 20 OECD and non-OECD countries. We report on progress to date and remaining work to enable countries to begin to collect benchmark data. The four benchmarking domains include provider-centric electronic record, patient-centric electronic record, health information exchange, and tele-health. There was broad agreement on functionalities in the provider-centric electronic record domain (eg, entry of core patient data, decision support), and less agreement in the other three domains in which country representatives worked to select benchmark functionalities. Many countries are working to implement ICTs to improve healthcare system performance. Although many countries are looking to others as potential models, the lack of consistent terminology and approach has made cross-national comparisons and learning difficult. As countries develop and implement strategies to increase the use of ICTs to promote health goals, there is a historic opportunity to enable cross-country learning. To facilitate this learning and reduce the chances that individual countries flounder, a common understanding of health ICT adoption and use is needed. The OECD-led benchmarking process is a crucial step towards achieving this.

  3. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeHart, Mark D.; Mausolff, Zander; Weems, Zach

    2016-08-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data has shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validation of the transient solution of Rattlesnake using other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considered both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.

  4. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courau, T.; Plagne, L.; Ponicot, A.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, the need for three dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the keff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)

  5. A computationally simple model for determining the time dependent spectral neutron flux in a nuclear reactor core

    NASA Astrophysics Data System (ADS)

    Schneider, E. A.; Deinert, M. R.; Cady, K. B.

    2006-10-01

    The balance of isotopes in a nuclear reactor core is key to understanding the overall performance of a given fuel cycle. This balance is in turn most strongly affected by the time- and energy-dependent neutron flux. While many large and involved computer packages exist for determining this spectrum, a simplified approach amenable to rapid computation is missing from the literature. We present such a model, which accepts as inputs the fuel element/moderator geometry and composition, reactor geometry, fuel residence time and target burnup, and we compare it to OECD/NEA benchmarks for homogeneous MOX and UOX LWR cores. Collision probability approximations to the neutron transport equation are used to decouple the spatial and energy variables. The lethargy-dependent neutron flux, governed by coupled integral equations for the fuel and moderator/coolant regions, is treated by multigroup thermalization methods, and the transport of neutrons through space is modeled by fuel-to-moderator transport and escape probabilities. Reactivity control is achieved through use of a burnable poison or adjustable control medium. The model calculates the buildup of 24 actinides, as well as fission products, along with the lethargy-dependent neutron flux, and the results of several simulations are compared with benchmarked standards.
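
    In its simplest limit (an infinite homogeneous medium with no spatial coupling), the multigroup balance underlying such a model can be solved by power iteration on the fission source. The sketch below shows only that limiting case with invented two-group cross sections; it is not the collision-probability model of the paper:

        import numpy as np

        # two-group infinite-medium balance: (diag(Sigma_t) - S^T) phi = (1/k) chi (nuSigma_f . phi)
        sigma_t = np.array([0.20, 0.80])       # total cross sections (1/cm), fast and thermal
        scatter = np.array([[0.17, 0.02],      # scatter[g, g'] = Sigma_s, group g -> g'
                            [0.00, 0.60]])
        nu_sigma_f = np.array([0.005, 0.30])   # nu * Sigma_f
        chi = np.array([1.0, 0.0])             # fission neutrons are born fast

        A = np.diag(sigma_t) - scatter.T       # losses minus in-scattering
        phi, k = np.ones(2), 1.0
        for _ in range(200):                   # power iteration on the fission source
            source = chi * (nu_sigma_f @ phi)
            phi_new = np.linalg.solve(A, source / k)
            k *= (nu_sigma_f @ phi_new) / (nu_sigma_f @ phi)
            phi = phi_new
        print("k_inf =", round(k, 5), " fast/thermal flux ratio =", round(phi[0] / phi[1], 3))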

  6. Statistical Analysis of NAS Parallel Benchmarks and LINPACK Results

    NASA Technical Reports Server (NTRS)

    Meuer, Hans-Werner; Simon, Horst D.; Strohmeier, Erich; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    In the last three years extensive performance data have been reported for parallel machines both based on the NAS Parallel Benchmarks, and on LINPACK. In this study we have used the reported benchmark results and performed a number of statistical experiments using factor, cluster, and regression analyses. In addition to the performance results of LINPACK and the eight NAS parallel benchmarks, we have also included peak performance of the machine, and the LINPACK n and n_1/2 values. Some of the results and observations can be summarized as follows: 1) All benchmarks are strongly correlated with peak performance. 2) LINPACK and EP each have a unique signature. 3) The remaining NPB can be grouped into three groups as follows: (CG and IS), (LU and SP), and (MG, FT, and BT). Hence three (or four with EP) benchmarks are sufficient to characterize the overall NPB performance. Our poster presentation will follow a standard poster format, and will present the data of our statistical analysis in detail.
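
    The grouping reported above comes from standard multivariate analysis of a machines-by-benchmarks performance table: correlate the benchmarks across machines, then cluster them by similarity. A hedged sketch of that workflow with synthetic numbers in place of the reported NPB and LINPACK measurements:

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        rng = np.random.default_rng(0)
        benchmarks = ["LINPACK", "EP", "CG", "IS", "LU", "SP", "MG", "FT", "BT"]
        # rows = machines, columns = benchmarks (synthetic stand-in for reported performance)
        perf = rng.lognormal(mean=3.0, sigma=1.0, size=(20, len(benchmarks)))

        corr = np.corrcoef(perf, rowvar=False)                # benchmark-to-benchmark correlation
        dist = 1.0 - corr                                     # turn correlation into a distance
        upper = dist[np.triu_indices(len(benchmarks), k=1)]   # condensed distance vector
        clusters = fcluster(linkage(upper, method="average"), t=3, criterion="maxclust")
        for name, label in zip(benchmarks, clusters):
            print(name, "-> cluster", label)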

  7. Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly

    NASA Astrophysics Data System (ADS)

    Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.

    2014-04-01

    We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. Material composition was defined for each assembly ring separately allowing us to decompose the sensitivities not only for isotopes and reactions but also for spatial regions. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. Similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified main contributors to the calculation bias.

  8. Generation of openEHR Test Datasets for Benchmarking.

    PubMed

    El Helou, Samar; Karvonen, Tuukka; Yamamoto, Goshiro; Kume, Naoto; Kobayashi, Shinji; Kondo, Eiji; Hiragi, Shusuke; Okamoto, Kazuya; Tamura, Hiroshi; Kuroda, Tomohiro

    2017-01-01

    openEHR is a widely used EHR specification. Given its technology-independent nature, different approaches for implementing openEHR data repositories exist. Public openEHR datasets are needed to conduct benchmark analyses over different implementations. To address their current unavailability, we propose a method for generating openEHR test datasets that can be publicly shared and used.

  9. Xenon-induced power oscillations in a generic small modular reactor

    NASA Astrophysics Data System (ADS)

    Kitcher, Evans Damenortey

    As world demand for energy continues to grow at unprecedented rates, the world energy portfolio of the future will inevitably include a nuclear energy contribution. It has been suggested that the Small Modular Reactor (SMR) could play a significant role in the spread of civilian nuclear technology to nations previously without nuclear energy. As part of the design process, the SMR design must be assessed for the threat to operations posed by xenon-induced power oscillations. In this research, a generic SMR design was analyzed with respect to just such a threat. In order to do so, a multi-physics coupling routine was developed with MCNP/MCNPX as the neutronics solver. Thermal hydraulic assessments were performed using a single channel analysis tool developed in Python. Fuel and coolant temperature profiles were implemented in the form of temperature dependent fuel cross sections generated using the SIGACE code and reactor core coolant densities. The Power Axial Offset (PAO) and Xenon Axial Offset (XAO) parameters were chosen to quantify any oscillatory behavior observed. The methodology was benchmarked against published results of startup tests performed at a four-loop PWR in Korea. The developed benchmark model replicated the pertinent features of the reactor within ten percent of the literature values. The results of the benchmark demonstrated that the developed methodology captured the desired phenomena accurately. Subsequently, a high fidelity SMR core model was developed and assessed. Results of the analysis revealed an inherently stable SMR design at beginning of core life and end of core life under full-power and half-power conditions. The effect of axial discretization, stochastic noise and convergence of the Monte Carlo tallies in the calculations of the PAO and XAO parameters was investigated. All were found to be quite small and the inherently stable nature of the core design with respect to xenon-induced power oscillations was confirmed. Finally, a preliminary investigation into excess reactivity control options for the SMR design was conducted, confirming the generally held notion that existing PWR control mechanisms can be used in iPWR SMRs with similar effectiveness. With the desire to operate the SMR under a boron-free coolant condition, erbium oxide fuel integral burnable absorber rods were identified as a possible replacement that retains the dispersed absorber effect of soluble boron in the reactor coolant.
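
    The Power Axial Offset used above is the usual normalized top-bottom imbalance of the axial power distribution (the XAO is defined analogously for the xenon distribution). A small sketch of that bookkeeping, with an assumed axial power profile standing in for the MCNP tally output:

        import numpy as np

        def axial_offset(axial_profile):
            """AO = (P_top - P_bottom) / (P_top + P_bottom) for a 1-D axial
            distribution ordered from the core bottom to the core top."""
            profile = np.asarray(axial_profile, dtype=float)
            half = len(profile) // 2
            p_bottom, p_top = profile[:half].sum(), profile[half:].sum()
            return (p_top - p_bottom) / (p_top + p_bottom)

        # example: a slightly top-skewed 10-node axial power shape
        shape = [0.6, 0.8, 1.0, 1.1, 1.2, 1.25, 1.2, 1.1, 0.95, 0.8]
        print("PAO =", round(axial_offset(shape), 4))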

  10. The Management Development Program: A Competency-Based Model for Preparing Hospitality Leaders.

    ERIC Educational Resources Information Center

    Brownell, Judi; Chung, Beth G.

    2001-01-01

    The master of management program at Cornell University focused on competency-based development of skills for the hospitality industry through core courses, minicourses, skill benchmarking, and continuous improvement. Benefits include a shift in the teacher role to advocate/coach, increased information sharing, student satisfaction, and clear…

  11. ICT Proficiency and Gender: A Validation on Training and Development

    ERIC Educational Resources Information Center

    Lin, Shinyi; Shih, Tse-Hua; Lu, Ruiling

    2013-01-01

    Use of innovative learning/instruction mode, embedded in the Certification Pathway System (CPS) developed by Certiport™, is geared toward Internet and Computing Benchmark & Mentor specifically for IC³ certification. The Internet and Computing Core Certification (IC³), as an industry-based credentialing program,…

  12. The Cognitive Science behind the Common Core

    ERIC Educational Resources Information Center

    Marchitello, Max; Wilhelm, Megan

    2014-01-01

    Raising academic standards has been part of the education policy discourse for decades. As early as the 1990s, states and school districts attempted to raise student achievement by developing higher standards and measuring student progress according to more rigorous benchmarks. However, the caliber of the standards--and their assessments--varied…

  13. The Effect of a High School Financial Literacy Course on Student Financial Knowledge

    ERIC Educational Resources Information Center

    McCann, Karen L.

    2010-01-01

    New Jersey school districts establish curriculums to meet the proficiencies found in the New Jersey Core Curriculum Content Standards (NJCCCS). The research focuses on the effectiveness of the Washington Township High School Career and Technology Education Department's curriculum in addressing the NJCCCS Financial Literacy benchmarks. The…

  14. a Dosimetry Assessment for the Core Restraint of AN Advanced Gas Cooled Reactor

    NASA Astrophysics Data System (ADS)

    Thornton, D. A.; Allen, D. A.; Tyrrell, R. J.; Meese, T. C.; Huggon, A. P.; Whiley, G. S.; Mossop, J. R.

    2009-08-01

    This paper describes calculations of neutron damage rates within the core restraint structures of Advanced Gas Cooled Reactors (AGRs). Using advanced features of the Monte Carlo radiation transport code MCBEND, and neutron source data from core follow calculations performed with the reactor physics code PANTHER, a detailed model of the reactor cores of two of British Energy's AGR power plants has been developed for this purpose. Because there are no relevant neutron fluence measurements directly supporting this assessment, results of benchmark comparisons and successful validation of MCBEND for Magnox reactors have been used to estimate systematic and random uncertainties on the predictions. In particular, it has been necessary to address the known under-prediction of lower energy fast neutron responses associated with the penetration of large thicknesses of graphite.

  15. Deterministic Modeling of the High Temperature Test Reactor with DRAGON-HEXPEDITE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Ortensi; M.A. Pope; R.M. Ferrer

    2010-10-01

    The Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability of the Next Generation Nuclear Plant (NGNP) project. In order to examine the INL’s current prismatic reactor analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19 fuel column thin annular core, and the fully loaded core critical condition with 30 fuel columns. Special emphasis is devoted to physical phenomena and artifacts in HTTR that are similar to phenomena and artifacts in the NGNP base design. The DRAGON code is used in this study since it offers significant ease and versatility in modeling prismatic designs. DRAGON can generate transport solutions via Collision Probability (CP), Method of Characteristics (MOC) and Discrete Ordinates (Sn). A fine group cross-section library based on the SHEM 281 energy structure is used in the DRAGON calculations. The results from this study show reasonable agreement in the calculation of the core multiplication factor with the MC methods, but a consistent bias of 2–3% with the experimental values is obtained. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement partially stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model, the rod positions were fixed. In addition, this work includes a brief study of a cross section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.

  16. Methodology of full-core Monte Carlo calculations with leakage parameter evaluations for benchmark critical experiment analysis

    NASA Astrophysics Data System (ADS)

    Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.

    1997-02-01

    The method of buckling evaluation, implemented in the Monte Carlo code MCS, is described. This method was applied to the calculational analysis of the well-known light water experiments TRX-1 and TRX-2. The analysis of this comparison shows that there is no coincidence among the Monte Carlo calculations obtained in different ways: the MCS calculations with given experimental bucklings; the MCS calculations with given bucklings evaluated on the basis of full-core MCS direct simulations; the full-core MCNP and MCS direct simulations; and the MCNP and MCS calculations in which the results of cell calculations are corrected by coefficients taking into account the leakage from the core. The buckling values evaluated by full-core MCS calculations also differed from the experimental ones, especially in the case of TRX-1, where the difference corresponded to a 0.5 percent increase in the keff value.

  17. Optical interconnection network for parallel access to multi-rank memory in future computing systems.

    PubMed

    Wang, Kang; Gu, Huaxi; Yang, Yintang; Wang, Kun

    2015-08-10

    With the number of cores increasing, there is an emerging need for a high-bandwidth low-latency interconnection network, serving core-to-memory communication. In this paper, aiming at the goal of simultaneous access to multi-rank memory, we propose an optical interconnection network for core-to-memory communication. In the proposed network, the wavelength usage is delicately arranged so that cores can communicate with different ranks at the same time and broadcast for flow control can be achieved. A distributed memory controller architecture that works in a pipeline mode is also designed for efficient optical communication and transaction address processes. The scaling method and wavelength assignment for the proposed network are investigated. Compared with traditional electronic bus-based core-to-memory communication, the simulation results based on the PARSEC benchmark show that the bandwidth enhancement and latency reduction are apparent.

  18. Core-shell Au-Pd nanoparticles as cathode catalysts for microbial fuel cell applications

    PubMed Central

    Yang, Gaixiu; Chen, Dong; Lv, Pengmei; Kong, Xiaoying; Sun, Yongming; Wang, Zhongming; Yuan, Zhenhong; Liu, Hui; Yang, Jun

    2016-01-01

    Bimetallic nanoparticles with core-shell structures usually display enhanced catalytic properties due to the lattice strain created between the core and shell regions. In this study, we demonstrate the application of bimetallic Au-Pd nanoparticles with an Au core and a thin Pd shell as cathode catalysts in microbial fuel cells, which represent a promising technology for wastewater treatment, while directly generating electrical energy. Specifically, in comparison with the hollow structured Pt nanoparticles, a benchmark for electrocatalysis, the bimetallic core-shell Au-Pd nanoparticles are found to have superior activity and stability for the oxygen reduction reaction in a neutral condition due to the strong electronic interaction and lattice strain effect between the Au core and the Pd shell domains. The maximum power density generated in a membraneless single-chamber microbial fuel cell running on wastewater with core-shell Au-Pd as cathode catalysts is ca. 16.0 W m−3 and remains stable over 150 days, clearly illustrating the potential of core-shell nanostructures in the applications of microbial fuel cells. PMID:27734945

  19. I Know What You Did Last Summer

    ERIC Educational Resources Information Center

    Opalinski, Gail; Ellers, Sherry; Goodman, Amy

    2004-01-01

    This article describes the revised summer school program developed by the Anchorage (AK) School District for students who received poor grades in their core classes or low scores in the Alaska Benchmark Examinations or California Achievement Tests. More than 500 middle school students from the district spent five weeks during the summer honing…

  20. Hospital benchmarking: are U.S. eye hospitals ready?

    PubMed

    de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S

    2012-01-01

    Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added "public" value of benchmarking in health care is questionable.

  1. Inclusion and Human Rights in Health Policies: Comparative and Benchmarking Analysis of 51 Policies from Malawi, Sudan, South Africa and Namibia

    PubMed Central

    MacLachlan, Malcolm; Amin, Mutamad; Mannan, Hasheem; El Tayeb, Shahla; Bedri, Nafisa; Swartz, Leslie; Munthali, Alister; Van Rooy, Gert; McVeigh, Joanne

    2012-01-01

    While many health services strive to be equitable, accessible and inclusive, peoples’ right to health often goes unrealized, particularly among vulnerable groups. The extent to which health policies explicitly seek to achieve such goals sets the policy context in which services are delivered and evaluated. An analytical framework was developed – EquiFrame – to evaluate 1) the extent to which 21 Core Concepts of human rights were addressed in policy documents, and 2) coverage of 12 Vulnerable Groups who might benefit from such policies. Using this framework, analysis of 51 policies across Malawi, Namibia, South Africa and Sudan, confirmed the relevance of all Core Concepts and Vulnerable Groups. Further, our analysis highlighted some very strong policies, serious shortcomings in others as well as country-specific patterns. If social inclusion and human rights do not underpin policy formation, it is unlikely they will be inculcated in service delivery. EquiFrame facilitates policy analysis and benchmarking, and provides a means for evaluating policy revision and development. PMID:22649488

  2. A heterogeneous computing accelerated SCE-UA global optimization method using OpenMP, OpenCL, CUDA, and OpenACC.

    PubMed

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Liang, Ke; Hong, Yang

    2017-10-01

    The shuffled complex evolution optimization developed at the University of Arizona (SCE-UA) has been successfully applied in various kinds of scientific and engineering optimization applications, such as hydrological model parameter calibration, for many years. The algorithm possesses good global optimality, convergence stability and robustness. However, benchmark and real-world applications reveal the poor computational efficiency of the SCE-UA. This research aims at the parallelization and acceleration of the SCE-UA method based on powerful heterogeneous computing technology. The parallel SCE-UA is implemented on Intel Xeon multi-core CPU (by using OpenMP and OpenCL) and NVIDIA Tesla many-core GPU (by using OpenCL, CUDA, and OpenACC). The serial and parallel SCE-UA were tested on the Griewank benchmark function. Comparison results indicate the parallel SCE-UA significantly improves computational efficiency compared to the original serial version. The OpenCL implementation obtains the best overall acceleration results, although with the most complex source code. The parallel SCE-UA has bright prospects to be applied in real-world applications.
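
    The Griewank function used as the test problem, and the population-level objective evaluations that the OpenMP/OpenCL/CUDA back-ends parallelize, can be written compactly. A hedged sketch with Python's multiprocessing standing in for the heterogeneous back-ends (the SCE-UA evolution steps themselves are omitted):

        import numpy as np
        from multiprocessing import Pool

        def griewank(x):
            """Griewank benchmark: f(x) = 1 + sum(x_i^2)/4000 - prod(cos(x_i/sqrt(i)))."""
            x = np.asarray(x, dtype=float)
            i = np.arange(1, len(x) + 1)
            return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

        if __name__ == "__main__":
            rng = np.random.default_rng(42)
            population = rng.uniform(-600.0, 600.0, size=(1000, 10))  # candidate points
            with Pool() as pool:                  # evaluate the population in parallel
                fitness = pool.map(griewank, list(population))
            print("best objective value found:", min(fitness))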

  3. Flexible Tagged Architecture for Trustworthy Multi-core Platforms

    DTIC Science & Technology

    2015-06-01

    well as two kernel benchmarks for SHA-256 and GMAC, which are popular cryptographic standards. We compared the execution time of these benchmarks... [The remainder of this record is table residue from the original report: FPGA resource utilization for the UMC, DIFT, and BC configurations on the Flex fabric, and normalized execution times for the sha and gmac kernel benchmarks.]

  4. U.S. IOOS coastal and ocean modeling testbed: Inter-model evaluation of tides, waves, and hurricane surge in the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Kerr, P. C.; Donahue, A. S.; Westerink, J. J.; Luettich, R. A.; Zheng, L. Y.; Weisberg, R. H.; Huang, Y.; Wang, H. V.; Teng, Y.; Forrest, D. R.; Roland, A.; Haase, A. T.; Kramer, A. W.; Taylor, A. A.; Rhome, J. R.; Feyen, J. C.; Signell, R. P.; Hanson, J. L.; Hope, M. E.; Estes, R. M.; Dominguez, R. A.; Dunbar, R. P.; Semeraro, L. N.; Westerink, H. J.; Kennedy, A. B.; Smith, J. M.; Powell, M. D.; Cardone, V. J.; Cox, A. T.

    2013-10-01

    A Gulf of Mexico performance evaluation and comparison of coastal circulation and wave models was executed through harmonic analyses of tidal simulations, hindcasts of Hurricane Ike (2008) and Rita (2005), and a benchmarking study. Three unstructured coastal circulation models (ADCIRC, FVCOM, and SELFE) validated with similar skill on a new common Gulf scale mesh (ULLR) with identical frictional parameterization and forcing for the tidal validation and hurricane hindcasts. Coupled circulation and wave models, SWAN+ADCIRC and WWMII+SELFE, along with FVCOM loosely coupled with SWAN, also validated with similar skill. NOAA's official operational forecast storm surge model (SLOSH) was implemented on local and Gulf scale meshes with the same wind stress and pressure forcing used by the unstructured models for hindcasts of Ike and Rita. SLOSH's local meshes failed to capture regional processes such as Ike's forerunner and the results from the Gulf scale mesh further suggest shortcomings may be due to a combination of poor mesh resolution, missing internal physics such as tides and nonlinear advection, and SLOSH's internal frictional parameterization. In addition, these models were benchmarked to assess and compare execution speed and scalability for a prototypical operational simulation. It was apparent that a higher number of computational cores are needed for the unstructured models to meet similar operational implementation requirements to SLOSH, and that some of them could benefit from improved parallelization and faster execution speed.

  5. Scale-4 Analysis of Pressurized Water Reactor Critical Configurations: Volume 2-Sequoyah Unit 2 Cycle 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowman, S.M.

    1995-01-01

    The requirements of ANSI/ANS 8.1 specify that calculational methods for away-from-reactor criticality safety analyses be validated against experimental measurements. If credit for the negative reactivity of the depleted (or spent) fuel isotopics is desired, it is necessary to benchmark computational methods against spent fuel critical configurations. This report summarizes a portion of the ongoing effort to benchmark away-from-reactor criticality analysis methods using critical configurations from commercial pressurized-water reactors. The analysis methodology selected for all the calculations reported herein is based on the codes and data provided in the SCALE-4 code system. The isotopic densities for the spent fuel assemblies in the critical configurations were calculated using the SAS2H analytical sequence of the SCALE-4 system. The sources of data and the procedures for deriving SAS2H input parameters are described in detail. The SNIKR code module was used to extract the necessary isotopic densities from the SAS2H results and provide the data in the format required by the SCALE criticality analysis modules. The CSASN analytical sequence in SCALE-4 was used to perform resonance processing of the cross sections. The KENO V.a module of SCALE-4 was used to calculate the effective multiplication factor (keff) of each case. The SCALE-4 27-group burnup library containing ENDF/B-IV (actinides) and ENDF/B-V (fission products) data was used for all the calculations. This volume of the report documents the SCALE system analysis of three reactor critical configurations for Sequoyah Unit 2 Cycle 3. This unit and cycle were chosen because of their relevance to spent fuel benchmark applications: (1) the unit had a significantly long downtime of 2.7 years during the middle of cycle (MOC) 3, and (2) the core consisted entirely of burned fuel at the MOC restart. The first benchmark critical calculation was the MOC restart at hot, full-power (HFP) critical conditions. The other two benchmark critical calculations were the beginning-of-cycle (BOC) startup at both hot, zero-power (HZP) and HFP critical conditions. These latter calculations were used to check for consistency in the calculated results for different burnups and downtimes. The keff results were in the range of 1.00014 to 1.00259 with a standard deviation of less than 0.001.

  6. GW100: Benchmarking G0W0 for Molecular Systems.

    PubMed

    van Setten, Michiel J; Caruso, Fabio; Sharifzadeh, Sahar; Ren, Xinguo; Scheffler, Matthias; Liu, Fang; Lischner, Johannes; Lin, Lin; Deslippe, Jack R; Louie, Steven G; Yang, Chao; Weigend, Florian; Neaton, Jeffrey B; Evers, Ferdinand; Rinke, Patrick

    2015-12-08

    We present the GW100 set. GW100 is a benchmark set of the ionization potentials and electron affinities of 100 molecules computed with the GW method using three independent GW codes and different GW methodologies. The quasi-particle energies of the highest-occupied molecular orbitals (HOMO) and lowest-unoccupied molecular orbitals (LUMO) are calculated for the GW100 set at the G0W0@PBE level using the software packages TURBOMOLE, FHI-aims, and BerkeleyGW. The use of these three codes allows for a quantitative comparison of the type of basis set (plane wave or local orbital) and handling of unoccupied states, the treatment of core and valence electrons (all electron or pseudopotentials), the treatment of the frequency dependence of the self-energy (full frequency or more approximate plasmon-pole models), and the algorithm for solving the quasi-particle equation. Primary results include reference values for future benchmarks, best practices for convergence within a particular approach, and average error bars for the most common approximations.

  7. LUMA: A many-core, Fluid-Structure Interaction solver based on the Lattice-Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Harwood, Adrian R. G.; O'Connor, Joseph; Sanchez Muñoz, Jonathan; Camps Santasmasas, Marta; Revell, Alistair J.

    2018-01-01

    The Lattice-Boltzmann Method at the University of Manchester (LUMA) project was commissioned to build a collaborative research environment in which researchers of all abilities can study fluid-structure interaction (FSI) problems in engineering applications from aerodynamics to medicine. It is built on the principles of accessibility, simplicity and flexibility. The LUMA software at the core of the project is a capable FSI solver with turbulence modelling and many-core scalability as well as a wealth of input/output and pre- and post-processing facilities. The software has been validated and several major releases benchmarked on supercomputing facilities internationally. The software architecture is modular and arranged logically using a minimal amount of object-orientation to maintain a simple and accessible software.

  8. An MPI-based MoSST core dynamics model

    NASA Astrophysics Data System (ADS)

    Jiang, Weiyuan; Kuang, Weijia

    2008-09-01

    Distributed systems are among the main cost-effective and expandable platforms for high-end scientific computing. Therefore scalable numerical models are important for effective use of such systems. In this paper, we present an MPI-based numerical core dynamics model for simulation of geodynamo and planetary dynamos, and for simulation of core-mantle interactions. The model is developed based on MPI libraries. Two algorithms are used for node-node communication: a "master-slave" architecture and a "divide-and-conquer" architecture. The former is easy to implement but not scalable in communication. The latter is scalable in both computation and communication. The model scalability is tested on Linux PC clusters with up to 128 nodes. This model is also benchmarked with a published numerical dynamo model solution.
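
    Of the two node-to-node communication patterns mentioned, the "master-slave" layout is the simpler one: a single rank hands out work items and collects the results. A hedged mpi4py sketch of that pattern (illustrative only; it is not the MoSST code, which is built directly on the MPI libraries):

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        if rank == 0:
            # master: send one chunk of work to each worker, then gather the results
            chunks = [list(range(i * 10, (i + 1) * 10)) for i in range(size - 1)]
            for worker, chunk in enumerate(chunks, start=1):
                comm.send(chunk, dest=worker, tag=1)
            totals = [comm.recv(source=worker, tag=2) for worker in range(1, size)]
            print("grand total:", sum(totals))
        else:
            # worker: receive a chunk, do the (placeholder) computation, return the result
            chunk = comm.recv(source=0, tag=1)
            comm.send(sum(chunk), dest=0, tag=2)

    Run with, for example, mpiexec -n 4 python master_slave.py (the script name is hypothetical). The "divide-and-conquer" architecture instead distributes both the work and the communication hierarchically, which is what makes it scalable in communication as well as computation.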

  9. Core Competencies for Injury and Violence Prevention

    PubMed Central

    Stephens-Stidham, Shelli; Peek-Asa, Corinne; Bou-Saada, Ingrid; Hunter, Wanda; Lindemer, Kristen; Runyan, Carol

    2009-01-01

    Efforts to reduce the burden of injury and violence require a workforce that is knowledgeable and skilled in prevention. However, there has been no systematic process to ensure that professionals possess the necessary competencies. To address this deficiency, we developed a set of core competencies for public health practitioners in injury and violence prevention programs. The core competencies address domains including public health significance, data, the design and implementation of prevention activities, evaluation, program management, communication, stimulating change, and continuing education. Specific learning objectives establish goals for training in each domain. The competencies assist in efforts to reduce the burden of injury and violence and can provide benchmarks against which to assess progress in professional capacity for injury and violence prevention. PMID:19197083

  10. Test Scheduling for Core-Based SOCs Using Genetic Algorithm Based Heuristic Approach

    NASA Astrophysics Data System (ADS)

    Giri, Chandan; Sarkar, Soumojit; Chattopadhyay, Santanu

    This paper presents a genetic algorithm (GA) based solution to co-optimize test scheduling and wrapper design for core-based SOCs. Core testing solutions are generated as a set of wrapper configurations, represented as rectangles with width equal to the number of TAM (Test Access Mechanism) channels and height equal to the corresponding testing time. A locally optimal best-fit heuristic based bin packing algorithm has been used to determine the placement of rectangles minimizing the overall test time, whereas the GA has been utilized to generate the sequence of rectangles to be considered for placement. Experimental results on the ITC'02 benchmark SOCs show that the proposed method provides better solutions compared to recent works reported in the literature.
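
    The placement step described above can be illustrated as a greedy best-fit assignment: cores, taken in the order a GA individual proposes, are started on the contiguous group of TAM channels that becomes free earliest. The sketch below is a simplified single-pass illustration under assumed (width, time) pairs, not the paper's wrapper/TAM co-optimization:

        def schedule_cores(rectangles, total_tam_width):
            """Greedy placement of core-test rectangles (TAM width, test time) in the
            given sequence; each core starts on the contiguous channel group that
            frees up earliest. Returns the overall test application time."""
            finish = [0.0] * total_tam_width        # time at which each TAM channel frees up
            for width, test_time in rectangles:
                best_start, best_ready = 0, float("inf")
                for start in range(total_tam_width - width + 1):
                    ready = max(finish[start:start + width])
                    if ready < best_ready:          # group whose busiest channel frees soonest
                        best_start, best_ready = start, ready
                for ch in range(best_start, best_start + width):
                    finish[ch] = best_ready + test_time
            return max(finish)

        # a GA individual might order the cores like this; 16 TAM channels assumed
        sequence = [(8, 120.0), (4, 300.0), (4, 250.0), (8, 90.0), (16, 60.0)]
        print("overall test time:", schedule_cores(sequence, total_tam_width=16))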

  11. Standardised Benchmarking in the Quest for Orthologs

    PubMed Central

    Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe

    2016-01-01

    The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  12. Performance Against WELCOA's Worksite Health Promotion Benchmarks Across Years Among Selected US Organizations.

    PubMed

    Weaver, GracieLee M; Mendenhall, Brandon N; Hunnicutt, David; Picarella, Ryan; Leffelman, Brittanie; Perko, Michael; Bibeau, Daniel L

    2018-05-01

    The purpose of this study was to quantify the performance of organizations' worksite health promotion (WHP) activities against the benchmarking criteria included in the Well Workplace Checklist (WWC). The Wellness Council of America (WELCOA) developed a tool to assess WHP with its 100-item WWC, which represents WELCOA's 7 performance benchmarks. The setting was workplaces. This study includes a convenience sample of organizations who completed the checklist from 2008 to 2015. The sample size was 4643 entries from US organizations. The WWC includes demographic questions, general questions about WHP programs, and scales to measure performance against the WELCOA 7 benchmarks. Descriptive analyses of WWC items were completed separately for each year of the study period. The majority of the organizations represented each year were multisite, multishift, medium- to large-sized companies mostly in the services industry. Despite yearly changes in participating organizations, results across the WELCOA 7 benchmark scores were consistent year to year. Across all years, the benchmarks on which organizations performed the lowest were senior-level support, data collection, and programming; wellness teams and supportive environments were the highest scoring benchmarks. In an era marked with economic swings and health-care reform, it appears that organizations are staying consistent in their performance across these benchmarks. The WWC could be useful for organizations, practitioners, and researchers in assessing the quality of WHP programs.

  13. Validating vignette and conjoint survey experiments against real-world behavior

    PubMed Central

    Hainmueller, Jens; Hangartner, Dominik; Yamamoto, Teppei

    2015-01-01

    Survey experiments, like vignette and conjoint analyses, are widely used in the social sciences to elicit stated preferences and study how humans make multidimensional choices. However, there is a paucity of research on the external validity of these methods that examines whether the determinants that explain hypothetical choices made by survey respondents match the determinants that explain what subjects actually do when making similar choices in real-world situations. This study compares results from conjoint and vignette analyses on which immigrant attributes generate support for naturalization with closely corresponding behavioral data from a natural experiment in Switzerland, where some municipalities used referendums to decide on the citizenship applications of foreign residents. Using a representative sample from the same population and the official descriptions of applicant characteristics that voters received before each referendum as a behavioral benchmark, we find that the effects of the applicant attributes estimated from the survey experiments perform remarkably well in recovering the effects of the same attributes in the behavioral benchmark. We also find important differences in the relative performances of the different designs. Overall, the paired conjoint design, where respondents evaluate two immigrants side by side, comes closest to the behavioral benchmark; on average, its estimates are within 2 percentage points of the effects in the behavioral benchmark. PMID:25646415

  14. Note-Taking Interventions to Assist Students with Disabilities in Content Area Classes

    ERIC Educational Resources Information Center

    Boyle, Joseph R.; Forchelli, Gina A.; Cariss, Kaitlyn

    2015-01-01

    As high-stakes testing, Common Core, and state standards become the new norms in schools, teachers are tasked with helping all students meet specific benchmarks. In conjunction with the influx of more students with disabilities being included in inclusive and general education classrooms where lectures with note-taking comprise a majority of…

  15. A Qualitative Study of Urban and Suburban Elementary Student Understandings of Pest-Related Science and Agricultural Education Benchmarks.

    ERIC Educational Resources Information Center

    Trexler, Cary J.

    2000-01-01

    Clinical interviews with nine fifth graders revealed that experiences play a pivotal role in their understanding of pests. They lack well-developed schema and language to discuss pest management. A foundation of core biological concepts was necessary for understanding pests and pest management. (Contains 34 references.) (SK)

  16. A Psychometric Analysis of Teacher-Made Benchmark Assessments in English Language Arts

    ERIC Educational Resources Information Center

    Milligan, Andrea

    2017-01-01

    The implementation of the Common Core State Standards (CCSS) has placed increased accountability for outcomes on both students and teachers. To address the current youth literacy crisis in the United States, the CCSS call for students to read increasingly complex informational and literary texts. Since teachers are held accountable for students'…

  17. An imaging-based computational model for simulating angiogenesis and tumour oxygenation dynamics

    NASA Astrophysics Data System (ADS)

    Adhikarla, Vikram; Jeraj, Robert

    2016-05-01

    Tumour growth, angiogenesis and oxygenation vary substantially among tumours and significantly impact their treatment outcome. Imaging provides a unique means of investigating these tumour-specific characteristics. Here we propose a computational model to simulate tumour-specific oxygenation changes based on the molecular imaging data. Tumour oxygenation in the model is reflected by the perfused vessel density. Tumour growth depends on its doubling time (T_d) and the imaged proliferation. Perfused vessel density recruitment rate depends on the perfused vessel density around the tumour (sMVD_tissue) and the maximum VEGF concentration for complete vessel dysfunctionality (VEGF_max). The model parameters were benchmarked to reproduce the dynamics of tumour oxygenation over its entire lifecycle, which is the most challenging test. Tumour oxygenation dynamics were quantified using the peak pO2 (pO2_peak) and the time to peak pO2 (t_peak). Sensitivity of tumour oxygenation to model parameters was assessed by changing each parameter by 20%. t_peak was found to be more sensitive to the tumour cell line related doubling time (~30%) as compared to tissue vasculature density (~10%). On the other hand, pO2_peak was found to be similarly influenced by the above tumour- and vasculature-associated parameters (~30-40%). Interestingly, both pO2_peak and t_peak were only marginally affected by VEGF_max (~5%). The development of a poorly oxygenated (hypoxic) core with tumour growth increased VEGF accumulation, thus disrupting the vessel perfusion as well as further increasing hypoxia with time. The model, with its benchmarked parameters, is applied to hypoxia imaging data obtained using a [64Cu]Cu-ATSM PET scan of a mouse tumour, and the temporal development of the vasculature and hypoxia maps are shown. The work underscores the importance of using tumour-specific input for analysing tumour evolution. An extended model incorporating therapeutic effects can serve as a powerful tool for analysing tumour response to anti-angiogenic therapies.
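
    The ±20% sensitivity screening described above is a one-at-a-time perturbation of each model parameter followed by a comparison of the output metrics. A generic sketch of that loop, with a toy placeholder standing in for the imaging-driven angiogenesis and oxygenation simulation:

        def run_model(params):
            """Placeholder for the tumour-oxygenation simulation: returns the two
            summary outputs (pO2_peak, t_peak) as simple algebraic stand-ins."""
            return (10.0 * params["sMVD_tissue"] / params["T_d"],
                    5.0 * params["T_d"] / params["VEGF_max"])

        baseline = {"T_d": 3.0, "sMVD_tissue": 1.0, "VEGF_max": 2.0}
        po2_peak_0, t_peak_0 = run_model(baseline)

        for name in baseline:
            perturbed = dict(baseline)
            perturbed[name] *= 1.20                   # one parameter changed by +20%
            po2_peak, t_peak = run_model(perturbed)
            print(f"{name:12s} d(pO2_peak) = {100 * (po2_peak / po2_peak_0 - 1):+6.1f}%"
                  f"  d(t_peak) = {100 * (t_peak / t_peak_0 - 1):+6.1f}%")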

  18. Development of a New 47-Group Library for the CASL Neutronics Simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Williams, Mark L; Wiarda, Dorothea

    The CASL core simulator MPACT is under development for coupled neutronics and thermal-hydraulics simulation of pressurized light water reactors. The key characteristics of the MPACT code include a subgroup method for resonance self-shielding and a whole-core solver with a 1D/2D synthesis method. The ORNL AMPX/SCALE code packages have been significantly improved to support various intermediate resonance self-shielding approximations such as the subgroup and embedded self-shielding methods. New 47-group AMPX and MPACT libraries based on ENDF/B-VII.0, whose group structure comes from the HELIOS library, have been generated for the CASL core simulator MPACT. The new 47-group MPACT library includes all nuclear data required for static and transient core simulations. This study discusses a detailed procedure to generate the 47-group AMPX and MPACT libraries and benchmark results for the VERA progression problems.

  19. Neutron Reference Benchmark Field Specification: ACRR Free-Field Environment (ACRR-FF-CC-32-CL).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.

    2015-07-01

    This report was prepared to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity free-field reference neutron benchmark field. The field is described; an “a priori” calculated neutron spectrum, based on MCNP6 calculations, is reported; and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  20. Affinity-aware checkpoint restart

    DOE PAGES

    Saini, Ajay; Rezaei, Arash; Mueller, Frank; ...

    2014-12-08

    Current checkpointing techniques employed to overcome faults for HPC applications result in inferior application performance after restart from a checkpoint for a number of applications. This is due to a lack of page and core affinity awareness in the checkpoint/restart (C/R) mechanism, i.e., application tasks originally pinned to cores may be restarted on different cores, and in the case of non-uniform memory architectures (NUMA), quite common today, memory pages associated with tasks on a NUMA node may be associated with a different NUMA node after restart. Here, this work contributes a novel design technique for C/R mechanisms to preserve task-to-core maps and NUMA-node-specific page affinities across restarts. Experimental results with BLCR, a C/R mechanism, enhanced with affinity awareness demonstrate significant performance benefits of 37%-73% for the NAS Parallel Benchmark codes and 6-12% for NAMD with negligible overheads, instead of execution times up to nearly four times longer without affinity-aware restarts on 16 cores.
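
    For illustration only: the task-to-core part of the idea can be sketched with the Linux scheduler-affinity calls exposed by Python's os module. This is a minimal sketch, not the BLCR implementation; the sidecar file name and helper names are made up, the code is Linux-only, and NUMA page migration is outside its scope.

    ```python
    import json
    import os

    AFFINITY_FILE = "affinity_checkpoint.json"  # hypothetical sidecar file written at checkpoint time

    def save_affinity(pids):
        """Record the CPU set each task is pinned to (Linux only)."""
        snapshot = {str(pid): sorted(os.sched_getaffinity(pid)) for pid in pids}
        with open(AFFINITY_FILE, "w") as fh:
            json.dump(snapshot, fh)

    def restore_affinity(old_to_new_pid):
        """Re-pin restarted tasks to the cores they used before the checkpoint."""
        with open(AFFINITY_FILE) as fh:
            snapshot = json.load(fh)
        for old_pid, cpus in snapshot.items():
            new_pid = old_to_new_pid[int(old_pid)]
            os.sched_setaffinity(new_pid, set(cpus))

    if __name__ == "__main__":
        # Trivial demonstration on the current process only.
        save_affinity([os.getpid()])
        restore_affinity({os.getpid(): os.getpid()})
        print("pinned to cores:", sorted(os.sched_getaffinity(0)))
    ```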

  1. Affinity-aware checkpoint restart

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saini, Ajay; Rezaei, Arash; Mueller, Frank

    Current checkpointing techniques employed to overcome faults for HPC applications result in inferior application performance after restart from a checkpoint for a number of applications. This is due to a lack of page and core affinity awareness in the checkpoint/restart (C/R) mechanism, i.e., application tasks originally pinned to cores may be restarted on different cores, and in the case of non-uniform memory architectures (NUMA), quite common today, memory pages associated with tasks on a NUMA node may be associated with a different NUMA node after restart. Here, this work contributes a novel design technique for C/R mechanisms to preserve task-to-core maps and NUMA-node-specific page affinities across restarts. Experimental results with BLCR, a C/R mechanism, enhanced with affinity awareness demonstrate significant performance benefits of 37%-73% for the NAS Parallel Benchmark codes and 6-12% for NAMD with negligible overheads, instead of execution times up to nearly four times longer without affinity-aware restarts on 16 cores.

  2. Evaluation of the Pool Critical Assembly Benchmark with Explicitly-Modeled Geometry using MCNP6

    DOE PAGES

    Kulesza, Joel A.; Martz, Roger Lee

    2017-03-01

    Although the Oak Ridge National Laboratory (ORNL) Pool Critical Assembly (PCA) pressure vessel wall benchmark facility (PVWBF) is one of the most widely used benchmarks for qualifying light water reactor (LWR) radiation transport methods and data, no benchmark calculation of it using MCNP6 with explicitly modeled core geometry exists. As such, this paper provides results for such an analysis. First, a criticality calculation is used to construct the fixed source term. Next, ADVANTG-generated variance reduction parameters are used within the final MCNP6 fixed source calculations. These calculations provide unadjusted dosimetry results using three sets of dosimetry reaction cross sections of varying ages (those packaged with MCNP6, from the IRDF-2002 multi-group library, and from the ACE-formatted IRDFF v1.05 library). These results are then compared to two different sets of measured reaction rates. The comparison agrees within 2% in an overall sense and within 5% on a specific reaction- and dosimetry-location basis. Except for the neptunium dosimetry, the individual foil raw calculation-to-experiment comparisons usually agree within 10% but are typically greater than unity. Finally, in the course of developing these calculations, geometry that has previously not been completely specified is provided herein for the convenience of future analysts.
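
    For illustration only: comparisons like these are usually summarized as calculation-to-experiment (C/E) ratios per reaction and dosimetry location. The sketch below shows the arithmetic with made-up placeholder values, not the PCA measurements.

    ```python
    import statistics

    # Hypothetical reaction rates (arbitrary units); placeholders, not PCA data.
    calculated = {"Ni-58(n,p)": 1.02e-30, "In-115(n,n')": 5.1e-30, "Np-237(n,f)": 3.4e-29}
    measured   = {"Ni-58(n,p)": 0.99e-30, "In-115(n,n')": 5.3e-30, "Np-237(n,f)": 3.0e-29}

    ce = {rx: calculated[rx] / measured[rx] for rx in calculated}
    for rx, ratio in ce.items():
        print(f"{rx:15s} C/E = {ratio:.3f}")
    print("mean C/E =", round(statistics.mean(ce.values()), 3))
    ```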

  3. ViSAPy: a Python tool for biophysics-based generation of virtual spiking activity for evaluation of spike-sorting algorithms.

    PubMed

    Hagen, Espen; Ness, Torbjørn V; Khosrowshahi, Amir; Sørensen, Christina; Fyhn, Marianne; Hafting, Torkel; Franke, Felix; Einevoll, Gaute T

    2015-04-30

    New, silicon-based multielectrodes comprising hundreds or more electrode contacts offer the possibility to record spike trains from thousands of neurons simultaneously. This potential cannot be realized unless accurate, reliable automated methods for spike sorting are developed, in turn requiring benchmarking data sets with known ground-truth spike times. We here present a general simulation tool for computing benchmarking data for evaluation of spike-sorting algorithms entitled ViSAPy (Virtual Spiking Activity in Python). The tool is based on a well-established biophysical forward-modeling scheme and is implemented as a Python package built on top of the neuronal simulator NEURON and the Python tool LFPy. ViSAPy allows for arbitrary combinations of multicompartmental neuron models and geometries of recording multielectrodes. Three example benchmarking data sets are generated, i.e., tetrode and polytrode data mimicking in vivo cortical recordings and microelectrode array (MEA) recordings of in vitro activity in salamander retinas. The synthesized example benchmarking data mimics salient features of typical experimental recordings, for example, spike waveforms depending on interspike interval. ViSAPy goes beyond existing methods as it includes biologically realistic model noise, synaptic activation by recurrent spiking networks, finite-sized electrode contacts, and allows for inhomogeneous electrical conductivities. ViSAPy is optimized to allow for generation of long time series of benchmarking data, spanning minutes of biological time, by parallel execution on multi-core computers. ViSAPy is an open-ended tool as it can be generalized to produce benchmarking data for arbitrary recording-electrode geometries and with various levels of complexity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, William R.; Lee, John C.; Baxter, Alan

    Information and measured data from the initial Fort St. Vrain (FSV) high temperature gas reactor core are used to develop a benchmark configuration to validate computational methods for analysis of a full-core, commercial HTR configuration. Large uncertainties in the geometry and composition data for the FSV fuel and core are identified, including: (1) the relative numbers of fuel particles for the four particle types, (2) the distribution of fuel kernel diameters for the four particle types, (3) the Th:U ratio in the initial FSV core, and (4) the buffer thickness for the fissile and fertile particles. Sensitivity studies were performed to assess each of these uncertainties. A number of methods were developed to assist in these studies, including: (1) the automation of MCNP5 input files for FSV using Python scripts, (2) a simple method to verify isotopic loadings in MCNP5 input files, (3) an automated procedure to conduct a coupled MCNP5-RELAP5 analysis for a full-core FSV configuration with thermal-hydraulic feedback, and (4) a methodology for sampling kernel diameters from arbitrary power law and Gaussian PDFs that preserved fuel loading and packing factor constraints. A reference FSV fuel configuration was developed based on having a single-diameter kernel for each of the four particle types, preserving the known uranium and thorium loadings and packing factor (58%). Three fuel models were developed, based on representing the fuel as a mixture of kernels with two diameters, four diameters, or a continuous range of diameters. The fuel particles were put into a fuel compact using either a lattice-based approach or a stochastic packing methodology from RPI, and simulated with MCNP5. The results of the sensitivity studies indicated that the uncertainties in the relative numbers and sizes of fissile and fertile kernels were not important, nor were the distributions of kernel diameters within their diameter ranges. The uncertainty in the Th:U ratio in the initial FSV core was found, in a crude study, to be important. The uncertainty in the TRISO buffer thickness was estimated to be unimportant, but the study was not conclusive. FSV fuel compacts and a regular FSV fuel element were analyzed with MCNP5 and compared with predictions using a modified version of HELIOS that is capable of analyzing TRISO fuel configurations. The HELIOS analyses were performed by SSP. The eigenvalue discrepancies between HELIOS and MCNP5 are currently on the order of 1% but these are still being evaluated. Full-core FSV configurations were developed for two initial critical configurations - a cold, clean critical loading and a critical configuration at 70% power. MCNP5 predictions are compared to experimental data and the results are mixed. Analyses were also done for the pulsed neutron experiments that were conducted by GA for the initial FSV core. MCNP5 was used to model these experiments and reasonable agreement with measured results has been observed.
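
    For illustration only: the kernel-diameter sampling mentioned in item (4) can be sketched as drawing diameters from a truncated Gaussian until a target packing fraction of the compact volume is reached. All numbers below are illustrative assumptions, not FSV specifications, and the function is not the authors' script.

    ```python
    import math
    import random

    def sample_kernels(target_packing, compact_volume, mean_d, sigma_d, d_min, d_max, seed=1):
        """Sample kernel diameters from a truncated Gaussian until the target
        packing fraction of the compact volume is reached (illustrative only)."""
        rng = random.Random(seed)
        target_volume = target_packing * compact_volume
        diameters, volume = [], 0.0
        while volume < target_volume:
            d = rng.gauss(mean_d, sigma_d)
            if d_min <= d <= d_max:          # truncate to the physical diameter range
                diameters.append(d)
                volume += math.pi * d**3 / 6.0
        return diameters, volume / compact_volume

    if __name__ == "__main__":
        # Illustrative numbers (diameters in microns, compact volume in cubic microns).
        ds, packing = sample_kernels(0.58, 1.0e9, 350.0, 25.0, 300.0, 400.0)
        print(len(ds), "kernels, achieved packing fraction", round(packing, 3))
    ```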

  5. A theoretical and experimental benchmark study of core-excited states in nitrogen

    NASA Astrophysics Data System (ADS)

    Myhre, Rolf H.; Wolf, Thomas J. A.; Cheng, Lan; Nandi, Saikat; Coriani, Sonia; Gühr, Markus; Koch, Henrik

    2018-02-01

    The high resolution near edge X-ray absorption fine structure spectrum of nitrogen displays the vibrational structure of the core-excited states. This makes nitrogen well suited for assessing the accuracy of different electronic structure methods for core excitations. We report high resolution experimental measurements performed at the SOLEIL synchrotron facility. These are compared with theoretical spectra calculated using coupled cluster theory and algebraic diagrammatic construction theory. The coupled cluster singles and doubles with perturbative triples model known as CC3 is shown to accurately reproduce the experimental excitation energies as well as the spacing of the vibrational transitions. The computational results are also shown to be systematically improved within the coupled cluster hierarchy, with the coupled cluster singles, doubles, triples, and quadruples method faithfully reproducing the experimental vibrational structure.

  6. Assessment of competency in endoscopy: establishing and validating generalizable competency benchmarks for colonoscopy.

    PubMed

    Sedlack, Robert E; Coyle, Walter J

    2016-03-01

    The Mayo Colonoscopy Skills Assessment Tool (MCSAT) has previously been used to describe learning curves and competency benchmarks for colonoscopy; however, these data were limited to a single training center. The newer Assessment of Competency in Endoscopy (ACE) tool is a refinement of the MCSAT tool put forth by the Training Committee of the American Society for Gastrointestinal Endoscopy, intended to include additional important quality metrics. The goal of this study is to validate the changes made by updating this tool and establish more generalizable and reliable learning curves and competency benchmarks for colonoscopy by examining a larger national cohort of trainees. In a prospective, multicenter trial, gastroenterology fellows at all stages of training had their core cognitive and motor skills in colonoscopy assessed by staff. Evaluations occurred at set intervals of every 50 procedures throughout the 2013 to 2014 academic year. Skills were graded by using the ACE tool, which uses a 4-point grading scale defining the continuum from novice to competent. Average learning curves for each skill were established at each interval in training and competency benchmarks for each skill were established using the contrasting groups method. Ninety-three gastroenterology fellows at 10 U.S. academic institutions had 1061 colonoscopies assessed by using the ACE tool. Average scores of 3.5 were found to be inclusive of all minimal competency thresholds identified for each core skill. Cecal intubation times of less than 15 minutes and independent cecal intubation rates of 90% were also identified as additional competency thresholds during analysis. The average fellow achieved all cognitive and motor skill endpoints by 250 procedures, with >90% surpassing these thresholds by 300 procedures. Nationally generalizable learning curves for colonoscopy skills in gastroenterology fellows are described. Average ACE scores of 3.5, cecal intubation rates of 90%, and intubation times less than 15 minutes are recommended as minimal competency criteria. On average, it takes 250 procedures to achieve competence in colonoscopy. The thresholds found in this multicenter cohort by using the ACE tool are nearly identical to the previously established MCSAT benchmarks and are consistent with recent gastroenterology training recommendations but far higher than current training requirements in other specialties. Copyright © 2016 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
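
    For illustration only: the contrasting groups method places the cut score where the score distributions of a "competent" and a "not yet competent" group intersect. A minimal sketch under normality assumptions follows, with made-up scores rather than the study's ACE data.

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    # Illustrative ACE-style scores (1-4 scale); not the study's data.
    not_competent = np.array([2.4, 2.6, 2.8, 3.0, 3.1, 2.9, 2.7])
    competent     = np.array([3.4, 3.6, 3.5, 3.8, 3.7, 3.9, 3.6])

    mu0, sd0 = not_competent.mean(), not_competent.std(ddof=1)
    mu1, sd1 = competent.mean(), competent.std(ddof=1)

    # Cut score = point between the group means where the two fitted normal densities are equal.
    cut = brentq(lambda x: norm.pdf(x, mu0, sd0) - norm.pdf(x, mu1, sd1), mu0, mu1)
    print(f"contrasting-groups cut score ~ {cut:.2f}")
    ```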

  7. Social Studies on the Outside Looking In: Redeeming the Neglected Curriculum

    ERIC Educational Resources Information Center

    Hermeling, Andrew Dyrli

    2013-01-01

    Many social studies teachers are nervous about the coming of the Common Core State Standards. With so much emphasis placed on literacy, social studies teachers fear they will see content slashed to leave time for meeting the non-fiction standards in English language arts. Already reeling from a lack of attention from the benchmarks put in place by No Child Left Behind,…

  8. Graph Theoretic and Motif Analyses of the Hippocampal Neuron Type Potential Connectome.

    PubMed

    Rees, Christopher L; Wheeler, Diek W; Hamilton, David J; White, Charise M; Komendantov, Alexander O; Ascoli, Giorgio A

    2016-01-01

    We computed the potential connectivity map of all known neuron types in the rodent hippocampal formation by supplementing scantly available synaptic data with spatial distributions of axons and dendrites from the open-access knowledge base Hippocampome.org. The network that results from this endeavor, the broadest and most complete for a mammalian cortical region at the neuron-type level to date, contains more than 3200 connections among 122 neuron types across six subregions. Analyses of these data using graph theory metrics unveil the fundamental architectural principles of the hippocampal circuit. Globally, we identify a highly specialized topology minimizing communication cost; a modular structure underscoring the prominence of the trisynaptic loop; a core set of neuron types serving as information-processing hubs as well as a distinct group of particular antihub neurons; a nested, two-tier rich club managing much of the network traffic; and an innate resilience to random perturbations. At the local level, we uncover the basic building blocks, or connectivity patterns, that combine to produce complex global functionality, and we benchmark their utilization in the circuit relative to random networks. Taken together, these results provide a comprehensive connectivity profile of the hippocampus, yielding novel insights on its functional operations at the computationally crucial level of neuron types.
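
    For illustration only: hub identification and rich-club analysis of this kind can be reproduced on any connectivity matrix with a standard graph library. The sketch below uses networkx on a random toy graph of the same scale (122 nodes, ~3200 edges), not the Hippocampome.org matrix, with a degree-preserving edge-swap baseline.

    ```python
    import statistics
    import networkx as nx

    # Toy undirected network at the scale of the potential connectome; NOT Hippocampome.org data.
    G = nx.gnm_random_graph(122, 3200, seed=42)

    # Hub candidates: nodes whose degree is well above the mean (a random graph has few or none).
    degs = [d for _, d in G.degree()]
    cut = statistics.mean(degs) + 2 * statistics.stdev(degs)
    hubs = [n for n, d in G.degree() if d > cut]

    # Rich-club coefficient of the network versus a degree-preserving randomized baseline.
    rc = nx.rich_club_coefficient(G, normalized=False)
    random_ref = G.copy()
    nx.double_edge_swap(random_ref, nswap=10 * G.number_of_edges(), max_tries=10**6, seed=1)
    rc_rand = nx.rich_club_coefficient(random_ref, normalized=False)

    k = max(k for k in rc if k in rc_rand and rc_rand[k] > 0)
    print(f"{len(hubs)} hub candidates; rich-club ratio at k={k}: {rc[k] / rc_rand[k]:.2f}")
    ```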

  9. ZPR-6 assembly 7 high ²⁴⁰Pu core experiments: a fast reactor core with mixed (Pu,U)-oxide fuel and a central high-²⁴⁰Pu zone.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lell, R. M.; Morman, J. A.; Schaefer, R.W.

    ZPR-6 Assembly 7 (ZPR-6/7) encompasses a series of experiments performed at the ZPR-6 facility at Argonne National Laboratory in 1970 and 1971 as part of the Demonstration Reactor Benchmark Program (Reference 1). Assembly 7 simulated a large sodium-cooled LMFBR with mixed oxide fuel, depleted uranium radial and axial blankets, and a core H/D near unity. ZPR-6/7 was designed to test fast reactor physics data and methods, so configurations in the Assembly 7 program were as simple as possible in terms of geometry and composition. ZPR-6/7 had a very uniform core assembled from small plates of depleted uranium, sodium, iron oxide, U₃O₈ and Pu-U-Mo alloy loaded into stainless steel drawers. The steel drawers were placed in square stainless steel tubes in the two halves of a split table machine. ZPR-6/7 had a simple, symmetric core unit cell whose neutronic characteristics were dominated by plutonium and ²³⁸U. The core was surrounded by thick radial and axial regions of depleted uranium to simulate radial and axial blankets and to isolate the core from the surrounding room. The ZPR-6/7 program encompassed 139 separate core loadings which include the initial approach to critical and all subsequent core loading changes required to perform specific experiments and measurements. In this context a loading refers to a particular configuration of fueled drawers, radial blanket drawers and experimental equipment (if present) in the matrix of steel tubes. Two principal core configurations were established. The uniform core (Loadings 1-84) had a relatively uniform core composition. The high ²⁴⁰Pu core (Loadings 85-139) was a variant on the uniform core. The plutonium in the Pu-U-Mo fuel plates in the uniform core contains 11% ²⁴⁰Pu. In the high ²⁴⁰Pu core, all Pu-U-Mo plates in the inner core region (central 61 matrix locations per half of the split table machine) were replaced by Pu-U-Mo plates containing 27% ²⁴⁰Pu in the plutonium component to construct a central core zone with a composition closer to that in an LMFBR core with high burnup. The high ²⁴⁰Pu configuration was constructed for two reasons. First, the composition of the high ²⁴⁰Pu zone more closely matched the composition of LMFBR cores anticipated in design work in 1970. Second, comparison of measurements in the ZPR-6/7 uniform core with corresponding measurements in the high ²⁴⁰Pu zone provided an assessment of some of the effects of long-term ²⁴⁰Pu buildup in LMFBR cores. The uniform core version of ZPR-6/7 is evaluated in ZPR-LMFR-EXP-001. This document only addresses measurements in the high ²⁴⁰Pu core version of ZPR-6/7. Many types of measurements were performed as part of the ZPR-6/7 program. Measurements of criticality, sodium void worth, control rod worth and reaction rate distributions in the high ²⁴⁰Pu core configuration are evaluated here. For each category of measurements, the uncertainties are evaluated, and benchmark model data are provided.

  10. GET_PHYLOMARKERS, a Software Package to Select Optimal Orthologous Clusters for Phylogenomics and Inferring Pan-Genome Phylogenies, Used for a Critical Geno-Taxonomic Revision of the Genus Stenotrophomonas.

    PubMed

    Vinuesa, Pablo; Ochoa-Sánchez, Luz E; Contreras-Moreira, Bruno

    2018-01-01

    The massive accumulation of genome-sequences in public databases promoted the proliferation of genome-level phylogenetic analyses in many areas of biological research. However, due to diverse evolutionary and genetic processes, many loci have undesirable properties for phylogenetic reconstruction. These, if undetected, can result in erroneous or biased estimates, particularly when estimating species trees from concatenated datasets. To deal with these problems, we developed GET_PHYLOMARKERS, a pipeline designed to identify high-quality markers to estimate robust genome phylogenies from the orthologous clusters, or the pan-genome matrix (PGM), computed by GET_HOMOLOGUES. In the first context, a set of sequential filters are applied to exclude recombinant alignments and those producing anomalous or poorly resolved trees. Multiple sequence alignments and maximum likelihood (ML) phylogenies are computed in parallel on multi-core computers. A ML species tree is estimated from the concatenated set of top-ranking alignments at the DNA or protein levels, using either FastTree or IQ-TREE (IQT). The latter is used by default due to its superior performance revealed in an extensive benchmark analysis. In addition, parsimony and ML phylogenies can be estimated from the PGM. We demonstrate the practical utility of the software by analyzing 170 Stenotrophomonas genome sequences available in RefSeq and 10 new complete genomes of Mexican environmental S. maltophilia complex (Smc) isolates reported herein. A combination of core-genome and PGM analyses was used to revise the molecular systematics of the genus. An unsupervised learning approach that uses a goodness of clustering statistic identified 20 groups within the Smc at a core-genome average nucleotide identity (cgANIb) of 95.9% that are perfectly consistent with strongly supported clades on the core- and pan-genome trees. In addition, we identified 16 misclassified RefSeq genome sequences, 14 of them labeled as S. maltophilia , demonstrating the broad utility of the software for phylogenomics and geno-taxonomic studies. The code, a detailed manual and tutorials are freely available for Linux/UNIX servers under the GNU GPLv3 license at https://github.com/vinuesa/get_phylomarkers. A docker image bundling GET_PHYLOMARKERS with GET_HOMOLOGUES is available at https://hub.docker.com/r/csicunam/get_homologues/, which can be easily run on any platform.

  11. Competency based training in robotic surgery: benchmark scores for virtual reality robotic simulation.

    PubMed

    Raison, Nicholas; Ahmed, Kamran; Fossati, Nicola; Buffi, Nicolò; Mottrie, Alexandre; Dasgupta, Prokar; Van Der Poel, Henk

    2017-05-01

    To develop benchmark scores of competency for use within a competency based virtual reality (VR) robotic training curriculum. This longitudinal, observational study analysed results from nine European Association of Urology hands-on-training courses in VR simulation. In all, 223 participants ranging from novice to expert robotic surgeons completed 1565 exercises. Competency was set at 75% of the mean expert score. Benchmark scores for all general performance metrics generated by the simulator were calculated. Assessment exercises were selected by expert consensus and through learning-curve analysis. Three basic skill and two advanced skill exercises were identified. Benchmark scores based on expert performance offered viable targets for novice and intermediate trainees in robotic surgery. Novice participants met the competency standards for most basic skill exercises; however, advanced exercises were significantly more challenging. Intermediate participants performed better across the seven metrics but still did not achieve the benchmark standard in the more difficult exercises. Benchmark scores derived from expert performances offer relevant and challenging scores for trainees to achieve during VR simulation training. Objective feedback allows both participants and trainers to monitor educational progress and ensures that training remains effective. Furthermore, the well-defined goals set through benchmarking offer clear targets for trainees and enable training to move to a more efficient competency based curriculum. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.
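
    For illustration only: the benchmarking rule described (competency set at 75% of the mean expert score for each simulator metric) is a one-line computation. Metric names and scores below are placeholders, not the study's data.

    ```python
    # Competency benchmark = 75% of the mean expert score for each simulator metric.
    # Metric names and scores are illustrative placeholders, not the study's data.
    expert_scores = {
        "economy_of_motion": [82.0, 90.0, 88.0, 85.0],
        "time_to_complete":  [75.0, 80.0, 78.0, 77.0],
    }

    benchmarks = {metric: 0.75 * sum(vals) / len(vals) for metric, vals in expert_scores.items()}
    for metric, cut in benchmarks.items():
        print(f"{metric}: competency benchmark = {cut:.1f}")
    ```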

  12. [Benchmarking and other functions of ROM: back to basics].

    PubMed

    Barendregt, M

    2015-01-01

    Since 2011 outcome data in Dutch mental health care have been collected on a national scale. This has led to confusion about the position of benchmarking in the system known as routine outcome monitoring (rom). The aim was to provide insight into the various objectives and uses of aggregated outcome data. A qualitative review was performed and the findings were analysed. Benchmarking is a strategy for finding best practices and for improving efficacy, and it belongs to the domain of quality management. Benchmarking involves comparing outcome data by means of instrumentation and is relatively tolerant with regard to the validity of the data. Although benchmarking is a function of rom, it must be differentiated from the other functions of rom. Clinical management, public accountability, research, payment for performance and information for patients are all functions of rom which require different ways of data feedback and which make different demands on the validity of the underlying data. Benchmarking is often wrongly regarded as simply a synonym for 'comparing institutions'. It is, however, a method which includes many more factors; it can be used to improve quality, takes a more flexible approach to the validity of outcome data, and is less concerned than other rom functions with funding and the amount of information given to patients. Benchmarking can make good use of currently available outcome data.

  13. Using relative survival measures for cross-sectional and longitudinal benchmarks of countries, states, and districts: the BenchRelSurv- and BenchRelSurvPlot-macros

    PubMed Central

    2013-01-01

    Background The objective of screening programs is to discover life-threatening diseases in as many patients as early as possible and to increase the chance of survival. To be able to compare aspects of health care quality, methods are needed for benchmarking that allow comparisons on various health care levels (regional, national, and international). Objectives Applications and extensions of algorithms can be used to link the information on disease phases with relative survival rates and to consolidate them in composite measures. The application of the developed SAS-macros will give results for benchmarking of health care quality. Data examples for breast cancer care are given. Methods A reference scale (expected, E) must be defined at a time point at which all benchmark objects (observed, O) are measured. All indices are defined as O/E, whereby the extended standardized screening-index (eSSI), the standardized case-mix-index (SCI), the work-up-index (SWI), and the treatment-index (STI) address different health care aspects. The composite measures called overall-performance evaluation (OPE) and relative overall performance indices (ROPI) link the individual indices differently for cross-sectional or longitudinal analyses. Results Algorithms allow time-point- and time-interval-associated comparisons of the benchmark objects in the indices eSSI, SCI, SWI, STI, OPE, and ROPI. Comparisons between countries, states and districts are possible. As an example, comparisons between two countries are made. The success of early detection and screening programs as well as clinical health care quality for breast cancer can be demonstrated while taking the population’s background mortality into account. Conclusions If external quality assurance programs and benchmark objects are based on population-based and corresponding demographic data, information on disease phase and relative survival rates can be combined into indices which offer approaches for comparative analyses between benchmark objects. Conclusions on screening programs and health care quality are possible. The macros can be transferred to other diseases if a disease-specific phase scale of prognostic value (e.g. stage) exists. PMID:23316692
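
    For illustration only: all of the indices are observed-to-expected (O/E) ratios. The original macros are written in SAS; the snippet below merely restates the O/E construction and an unweighted composite in Python with hypothetical counts, and the actual OPE/ROPI weighting may differ.

    ```python
    # Observed-to-expected (O/E) indices; the originals are SAS macros (BenchRelSurv),
    # this is only an illustrative restatement with hypothetical counts.
    observed = {"eSSI": 420.0, "SCI": 118.0, "SWI": 96.0, "STI": 101.0}
    expected = {"eSSI": 400.0, "SCI": 110.0, "SWI": 100.0, "STI": 100.0}

    indices = {name: observed[name] / expected[name] for name in observed}

    # A simple unweighted composite in the spirit of an overall-performance evaluation;
    # the paper's OPE/ROPI definitions may weight and combine the indices differently.
    ope = sum(indices.values()) / len(indices)

    for name, value in indices.items():
        print(f"{name}: O/E = {value:.2f}")
    print(f"composite (unweighted mean) = {ope:.2f}")
    ```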

  14. Depollution benchmarks for capacitors, batteries and printed wiring boards from waste electrical and electronic equipment (WEEE).

    PubMed

    Savi, Daniel; Kasser, Ueli; Ott, Thomas

    2013-12-01

    The article compiles and analyses sample data for toxic components removed from waste electrical and electronic equipment (WEEE) at more than 30 recycling companies in Switzerland over the past ten years. According to European and Swiss legislation, toxic components like batteries, capacitors and printed wiring boards have to be removed from WEEE. The control bodies of the Swiss take-back schemes have been monitoring the activities of WEEE recyclers in Switzerland for about 15 years. All recyclers have to provide annual mass balance data for every year of operation. From these data, percentage shares of removed batteries and capacitors are calculated in relation to the amount of each respective WEEE category treated. A rationale is developed for why such an indicator should not be calculated for printed wiring boards. The distributions of these depollution indicators are analysed, and their suitability for defining lower threshold values and benchmarks for the depollution of WEEE is discussed. Recommendations for benchmarks and threshold values for the removal of capacitors and batteries are given. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Ultrafast light matter interaction in CdSe/ZnS core-shell quantum dots

    NASA Astrophysics Data System (ADS)

    Yadav, Rajesh Kumar; Sharma, Rituraj; Mondal, Anirban; Adarsh, K. V.

    2018-04-01

    Core-shell quantum dots confine carriers (electrons and holes) in the core/shell structure, which provides a platform to explore linear and nonlinear optical phenomena at the nanoscale limit. Here we present a comprehensive study of the ultrafast excitation dynamics and nonlinear optical absorption of CdSe/ZnS core-shell quantum dots with the help of ultrafast spectroscopy. Pump-probe and time-resolved measurements revealed reduced trapping at the CdSe surface due to the presence of the ZnS shell, which yields more efficient photoluminescence. We have carried out femtosecond transient absorption studies of the CdSe/ZnS core-shell quantum dots by irradiation with 400 nm laser light, monitoring the transients in the visible region. The optical nonlinearity of the core-shell quantum dots was studied using the Z-scan technique with 120 fs pulses at a wavelength of 800 nm. The two-photon absorption coefficient (β) of the core-shell QDs was extracted as 80 cm/GW, and the dots show an excellent benchmark optical limiting onset of 2.5 GW/cm² with a low limiting differential transmittance of 0.10, an order of magnitude better than graphene-based materials.

  16. Towards the development of a consensual chronostratigraphy for Arctic Ocean sedimentary records

    NASA Astrophysics Data System (ADS)

    Hillaire-Marcel, Claude; de Vernal, Anne; Polyak, Leonid; Stein, Rüdiger; Maccali, Jenny; Jacobel, Allison; Cuny, Kristan

    2017-04-01

    Deciphering Arctic paleoceanography and paleoclimate, and linking them to global marine and atmospheric records, is much needed for comprehending the Earth's climate history. However, this task is hampered by multiple problems with dating Arctic Ocean sedimentary records, related notably to low and highly variable sedimentation rates, scarce and discontinuous biogenic proxies due to low productivity and/or poor preservation, and difficulties correlating regional records to global stacks (e.g., paleomagnetic). Despite recent advances in developing an Arctic Ocean sedimentary stratigraphy, and attempts at setting radiometric benchmark ages of respectively 300 and 150 ka based on the final decay of ²³⁰Th and ²³¹Pa excesses (Thxs, Paxs) (Not et al., 2008), consensual age models are still missing, preventing reliable integration of Arctic records in a global paleoclimatic scheme. Here, we intend to illustrate these issues by comparing consistent Thxs-Paxs chronostratigraphic records from the Mendeleev-Alpha and Lomonosov ridges with the currently used age model based on climatostratigraphic interpretation of sedimentary records (e.g., Polyak et al., 2009; Stein et al., 2010). Data used were collected from the 2005 HOTRAX core MC-11 (northern Mendeleev Ridge) and the 2014 Polarstern core PS87-30 (Lomonosov Ridge). Total collapse depths of Thxs and Paxs are observed a factor of 3 deeper in core PS87-30 than in core MC-11, indicating average sedimentation rates 3 times higher at the Lomonosov Ridge site. Litho-biostratigraphic markers, such as foraminiferal peaks and manganese-enriched layers, show a similar pattern, with their occurrence 3 times deeper in core PS87-30 than in core MC-11. These very consistent downcore features highlight a striking difference between the benchmark ages assigned to the total decay of Paxs and Thxs and the current age model based on a climatostratigraphic approach involving significantly higher sedimentation rates. This discrepancy calls for in-depth investigation that could potentially result in the development of a consensual chronostratigraphy for Quaternary Arctic Ocean sediments, critical for integrating the Arctic into global paleoclimatic history.

  17. Relevance of East African Drill Cores to Human Evolution: the Case of the Olorgesailie Drilling Project

    NASA Astrophysics Data System (ADS)

    Potts, R.

    2016-12-01

    Drill cores reaching the local basement of the East African Rift were obtained in 2012 south of the Olorgesailie Basin, Kenya, 20 km from excavations that document key benchmarks in the origin of Homo sapiens. Sediments totaling 216 m were obtained from two drilling locations representing the past 1 million years. The cores were acquired to build a detailed environmental record spatially associated with the transition from Acheulean to Middle Stone Age technology and extensive turnover in mammalian species. The project seeks precise tests of how climate dynamics and tectonic events were linked with these transitions. Core lithology (A.K. Behrensmeyer), geochronology (A. Deino), diatoms (R.B. Owen), phytoliths (R. Kinyanjui), geochemistry (N. Rabideaux, D. Deocampo), among other indicators, show evidence of strong environmental variability in agreement with predicted high-eccentricity modulation of climate during the evolutionary transitions. Increase in hominin mobility, elaboration of symbolic behavior, and concurrent turnover in mammalian species indicating heightened adaptability to unpredictable ecosystems, point to a direct link between the evolutionary transitions and the landscape dynamics reflected in the Olorgesailie drill cores. For paleoanthropologists and Earth scientists, any link between evolutionary transitions and environmental dynamics requires robust evolutionary datasets pertinent to how selection, extinction, population divergence, and other evolutionary processes were impacted by the dynamics uncovered in drill core studies. Fossil and archeological data offer a rich source of data and of robust environment-evolution explanations that must be integrated into efforts by Earth scientists who seek to examine high-resolution climate records of human evolution. Paleoanthropological examples will illustrate the opportunities that exist for connecting evolutionary benchmarks to the data obtained from drilled African muds. Project members: R. Potts, A.K. Behrensmeyer, E. Beverly, K. Brady, J. Bright, E. Brown, J. Clark, A. Cohen, A. Deino, P. deMenocal, D. Deocampo, R. Dommain, J.T. Faith, J. King, R. Kinyanjui, N. Levin, J. Moerman, V. Muiruri, A. Noren, R.B. Owen, N. Rabideaux, R. Renaut, S. Rucina, J. Russell, J. Scott, M. Stockhecke, K. Uno

  18. Benchmarking the ATLAS software through the Kit Validation engine

    NASA Astrophysics Data System (ADS)

    De Salvo, Alessandro; Brasolin, Franco

    2010-04-01

    Measuring the performance of the experiment software is very important in order to choose the most effective resources to be used and to discover the bottlenecks of the code implementation. In this work we present the benchmark techniques used to measure the ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit Validation. The performance measurements, the data collection, and the online analysis and display of the results will be presented. The results of the measurements on different platforms and architectures will be shown, giving a full report on the CPU power and memory consumption of the Monte Carlo generation, simulation, digitization and reconstruction of the most CPU-intensive channels. The impact of multi-core computing on the ATLAS software performance will also be presented, comparing the behavior of different architectures when increasing the number of concurrent processes. The benchmark techniques described in this paper have been used in the HEPiX group since the beginning of 2008 to help define the performance metrics for High Energy Physics applications, based on the real experiment software.

  19. Benchmark duration of work hours for development of fatigue symptoms in Japanese workers with adjustment for job-related stress.

    PubMed

    Suwazono, Yasushi; Dochi, Mirei; Kobayashi, Etsuko; Oishi, Mitsuhiro; Okubo, Yasushi; Tanaka, Kumihiko; Sakata, Kouichi

    2008-12-01

    The objective of this study was to calculate benchmark durations and lower 95% confidence limits for benchmark durations of working hours associated with subjective fatigue symptoms by applying the benchmark dose approach while adjusting for job-related stress using multiple logistic regression analyses. A self-administered questionnaire was completed by 3,069 male and 412 female daytime workers (age 18-67 years) in a Japanese steel company. The eight dependent variables in the Cumulative Fatigue Symptoms Index were decreased vitality, general fatigue, physical disorders, irritability, decreased willingness to work, anxiety, depressive feelings, and chronic tiredness. Independent variables were daily working hours, four subscales (job demand, job control, interpersonal relationship, and job suitability) of the Brief Job Stress Questionnaire, and other potential covariates. Using significant parameters for working hours and those for other covariates, the benchmark durations of working hours were calculated for the corresponding Index property. Benchmark response was set at 5% or 10%. Assuming a condition of worst job stress, the benchmark duration/lower 95% confidence limit for benchmark duration of working hours per day with a benchmark response of 5% or 10% were 10.0/9.4 or 11.7/10.7 (irritability) and 9.2/8.9 or 10.4/9.8 (chronic tiredness) in men and 8.9/8.4 or 9.8/8.9 (chronic tiredness) in women. The threshold amounts of working hours for fatigue symptoms under the worst job-related stress were very close to the standard daily working hours in Japan. The results strongly suggest that special attention should be paid to employees whose working hours exceed threshold amounts based on individual levels of job-related stress.
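
    For illustration only: in the benchmark dose approach, the fitted dose-response model is inverted to find the exposure at which extra risk over background reaches the benchmark response (5% or 10%). The sketch below does this for a logistic model in working hours, with made-up coefficients standing in for the fitted ones (job-stress covariates assumed fixed at their worst level and absorbed into the intercept).

    ```python
    import math
    from scipy.optimize import brentq

    # Hypothetical fitted logistic model: logit P(symptom) = b0 + b1 * hours.
    # Coefficients are illustrative, not the study's estimates.
    b0, b1 = -4.0, 0.25

    def p(hours):
        return 1.0 / (1.0 + math.exp(-(b0 + b1 * hours)))

    def benchmark_duration(bmr):
        """Working hours at which extra risk over background reaches the benchmark response."""
        p0 = p(0.0)
        extra = lambda h: (p(h) - p0) / (1.0 - p0) - bmr
        return brentq(extra, 0.0, 24.0)

    for bmr in (0.05, 0.10):
        print(f"BMR {bmr:.0%}: benchmark duration ~ {benchmark_duration(bmr):.1f} h/day")
    ```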

  20. Neutron Reference Benchmark Field Specification: ACRR 44 Inch Lead-Boron (LB44) Bucket Environment (ACRR-LB44-CC-32-CL).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.

    2015-07-01

    This report was prepared to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the 44 inch Lead-Boron (LB44) bucket reference neutron benchmark field. The field is described; an “a priori” calculated neutron spectrum, based on MCNP6 calculations, is reported; and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  1. Neutron Reference Benchmark Field Specifications: ACRR Polyethylene-Lead-Graphite (PLG) Bucket Environment (ACRR-PLG-CC-32-CL).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.

    2015-07-01

    This report was prepared to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the Polyethylene-Lead-Graphite (PLG) bucket reference neutron benchmark field. The field is described; an “a priori” calculated neutron spectrum, based on MCNP6 calculations, is reported; and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 37 integral dosimetry measurements in the neutron field are reported.

  2. Simulation of Benchmark Cases with the Terminal Area Simulation System (TASS)

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at; Proctor, Fred

    2011-01-01

    The hydrodynamic core of the Terminal Area Simulation System (TASS) is evaluated against different benchmark cases. In the absence of closed-form solutions for the equations governing atmospheric flows, the models are usually evaluated against idealized test cases. Over the years, various authors have suggested a suite of these idealized cases which have become standards for testing and evaluating the dynamics and thermodynamics of atmospheric flow models. In this paper, simulations of three such cases are described. In addition, the TASS model is evaluated against a test case that uses an exact solution of the Navier-Stokes equations. The TASS results are compared against previously reported simulations of these benchmark cases in the literature. It is demonstrated that the TASS model is highly accurate, stable and robust.

  3. Development and Application of Benchmark Examples for Mixed-Mode I/II Quasi-Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation prediction is presented and demonstrated for a commercial code. The examples are based on finite element models of the Mixed-Mode Bending (MMB) specimen. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, quasi-static benchmark examples were created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement relationship from a propagation analysis was compared with the benchmark results, and good agreement could be achieved by selecting the appropriate input parameters; the input parameters that produced this agreement had previously been determined during analyses of mode I Double Cantilever Beam and mode II End Notched Flexure specimens. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.

  4. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction

    PubMed Central

    Puton, Tomasz; Kozlowski, Lukasz P.; Rother, Kristian M.; Bujnicki, Janusz M.

    2013-01-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks. PMID:23435231
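
    For illustration only: benchmarks of this kind typically score a predicted secondary structure against the reference by treating base pairs as sets and reporting sensitivity, positive predictive value and an approximate Matthews correlation. The toy structures below are invented and unrelated to the CompaRNA data sets.

    ```python
    import math

    def pair_metrics(reference, predicted):
        """Sensitivity, PPV and approximate MCC for predicted base pairs (toy example)."""
        ref, pred = set(reference), set(predicted)
        tp = len(ref & pred)
        fp = len(pred - ref)
        fn = len(ref - pred)
        sens = tp / (tp + fn) if tp + fn else 0.0
        ppv = tp / (tp + fp) if tp + fp else 0.0
        # Approximation commonly used for structure comparison, where TN is ill-defined.
        mcc = math.sqrt(sens * ppv)
        return sens, ppv, mcc

    # Toy base-pair lists (i, j), 1-based; not CompaRNA data.
    reference = [(1, 20), (2, 19), (3, 18), (6, 14), (7, 13)]
    predicted = [(1, 20), (2, 19), (4, 17), (6, 14)]
    print("sens=%.2f  ppv=%.2f  mcc~%.2f" % pair_metrics(reference, predicted))
    ```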

  5. Monte Carlo modelling of TRIGA research reactor

    NASA Astrophysics Data System (ADS)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively support various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. Continuous energy cross-section data from recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.

  6. The convergence of European business cycles 1978-2000

    NASA Astrophysics Data System (ADS)

    Ormerod, Paul; Mounfield, Craig

    2002-05-01

    The degree of convergence of the business cycles of the economies of the European Union (EU) is a key policy issue. In particular, a substantial degree of convergence is needed if the European Central Bank is to be capable of setting a monetary policy which is appropriate to the stage of the cycle of the Euro zone economies. We consider the annual rates of real GDP growth on a quarterly basis in the large core economies of the EU (France, Germany and Italy, plus The Netherlands) over the period 1978Q1-2000Q3. An important empirical question is the degree to which the correlations between these growth rates contain true information rather than noise. The technique of random matrix theory is able to answer this question, and has been recently applied successfully in the physics journals to financial markets data. We find that the correlations between the growth rates of the core EU economies contain substantial amounts of true information, and exhibit considerable stability over time. Even in the late 1970s and early 1980s, these economies moved together closely over the course of the business cycle. There was a slight loosening at the time of German re-unification, but the economies are now, if anything, even more closely correlated. As a benchmark for comparison, we add a series to the EU core data set which by construction is uncorrelated with these business cycles. We then analyse the EU core plus Spain, a country which has attached great importance to greater integration with Europe. In the early part of the period examined, the results are very similar to those obtained with the data set of the EU core plus the random series. However, there is a clear trend in the results, which provide strong evidence to support the view that the Spanish economy has now become closely converged with the core EU economies in terms of its movements over the business cycle. In contrast, the results obtained with a data set of the EU core plus the UK show no such trend. In the late 1970s and early 1980s, the UK economy did exhibit some degree of correlation with those of the core EU. However, there is no clear evidence to suggest that the UK business cycle has moved more closely into line with that of the core EU economies over the 1978-2000 period.
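
    For illustration only: the random-matrix test compares the eigenvalues of the empirical correlation matrix of growth rates with the Marchenko-Pastur band expected for pure noise; eigenvalues above the band carry genuine co-movement. The sketch below runs on synthetic series with an injected common factor, not the actual GDP data.

    ```python
    import numpy as np

    # Synthetic stand-in for quarterly growth-rate series: N economies, T observations.
    # A common factor is injected so at least one eigenvalue escapes the noise band.
    rng = np.random.default_rng(0)
    N, T = 5, 91
    common = rng.standard_normal(T)
    data = 0.6 * common + rng.standard_normal((N, T))

    corr = np.corrcoef(data)                 # N x N correlation matrix (rows = series)
    eigvals = np.linalg.eigvalsh(corr)

    # Marchenko-Pastur bounds for a purely random correlation matrix with Q = T / N.
    q = T / N
    lam_min = (1 - np.sqrt(1 / q)) ** 2
    lam_max = (1 + np.sqrt(1 / q)) ** 2

    informative = eigvals[eigvals > lam_max]
    print("noise band: [%.2f, %.2f]" % (lam_min, lam_max))
    print("eigenvalues above the band (true information):", np.round(informative, 2))
    ```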

  7. Student Progress to Graduation in New York City High Schools: A Metric Designed by New Visions for Public Schools. Part I: Core Components

    ERIC Educational Resources Information Center

    Fairchild, Susan; Gunton, Brad; Donohue, Beverly; Berry, Carolyn; Genn, Ruth; Knevals, Jessica

    2011-01-01

    Students who achieve critical academic benchmarks such as high attendance rates, continuous levels of credit accumulation, and high grades have a greater likelihood of success throughout high school and beyond. However, keeping students on track toward meeting graduation requirements and quickly identifying students who are at risk of falling off…

  8. Using Localized Survey Items to Augment Standardized Benchmarking Measures: A LibQUAL+[TM] Study

    ERIC Educational Resources Information Center

    Thompson, Bruce; Cook, Colleen; Kyrillidou, Martha

    2006-01-01

    The LibQUAL+[TM] protocol solicits open-ended comments from users with regard to library service quality, gathers data on 22 core items, and, at the option of individual libraries, also garners ratings on five items drawn from a pool of more than 100 choices selected by libraries. In this article, the relationship of scores on these locally…

  9. Selecting a Benchmark Suite to Profile High-Performance Computing (HPC) Machines

    DTIC Science & Technology

    2014-11-01

    architectures. Machines now contain central processing units (CPUs), graphics processing units (GPUs), and many integrated core (MIC) architectures all...evaluate the feasibility and applicability of a new architecture just released to the market. Researchers are often unsure how available resources will...architectures. Having a suite of programs running on different architectures, such as GPUs, MICs, and CPUs, adds complexity and technical challenges

  10. LU Factorization with Partial Pivoting for a Multi-CPU, Multi-GPU Shared Memory System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurzak, Jakub; Luszczek, Piotr; Faverge, Mathieu

    2012-03-01

    LU factorization with partial pivoting is a canonical numerical procedure and the main component of the High Performance LINPACK benchmark. This article presents an implementation of the algorithm for a hybrid, shared memory, system with standard CPU cores and GPU accelerators. Performance in excess of one TeraFLOPS is achieved using four AMD Magny Cours CPUs and four NVIDIA Fermi GPUs.
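
    For illustration only: the serial algorithm that the hybrid CPU/GPU code accelerates can be written down compactly. Below is a plain NumPy sketch of LU factorization with partial pivoting (no blocking, no look-ahead, no GPU offload), checked via the residual |PA - LU|.

    ```python
    import numpy as np

    def lu_partial_pivot(A):
        """Plain LU factorization with partial pivoting: P @ A = L @ U (no blocking)."""
        A = np.array(A, dtype=float)
        n = A.shape[0]
        piv = np.arange(n)
        for k in range(n - 1):
            p = k + np.argmax(np.abs(A[k:, k]))      # pivot row: largest magnitude in column k
            if p != k:
                A[[k, p]] = A[[p, k]]
                piv[[k, p]] = piv[[p, k]]
            A[k + 1:, k] /= A[k, k]                  # multipliers stored below the diagonal
            A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
        L = np.tril(A, -1) + np.eye(n)
        U = np.triu(A)
        P = np.eye(n)[piv]
        return P, L, U

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        A = rng.standard_normal((6, 6))
        P, L, U = lu_partial_pivot(A)
        print("max |P A - L U| =", np.abs(P @ A - L @ U).max())
    ```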

  11. Numerical modeling of fluid and electrical currents through geometries based on synchrotron X-ray tomographic images of reservoir rocks using Avizo and COMSOL

    NASA Astrophysics Data System (ADS)

    Bird, M. B.; Butler, S. L.; Hawkes, C. D.; Kotzer, T.

    2014-12-01

    The use of numerical simulations to model physical processes occurring within subvolumes of rock samples that have been characterized using advanced 3D imaging techniques is becoming increasingly common. Not only do these simulations allow for the determination of macroscopic properties like hydraulic permeability and electrical formation factor, but they also allow the user to visualize processes taking place at the pore scale and allow multiple different processes to be simulated on the same geometry. Most efforts to date have used specialized research software for the simulations. In this contribution, we outline the steps taken to use the commercial software Avizo to transform a 3D synchrotron X-ray-derived tomographic image of a rock core sample into an STL (STereoLithography) file which can be imported into the commercial multiphysics modeling package COMSOL. We demonstrate the use of COMSOL to perform fluid and electrical current flow simulations through the pore spaces. The permeability and electrical formation factor of the sample are calculated and compared with laboratory-derived values and benchmark calculations. Although the simulation domains that we were able to model on a desktop computer were significantly smaller than representative elementary volumes, we were able to establish Kozeny-Carman and Archie's law trends on which laboratory measurements and previous benchmark solutions fall. The rock core samples include a Fontainebleau sandstone used for benchmarking and a marly dolostone sampled from a well in the Weyburn oil field of southeastern Saskatchewan, Canada. Such carbonates are known to have complicated pore structures compared with sandstones, yet we are able to calculate reasonable macroscopic properties. We discuss the computing resources required.
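
    For illustration only: the two macroscopic trends mentioned can be written down directly; Kozeny-Carman relates permeability to porosity and specific surface, and Archie's law relates the electrical formation factor to porosity. The constants and input values below are illustrative assumptions, not the paper's fitted parameters.

    ```python
    def kozeny_carman(porosity, specific_surface, c=5.0):
        """Permeability k = phi^3 / (c * S^2 * (1 - phi)^2); S is surface area per solid volume."""
        return porosity**3 / (c * specific_surface**2 * (1.0 - porosity) ** 2)

    def archie_formation_factor(porosity, a=1.0, m=2.0):
        """Archie's law: F = a * phi^(-m); a and m are rock-dependent fitting constants."""
        return a * porosity ** (-m)

    if __name__ == "__main__":
        # Illustrative values for a clean sandstone: porosity 20%, S = 1.5e5 1/m.
        phi, S = 0.20, 1.5e5
        print("Kozeny-Carman permeability ~ %.2e m^2" % kozeny_carman(phi, S))
        print("Archie formation factor    ~ %.1f" % archie_formation_factor(phi))
    ```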

  12. Benchmark gamma-ray skyshine experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nason, R.R.; Shultis, J.K.; Faw, R.E.

    1982-01-01

    A benchmark gamma-ray skyshine experiment is described in which ⁶⁰Co sources were either collimated into an upward 150-deg conical beam or shielded vertically by two different thicknesses of concrete. A NaI(Tl) spectrometer and a high pressure ion chamber were used to measure, respectively, the energy spectrum and the 4π exposure rate of the air-reflected gamma photons up to 700 m from the source. Analyses of the data and comparison to DOT discrete ordinates calculations are presented.

  13. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pecchia, M.; D'Auria, F.; Mazzantini, O.

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of the obliquely inserted control rods on the neutron flux in order to validate the RELAP5-3D©/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)

  14. A theoretical and experimental benchmark study of core-excited states in nitrogen

    DOE PAGES

    Myhre, Rolf H.; Wolf, Thomas J. A.; Cheng, Lan; ...

    2018-02-14

    The high resolution near edge X-ray absorption fine structure spectrum of nitrogen displays the vibrational structure of the core-excited states. This makes nitrogen well suited for assessing the accuracy of different electronic structure methods for core excitations. We report high resolution experimental measurements performed at the SOLEIL synchrotron facility. These are compared with theoretical spectra calculated using coupled cluster theory and algebraic diagrammatic construction theory. The coupled cluster singles and doubles with perturbative triples model known as CC3 is shown to accurately reproduce the experimental excitation energies as well as the spacing of the vibrational transitions. In conclusion, the computational results are also shown to be systematically improved within the coupled cluster hierarchy, with the coupled cluster singles, doubles, triples, and quadruples method faithfully reproducing the experimental vibrational structure.

  15. Coupled Neutronics Thermal-Hydraulic Solution of a Full-Core PWR Using VERA-CS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clarno, Kevin T; Palmtag, Scott; Davidson, Gregory G

    2014-01-01

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) is developing a core simulator called VERA-CS to model operating PWR reactors with high resolution. This paper describes how the development of VERA-CS is being driven by a set of progression benchmark problems that specify the delivery of useful capability in discrete steps. As part of this development, this paper will describe the current capability of VERA-CS to perform a multiphysics simulation of an operating PWR at Hot Full Power (HFP) conditions using a set of existing computer codes coupled together in a novel method. Results for several single-assembly cases are shown that demonstrate coupling for different boron concentrations and power levels. Finally, high-resolution results are shown for a full-core PWR reactor modeled in quarter-symmetry.

  16. A theoretical and experimental benchmark study of core-excited states in nitrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myhre, Rolf H.; Wolf, Thomas J. A.; Cheng, Lan

    The high resolution near edge X-ray absorption fine structure spectrum of nitrogen displays the vibrational structure of the core-excited states. This makes nitrogen well suited for assessing the accuracy of different electronic structure methods for core excitations. We report high resolution experimental measurements performed at the SOLEIL synchrotron facility. These are compared with theoretical spectra calculated using coupled cluster theory and algebraic diagrammatic construction theory. The coupled cluster singles and doubles with perturbative triples model known as CC3 is shown to accurately reproduce the experimental excitation energies as well as the spacing of the vibrational transitions. In conclusion, the computational results are also shown to be systematically improved within the coupled cluster hierarchy, with the coupled cluster singles, doubles, triples, and quadruples method faithfully reproducing the experimental vibrational structure.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephen Johnson; Mehdi Salehi; Karl Eisert

    This report describes the progress of our research during the first 30 months (10/01/2004 to 03/31/2007) of the original three-year project cycle. The project was terminated early due to DOE budget cuts. This was a joint project between the Tertiary Oil Recovery Project (TORP) at the University of Kansas and the Idaho National Laboratory (INL). The objective was to evaluate the use of low-cost biosurfactants produced from agriculture process waste streams to improve oil recovery in fractured carbonate reservoirs through wettability mediation. Biosurfactant for this project was produced using Bacillus subtilis 21332 and purified potato starch as the growth medium. The INL team produced the biosurfactant and characterized it as surfactin. INL supplied surfactin as required for the tests at KU as well as providing other microbiological services. Interfacial tension (IFT) between Soltrol 130 and both potential benchmark chemical surfactants and crude surfactin was measured over a range of concentrations. The performance of the crude surfactin preparation in reducing IFT was greater than any of the synthetic compounds throughout the concentration range studied, but at low concentrations sodium laureth sulfate (SLS) was closest to the surfactin and was used as the benchmark in subsequent studies. Core characterization was carried out using both traditional flooding techniques to find porosity and permeability, and NMR/MRI to image cores and identify pore architecture and degree of heterogeneity. A cleaning regime was identified and developed to remove organic materials from cores and crushed carbonate rock. This allowed cores to be fully characterized and returned to a reproducible wettability state when coupled with a crude-oil aging regime. Rapid wettability assessments for crushed matrix material were developed and used to inform slower Amott wettability tests. Initial static absorption experiments exposed limitations in the use of HPLC and TOC to determine surfactant concentrations. To reliably quantify both benchmark surfactants and surfactin, a surfactant ion-selective electrode was used as an indicator in the potentiometric titration of the anionic surfactants with Hyamine 1622. The wettability change mediated by dilute solutions of a commercial preparation of SLS (STEOL CS-330) and surfactin was assessed using two-phase separation and water flotation techniques, and surfactant loss due to retention and adsorption on the rock was determined. Qualitative tests indicated that on a molar basis, surfactin is more effective than STEOL CS-330 in altering the wettability of crushed Lansing-Kansas City carbonates from an oil-wet to a water-wet state. Adsorption isotherms of STEOL CS-330 and surfactin on crushed Lansing-Kansas City outcrop and reservoir material showed that surfactin has higher specific adsorption on these oomoldic carbonates. Amott wettability studies confirmed that cleaned cores are mixed-wet, and that the aging procedure renders them oil-wet. Tests of aged cores with no initial water saturation resulted in very little spontaneous oil production, suggesting that water-wet pathways into the matrix are required for wettability change to occur. Further investigation of spontaneous imbibition and forced imbibition of water and surfactant solutions into LKC cores under a variety of conditions--cleaned vs. crude oil-aged; oil saturated vs. initial water saturation; flooded with surfactant vs. not flooded--indicated that in water-wet or intermediate-wet cores, sodium laureth sulfate is more effective at enhancing spontaneous imbibition through wettability change. However, in more oil-wet systems, surfactin at the same concentration performs significantly better.

  18. Benchmark CCSD(T) and DFT study of binding energies in Be7 - 12: in search of reliable DFT functional for beryllium clusters

    NASA Astrophysics Data System (ADS)

    Labanc, Daniel; Šulka, Martin; Pitoňák, Michal; Černušák, Ivan; Urban, Miroslav; Neogrády, Pavel

    2018-05-01

    We present a computational study of the stability of small homonuclear beryllium clusters Be7-12 in singlet electronic states. Our predictions are based on highly correlated CCSD(T) coupled cluster calculations. Basis set convergence towards the complete basis set limit as well as the role of the 1s core electron correlation are carefully examined. Our CCSD(T) data for binding energies of Be7-12 clusters serve as a benchmark for performance assessment of several density functional theory (DFT) methods frequently used in beryllium cluster chemistry. We observe that, from Be10 clusters on, the deviation from the CCSD(T) benchmarks is stable with respect to size, fluctuating within a 0.02 eV error bar for most of the examined functionals. This opens up the possibility of scaling the DFT binding energies for large Be clusters using CCSD(T) benchmark values for smaller clusters. We also tried to find analogies between the performance of DFT functionals for Be clusters and for the valence-isoelectronic Mg clusters investigated recently in Truhlar's group. We conclude that it is difficult to find DFT functionals that perform reasonably well for both beryllium and magnesium clusters. Out of the 12 functionals examined, only the M06-2X functional gives reasonably accurate and balanced binding energies for both Be and Mg clusters.

  19. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Ediger, David; Jiang, Karl

    2009-02-15

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
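    For readers unfamiliar with the kernel being benchmarked, the following is a minimal serial Brandes-style betweenness centrality sketch in Python; it illustrates only the underlying algorithm, not the lock-free parallel implementation or the SSCA#2 harness described in the abstract.

        # Serial Brandes algorithm for unweighted graphs (illustrative only).
        from collections import deque

        def betweenness_centrality(adj):
            """adj maps each vertex to an iterable of neighbours.
            Returns unnormalized betweenness scores (each direction counted)."""
            bc = {v: 0.0 for v in adj}
            for s in adj:
                stack = []
                pred = {v: [] for v in adj}    # predecessors on shortest paths
                sigma = {v: 0 for v in adj}    # number of shortest paths from s
                dist = {v: -1 for v in adj}
                sigma[s], dist[s] = 1, 0
                queue = deque([s])
                while queue:                   # BFS with shortest-path counting
                    v = queue.popleft()
                    stack.append(v)
                    for w in adj[v]:
                        if dist[w] < 0:
                            dist[w] = dist[v] + 1
                            queue.append(w)
                        if dist[w] == dist[v] + 1:
                            sigma[w] += sigma[v]
                            pred[w].append(v)
                delta = {v: 0.0 for v in adj}
                while stack:                   # dependency accumulation
                    w = stack.pop()
                    for v in pred[w]:
                        delta[v] += (sigma[v] / sigma[w]) * (1.0 + delta[w])
                    if w != s:
                        bc[w] += delta[w]
            return bc

        # Example on a small path graph given as symmetric adjacency lists.
        print(betweenness_centrality({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))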

  20. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Ediger, David; Jiang, Karl

    2009-05-29

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in the HPCS SSCA#2 Graph Analysis benchmark, which has been extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the ThreadStorm processor, and a single-socket Sun multicore server with the UltraSparc T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.

  1. Core Collapse: The Race Between Stellar Evolution and Binary Heating

    NASA Astrophysics Data System (ADS)

    Converse, Joseph M.; Chandar, R.

    2012-01-01

    The dynamical formation of binary stars can dramatically affect the evolution of their host star clusters. In relatively small clusters (M < 6000 Msun) the most massive stars rapidly form binaries, heating the cluster and preventing any significant contraction of the core. The situation in much larger globular clusters (M ~ 10^5 Msun) is quite different, with many showing collapsed cores, implying that binary formation did not affect them as severely as lower mass clusters. More massive clusters, however, should take longer to form their binaries, allowing stellar evolution more time to prevent the heating by causing the larger stars to die off. Here, we simulate the evolution of clusters with masses between those of open and globular clusters in order to find at what size a star cluster is able to experience true core collapse. Our simulations make use of a new GPU-based computing cluster recently purchased at the University of Toledo. We also present some benchmarks of this new computational resource.

  2. Benthic algae of benchmark streams in agricultural areas of eastern Wisconsin

    USGS Publications Warehouse

    Scudder, Barbara C.; Stewart, Jana S.

    2001-01-01

    Multivariate analyses indicated that environmental factors at multiple scales affect algae. Although two-way indicator species analysis (TWINSPAN), detrended correspondence analysis (DCA), and canonical correspondence analysis (CCA) generally separated sites according to RHU, only DCA ordination indicated a separation of sites according to ecoregion. Environmental variables correlated with DCA axes 1 and 2, and therefore indicated as important explanatory factors for algal distribution and abundance, were factors related to stream size, basin land use/cover, geomorphology, hydrogeology, and riparian disturbance. CCA analyses with a more limited set of environmental variables indicated that pH, average width of natural riparian vegetation (segment scale), basin land use/cover, and Q/Q2 were the most important variables affecting the distribution and relative abundance of benthic algae at the 20 benchmark streams.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, Timothy P.; Martz, Roger L.; Kiedrowski, Brian C.

    New unstructured mesh capabilities in MCNP6 (developmental version during summer 2012) show potential for conducting multi-physics analyses by coupling MCNP to a finite element solver such as Abaqus/CAE [2]. Before these new capabilities can be utilized, the ability of MCNP to accurately estimate eigenvalues and pin powers using an unstructured mesh must first be verified. Previous work to verify the unstructured mesh capabilities in MCNP was accomplished using the Godiva sphere [1], and this work attempts to build on that. To accomplish this, a criticality benchmark and a fuel assembly benchmark were used for calculations in MCNP using both the Constructive Solid Geometry (CSG) native to MCNP and the unstructured mesh geometry generated using Abaqus/CAE. The Big Ten criticality benchmark [3] was modeled due to its geometry being similar to that of a reactor fuel pin. The C5G7 3-D Mixed Oxide (MOX) Fuel Assembly Benchmark [4] was modeled to test the unstructured mesh capabilities on a reactor-type problem.

  4. Benchmarking of Neutron Flux Parameters at the USGS TRIGA Reactor in Lakewood, Colorado

    NASA Astrophysics Data System (ADS)

    Alzaabi, Osama E.

    The USGS TRIGA Reactor (GSTR), located at the Denver Federal Center in Lakewood, Colorado, provides opportunities for Colorado School of Mines students to do experimental research in the field of neutron activation analysis. The scope of this thesis is to obtain precise knowledge of the neutron flux parameters at the GSTR. The Colorado School of Mines Nuclear Physics group intends to develop several research projects at the GSTR, which requires precise knowledge of the neutron fluxes and energy distributions in several irradiation locations. The fuel burn-up of the new GSTR fuel configuration and the thermal neutron flux of the core were recalculated since the GSTR core configuration had been changed with the addition of two new fuel elements. Therefore, the MCNP software package was used to incorporate the burn-up of the reactor fuel and to determine the neutron flux at different irradiation locations and at the flux monitoring bores. These simulation results were compared with neutron activation analysis results using activated diluted gold wires. A well-calibrated and stable germanium detector setup as well as fourteen samplers were designed and built to achieve accuracy in the measurement of the neutron flux. Furthermore, the flux monitoring bores of the GSTR core were used for the first time to measure the neutron flux experimentally and to compare to the MCNP simulation. In addition, International Atomic Energy Agency (IAEA) standard materials were used along with USGS national standard materials in a previously well-calibrated irradiation location to benchmark the simulation, germanium detector calibration, and sample measurements against international standards.
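    As context for the gold-wire measurements, a commonly used activation relation (stated here only as an illustration; the thesis documents its own reduction procedure) connects the counted activity of an irradiated monitor to the neutron flux:

        A = N \sigma_{act} \phi \left(1 - e^{-\lambda t_{irr}}\right) e^{-\lambda t_{d}}
        \quad\Rightarrow\quad
        \phi = \frac{A}{N \sigma_{act} \left(1 - e^{-\lambda t_{irr}}\right) e^{-\lambda t_{d}}},

    where A is the measured activity, N the number of target atoms in the wire, \sigma_{act} the effective activation cross section, \lambda the decay constant of the product nuclide (198Au for gold wires), t_{irr} the irradiation time, and t_{d} the decay time before counting.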

  5. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    NASA Astrophysics Data System (ADS)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to reproduce both behaviour from established and widely-used codes and results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  6. Beyond core count: a look at new mainstream computing platforms for HEP workloads

    NASA Astrophysics Data System (ADS)

    Szostek, P.; Nowak, A.; Bitzes, G.; Valsan, L.; Jarp, S.; Dotti, A.

    2014-06-01

    As Moore's Law continues to deliver more and more transistors, the mainstream processor industry is preparing to expand its investments in areas other than simple core count. These new interests include deep integration of on-chip components, advanced vector units, memory, cache and interconnect technologies. We examine these moving trends with parallelized and vectorized High Energy Physics workloads in mind. In particular, we report on practical experience resulting from experiments with scalable HEP benchmarks on the Intel "Ivy Bridge-EP" and "Haswell" processor families. In addition, we examine the benefits of the new "Haswell" microarchitecture and its impact on multiple facets of HEP software. Finally, we report on the power efficiency of new systems.

  7. Defining College Readiness: Where Are We Now, and Where Do We Need to Be? The Progress of Education Reform. Volume 13, Number 2

    ERIC Educational Resources Information Center

    Zinth, Jennifer Dounay

    2012-01-01

    Multiple catalysts are fueling states' increased urgency to establish a definition of "college readiness". Some states are creating a "college readiness" definition that describes what a student will know and be able to do in such core academic courses as English language arts and math, and that identifies items or benchmarks on state assessments…

  8. A web-based system architecture for ontology-based data integration in the domain of IT benchmarking

    NASA Astrophysics Data System (ADS)

    Pfaff, Matthias; Krcmar, Helmut

    2018-03-01

    In the domain of IT benchmarking (ITBM), a variety of data and information are collected. Although these data serve as the basis for business analyses, no unified semantic representation of such data yet exists. Consequently, data analysis across different distributed data sets and different benchmarks is almost impossible. This paper presents a system architecture and prototypical implementation for an integrated data management of distributed databases based on a domain-specific ontology. To preserve the semantic meaning of the data, the ITBM ontology is linked to data sources and functions as the central concept for database access. Thus, additional databases can be integrated by linking them to this domain-specific ontology and are directly available for further business analyses. Moreover, the web-based system supports the process of mapping ontology concepts to external databases by introducing a semi-automatic mapping recommender and by visualizing possible mapping candidates. The system also provides a natural language interface to easily query linked databases. The expected result of this ontology-based approach of knowledge representation and data access is an increase in knowledge and data sharing in this domain, which will enhance existing business analysis methods.
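    As a toy illustration of the semi-automatic mapping recommender idea (not the ITBM system's actual implementation, and using invented column names), candidate database columns can be ranked against an ontology concept by simple name similarity:

        # Toy mapping recommender: rank columns by string similarity to a concept.
        # Illustrative only; the ITBM system uses its own ontology and mapping model.
        from difflib import SequenceMatcher

        def recommend_mappings(concept, columns, top_n=3):
            """Return the top_n (column, similarity) pairs for the given concept."""
            scored = [(col, SequenceMatcher(None, concept.lower(), col.lower()).ratio())
                      for col in columns]
            return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

        # Hypothetical column names from an external benchmarking database.
        columns = ["it_costs_total", "server_count", "total_it_cost_eur", "headcount"]
        print(recommend_mappings("TotalITCost", columns))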

  9. Analysis of dosimetry from the H.B. Robinson unit 2 pressure vessel benchmark using RAPTOR-M3G and ALPAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, G.A.

    2011-07-01

    Document available in abstract form only, full text of document follows: The dosimetry from the H. B. Robinson Unit 2 Pressure Vessel Benchmark is analyzed with a suite of Westinghouse-developed codes and data libraries. The radiation transport from the reactor core to the surveillance capsule and ex-vessel locations is performed by RAPTOR-M3G, a parallel deterministic radiation transport code that calculates high-resolution neutron flux information in three dimensions. The cross-section library used in this analysis is the ALPAN library, an Evaluated Nuclear Data File (ENDF)/B-VII.0-based library designed for reactor dosimetry and fluence analysis applications. Dosimetry is evaluated with the industry-standard SNLRML reactor dosimetry cross-section data library. (authors)

  10. Preliminary organic analyses of the DSDP /JOIDES/ cores - Legs V-IX.

    NASA Technical Reports Server (NTRS)

    Simoneit, B. R.; Burlingame, A. L.

    1972-01-01

    Descriptions of the methods used and results obtained in analyses of deep sea drilling cores. The analyses were performed in two phases (differing in degree of particularization) depending on the amount of core sample available. The results are presented in relation to the ages and to the fossil fauna and flora of the sediments.

  11. Qualitative Analysis of Common Definitions for Core Advanced Pharmacy Practice Experiences

    PubMed Central

    Danielson, Jennifer; Weber, Stanley S.

    2014-01-01

    Objective. To determine how colleges and schools of pharmacy interpreted the Accreditation Council for Pharmacy Education’s (ACPE’s) Standards 2007 definitions for core advanced pharmacy practice experiences (APPEs), and how they differentiated community and institutional practice activities for introductory pharmacy practice experiences (IPPEs) and APPEs. Methods. A cross-sectional, qualitative, thematic analysis was done of survey data obtained from experiential education directors in US colleges and schools of pharmacy. Open-ended responses to invited descriptions of the 4 core APPEs were analyzed using grounded theory to determine common themes. Type of college or school of pharmacy (private vs public) and size of program were compared. Results. Seventy-one schools (72%) with active APPE programs at the time of the survey responded. Lack of strong frequent themes describing specific activities for the acute care/general medicine core APPE indicated that most respondents agreed on the setting (hospital or inpatient) but the student experience remained highly variable. Themes were relatively consistent between public and private institutions, but there were differences across programs of varying size. Conclusion. Inconsistencies existed in how colleges and schools of pharmacy defined the core APPEs as required by ACPE. More specific descriptions of core APPEs would help to standardize the core practice experiences across institutions and provide an opportunity for quality benchmarking. PMID:24954931

  12. Sₙ analysis of the TRX metal lattices with ENDF/B version III data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheeler, F.J.; Pearlstein, S.

    1975-03-01

    Two critical assemblies, designated as thermal-reactor benchmarks TRX-1 and TRX-2 for ENDF/B data testing, were analyzed using the one-dimensional Sₙ-theory code SCAMP. The two assemblies were simple lattices of aluminum-clad, uranium-metal fuel rods in triangular arrays with D₂O as moderator and reflector. The fuel was low-enriched (1.3 percent ²³⁵U), 0.387 inch in diameter, and had an active height of 48 inches. The volume ratio of water to uranium was 2.35 for the TRX-1 lattice and 4.02 for TRX-2. Full-core Sₙ calculations based on Version III data were performed for these assemblies and the results obtained were compared with the measured values of the multiplication factors, the ratio of epithermal-to-thermal neutron capture in ²³⁸U, the ratio of epithermal-to-thermal fission in ²³⁵U, the ratio of ²³⁸U fission to ²³⁵U fission, and the ratio of capture in ²³⁸U to fission in ²³⁵U. Reaction rates were obtained from a central region of the full-core problems. Multigroup cross sections for the reactor calculation were obtained from Sₙ cell calculations with resonance self-shielding calculated using the RABBLE treatment. The results of the analyses are generally consistent with results obtained by other investigators. (auth)

  13. The molecular basis of conformational instability of the ecdysone receptor DNA binding domain studied by in silico and in vitro experiments.

    PubMed

    Szamborska-Gbur, Agnieszka; Rymarczyk, Grzegorz; Orłowski, Marek; Kuzynowski, Tomasz; Jakób, Michał; Dziedzic-Letka, Agnieszka; Górecki, Andrzej; Dobryszycki, Piotr; Ożyhar, Andrzej

    2014-01-01

    The heterodimer of the ecdysone receptor (EcR) and ultraspiracle (Usp), members of the nuclear receptors superfamily, regulates gene expression associated with molting and metamorphosis in insects. The DNA binding domains (DBDs) of the Usp and EcR play an important role in their DNA-dependent heterodimerization. Analysis of the crystal structure of the UspDBD/EcRDBD heterocomplex from Drosophila melanogaster on the hsp27 gene response element, suggested an appreciable similarity between both DBDs. However, the chemical denaturation experiments showed a categorically lower stability for the EcRDBD in contrast to the UspDBD. The aim of our study was an elucidation of the molecular basis of this intriguing instability. Toward this end, we mapped the EcRDBD amino acid sequence positions which have an impact on the stability of the EcRDBD. The computational protein design and in vitro analyses of the EcRDBD mutants indicate that non-conserved residues within the α-helix 2, forming the EcRDBD hydrophobic core, represent a specific structural element that contributes to instability. In particular, the L58 appears to be a key residue which differentiates the hydrophobic cores of UspDBD and EcRDBD and is the main reason for the low stability of the EcRDBD. Our results might serve as a benchmark for further studies of the intricate nature of the EcR molecule.

  14. Use of non-invasive genetics to generate core-area population estimates of a threatened predator in the Superior National Forest, USA

    USGS Publications Warehouse

    Barber-Meyer, Shannon; Ryan, Daniel; Grosshuesch, David; Catton, Timothy; Malick-Wahls, Sarah

    2018-01-01

    core areas and averaged 52.3 (SD=8.3, range=43-59) during 2015-2017 in the larger core areas. We found no evidence for a decrease or increase in abundance during either period. Lynx density estimates were approximately 7-10 times lower than densities of lynx in northern populations at the low of the snowshoe hare (Lepus americanus) population cycle. To our knowledge, our results are the first attempt to estimate abundance, trend and density of lynx in Minnesota using non-invasive genetic capture-mark-recapture. Estimates such as ours provide useful benchmarks for future comparisons by providing a context with which to assess 1) potential changes in forest management that may affect lynx recovery and conservation, and 2) possible effects of climate change on the depth, density, and duration of annual snow cover and correspondingly, potential effects on snowshoe hares as well.

  15. Experimental physics characteristics of a heavy-metal-reflected fast-spectrum critical assembly

    NASA Technical Reports Server (NTRS)

    Heneveld, W. H.; Paschall, R. K.; Springer, T. H.; Swanson, V. A.; Thiele, A. W.; Tuttle, R. J.

    1972-01-01

    A zero-power critical assembly was designed, constructed, and operated for the purpose of conducting a series of benchmark experiments dealing with the physics characteristics of a UN-fueled, Li-cooled, Mo-reflected, drum-controlled compact fast reactor for use with a space-power electric conversion system. The range of the previous experimental investigations has been expanded to include the reactivity effects of: (1) surrounding the reactor with 15.24 cm (6 in.) of polyethylene, (2) reducing the heights of a portion of the upper and lower axial reflectors by factors of 2 and 4, (3) adding 45 kg of W to the core uniformly in two steps, (4) adding 9.54 kg of Ta to the core uniformly, and (5) inserting 2.3 kg of polyethylene into the core proper and determining the effect of a Ta addition on the polyethylene worth.

  16. Structure analysis for hole-nuclei close to 132Sn by a large-scale shell-model calculation

    NASA Astrophysics Data System (ADS)

    Wang, Han-Kui; Sun, Yang; Jin, Hua; Kaneko, Kazunari; Tazaki, Shigeru

    2013-11-01

    The structure of neutron-rich nuclei with a few holes with respect to the doubly magic nucleus 132Sn is investigated by means of large-scale shell-model calculations. For a considerably large model space, including orbitals allowing both neutron and proton core excitations, an effective interaction for the extended pairing-plus-quadrupole model with monopole corrections is tested through detailed comparison between the calculations and experimental data. By using the experimental energy of the core-excited 21/2+ level in 131In as a benchmark, monopole corrections are determined that describe the size of the neutron N=82 shell gap. The level spectra, up to 5 MeV of excitation in 131In, 131Sn, 130In, 130Cd, and 130Sn, are well described and clearly explained by couplings of single-hole orbitals and by core excitations.

  17. Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code

    NASA Astrophysics Data System (ADS)

    Wemple, Charles; Zwermann, Winfried

    2017-09-01

    Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which also requires uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly-processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed and uncertainties in multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
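    The random-sampling idea behind the XSUSA approach can be illustrated generically (this is a stand-in sketch with invented numbers, not the HELIOS2 workflow itself): draw correlated samples of the nuclear data from their covariance matrix, evaluate the model for each sample, and take statistics of the outputs.

        # Generic covariance-based random sampling for uncertainty propagation.
        # The "model" and data below are purely illustrative stand-ins.
        import numpy as np

        rng = np.random.default_rng(seed=0)

        mean = np.array([1.00, 0.50])                 # nominal group-wise parameters
        cov = np.array([[4.0e-4, 1.0e-4],             # assumed covariance matrix
                        [1.0e-4, 2.5e-4]])

        def model(params):
            """Stand-in for a lattice calculation returning a response (e.g. k-inf)."""
            capture, fission = params
            return fission / (0.1 + 0.4 * capture)

        samples = rng.multivariate_normal(mean, cov, size=1000)
        outputs = np.array([model(p) for p in samples])

        print("mean response    :", outputs.mean())
        print("relative std (%) :", 100.0 * outputs.std(ddof=1) / outputs.mean())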

  18. Evaluation of the influence of the definition of an isolated hip fracture as an exclusion criterion for trauma system benchmarking: a multicenter cohort study.

    PubMed

    Tiao, J; Moore, L; Porgo, T V; Belcaid, A

    2016-06-01

    To assess whether the definition of an isolated hip fracture (IHF) used as an exclusion criterion influences the results of trauma center benchmarking. We conducted a multicenter retrospective cohort study with data from an integrated Canadian trauma system. The study population included all patients admitted between 1999 and 2010 to any of the 57 adult trauma centers. Seven definitions of IHF based on diagnostic codes, age, mechanism of injury, and secondary injuries, identified in a systematic review, were used. Trauma centers were benchmarked using risk-adjusted mortality estimates generated using the Trauma Risk Adjustment Model. The agreement between benchmarking results generated under different IHF definitions was evaluated with correlation coefficients on adjusted mortality estimates. Correlation coefficients >0.95 were considered to convey acceptable agreement. The study population consisted of 172,872 patients before exclusion of IHF and between 128,094 and 139,588 patients after exclusion. Correlation coefficients between risk-adjusted mortality estimates generated in populations including and excluding IHF varied between 0.86 and 0.90. Correlation coefficients of estimates generated under different definitions of IHF varied between 0.97 and 0.99, even when analyses were restricted to patients aged ≥65 years. Although the exclusion of patients with IHF has an influence on the results of trauma center benchmarking based on mortality, the definition of IHF in terms of diagnostic codes, age, mechanism of injury and secondary injury has no significant impact on benchmarking results. Results suggest that there is no need to obtain formal consensus on the definition of IHF for benchmarking activities.

  19. Calculation of the Phenix end-of-life test 'Control Rod Withdrawal' with the ERANOS code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiberi, V.

    2012-07-01

    The Institute for Radiological Protection and Nuclear Safety (IRSN) acts as technical support to the French public authorities. As such, IRSN is in charge of the safety assessment of operating and under-construction reactors, as well as future projects. In this framework, one current objective of IRSN is to evaluate the ability and accuracy of numerical tools to foresee the consequences of accidents. Neutronic studies enter the safety assessment from different points of view, among which are the core design and its protection system. They are necessary to evaluate the core behavior in case of accident in order to assess the integrity of the first barrier and the absence of a prompt criticality risk. To reach this objective one main physical quantity has to be evaluated accurately: the neutronic power distribution in the core during the whole reactor lifetime. The Phenix end-of-life tests, carried out in 2009, aim at increasing the experience feedback on sodium-cooled fast reactors. These experiments have been done in the framework of the development of the 4th generation of nuclear reactors. Ten tests have been carried out: 6 on neutronic and fuel aspects, 2 on thermal hydraulics and 2 for the emergency shutdown. Two of them have been chosen for an international exercise on thermal hydraulics and neutronics in the frame of an IAEA Coordinated Research Project. Concerning neutronics, the Control Rod Withdrawal test is relevant for safety because it allows evaluating the capability of calculation tools to compute the radial power distribution on fast reactor core configurations in which the flux field is very deformed. IRSN participated in this benchmark with the ERANOS code developed by CEA for fast reactor studies. This paper presents the results obtained in the framework of the benchmark activity. A relatively good agreement was found with the available measurements, considering the approximations made in the modeling. The work underlines the importance of burn-up calculations in order to have a fine mesh of core concentrations for the calculation of the power distribution. (authors)

  20. Automatic Thread-Level Parallelization in the Chombo AMR Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christen, Matthias; Keen, Noel; Ligocki, Terry

    2011-05-26

    The increasing on-chip parallelism has some substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage the hardware's architectural features. In this paper, we present an approach that automatically introduces thread-level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite difference type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language forms an already used target language for an automatic migration of the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to the auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.

  1. New core-reflector boundary conditions for transient nodal reactor calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, E.K.; Kim, C.H.; Joo, H.K.

    1995-09-01

    New core-reflector boundary conditions designed for the exclusion of the reflector region in transient nodal reactor calculations are formulated. Spatially flat frequency approximations for the temporal neutron behavior and two types of transverse leakage approximations in the reflector region are introduced to solve the transverse-integrated time-dependent one-dimensional diffusion equation and then to obtain relationships between net current and flux at the core-reflector interfaces. To examine the effectiveness of new core-reflector boundary conditions in transient nodal reactor computations, nodal expansion method (NEM) computations with and without explicit representation of the reflector are performed for Laboratorium fuer Reaktorregelung und Anlagen (LRA) boiling water reactor (BWR) and Nuclear Energy Agency Committee on Reactor Physics (NEACRP) pressurized water reactor (PWR) rod ejection kinetics benchmark problems. Good agreement between two NEM computations is demonstrated in all the important transient parameters of two benchmark problems. A significant amount of CPU time saving is also demonstrated with the boundary condition model with transverse leakage (BCMTL) approximations in the reflector region. In the three-dimensional LRA BWR, the BCMTL and the explicit reflector model computations differ by approximately 4% in transient peak power density while the BCMTL results in >40% of CPU time saving by excluding both the axial and the radial reflector regions from explicit computational nodes. In the NEACRP PWR problem, which includes six different transient cases, the largest difference is 24.4% in the transient maximum power in the one-node-per-assembly B1 transient results. This difference in the transient maximum power of the B1 case is shown to reduce to 11.7% in the four-node-per-assembly computations. As for the computing time, BCMTL is shown to reduce the CPU time >20% in all six transient cases of the NEACRP PWR.
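    For orientation, the static one-group albedo form of such a current-to-flux relation can be written as follows (shown only as an illustration of the general form; the paper derives time-dependent, transverse-integrated versions using flat-frequency and transverse-leakage approximations):

        \alpha = \frac{J^{-}}{J^{+}}, \qquad
        \left.\frac{J}{\phi}\right|_{\text{core-reflector interface}}
        = \frac{J^{+} - J^{-}}{2\,(J^{+} + J^{-})}
        = \frac{1 - \alpha}{2\,(1 + \alpha)},

    where J^{+} and J^{-} are the partial currents leaving and re-entering the core and \alpha is the reflector albedo.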

  2. A Numerical Study on the Edgewise Compression Strength of Sandwich Structures with Facesheet-Core Disbonds

    NASA Technical Reports Server (NTRS)

    Bergan, Andrew C.

    2017-01-01

    Damage tolerant design approaches require determination of critical damage modes and flaw sizes in order to establish nondestructive evaluation detection requirements. A finite element model is developed to assess the effect of circular facesheet-core disbonds on the strength of sandwich specimens subjected to edgewise compressive loads for the purpose of predicting the critical flaw size for a variety of design parameters. Postbuckling analyses are conducted in which an initial imperfection is seeded using results from a linear buckling analysis. Both the virtual crack closure technique (VCCT) and cohesive elements are considered for modeling disbond growth. Predictions from analyses using the VCCT and analyses using cohesive elements are in good correlation. A series of parametric analyses are conducted to investigate the effect of core thickness and material, facesheet layup, facesheet-core interface properties, and curvature on the criticality of facesheet-core disbonds of various sizes. The results from these analyses provide a basis for determining the critical flaw size for facesheet-core disbonds subjected to edgewise compression loads and, therefore, nondestructive evaluation flaw detection requirements for this configuration.

  3. Fabrication and Benchmarking of a Stratix V FPGA with Monolithic Integrated Microfluidic Cooling

    DTIC Science & Technology

    2017-03-01

    run. The output from all cores was monitored through the Altera SignalTap tool in order to detect glitches which occurred in the output...dependence on temperature, and static/leakage power, which comes from several components, such as subthreshold leakage, gate leakage, and reverse-bias...junction current. Subthreshold leakage current tends to be the most significant temperature-dependent component of the power [6,7] and is given by

  4. Interior Head Impact Protective Components and Materials for Use in US Army Vehicles

    DTIC Science & Technology

    2015-08-01

    benchmarked the automotive industry to identify potential commercial-off-the-shelf (COTS) materials. TARDEC initially tested the energy attenuating...this effort leverages the performance criterion used in the automotive industry according to SAE TP201U-01, FMVSS (Federal Motor Vehicle Safety...of the core material not being fully engaged on the Ancra tract. The backing of material ID 14 was reinforced with steel; this resulted in the

  5. Ab initio calculations, structure, NBO and NCI analyses of X-H⋯π interactions

    NASA Astrophysics Data System (ADS)

    Wu, Qiyang; Su, He; Wang, Hongyan; Wang, Hui

    2018-02-01

    The performance of ab initio methods (MP2, DFT/B3LYP, random-phase approximation (RPA), CCSD(T) and QCISD(T)) in predicting the interaction energy of X-H⋯π (X-H = HCCH, HCl, HF; π = C2H2, C2H4, C6H6) hydrogen-bonded complexes is assessed systematically. The CCSD(T)/CBS benchmarks of the interaction energy are reported. It is found that RPA agrees well with the CCSD(T)/CBS benchmarks and experimental results. CCSD(T) and QCISD(T) perform the best only when compared with the CCSD(T)/CBS benchmarks, while MP2 performs well only against experimental data. B3LYP provides the worst accuracy. Additionally, the equilibrium structure and interaction type of the X-H⋯π hydrogen-bonded complexes are investigated by natural bond orbital (NBO) and non-covalent interaction index (NCI) analyses.
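    As background, one widely used two-point scheme for estimating the complete-basis-set (CBS) limit of the correlation energy, together with the supermolecular definition of the interaction energy, is given below; this is a generic illustration, since the abstract does not specify the exact extrapolation recipe used.

        E_{\mathrm{corr}}^{\mathrm{CBS}} \approx
        \frac{X^{3} E_{\mathrm{corr}}(X) - Y^{3} E_{\mathrm{corr}}(Y)}{X^{3} - Y^{3}},
        \qquad
        \Delta E_{\mathrm{int}} = E_{AB} - E_{A} - E_{B},

    where X > Y are the cardinal numbers of two correlation-consistent basis sets and the interaction energy is commonly counterpoise-corrected for basis set superposition error.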

  6. Benchmark concentrations for methyl mercury obtained from the 9-year follow-up of the Seychelles Child Development Study.

    PubMed

    van Wijngaarden, Edwin; Beck, Christopher; Shamlaye, Conrad F; Cernichiari, Elsa; Davidson, Philip W; Myers, Gary J; Clarkson, Thomas W

    2006-09-01

    Methyl mercury (MeHg) is highly toxic to the developing nervous system. Human exposure is mainly from fish consumption since small amounts are present in all fish. Findings of developmental neurotoxicity following high-level prenatal exposure to MeHg raised the question of whether children whose mothers consumed fish contaminated with background levels during pregnancy are at an increased risk of impaired neurological function. Benchmark doses determined from studies in New Zealand, and the Faroese and Seychelles Islands indicate that a level of 4-25 parts per million (ppm) measured in maternal hair may carry a risk to the infant. However, there are numerous sources of uncertainty that could affect the derivation of benchmark doses, and it is crucial to continue to investigate the most appropriate derivation of safe consumption levels. Earlier, we published the findings from benchmark analyses applied to the data collected on the Seychelles main cohort at the 66-month follow-up period. Here, we expand on the main cohort analyses by determining the benchmark doses (BMD) of MeHg level in maternal hair based on 643 Seychellois children for whom 26 different neurobehavioral endpoints were measured at 9 years of age. Dose-response models applied to these continuous endpoints incorporated a variety of covariates and included the k-power model, the Weibull model, and the logistic model. The average 95% lower confidence limit of the BMD (BMDL) across all 26 endpoints varied from 20.1 ppm (range=17.2-22.5) for the logistic model to 20.4 ppm (range=17.9-23.0) for the k-power model. These estimates are somewhat lower than those obtained after 66 months of follow-up. The Seychelles Child Development Study continues to provide a firm scientific basis for the derivation of safe levels of MeHg consumption.
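    To make the modeling terminology concrete, a generic form of the k-power dose-response model for a continuous endpoint, with the benchmark dose (BMD) defined by a specified benchmark response (BMR), can be written as follows; this is an illustrative parameterization, not necessarily the exact one used in the study.

        \mu(d) = \beta_{0} + \beta_{1} d^{k} + \boldsymbol{\gamma}^{\mathsf T}\mathbf{z},
        \qquad
        |\mu(\mathrm{BMD}) - \mu(0)| = \mathrm{BMR}
        \;\Rightarrow\;
        \mathrm{BMD} = \left(\frac{\mathrm{BMR}}{|\beta_{1}|}\right)^{1/k},

    where d is the maternal hair mercury level, z is a vector of covariates, and the BMDL is the 95% lower confidence limit on the BMD.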

  7. Late Holocene sedimentation in coastal areas of the northwestern Ross Sea (Antarctica)

    NASA Astrophysics Data System (ADS)

    Colizza, Ester; Finocchiaro, Furio; Kuhn, Gerhard; Langone, Leonardo; Melis, Romana; Mezgec, Karin; Severi, Mirko; Traversi, Rita; Udisti, Roberto; Stenni, Barbara; Braida, Martina

    2013-04-01

    Sediment cores and box cores collected in two coastal areas of the northwestern Ross Sea (Antarctica) highlight the possibility of studying the Late Holocene period in detail. In this work we propose a study of two box cores and two gravity cores collected in the Cape Hallett and Wood Bay areas during the 2005 PNRA oceanographic cruise. The two sites are fed by the East Antarctic Ice Sheet (EAIS) and previous studies have highlighted a complex postglacial sedimentary sequence, also influenced by local morphology. This study is performed within the framework of the PNRA-ESF PolarCLIMATE HOLOCLIP (Holocene climate variability at high-southern latitudes: an integrated perspective) Project. The data set includes: magnetic susceptibility, X-ray analyses, 210Pb and 14C dating, diatom and foraminifera assemblages, organic carbon, and grain-size analyses. Furthermore, XRF core scanner analyses, colour analysis from digital images, and major, minor and trace element concentration analyses (ICP-AES) are performed. Data show that the box core and upper core sediments represent very recent sedimentation in which it is possible to observe parameter variability probably linked to climate variability/changes: these variations will be compared with isotopic records from ice cores collected from the same Antarctic sector.

  8. Systematic void fraction studies with RELAP5, FRANCESCA and HECHAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stosic, Z.; Preusche, G.

    1996-08-01

    In enhancing the scope of standard thermal-hydraulic code applications beyond their capabilities, i.e. coupling with a one- and/or three-dimensional kinetics core model, the void fraction transferred from the thermal-hydraulics to the core model plays a determining role in the normal operating range and at high core flow, as the generated heat and axial power profiles are direct functions of the void distribution in the core. Hence, it is very important to know if the void-quality models in the programs which have to be coupled are compatible, to allow the interactive exchange of data which are based on these constitutive void-quality relations. The presented void fraction study is performed in order to give the basis for the conclusion whether a transient core simulation using the RELAP5 void fractions can calculate the axial power shapes adequately. Because of that, the void fractions calculated with RELAP5 are compared with those calculated by the BWR safety code for licensing, FRANCESCA, and by the best-estimate model for pre- and post-dryout calculation in a BWR heated channel, HECHAN. In addition, a comparison with standard experimental void-quality benchmark tube data is performed for the HECHAN code.

  9. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    DOE PAGES

    Yankov, Artem; Collins, Benjamin; Klein, Markus; ...

    2012-01-01

    For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the "two-step" method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
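    For reference, the generalized-perturbation-theory step of such a two-step scheme typically yields relative sensitivity coefficients that can be folded with the cross-section covariance data through the familiar sandwich rule (stated generically here; the specific implementation used in the benchmark is described in the paper):

        \left(\frac{\Delta R}{R}\right)^{2} \approx \mathbf{S}^{\mathsf T}\,\mathbf{C}_{\sigma}\,\mathbf{S},
        \qquad
        S_{i} = \frac{\partial R / R}{\partial \sigma_{i} / \sigma_{i}},

    where R is a response such as the multiplication factor, S is the vector of relative sensitivities, and C_sigma is the relative covariance matrix of the multi-group cross sections.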

  10. Neutron Deep Penetration Calculations in Light Water with Monte Carlo TRIPOLI-4® Variance Reduction Techniques

    NASA Astrophysics Data System (ADS)

    Lee, Yi-Kang

    2017-09-01

    Nuclear decommissioning takes place in several stages due to the radioactivity in the reactor structure materials. A good estimation of the neutron activation products distributed in the reactor structure materials obviously impacts the decommissioning planning and the low-level radioactive waste management. The continuous-energy Monte Carlo radiation transport code TRIPOLI-4 has been applied to radiation protection and shielding analyses. To enhance the TRIPOLI-4 application in nuclear decommissioning activities, both experimental and computational benchmarks are being performed. To calculate the neutron activation of the shielding and structure materials of nuclear facilities, the 3D neutron flux map and energy spectra must first be investigated. To perform this type of neutron deep-penetration calculation with the Monte Carlo transport code, variance reduction techniques are necessary in order to reduce the uncertainty of the neutron activation estimation. In this study, variance reduction options of the TRIPOLI-4 code were used on the NAIADE 1 light water shielding benchmark. This benchmark document is available from the OECD/NEA SINBAD shielding benchmark database. From this benchmark database, a simplified NAIADE 1 water shielding model was first proposed in this work in order to make the code validation easier. Determination of the fission neutron transport was performed in light water for penetration up to 50 cm for fast neutrons and up to about 180 cm for thermal neutrons. Measurement and calculation results were benchmarked. Variance reduction options and their performance were discussed and compared.

  11. Benchmark results and theoretical treatments for valence-to-core x-ray emission spectroscopy in transition metal compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mortensen, D. R.; Seidler, G. T.; Kas, Joshua J.

    We report measurement of the valence-to-core (VTC) region of the K-shell x-ray emission spectra from several Zn and Fe inorganic compounds, and their critical comparison with several existing theoretical treatments. We find generally good agreement between the respective theories and experiment, and in particular find an important admixture of dipole and quadrupole character for Zn materials that is much weaker in Fe-based systems. These results on materials whose simple crystal structures should not, a priori, pose deep challenges to theory will prove useful in guiding the further development of DFT and time-dependent DFT methods for VTC-XES predictions and their comparison to experiment.

  12. CORAL: aligning conserved core regions across domain families.

    PubMed

    Fong, Jessica H; Marchler-Bauer, Aron

    2009-08-01

    Homologous protein families share highly conserved sequence and structure regions that are frequent targets for comparative analysis of related proteins and families. Many protein families, such as the curated domain families in the Conserved Domain Database (CDD), exhibit similar structural cores. To improve accuracy in aligning such protein families, we propose a profile-profile method CORAL that aligns individual core regions as gap-free units. CORAL computes optimal local alignment of two profiles with heuristics to preserve continuity within core regions. We benchmarked its performance on curated domains in CDD, which have pre-defined core regions, against COMPASS, HHalign and PSI-BLAST, using structure superpositions and comprehensive curator-optimized alignments as standards of truth. CORAL improves alignment accuracy on core regions over general profile methods, returning a balanced score of 0.57 for over 80% of all domain families in CDD, compared with the highest balanced score of 0.45 from other methods. Further, CORAL provides E-values to aid in detecting homologous protein families and, by respecting block boundaries, produces alignments with improved 'readability' that facilitate manual refinement. CORAL will be included in future versions of the NCBI Cn3D/CDTree software, which can be downloaded at http://www.ncbi.nlm.nih.gov/Structure/cdtree/cdtree.shtml. Supplementary data are available at Bioinformatics online.

  13. Monodisperse core/shell Ni/FePt nanoparticles and their conversion to Ni/Pt to catalyze oxygen reduction

    DOE PAGES

    Zhang, Sen; Hao, Yizhou; Su, Dong; ...

    2014-10-28

    We report a size-controllable synthesis of monodisperse core/shell Ni/FePt nanoparticles (NPs) via seed-mediated growth and their subsequent conversion to Ni/Pt NPs. Preventing surface oxidation of the Ni seeds is essential for the growth of uniform FePt shells. These Ni/FePt NPs have a thin (≈1 nm) FePt shell and can be converted to Ni/Pt by an acetic acid wash to yield active catalysts for the oxygen reduction reaction (ORR). Tuning the core size allows for optimization of their electrocatalytic activity. The specific activity and mass activity of 4.2 nm/0.8 nm core/shell Ni/FePt reach 1.95 mA/cm² and 490 mA/mg Pt at 0.9 V (vs. reversible hydrogen electrode, RHE), which are much higher than those of the benchmark commercial Pt catalyst (0.34 mA/cm² and 92 mA/mg Pt at 0.9 V). Our studies provide a robust approach to monodisperse core/shell NPs with non-precious metal cores, making it possible to develop advanced NP catalysts with ultralow Pt content for the ORR and many other heterogeneous reactions.

  14. Highly Efficient Parallel Multigrid Solver For Large-Scale Simulation of Grain Growth Using the Structural Phase Field Crystal Model

    NASA Astrophysics Data System (ADS)

    Guan, Zhen; Pekurovsky, Dmitry; Luce, Jason; Thornton, Katsuyo; Lowengrub, John

    The structural phase field crystal (XPFC) model can be used to model grain growth in polycrystalline materials at diffusive time scales while maintaining atomic-scale resolution. However, the governing equation of the XPFC model is an integro-partial-differential equation (IPDE), which poses challenges for implementation on high performance computing (HPC) platforms. In collaboration with the XSEDE Extended Collaborative Support Service, we developed a distributed-memory HPC solver for the XPFC model that combines parallel multigrid with P3DFFT. Performance benchmarking on the Stampede supercomputer indicates near-linear strong and weak scaling, up to 1024 cores, for both the multigrid solver and the data transfer between the multigrid and FFT modules. Scalability of the FFT module begins to decline at 128 cores, but it is sufficient for the type of problem we will be examining. We have demonstrated simulations using 1024 cores, and we expect to reach 4096 cores and beyond. Ongoing work involves optimization of MPI/OpenMP-based codes for the Intel KNL many-core architecture. This prepares the code for upcoming pre-exascale systems, in particular many-core systems such as Stampede 2.0 and Cori 2 at NERSC, without sacrificing efficiency on other general HPC systems.
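
    The scaling claims above rest on standard strong-scaling bookkeeping, sketched below in Python. The timings are invented placeholders, not measurements from the XPFC solver or Stampede; the point is only how speedup and parallel efficiency are computed from run times at different core counts.

    timings = {64: 512.0, 128: 260.0, 256: 133.0, 512: 69.0, 1024: 38.0}  # seconds (hypothetical)

    p0 = min(timings)
    t0 = timings[p0]
    for p in sorted(timings):
        speedup = t0 / timings[p]            # relative to the smallest run
        efficiency = speedup / (p / p0)      # 1.0 means perfectly linear scaling
        print(f"{p:5d} cores  speedup {speedup:5.2f}  efficiency {efficiency:4.2f}")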

  15. Striking similarities in temporal changes to spring sea ice occurrence across the central Canadian Arctic Archipelago over the last 7000 years

    NASA Astrophysics Data System (ADS)

    Belt, Simon T.; Vare, Lindsay L.; Massé, Guillaume; Manners, Hayley R.; Price, John C.; MacLachlan, Suzanne E.; Andrews, John T.; Schmidt, Sabine

    2010-12-01

    A 7000 year spring sea ice record for Victoria Strait (ARC-4) and Dease Strait (ARC-5) in the Canadian Arctic Archipelago (CAA) has been determined by quantification of the sea ice diatom-derived biomarker IP25 in two marine sediment piston cores obtained in 2005. The chronologies of the ARC-4 and ARC-5 cores were determined using a combination of ¹⁴C AMS dates obtained from macrobenthic fossils and magnetic susceptibility measurements. The ages of the tops of the piston cores were estimated by matching chemical and physical parameters with those obtained from corresponding box cores. These analyses revealed that, while the top of the ARC-4 piston core was estimated to be essentially modern (ca. 60 cal yr BP), a few hundred years of sediment appeared to be absent from the ARC-5 piston core. Downcore changes to IP25 fluxes for both cores were interpreted in terms of variations in spring sea ice occurrence, and correlations between the individual IP25 flux profiles for Victoria Strait, Dease Strait and Barrow Strait (reported previously) were shown to be statistically significant at both 50- and 100-year resolutions. The IP25 data indicate lower spring sea ice occurrence during the early part of the record (ca. 7.0-3.0 cal kyr BP) and for parts of the late Holocene (ca. 1.5-0.8 cal kyr BP), especially for the two lower latitude study locations. In contrast, higher spring sea ice occurrence existed during ca. 3.0-1.5 cal kyr BP and after ca. 800 cal yr BP. The observation of consecutively lower and then higher spring sea ice occurrence during these two periods of the late Holocene coincides broadly with the Medieval Warm Period and Little Ice Age epochs, respectively. The IP25 data are complemented by particle size and mineralogical data, although these may alternatively reflect changes in sea level at the study sites. The IP25 data are also compared with previous proxy-based determinations of palaeo sea ice and palaeoclimate for the CAA, including those based on bowhead whale remains and dinocyst assemblages. The spatial consistency in the proxy data, which most notably indicates an increase in spring sea ice occurrence around 3 cal kyr BP, provides a potentially useful benchmark for the termination of the Holocene Thermal Maximum in the central CAA.

  16. 77 FR 57090 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-17

    ... bonus payments to three-star plans and eliminating the cap on blended county benchmarks that would... supplement what can be learned from the analyses of administrative and financial data for MAOs, and from an...

  17. Particle shape analysis of volcanic clast samples with the Matlab tool MORPHEO

    NASA Astrophysics Data System (ADS)

    Charpentier, Isabelle; Sarocchi, Damiano; Rodriguez Sedano, Luis Angel

    2013-02-01

    This paper presents a modular Matlab tool, namely MORPHEO, devoted to the study of particle morphology by Fourier analysis. A benchmark made of four sample images with different features (digitized coins, a pebble chart, gears, digitized volcanic clasts) is then proposed to assess the abilities of the software. Attention is brought to the Weibull distribution introduced to enhance fine variations of particle morphology. Finally, as an example, samples pertaining to a lahar deposit located in La Lumbre ravine (Colima Volcano, Mexico) are analysed. MORPHEO and the benchmark are freely available for research purposes.
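
    As a minimal illustration of Fourier-based particle shape analysis in the spirit of MORPHEO, the Python sketch below samples the outline radius about the centroid and inspects its Fourier spectrum; low harmonics describe gross form and higher harmonics roughness. The function name, normalisation, and synthetic outline are assumptions for illustration, not MORPHEO's actual descriptors or Weibull treatment.

    import numpy as np

    def fourier_descriptors(x, y, n_harmonics=10):
        xc, yc = x.mean(), y.mean()
        theta = np.arctan2(y - yc, x - xc)
        r = np.hypot(x - xc, y - yc)
        r_sorted = r[np.argsort(theta)]              # radius as a function of angle
        spectrum = np.abs(np.fft.rfft(r_sorted))
        return spectrum[1:n_harmonics + 1] / spectrum[0]   # normalise by the mean-radius term

    # Example: a slightly noisy circle has very small harmonic amplitudes.
    t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    x = np.cos(t) + 0.01 * np.random.randn(256)
    y = np.sin(t) + 0.01 * np.random.randn(256)
    print(fourier_descriptors(x, y, n_harmonics=5))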

  18. A Split Forcing Technique to Reduce Log-layer Mismatch in Wall-modeled Turbulent Channel Flows

    NASA Astrophysics Data System (ADS)

    Deleon, Rey; Senocak, Inanc

    2016-11-01

    The conventional approach to sustaining a flow field in a periodic channel flow appears to be the culprit behind the log-law mismatch problem reported in many studies hybridizing Reynolds-averaged Navier-Stokes (RANS) and large-eddy simulation (LES) techniques, commonly referred to as hybrid RANS-LES. To address this issue, we propose a split-forcing approach that relies only on the conservation of mass principle. We adopt a basic hybrid RANS-LES technique on a coarse mesh with wall-stress boundary conditions to simulate turbulent channel flows at friction Reynolds numbers of 2000 and 5200 and demonstrate good agreement with benchmark data. We also report a duality in velocity scale that is a specific consequence of the split-forcing framework applied to hybrid RANS-LES. The first scale is the friction velocity derived from the wall shear stress. The second scale arises in the core LES region and takes a value different from that at the wall. Second-order turbulence statistics agree well with the benchmark data when normalized by the core friction velocity, whereas the friction velocity at the wall remains the appropriate scale for the mean velocity profile. Based on our findings, we suggest reevaluating more sophisticated hybrid RANS-LES approaches within the split-forcing framework. Work funded by the National Science Foundation under Grants No. 1056110 and 1229709. The first author acknowledges the University of Idaho President's Doctoral Scholars Award.

  19. Spatial distribution and potential biological risk of some metals in relation to granulometric content in core sediments from Chilika Lake, India.

    PubMed

    Barik, Saroja K; Muduli, Pradipta R; Mohanty, Bita; Rath, Prasanta; Samanta, Srikanta

    2018-01-01

    The article presents the first systematic report on the concentration of selected major elements [iron (Fe) and manganese (Mn)] and minor elements [zinc (Zn), copper (Cu), chromium (Cr), lead (Pb), nickel (Ni), and cobalt (Co)] in core sediment from Chilika Lake, India. The analyzed samples revealed a higher content of Pb than the background levels across the entire study area. The extent of contamination from minor and major elements is expressed by assessing (i) the metal enrichment in the sediment through calculation of the anthropogenic factor (AF), pollution load index (PLI), enrichment factor (EF), and geoaccumulation index (Igeo) and (ii) potential biological risks through sediment quality guidelines such as the effect range median (ERM) and effect range low (ERL) benchmarks. The estimated indices indicated that the sediment is enriched with Pb, Ni, Cr, Cu and Co. The enrichment of these elements appears to be due to the fine granulometric characteristics of the sediment, with Fe and Mn oxyhydroxides acting as the main metal carriers, and to fishing boats using low-grade paints and fuel and fishing gear with lead beads fixed to the nets. Trace element input to Chilika Lake needs to be monitored, with due emphasis on Cr and Pb contamination, since the ERM and ERL benchmarks indicated potential biological risk from these metals.
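
    For readers unfamiliar with the indices named above, the Python sketch below computes the commonly used forms of Igeo, EF, and PLI. The concentrations, background values, and Fe normaliser are invented placeholders, not the Chilika Lake data, and the choice of background and conservative element varies between studies.

    import math

    def igeo(c, background):
        return math.log2(c / (1.5 * background))          # 1.5 compensates for background variability

    def enrichment_factor(c, c_ref, bg, bg_ref):
        """EF normalised to a conservative element (assumed here to be Fe)."""
        return (c / c_ref) / (bg / bg_ref)

    def pollution_load_index(concs, backgrounds):
        cfs = [c / b for c, b in zip(concs, backgrounds)]  # contamination factors
        return math.prod(cfs) ** (1.0 / len(cfs))          # geometric mean

    sample = {"Pb": 45.0, "Cr": 110.0, "Cu": 38.0}         # mg/kg, hypothetical
    background = {"Pb": 20.0, "Cr": 90.0, "Cu": 45.0}      # mg/kg, hypothetical
    fe_sample, fe_background = 38000.0, 47200.0            # mg/kg, hypothetical

    for m in sample:
        print(m, round(igeo(sample[m], background[m]), 2),
              round(enrichment_factor(sample[m], fe_sample, background[m], fe_background), 2))
    print("PLI", round(pollution_load_index(sample.values(), background.values()), 2))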

  20. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr; CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex; Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr

    2014-12-15

    In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists in numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical solution uses a finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model involves control rods moving during the course of the reaction; therefore, cross-sections (piecewise constants) are taken into account by interpolation with respect to the velocity of the control rods. Parallelism across time is achieved by an appropriate application of the parareal-in-time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed control rod position, while the fine propagator is a high-order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch-Maurer-Werner benchmark.
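
    The parareal predictor-corrector structure can be shown on a scalar test problem. The Python sketch below uses du/dt = lam*u with one explicit Euler step as the coarse propagator G and many Euler substeps as the fine propagator F; in a real implementation the F evaluations of each iteration run concurrently across time slices. This is a generic parareal sketch, not the CEA diffusion solver or its θ-scheme.

    import numpy as np

    lam, T, N = -1.0, 2.0, 10          # decay rate, horizon, number of time slices
    dt = T / N

    def G(u):                           # coarse: one Euler step over a slice
        return u + dt * lam * u

    def F(u, substeps=100):             # fine: many Euler steps over a slice
        h = dt / substeps
        for _ in range(substeps):
            u = u + h * lam * u
        return u

    U = np.zeros(N + 1)
    U[0] = 1.0
    for n in range(N):                  # iteration 0: coarse prediction
        U[n + 1] = G(U[n])

    for k in range(5):                  # parareal corrections
        F_old = [F(U[n]) for n in range(N)]      # parallelisable across slices
        G_old = [G(U[n]) for n in range(N)]
        U_new = U.copy()
        for n in range(N):              # sequential coarse sweep plus correction
            U_new[n + 1] = G(U_new[n]) + F_old[n] - G_old[n]
        U = U_new

    print(U[-1], np.exp(lam * T))       # converges towards the fine/exact solution

    The update U_{n+1} = G(U_n^{new}) + F(U_n^{old}) - G(U_n^{old}) is exactly the predictor-corrector combination of coarse and fine propagators described in the abstract.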

  1. CHIC - Coupling Habitability, Interior and Crust

    NASA Astrophysics Data System (ADS)

    Noack, Lena; Labbe, Francois; Boiveau, Thomas; Rivoldini, Attilio; Van Hoolst, Tim

    2014-05-01

    We present a new code developed for simulating convection in terrestrial planets and icy moons. The code CHIC is written in Fortran and employs the finite volume method and finite difference method for solving energy, mass and momentum equations in either silicate or icy mantles. The code uses either Cartesian (2D and 3D box) or spherical coordinates (2D cylinder or annulus). It furthermore contains a 1D parametrised model to obtain temperature profiles in specific regions, for example in the iron core or in the silicate mantle (solving only the energy equation). The 2D/3D convection model uses the same input parameters as the 1D model, which allows for comparison of the different models and adaptation of the 1D model, if needed. The code has already been benchmarked for the following aspects: - viscosity-dependent rheology (Blankenbach et al., 1989) - pseudo-plastic deformation (Tosi et al., in preparation phase) - subduction mechanism and plastic deformation (Quinquis et al., in preparation phase) New features that are currently being developed and benchmarked include: - compressibility (following King et al., 2009 and Leng and Zhong, 2008) - different melt modules (Plesa et al., in preparation phase) - freezing of an inner core (comparison with GAIA code, Huettig and Stemmer, 2008) - build-up of oceanic and continental crust (Noack et al., in preparation phase) The code represents a useful tool to couple the interior with the surface of a planet (e.g. via build-up and erosion of crust) and its atmosphere (via outgassing on the one hand and subduction of hydrated crust and carbonates back into the mantle). It will be applied to investigate several factors that might influence the habitability of a terrestrial planet, and will also be used to simulate icy bodies with high-pressure ice phases. References: Blankenbach et al. (1989). A benchmark comparison for mantle convection codes. GJI 98, 23-38. Huettig and Stemmer (2008). Finite volume discretization for dynamic viscosities on Voronoi grids. PEPI 171(1-4), 137-146. King et al. (2009). A Community Benchmark for 2D Cartesian Compressible Convection in the Earth's Mantle. GJI 179, 1-11. Leng and Zhong (2008). Viscous heating, adiabatic heating and energetic consistency in compressible mantle convection. GJI 173, 693-702.

  2. SAS Code for Calculating Intraclass Correlation Coefficients and Effect Size Benchmarks for Site-Randomized Education Experiments

    ERIC Educational Resources Information Center

    Brandon, Paul R.; Harrison, George M.; Lawton, Brian E.

    2013-01-01

    When evaluators plan site-randomized experiments, they must conduct the appropriate statistical power analyses. These analyses are most likely to be valid when they are based on data from the jurisdictions in which the studies are to be conducted. In this method note, we provide software code, in the form of a SAS macro, for producing statistical…
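
    The quantity the SAS macro targets can be illustrated with the standard one-way random-effects ANOVA estimator of the intraclass correlation. The Python sketch below uses simulated, balanced data (sites and students invented for illustration); it is not the authors' macro or any jurisdiction's dataset.

    import numpy as np

    rng = np.random.default_rng(0)
    k, n = 30, 25                                  # sites, students per site
    site_effects = rng.normal(0.0, 0.3, size=k)    # between-site SD of 0.3
    scores = site_effects[:, None] + rng.normal(0.0, 1.0, size=(k, n))

    grand = scores.mean()
    site_means = scores.mean(axis=1)
    msb = n * ((site_means - grand) ** 2).sum() / (k - 1)               # between-site mean square
    msw = ((scores - site_means[:, None]) ** 2).sum() / (k * (n - 1))   # within-site mean square
    icc = (msb - msw) / (msb + (n - 1) * msw)                           # ICC(1) for a balanced design

    print(f"ICC(1) ~ {icc:.3f}  (true value 0.3**2/(0.3**2+1) = {0.09/1.09:.3f})")

    The ICC estimated this way, together with the effect-size benchmarks, is what feeds the statistical power calculations the note describes.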

  3. The impact of Moore's Law and loss of Dennard scaling: Are DSP SoCs an energy efficient alternative to x86 SoCs?

    NASA Astrophysics Data System (ADS)

    Johnsson, L.; Netzer, G.

    2016-10-01

    Moore's law, the doubling of transistors per unit area with each CMOS technology generation, is expected to continue throughout the decade, while Dennard voltage scaling, which resulted in constant power per unit area, stopped about a decade ago. The semiconductor industry's response to the loss of Dennard scaling and the consequent challenges in managing power distribution and dissipation has been to level off clock rates, accept a die performance gain reduced from about a factor of 2.8 to 1.4 per technology generation, and move to multi-core processor dies with increased cache sizes. Increased cache sizes offer performance benefits for many applications as well as energy savings: accessing data in cache is considerably more energy efficient than accessing main memory, and caches consume less power than a corresponding amount of functional logic. As feature sizes continue to be scaled down, an increasing fraction of the die must be “underutilized” or “dark” due to power constraints. With power being a prime design constraint, there is a concerted effort to find significantly more energy efficient chip architectures than those dominant in servers today, with chips potentially incorporating several types of cores to cover a range of applications, or different functions within an application, as is already common in the mobile processor market. Digital Signal Processors (DSPs), largely targeting the embedded and mobile processor markets, have typically been designed for a power consumption of 10% or less of a typical x86 CPU, yet with much more than 10% of the floating-point capability of x86 CPUs of the same technology generation. Thus, DSPs could potentially offer an energy efficient alternative to x86 CPUs. Here we report an assessment of the Texas Instruments TMS320C6678 DSP with regard to its energy efficiency for two common HPC benchmarks: STREAM (a memory system benchmark) and HPL (a CPU benchmark).
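
    As a rough illustration of what the STREAM "triad" kernel (a[i] = b[i] + s*c[i]) measures, the NumPy sketch below times the kernel and reports a nominal bandwidth. It is only indicative: it is not the official C benchmark, NumPy temporaries add some extra traffic, and the numbers are not comparable with published STREAM or TMS320C6678 results.

    import time
    import numpy as np

    N = 20_000_000                       # roughly 480 MB across three float64 arrays
    a = np.empty(N)
    b = np.random.rand(N)
    c = np.random.rand(N)
    s = 3.0

    best = float("inf")
    for _ in range(5):
        t0 = time.perf_counter()
        np.add(b, s * c, out=a)          # the triad kernel
        best = min(best, time.perf_counter() - t0)

    nominal_bytes = 3 * 8 * N            # STREAM convention: read b, read c, write a
    print(f"triad ~ {nominal_bytes / best / 1e9:.1f} GB/s (nominal)")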

  4. Comparison of the PHISICS/RELAP5-3D Ring and Block Model Results for Phase I of the OECD MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom

    2014-04-01

    The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR-350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark and presents selected results of the three steady-state exercises 1-3 defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D “ring” model approach with a much more detailed model that includes kinetics feedback on the individual block level and thermal feedback on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparative results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.

  5. Visualization assisted by parallel processing

    NASA Astrophysics Data System (ADS)

    Lange, B.; Rey, H.; Vasques, X.; Puech, W.; Rodriguez, N.

    2011-01-01

    This paper discusses the experimental results of our visualization model for data extracted from sensors. The objective is to find a computationally efficient method to produce real-time rendering visualization for a large amount of data. We develop a visualization method to monitor the temperature variance of a data center. Sensors are placed on three layers and do not cover the whole room. We use a particle paradigm to interpolate the sensor data; particles model the "space" of the room. In this work we partition the particle set using two mathematical methods, Delaunay triangulation and Voronoi cells, both presented by Avis and Bhattacharya. Particles provide information on the room temperature at different coordinates over time. To locate and update particle data we define a computational cost function. To solve this function efficiently, we use a client-server paradigm: the server computes the data and the client displays it on different kinds of hardware. This paper is organized as follows. The first part presents related algorithms used to visualize large flows of data. The second part presents the different platforms and methods used, which were evaluated in order to determine the better solution for the proposed task. The benchmark uses the computational cost of our algorithm, which is based on locating particles relative to sensors and on updating particle values. The benchmark was run on a personal computer using CPU, multi-core, GPU and hybrid GPU/CPU programming. GPU programming is a growing research field; this method allows real-time rendering instead of precomputed rendering. To improve our results, we also ran our algorithm on a High Performance Computing (HPC) system; this benchmark was used to improve the multi-core method. HPC is commonly used in data visualization (astronomy, physics, etc.) to improve rendering and achieve real-time performance.
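
    The interpolation step described above can be sketched with SciPy's LinearNDInterpolator, which builds a Delaunay triangulation of the sensor positions and interpolates linearly inside it. The sensor layout, temperatures, and particle positions below are invented, and this is not the paper's GPU/particle engine.

    import numpy as np
    from scipy.interpolate import LinearNDInterpolator

    rng = np.random.default_rng(1)
    sensors_xyz = rng.uniform(0, 10, size=(60, 3))                        # sensor positions (m)
    sensor_temp = 20 + 0.5 * sensors_xyz[:, 2] + rng.normal(0, 0.2, 60)   # deg C, warmer near ceiling

    interp = LinearNDInterpolator(sensors_xyz, sensor_temp, fill_value=np.nan)

    particles = rng.uniform(0, 10, size=(1000, 3))     # "particles" filling the room
    temps = interp(particles)                           # NaN outside the sensor hull
    print(np.nanmin(temps), np.nanmax(temps))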

  6. Impaired health-related quality of life in children and adolescents with chronic conditions: a comparative analysis of 10 disease clusters and 33 disease categories/severities utilizing the PedsQL 4.0 Generic Core Scales.

    PubMed

    Varni, James W; Limbers, Christine A; Burwinkle, Tasha M

    2007-07-16

    Advances in biomedical science and technology have resulted in dramatic improvements in the healthcare of pediatric chronic conditions. With enhanced survival, health-related quality of life (HRQOL) issues have become more salient. The objectives of this study were to compare generic HRQOL across ten chronic disease clusters and 33 disease categories/severities from the perspectives of patients and parents. Comparisons were also benchmarked with healthy children data. The analyses were based on over 2,500 pediatric patients from 10 physician-diagnosed disease clusters and 33 disease categories/severities and over 9,500 healthy children utilizing the PedsQL 4.0 Generic Core Scales. Patients were recruited from general pediatric clinics, subspecialty clinics, and hospitals. Pediatric patients with diabetes, gastrointestinal conditions, cardiac conditions, asthma, obesity, end stage renal disease, psychiatric disorders, cancer, rheumatologic conditions, and cerebral palsy self-reported progressively more impaired overall HRQOL than healthy children, respectively, with medium to large effect sizes. Patients with cerebral palsy self-reported the most impaired HRQOL, while patients with diabetes self-reported the best HRQOL. Parent proxy-reports generally paralleled patient self-report, with several notable differences. The results demonstrate differential effects of pediatric chronic conditions on patient HRQOL across diseases clusters, categories, and severities utilizing the PedsQL 4.0 Generic Core Scales from the perspectives of pediatric patients and parents. The data contained within this study represents a larger and more diverse population of pediatric patients with chronic conditions than previously reported in the extant literature. The findings contribute important information on the differential effects of pediatric chronic conditions on generic HRQOL from the perspectives of children and parents utilizing the PedsQL 4.0 Generic Core Scales. These findings with the PedsQL have clinical implications for the healthcare services provided for children with chronic health conditions. Given the degree of reported impairment based on PedsQL scores across different pediatric chronic conditions, the need for more efficacious targeted treatments for those pediatric patients with more severely impaired HRQOL is clearly and urgently indicated.

  7. Impaired health-related quality of life in children and adolescents with chronic conditions: a comparative analysis of 10 disease clusters and 33 disease categories/severities utilizing the PedsQL™ 4.0 Generic Core Scales

    PubMed Central

    Varni, James W; Limbers, Christine A; Burwinkle, Tasha M

    2007-01-01

    Background Advances in biomedical science and technology have resulted in dramatic improvements in the healthcare of pediatric chronic conditions. With enhanced survival, health-related quality of life (HRQOL) issues have become more salient. The objectives of this study were to compare generic HRQOL across ten chronic disease clusters and 33 disease categories/severities from the perspectives of patients and parents. Comparisons were also benchmarked with healthy children data. Methods The analyses were based on over 2,500 pediatric patients from 10 physician-diagnosed disease clusters and 33 disease categories/severities and over 9,500 healthy children utilizing the PedsQL™ 4.0 Generic Core Scales. Patients were recruited from general pediatric clinics, subspecialty clinics, and hospitals. Results Pediatric patients with diabetes, gastrointestinal conditions, cardiac conditions, asthma, obesity, end stage renal disease, psychiatric disorders, cancer, rheumatologic conditions, and cerebral palsy self-reported progressively more impaired overall HRQOL than healthy children, respectively, with medium to large effect sizes. Patients with cerebral palsy self-reported the most impaired HRQOL, while patients with diabetes self-reported the best HRQOL. Parent proxy-reports generally paralleled patient self-report, with several notable differences. Conclusion The results demonstrate differential effects of pediatric chronic conditions on patient HRQOL across diseases clusters, categories, and severities utilizing the PedsQL™ 4.0 Generic Core Scales from the perspectives of pediatric patients and parents. The data contained within this study represents a larger and more diverse population of pediatric patients with chronic conditions than previously reported in the extant literature. The findings contribute important information on the differential effects of pediatric chronic conditions on generic HRQOL from the perspectives of children and parents utilizing the PedsQL™ 4.0 Generic Core Scales. These findings with the PedsQL™ have clinical implications for the healthcare services provided for children with chronic health conditions. Given the degree of reported impairment based on PedsQL™ scores across different pediatric chronic conditions, the need for more efficacious targeted treatments for those pediatric patients with more severely impaired HRQOL is clearly and urgently indicated. PMID:17634123

  8. RELAP5-3D Results for Phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom

    2012-06-01

    The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the RELAP5-3D model developed for Exercise 2.

  9. RELAP5-3D results for phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydom, G.; Epiney, A. S.

    2012-07-01

    The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the RELAP5-3D model developed for Exercise 2. (authors)

  10. ZPPR-20 phase D : a cylindrical assembly of polyethylene moderated U metal reflected by beryllium oxide and polyethylene.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lell, R.; Grimm, K.; McKnight, R.

    The Zero Power Physics Reactor (ZPPR) fast critical facility was built at the Argonne National Laboratory-West (ANL-W) site in Idaho in 1969 to obtain neutron physics information necessary for the design of fast breeder reactors. The ZPPR-20D Benchmark Assembly was part of a series of cores built in Assembly 20 (References 1 through 3) of the ZPPR facility to provide data for developing a nuclear power source for space applications (SP-100). The assemblies were beryllium oxide reflected and had core fuel compositions containing enriched uranium fuel, niobium and rhenium. ZPPR-20 Phase C (HEU-MET-FAST-075) was built as the reference flight configuration. Two other configurations, Phases D and E, simulated accident scenarios. Phase D modeled the water immersion scenario during a launch accident, and Phase E (SUB-HEU-MET-FAST-001) modeled the earth burial scenario during a launch accident. Two configurations were recorded for the simulated water immersion accident scenario (Phase D); the critical configuration, documented here, and the subcritical configuration (SUB-HEU-MET-MIXED-001). Experiments in Assembly 20 Phases 20A through 20F were performed in 1988. The reference water immersion configuration for the ZPPR-20D assembly was obtained as reactor loading 129 on October 7, 1988 with a fissile mass of 167.477 kg and a reactivity of -4.626 ± 0.044 ¢ (k ≈ 0.9997). The SP-100 core was to be constructed of highly enriched uranium nitride, niobium, rhenium and depleted lithium. The core design called for two enrichment zones with niobium-1% zirconium alloy fuel cladding and core structure. Rhenium was to be used as a fuel pin liner to provide shut down in the event of water immersion and flooding. The core coolant was to be depleted lithium metal (⁷Li). The core was to be surrounded radially with a niobium reactor vessel and bypass which would carry the lithium coolant to the forward inlet plenum. Immediately inside the reactor vessel was a rhenium baffle which would act as a neutron curtain in the event of water immersion. A fission gas plenum and coolant inlet plenum were located axially forward of the core. Some material substitutions had to be made in mocking up the SP-100 design. The ZPPR-20 critical assemblies were fueled by 93% enriched uranium metal because uranium nitride, which was the SP-100 fuel type, was not available. ZPPR Assembly 20D was designed to simulate a water immersion accident. The water was simulated by polyethylene (CH₂), which contains a similar amount of hydrogen and has a similar density. A very accurate transformation to a simplified model is needed to make any of the ZPPR assemblies a practical criticality-safety benchmark. There is simply too much geometric detail in an exact model of a ZPPR assembly, particularly as complicated an assembly as ZPPR-20D. The transformation must reduce the detail to a practical level without masking any of the important features of the critical experiment. And it must do this without increasing the total uncertainty far beyond that of the original experiment. Such a transformation will be described in a later section. First, Assembly 20D was modeled in full detail--every plate, drawer, matrix tube, and air gap was modeled explicitly. Then the regionwise compositions and volumes from this model were converted to an RZ model. ZPPR Assembly 20D has been determined to be an acceptable criticality-safety benchmark experiment.

  11. Predicting College Readiness in STEM: A Longitudinal Study of Iowa Students

    NASA Astrophysics Data System (ADS)

    Rickels, Heather Anne

    The demand for STEM college graduates is increasing. However, recent studies show there are not enough STEM majors to fulfill this need. This deficiency can be partially attributed to a gender discrepancy in the number of female STEM graduates and to the high rate of attrition of STEM majors. As STEM attrition has been associated with students being unprepared for STEM coursework, it is important to understand how STEM graduates change in achievement levels from middle school through high school and to have accurate readiness indicators for first-year STEM coursework. This study aimed to address these issues by comparing the achievement growth of STEM majors to non-STEM majors by gender in Science, Math, and Reading from Grade 6 to Grade 11 through latent growth models (LGMs). Then STEM Readiness Benchmarks were established in Science and Math on the Iowas (IAs) for typical first-year STEM courses and validity evidence was provided for the benchmarks. Results from the LGM analyses indicated that STEM graduates start at higher achievement levels in Grade 6 and maintain higher achievement levels through Grade 11 in all subjects. In addition, gender differences were examined. The findings indicate that students with high achievement levels self-select as STEM majors, regardless of gender. In addition, they suggest that students who are not on-track for a STEM degree may need to begin remediation prior to high school. Results from the benchmark analyses indicate that STEM coursework is more demanding and that students need to be better prepared academically in science and math if planning to pursue a STEM degree. In addition, the STEM Readiness Benchmarks were more accurate in predicting success in STEM courses than if general college readiness benchmarks were utilized. Also, students who met the STEM Readiness Benchmarks were more likely to graduate with a STEM degree. This study provides valuable information on STEM readiness to students, educators, and college admissions officers. Findings from this study can be used to better understand the level of academic achievement necessary to be successful as a STEM major and to provide guidance for students considering STEM majors in college. If students are being encouraged to pursue STEM majors, it is important they have accurate information regarding their chances of success in STEM coursework.

  12. Solid-phase data from cores at the proposed Dewey Burdock uranium in-situ recovery mine, near Edgemont, South Dakota

    USGS Publications Warehouse

    Johnson, Raymond H.; Diehl, Sharon F.; Benzel, William M.

    2013-01-01

    This report releases solid-phase data from cores at the proposed Dewey Burdock uranium in-situ recovery site near Edgemont, South Dakota. These cores were collected by Powertech Uranium Corporation, and material not used for their analyses was given to the U.S. Geological Survey for additional sampling and analyses. These additional analyses included total carbon and sulfur, whole rock acid digestion for major and trace elements, 234U/238U activity ratios, X-ray diffraction, thin sections, scanning electron microscopy analyses, and cathodoluminescence. This report provides the methods and data results from these analyses along with a short summary of observations.

  13. HACC: Extreme Scaling and Performance Across Diverse Architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Morozov, Vitali; Frontiere, Nicholas; Finkel, Hal; Pope, Adrian; Heitmann, Katrin

    2013-11-01

    Supercomputing is evolving towards hybrid and accelerator-based architectures with millions of cores. The HACC (Hardware/Hybrid Accelerated Cosmology Code) framework exploits this diverse landscape at the largest scales of problem size, obtaining high scalability and sustained performance. Developed to satisfy the science requirements of cosmological surveys, HACC melds particle and grid methods using a novel algorithmic structure that flexibly maps across architectures, including CPU/GPU, multi/many-core, and Blue Gene systems. We demonstrate the success of HACC on two very different machines, the CPU/GPU system Titan and the BG/Q systems Sequoia and Mira, attaining unprecedented levels of scalable performance. We demonstrate strong and weak scaling on Titan, obtaining up to 99.2% parallel efficiency, evolving 1.1 trillion particles. On Sequoia, we reach 13.94 PFlops (69.2% of peak) and 90% parallel efficiency on 1,572,864 cores, with 3.6 trillion particles, the largest cosmological benchmark yet performed. HACC design concepts are applicable to several other supercomputer applications.

  14. Modeling Cardiac Electrophysiology at the Organ Level in the Peta FLOPS Computing Age

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Lawrence; Bishop, Martin; Hoetzl, Elena

    2010-09-30

    Despite a steep increase in available compute power, in-silico experimentation with highly detailed models of the heart remains challenging due to the high computational cost involved. It is hoped that next-generation high performance computing (HPC) resources will lead to significant reductions in execution times and enable a new class of in-silico applications. However, performance gains on these new platforms can only be achieved by engaging a much larger number of compute cores, necessitating strongly scalable numerical techniques. So far, strong scalability has been demonstrated only for a moderate number of cores, orders of magnitude below the range required to achieve the desired performance boost. In this study, the strong scalability of currently used techniques for solving the bidomain equations is investigated. Benchmark results suggest that scalability is limited to 512-4096 cores within the range of relevant problem sizes, even when systems are carefully load-balanced and advanced IO strategies are employed.

  15. Adverse Outcome Pathway Network Analyses: Techniques and benchmarking the AOPwiki

    EPA Science Inventory

    Abstract: As the community of toxicological researchers, risk assessors, and risk managers adopt the adverse outcome pathway (AOP) paradigm for organizing toxicological knowledge, the number and diversity of adverse outcome pathways and AOP networks are continuing to grow. This ...

  16. ANALYSES OF NEUROBEHAVIORAL SCREENING DATA: BENCHMARK DOSE ESTIMATION.

    EPA Science Inventory

    Analysis of neurotoxicological screening data such as those of the functional observational battery (FOB) traditionally relies on analysis of variance (ANOVA) with repeated measurements, followed by determination of a no-adverse-effect level (NOAEL). The US EPA has proposed the ...

  17. 78 FR 12757 - Agency Information Collection Activities: Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-25

    ... the cap on blended county benchmarks that would otherwise limit QBPs. Through this demonstration, CMS... (MAOs) and up to 10 case studies with MAOs in order to supplement what can be learned from the analyses...

  18. BBMerge – Accurate paired shotgun read merging via overlap

    DOE PAGES

    Bushnell, Brian; Rood, Jonathan; Singer, Esther

    2017-10-26

    Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.
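
    The basic overlap-merging idea can be sketched in a few lines of Python: reverse-complement read 2, scan candidate overlap lengths, and accept the longest one above simple identity and length thresholds. BBMerge's actual scoring, quality handling, and k-mer gap assembly are more sophisticated; the thresholds and example reads below are invented.

    COMP = str.maketrans("ACGTN", "TGCAN")

    def revcomp(seq):
        return seq.translate(COMP)[::-1]

    def merge_pair(r1, r2, min_overlap=12, min_identity=0.9):
        r2rc = revcomp(r2)
        for olap in range(min(len(r1), len(r2rc)), min_overlap - 1, -1):
            a, b = r1[-olap:], r2rc[:olap]
            ident = sum(x == y for x, y in zip(a, b)) / olap
            if ident >= min_identity:
                return r1 + r2rc[olap:]      # accept the longest good overlap
        return None                          # None means "do not merge"

    r1 = "ACGTACGTTAGGCTAGCTAGGATCC"
    r2 = revcomp("TAGGCTAGCTAGGATCCAAGTTCGA")   # simulated mate overlapping r1 by 17 bp
    print(merge_pair(r1, r2))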

  19. High-performance computational fluid dynamics: a custom-code approach

    NASA Astrophysics Data System (ADS)

    Fannon, James; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain; Náraigh, Lennon Ó.

    2016-07-01

    We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier-Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFD) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFD, while also providing insight for those interested in more general aspects of high-performance computing.

  20. BBMerge – Accurate paired shotgun read merging via overlap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bushnell, Brian; Rood, Jonathan; Singer, Esther

    Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.

  1. Sustainable dimension adaptation measure in green township assessment criteria

    NASA Astrophysics Data System (ADS)

    Yaman, R.; Thadaniti, S.; Ahmad, N.; Halil, F. M.; Nasir, N. M.

    2018-05-01

    Urbanized areas are typically the most significant sources of environmental degradation; thus, urban assessment tools aiming at equally adapted sustainability dimensions need to be firmly embedded in the benchmarking of planning and design frameworks and upon occupancy. The need for an integral, systematic rating is recognized in order to evaluate the performance of sustainable neighborhoods and to promote sustainable urban development. In this study, the Green Building Index Township Assessment Criteria (GBI-TAC) are measured against holistic sustainable dimension pillar (SDP) adaptation in order to assess and redefine the current sustainability assessment criteria for future sustainable neighborhood development (SND). The objective of the research is to find out whether the current GBI-TAC and its variables fulfil holistic SDP adaptation towards sustainable neighborhood development in Malaysia. A stakeholder-inclusion approach is used to gather professional stakeholders' opinions regarding SDP adaptation for sustainable neighborhood development. The data were analysed using IBM SPSS AMOS22 structural equation modelling. The findings suggest an SDP adaptation gap in the current GBI-TAC even though all core criteria support SDP adaptation, hence leading to further review and refinement of future Neighborhood Assessment Criteria in Malaysia.

  2. Energy Efficiency Evaluation and Benchmarking of AFRL’s Condor High Performance Computer

    DTIC Science & Technology

    2011-08-01

    Conference paper (post print), covering January-June 2011. ...1716 Sony PlayStation 3s (PS3s), adding up to a total of 69,940 cores and a theoretical peak performance of 500 TFLOPS. There are 84 subcluster head... Thus, a critical component to achieving maximum performance is to find the optimum division of processing load between the CPU and GPU.

  3. Successful implementation of diabetes audits in Australia: the Australian National Diabetes Information Audit and Benchmarking (ANDIAB) initiative.

    PubMed

    Lee, A S; Colagiuri, S; Flack, J R

    2018-04-06

    We developed and implemented a national audit and benchmarking programme to describe the clinical status of people with diabetes attending specialist diabetes services in Australia. The Australian National Diabetes Information Audit and Benchmarking (ANDIAB) initiative was established as a quality audit activity. De-identified data on demographic, clinical, biochemical and outcome items were collected from specialist diabetes services across Australia to provide cross-sectional data on people with diabetes attending specialist centres at least biennially during the years 1998 to 2011. In total, 38 155 sets of data were collected over the eight ANDIAB audits. Each ANDIAB audit achieved its primary objective to collect, collate, analyse, audit and report clinical diabetes data in Australia. Each audit resulted in the production of a pooled data report, as well as individual site reports allowing comparison and benchmarking against other participating sites. The ANDIAB initiative resulted in the largest cross-sectional national de-identified dataset describing the clinical status of people with diabetes attending specialist diabetes services in Australia. ANDIAB showed that people treated by specialist services had a high burden of diabetes complications. This quality audit activity provided a framework to guide planning of healthcare services. © 2018 Diabetes UK.

  4. Yoga for military service personnel with PTSD: A single arm study.

    PubMed

    Johnston, Jennifer M; Minami, Takuya; Greenwald, Deborah; Li, Chieh; Reinhardt, Kristen; Khalsa, Sat Bir S

    2015-11-01

    This study evaluated the effects of yoga on posttraumatic stress disorder (PTSD) symptoms, resilience, and mindfulness in military personnel. Participants completing the yoga intervention were 12 current or former military personnel who met the Diagnostic and Statistical Manual of Mental Disorders-Fourth Edition-Text Revision (DSM-IV-TR) diagnostic criteria for PTSD. Results were also benchmarked against other military intervention studies of PTSD using the Clinician Administered PTSD Scale (CAPS; Blake et al., 2000) as an outcome measure. Results of within-subject analyses supported the study's primary hypothesis that yoga would reduce PTSD symptoms (d = 0.768; t = 2.822; p = .009) but did not support the hypothesis that yoga would significantly increase mindfulness (d = 0.392; t = -0.950; p = .181) and resilience (d = 0.270; t = -1.220; p = .124) in this population. Benchmarking results indicated that, as compared with the aggregated treatment benchmark (d = 1.074) obtained from published clinical trials, the current study's treatment effect (d = 0.768) was visibly lower, and compared with the waitlist control benchmark (d = 0.156), the treatment effect in the current study was visibly higher. (c) 2015 APA, all rights reserved.

  5. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures

    PubMed Central

    Manolakos, Elias S.

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub. PMID:26605332

  6. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures.

    PubMed

    Sharma, Anuj; Manolakos, Elias S

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub.

  7. Experimental results from the VENUS-F critical reference state for the GUINEVERE accelerator driven system project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uyttenhove, W.; Baeten, P.; Ban, G.

    The GUINEVERE (Generation of Uninterrupted Intense Neutron pulses at the lead Venus Reactor) project was launched in 2006 within the framework of FP6 EUROTRANS in order to validate on-line reactivity monitoring and subcriticality level determination in Accelerator Driven Systems. Therefore the VENUS reactor at SCK.CEN in Mol (Belgium) was modified towards a fast core (VENUS-F) and coupled to the GENEPI-3C accelerator built by CNRS. The accelerator can operate in both continuous and pulsed mode. The VENUS-F core is loaded with enriched uranium and reflected with solid lead. A well-chosen critical reference state is indispensable for the validation of the on-line subcriticality monitoring methodology. Moreover, a benchmarking tool is required for nuclear data research and code validation. In this paper the design and the importance of the critical reference state for the GUINEVERE project are motivated. The results of the first experimental phase on the critical core are presented. The control rod worth is determined by the rod drop technique, and the application of the Modified Source Multiplication (MSM) method allows the determination of the worth of the safety rods. The results are implemented in the VENUS-F core certificate for full exploitation of the critical core. (authors)

  8. Experimental results from the VENUS-F critical reference state for the GUINEVERE accelerator driven system project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uyttenhove, W.; Baeten, P.; Kochetkov, A.

    The GUINEVERE (Generation of Uninterrupted Intense Neutron pulses at the lead Venus Reactor) project was launched in 2006 within the framework of FP6 EUROTRANS in order to validate online reactivity monitoring and subcriticality level determination in accelerator driven systems (ADS). Therefore, the VENUS reactor at SCK.CEN in Mol, Belgium, was modified towards a fast core (VENUS-F) and coupled to the GENEPI-3C accelerator built by CNRS. The accelerator can operate in both continuous and pulsed mode. The VENUS-F core is loaded with enriched uranium and reflected with solid lead. A well-chosen critical reference state is indispensable for the validation of the online subcriticality monitoring methodology. Moreover, a benchmarking tool is required for nuclear data research and code validation. In this paper, the design and the importance of the critical reference state for the GUINEVERE project are motivated. The results of the first experimental phase on the critical core are presented. The control rod worth is determined by the positive period method, and the application of the Modified Source Multiplication (MSM) method allows the determination of the worth of the safety rods. The results are implemented in the VENUS-F core certificate for full exploitation of the critical core. (authors)
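
    As a hedged sketch of the idea behind the MSM technique mentioned in these two records (this is the commonly quoted form of the relation, not necessarily the exact GUINEVERE formulation), the unknown subcritical reactivity is scaled from a known reference reactivity through the ratio of detector count rates, with a calculated correction factor accounting for changes in detector efficiency and effective source importance between the two configurations:

    \[
      \rho_x \;\approx\; f_{\mathrm{MSM}}\,\frac{C_{\mathrm{ref}}}{C_x}\,\rho_{\mathrm{ref}}
    \]

    Here $C_{\mathrm{ref}}$ and $C_x$ are the detector count rates in the reference and unknown states, $\rho_{\mathrm{ref}}$ is the independently measured reference reactivity, and $f_{\mathrm{MSM}}$ is obtained from transport calculations.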

  9. Orthogonal recursive bisection as data decomposition strategy for massively parallel cardiac simulations.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Pitman, Michael C; Rice, John J

    2011-06-01

    We present the orthogonal recursive bisection algorithm that hierarchically segments the anatomical model structure into subvolumes that are distributed to cores. The anatomy is derived from the Visible Human Project, with electrophysiology based on the FitzHugh-Nagumo (FHN) and ten Tusscher (TT04) models with monodomain diffusion. Benchmark simulations with up to 16,384 and 32,768 cores on IBM Blue Gene/P and L supercomputers for both FHN and TT04 show good load balancing, with almost perfect speedup factors that are close to linear in the number of cores. Hence, strong scaling is demonstrated. With 32,768 cores, a 1000 ms simulation of a full heart beat requires about 6.5 min of wall clock time for a simulation of the FHN model. For the largest machine partitions, the simulations execute at a rate of 0.548 s (BG/P) and 0.394 s (BG/L) of wall clock time per 1 ms of simulation time. To our knowledge, these simulations show strong scaling to substantially higher numbers of cores than reported previously for organ-level simulation of the heart, thus significantly reducing run times. The ability to reduce runtimes could play a critical role in enabling wider use of cardiac models in research and clinical applications.
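
    A minimal sketch of the orthogonal recursive bisection idea for a point cloud of tissue nodes, written in Python with NumPy (the Blue Gene implementation partitions the anatomical voxel grid and balances per-node work; this toy version only approximates that by splitting at the median of the longest axis, and the function name and power-of-two core count are assumptions):

        import numpy as np

        def orb_partition(points, n_parts):
            """Recursively bisect a point set along its longest axis until
            n_parts subvolumes remain (n_parts must be a power of two)."""
            if n_parts == 1:
                return [points]
            extents = points.max(axis=0) - points.min(axis=0)
            axis = int(np.argmax(extents))           # longest spatial extent
            order = np.argsort(points[:, axis])
            half = len(points) // 2                  # median split -> balanced load
            left, right = points[order[:half]], points[order[half:]]
            return (orb_partition(left, n_parts // 2) +
                    orb_partition(right, n_parts // 2))

        # toy usage: distribute 100,000 random "tissue nodes" over 8 cores
        nodes = np.random.rand(100_000, 3)
        print([len(p) for p in orb_partition(nodes, 8)])   # near-equal subvolumes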

  10. Denoising DNA deep sequencing data—high-throughput sequencing errors and their correction

    PubMed Central

    Laehnemann, David; Borkhardt, Arndt

    2016-01-01

    Characterizing the errors generated by common high-throughput sequencing platforms and telling true genetic variation from technical artefacts are two interdependent steps, essential to many analyses such as single nucleotide variant calling, haplotype inference, sequence assembly and evolutionary studies. Both random and systematic errors can show a specific occurrence profile for each of the six prominent sequencing platforms surveyed here: 454 pyrosequencing, Complete Genomics DNA nanoball sequencing, Illumina sequencing by synthesis, Ion Torrent semiconductor sequencing, Pacific Biosciences single-molecule real-time sequencing and Oxford Nanopore sequencing. There is a large variety of programs available for error removal in sequencing read data, which differ in the error models and statistical techniques they use, the features of the data they analyse, the parameters they determine from them and the data structures and algorithms they use. We highlight the assumptions they make and for which data types these hold, providing guidance which tools to consider for benchmarking with regard to the data properties. While no benchmarking results are included here, such specific benchmarks would greatly inform tool choices and future software development. The development of stand-alone error correctors, as well as single nucleotide variant and haplotype callers, could also benefit from using more of the knowledge about error profiles and from (re)combining ideas from the existing approaches presented here. PMID:26026159
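
    Many of the stand-alone error correctors surveyed share one underlying idea: k-mers that occur only rarely across the read set are likely to contain sequencing errors. A deliberately simplified, method-agnostic sketch of that spectral filtering step follows (k, the count cutoff and the function names are illustrative; real tools fit the cutoff to the k-mer coverage histogram and handle reverse complements, quality values and platform-specific error profiles):

        from collections import Counter

        def kmer_counts(reads, k=15):
            """Count every k-mer occurring in the read set."""
            counts = Counter()
            for read in reads:
                for i in range(len(read) - k + 1):
                    counts[read[i:i + k]] += 1
            return counts

        def suspect_positions(read, counts, k=15, min_count=3):
            """Flag positions covered exclusively by low-frequency ("weak") k-mers."""
            weak = [counts[read[i:i + k]] < min_count
                    for i in range(len(read) - k + 1)]
            return [pos for pos in range(len(read))
                    if all(weak[i] for i in range(max(0, pos - k + 1),
                                                  min(len(weak), pos + 1)))]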

  11. Statistical process control as a tool for controlling operating room performance: retrospective analysis and benchmarking.

    PubMed

    Chen, Tsung-Tai; Chang, Yun-Jau; Ku, Shei-Ling; Chung, Kuo-Piao

    2010-10-01

    There is much research using statistical process control (SPC) to monitor surgical performance, including comparisons among groups to detect small process shifts, but few of these studies have included a stabilization process. This study aimed to analyse the performance of surgeons in the operating room (OR) and to set a benchmark by SPC after the process had been stabilized. The OR profiles of 499 patients who underwent laparoscopic cholecystectomy performed by 16 surgeons at a tertiary hospital in Taiwan during 2005 and 2006 were recorded. SPC was applied to analyse operative and non-operative times using the following five steps: first, the times were divided into two segments; second, they were normalized; third, they were evaluated as individual processes; fourth, the ARL(0) was calculated; and fifth, the different groups (surgeons) were compared. Outliers were excluded to ensure stability for each group and to facilitate inter-group comparison. The results showed that in the stabilized process, only one surgeon exhibited a significantly shorter total process time (including operative time and non-operative time). In this study, we use five steps to demonstrate how to control surgical and non-surgical time in phase I. There are some measures that can be taken to prevent skew and instability in the process. Also, using SPC, one surgeon can be shown to be a real benchmark. © 2010 Blackwell Publishing Ltd.
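
    A compressed sketch of the chart construction described above (normalise, estimate limits, drop out-of-control points, re-estimate until stable), here as an individuals/moving-range chart on log-transformed times; the log transform, the d2 = 1.128 constant and the stopping rule are standard SPC choices assumed for illustration, not details taken from the paper:

        import numpy as np

        def individuals_chart(times, max_iter=20):
            """Iteratively estimate 3-sigma limits on log-times, excluding
            out-of-control points until the retained set stops changing."""
            x = np.log(np.asarray(times, dtype=float))   # normalising transform
            keep = np.ones(len(x), dtype=bool)
            for _ in range(max_iter):
                center = x[keep].mean()
                sigma = np.abs(np.diff(x[keep])).mean() / 1.128  # MR-bar / d2
                lcl, ucl = center - 3 * sigma, center + 3 * sigma
                new_keep = (x >= lcl) & (x <= ucl)
                if np.array_equal(new_keep, keep):       # process stabilised
                    break
                keep = new_keep
            return center, lcl, ucl, keep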

  12. FY2012 summary of tasks completed on PROTEUS-thermal work.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C.H.; Smith, M.A.

    2012-06-06

    PROTEUS is a suite of neutronics codes, both old and new, that can be used within the SHARP codes being developed under the NEAMS program. Discussion here is focused on updates and verification and validation activities of the SHARP neutronics code, DeCART, for application to thermal reactor analysis. As part of the development of SHARP tools, the different versions of the DeCART code created for PWR, BWR, and VHTR analysis were integrated. Verification and validation tests for the integrated version were started, and the generation of cross section libraries based on the subgroup method was revisited for the targeted reactor types. The DeCART code has been reorganized in preparation for an efficient integration of the different versions for PWR, BWR, and VHTR analysis. In DeCART, the old-fashioned common blocks and header files have been replaced by advanced memory structures. However, the changing of variable names was minimized in order to limit problems with the code integration. Since the remaining stability problems of DeCART were mostly caused by the CMFD methodology and modules, significant work was performed to determine whether they could be replaced by more stable methods and routines. The cross section library is a key element to obtain accurate solutions. Thus, the procedure for generating cross section libraries was revisited to provide libraries tailored for the targeted reactor types. To improve accuracy in the cross section library, an attempt was made to replace the CENTRM code by the MCNP Monte Carlo code as a tool for obtaining reference resonance integrals. The use of the Monte Carlo code allows us to minimize problems or approximations that CENTRM introduces, since the accuracy of the subgroup data is limited by that of the reference solutions. The use of MCNP requires an additional set of libraries without resonance cross sections so that reference calculations can be performed for a unit cell in which only one isotope of interest includes resonance cross sections, among the isotopes in the composition. The OECD MHTGR-350 benchmark core was simulated using DeCART as the initial focus of the verification/validation efforts. Among the benchmark problems, Exercise 1 of Phase 1 is a steady-state benchmark case for the neutronics calculation for which block-wise cross sections were provided in 26 energy groups. This type of problem was designed for a homogenized geometry solver like DIF3D rather than the high-fidelity code DeCART. Instead of the homogenized block cross sections given in the benchmark, the VHTR-specific 238-group ENDF/B-VII.0 library of DeCART was directly used for preliminary calculations. Initial results showed that the multiplication factors of a fuel pin and a fuel block with or without a control rod hole were off by 6, -362, and -183 pcm Δk from comparable MCNP solutions, respectively. The 2-D and 3-D one-third core calculations were also conducted for the all-rods-out (ARO) and all-rods-in (ARI) configurations, producing reasonable results. Figure 1 illustrates the intermediate (1.5 eV - 17 keV) and thermal (below 1.5 eV) group flux distributions. As seen from VHTR cores with annular fuels, the intermediate group fluxes are relatively high in the fuel region, but the thermal group fluxes are higher in the inner and outer graphite reflector regions than in the fuel region.
To support the current project, a new three-year I-NERI collaboration involving ANL and KAERI was started in November 2011, focused on performing in-depth verification and validation of high-fidelity multi-physics simulation codes for LWR and VHTR. The work scope includes generating improved cross section libraries for the targeted reactor types, developing benchmark models for verification and validation of the neutronics code with or without thermo-fluid feedback, and performing detailed comparisons of predicted reactor parameters against both Monte Carlo solutions and experimental measurements. The following list summarizes the work conducted so far for the PROTEUS-Thermal Tasks: (1) Unification of different versions of DeCART was initiated, and at the same time code modernization was conducted to make code unification efficient; (2) Regeneration of cross section libraries was attempted for the targeted reactor types, and the procedure for generating cross section libraries was updated by replacing CENTRM with MCNP for reference resonance integrals; (3) The MHTGR-350 benchmark core was simulated using DeCART with the VHTR-specific 238-group ENDF/B-VII.0 library, and MCNP calculations were performed for comparison; and (4) Benchmark problems for PWR and BWR analysis were prepared for the DeCART verification/validation effort. In the coming months, the work listed above will be completed. Cross section libraries will be generated with optimized group structures for specific reactor types.

  13. Benchmarking the quality of breast cancer care in a nationwide voluntary system: the first five-year results (2003–2007) from Germany as a proof of concept

    PubMed Central

    Brucker, Sara Y; Schumacher, Claudia; Sohn, Christoph; Rezai, Mahdi; Bamberg, Michael; Wallwiener, Diethelm

    2008-01-01

    Background The main study objectives were: to establish a nationwide voluntary collaborative network of breast centres with independent data analysis; to define suitable quality indicators (QIs) for benchmarking the quality of breast cancer (BC) care; to demonstrate existing differences in BC care quality; and to show that BC care quality improved with benchmarking from 2003 to 2007. Methods BC centres participated voluntarily in a scientific benchmarking procedure. A generic XML-based data set was developed and used for data collection. Nine guideline-based quality targets serving as rate-based QIs were initially defined, reviewed annually and modified or expanded accordingly. QI changes over time were analysed descriptively. Results During 2003–2007, respective increases in participating breast centres and postoperatively confirmed BCs were from 59 to 220 and from 5,994 to 31,656 (> 60% of new BCs/year in Germany). Starting from 9 process QIs, 12 QIs were developed by 2007 as surrogates for long-term outcome. Results for most QIs increased. From 2003 to 2007, the most notable increases seen were for preoperative histological confirmation of diagnosis (58% (in 2003) to 88% (in 2007)), appropriate endocrine therapy in hormone receptor-positive patients (27 to 93%), appropriate radiotherapy after breast-conserving therapy (20 to 79%) and appropriate radiotherapy after mastectomy (8 to 65%). Conclusion Nationwide external benchmarking of BC care is feasible and successful. The benchmarking system described allows both comparisons among participating institutions as well as the tracking of changes in average quality of care over time for the network as a whole. Marked QI increases indicate improved quality of BC care. PMID:19055735

  14. Benchmarking the quality of breast cancer care in a nationwide voluntary system: the first five-year results (2003-2007) from Germany as a proof of concept.

    PubMed

    Brucker, Sara Y; Schumacher, Claudia; Sohn, Christoph; Rezai, Mahdi; Bamberg, Michael; Wallwiener, Diethelm

    2008-12-02

    The main study objectives were: to establish a nationwide voluntary collaborative network of breast centres with independent data analysis; to define suitable quality indicators (QIs) for benchmarking the quality of breast cancer (BC) care; to demonstrate existing differences in BC care quality; and to show that BC care quality improved with benchmarking from 2003 to 2007. BC centres participated voluntarily in a scientific benchmarking procedure. A generic XML-based data set was developed and used for data collection. Nine guideline-based quality targets serving as rate-based QIs were initially defined, reviewed annually and modified or expanded accordingly. QI changes over time were analysed descriptively. During 2003-2007, respective increases in participating breast centres and postoperatively confirmed BCs were from 59 to 220 and from 5,994 to 31,656 (> 60% of new BCs/year in Germany). Starting from 9 process QIs, 12 QIs were developed by 2007 as surrogates for long-term outcome. Results for most QIs increased. From 2003 to 2007, the most notable increases seen were for preoperative histological confirmation of diagnosis (58% (in 2003) to 88% (in 2007)), appropriate endocrine therapy in hormone receptor-positive patients (27 to 93%), appropriate radiotherapy after breast-conserving therapy (20 to 79%) and appropriate radiotherapy after mastectomy (8 to 65%). Nationwide external benchmarking of BC care is feasible and successful. The benchmarking system described allows both comparisons among participating institutions as well as the tracking of changes in average quality of care over time for the network as a whole. Marked QI increases indicate improved quality of BC care.

  15. Establishing objective benchmarks in robotic virtual reality simulation at the level of a competent surgeon using the RobotiX Mentor simulator.

    PubMed

    Watkinson, William; Raison, Nicholas; Abe, Takashige; Harrison, Patrick; Khan, Shamim; Van der Poel, Henk; Dasgupta, Prokar; Ahmed, Kamran

    2018-05-01

    To establish objective benchmarks at the level of a competent robotic surgeon across different exercises and metrics for the RobotiX Mentor virtual reality (VR) simulator suitable for use within a robotic surgical training curriculum. This retrospective observational study analysed results from multiple data sources, all of which used the RobotiX Mentor VR simulator. 123 participants with varying experience from novice to expert completed the exercises. Competency was established as the 25th centile of the mean advanced intermediate score. Three basic skill exercises and two advanced skill exercises were used. King's College London. 84 novices, 26 beginner intermediates, 9 advanced intermediates and 4 experts were included in this retrospective observational study. Objective benchmarks derived from the 25th centile of the mean scores of the advanced intermediates provided suitably challenging yet achievable targets for training surgeons. The disparity in scores was greatest for the advanced exercises. Novice surgeons are able to achieve the benchmarks across all exercises in the majority of metrics. We have successfully created this proof-of-concept study, which requires validation in a larger cohort. Objective benchmarks obtained from the 25th centile of the mean scores of advanced intermediates provide clinically relevant benchmarks at the standard of a competent robotic surgeon that are challenging yet attainable, and that can be used within a VR training curriculum, allowing participants to track and monitor their progress in a structured, progressive manner through five exercises, providing clearly defined targets and ensuring that a universal training standard is achieved across training surgeons. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
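
    The benchmark definition used in the study reduces to a percentile per metric; a minimal sketch, assuming one mean score per advanced-intermediate participant for a given exercise and metric (variable names and numbers are illustrative):

        import numpy as np

        def competency_benchmark(advanced_intermediate_means):
            """Benchmark = 25th centile of the advanced intermediates' mean scores.
            Whether exceeding or staying below it is "better" depends on the metric."""
            return np.percentile(advanced_intermediate_means, 25)

        # e.g. nine advanced intermediates, one mean score each for one metric
        print(competency_benchmark([71, 64, 80, 75, 69, 83, 77, 72, 66]))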

  16. Oil-shale data, cores, and samples collected by the U.S. geological survey through 1989

    USGS Publications Warehouse

    Dyni, John R.; Gay, Frances; Michalski, Thomas C.; ,

    1990-01-01

    The U.S. Geological Survey has acquired a large collection of geotechnical data, drill cores, and crushed samples of oil shale from the Eocene Green River Formation in Colorado, Wyoming, and Utah. The data include about 250,000 shale-oil analyses from about 600 core holes. Most of the data is from Colorado where the thickest and highest-grade oil shales of the Green River Formation are found in the Piceance Creek basin. Other data on file but not yet in the computer database include hundreds of lithologic core descriptions, geophysical well logs, and mineralogical and geochemical analyses. The shale-oil analyses are being prepared for release on floppy disks for use on microcomputers. About 173,000 lineal feet of drill core of oil shale and associated rocks, as well as 100,000 crushed samples of oil shale, are stored at the Core Research Center, U.S. Geological Survey, Lakewood, Colo. These materials are available to the public for research.

  17. Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.

    PubMed

    Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J

    2016-07-01

    Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets and use these to improve their performance. Different authors have performed benchmark analyses on a monthly or yearly basis, but their approaches suffer from a time lag between an event, its detection, interpretation and potential actions. The availability of on-line measurement data on many WWTPs should theoretically enable the decrease of the management response time by daily benchmarking. Unfortunately this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performances. In EOS, the energy KPIs take into consideration the pollutant load in order to enable the comparison between different plants. For example, EOS does not analyse the energy consumption alone but the energy consumption per pollutant load. This approach enables the comparison of performances for plants with different loads or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, in order to overcome this limitation, the authors have developed a methodology to estimate the required parameters and manage the uncertainty in the estimation. By coupling the parameter estimation with an interval-based benchmark approach, the authors propose an effective, fast and reproducible way to manage infrequent inlet measurements. Its use enables benchmarking on a daily basis and prepares the ground for further investigation. Copyright © 2016 Elsevier Inc. All rights reserved.
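
    A minimal sketch of the gap-filling idea: interpolate the roughly fortnightly laboratory pollutant-load measurements onto the daily grid of the on-line energy data, carry an uncertainty band along, and report the KPI as an interval rather than a point value. The linear interpolation and the fixed relative error are illustrative assumptions, not the estimation scheme implemented in EOS:

        import numpy as np

        def daily_energy_kpi(days, energy_kwh, lab_days, lab_load_kg, rel_err=0.2):
            """kWh per kg of pollutant load, with an interval reflecting the
            uncertainty of the interpolated (infrequently measured) load."""
            load = np.interp(days, lab_days, lab_load_kg)      # daily load estimate
            kpi = energy_kwh / load
            lower = energy_kwh / (load * (1 + rel_err))        # load overestimated
            upper = energy_kwh / (load * (1 - rel_err))        # load underestimated
            return kpi, lower, upper

        days = np.arange(28)
        energy = 1500 + 100 * np.random.rand(28)               # on-line meter data
        kpi, lo, hi = daily_energy_kpi(days, energy, [0, 14, 27], [900.0, 1100.0, 950.0])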

  18. Verification of ARES transport code system with TAKEDA benchmarks

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Zhang, Bin; Zhang, Penghe; Chen, Mengteng; Zhao, Jingchang; Zhang, Shun; Chen, Yixue

    2015-10-01

    Neutron transport modeling and simulation are central to many areas of nuclear technology, including reactor core analysis, radiation shielding and radiation detection. In this paper, the series of TAKEDA benchmarks is modeled to verify the criticality calculation capability of ARES, a discrete ordinates neutral particle transport code system. The SALOME platform is coupled with ARES to provide geometry modeling and mesh generation functions. The Koch-Baker-Alcouffe parallel sweep algorithm is applied to accelerate the traditional transport calculation process. The results show that the eigenvalues calculated by ARES are in excellent agreement with the reference values presented in NEACRP-L-330, with differences of less than 30 pcm except for the first case of model 3. Additionally, ARES provides accurate flux distributions compared to reference values, with deviations of less than 2% for region-averaged fluxes in all cases. All of these results confirm the feasibility of the ARES-SALOME coupling and demonstrate that ARES performs well in criticality calculations.
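
    The eigenvalue agreement quoted above is expressed in pcm (1 pcm = 10^-5); a small helper showing the two conventions commonly used for such comparisons (which of the two the benchmark report uses is not restated here):

        def dk_pcm(k_calc, k_ref):
            """Straight eigenvalue difference: (k_calc - k_ref) * 1e5 pcm."""
            return (k_calc - k_ref) * 1e5

        def dreactivity_pcm(k_calc, k_ref):
            """Reactivity difference: (1/k_ref - 1/k_calc) * 1e5 pcm."""
            return (1.0 / k_ref - 1.0 / k_calc) * 1e5

        print(dk_pcm(0.99975, 1.00000))          # about -25 pcm, within the quoted 30 pcm
        print(dreactivity_pcm(0.99975, 1.00000))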

  19. Accelerating cardiac bidomain simulations using graphics processing units.

    PubMed

    Neic, A; Liebmann, M; Hoetzl, E; Mitchell, L; Vigmond, E J; Haase, G; Plank, G

    2012-08-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element methods (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility.

  20. FFTF Passive Safety Test Data for Benchmarks for New LMR Designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wootan, David W.; Casella, Andrew M.

    Liquid Metal Reactors (LMRs) continue to be considered as an attractive concept for advanced reactor design. Software packages such as SASSYS are being used to improve new LMR designs and operating characteristics. Significant cost and safety improvements can be realized in advanced liquid metal reactor designs by emphasizing inherent or passive safety through crediting the beneficial reactivity feedbacks associated with core and structural movement. This passive safety approach was adopted for the Fast Flux Test Facility (FFTF), and an experimental program was conducted to characterize the structural reactivity feedback. The FFTF passive safety testing program was developed to examine how specific design elements influenced dynamic reactivity feedback in response to a reactivity input and to demonstrate the scalability of reactivity feedback results to reactors of current interest. The U.S. Department of Energy, Office of Nuclear Energy Advanced Reactor Technology program is in the process of preserving, protecting, securing, and placing in electronic format information and data from the FFTF, including the core configurations and data collected during the passive safety tests. Benchmarks based on empirical data gathered during operation of the Fast Flux Test Facility (FFTF) as well as design documents and post-irradiation examination will aid in the validation of these software packages and the models and calculations they produce. Evaluation of these actual test data could provide insight to improve analytical methods which may be used to support future licensing applications for LMRs.

  1. Accelerating Cardiac Bidomain Simulations Using Graphics Processing Units

    PubMed Central

    Neic, Aurel; Liebmann, Manfred; Hoetzl, Elena; Mitchell, Lawrence; Vigmond, Edward J.; Haase, Gundolf

    2013-01-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element methods (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6–20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility. PMID:22692867

  2. Techno-economical efficiency and productivity change of wastewater treatment plants: the role of internal and external factors.

    PubMed

    Hernández-Sancho, F; Molinos-Senante, M; Sala-Garrido, R

    2011-12-01

    Efficiency and productivity are important measures for identifying best practice in businesses and optimising resource-use. This study analyses how these two measures change across the period 2003-2008 for 196 wastewater treatment plants (WWTPs) in Spain, by using the benchmarking methods of Data Envelopment Analysis and the Malmquist Productivity Index. To identify which variables contribute to the sustainability of the WWTPs, differences in efficiency scores and productivity indices for external factors are also investigated. Our results indicate that both efficiency and productivity decreased over the five years. We verify that the productivity drop is primarily explained by technical change. Furthermore, certain external variables affected WWTP efficiency, including plant size, treatment technology and energy consumption. However, plants with low energy consumption are the only ones which improve their productivity. Finally, the benchmarking analyses proved to be useful as management tools in the wastewater sector, by providing vital information for improving the sustainability of plants.

  3. Operating Room Efficiency before and after Entrance in a Benchmarking Program for Surgical Process Data.

    PubMed

    Pedron, Sara; Winter, Vera; Oppel, Eva-Maria; Bialas, Enno

    2017-08-23

    Operating room (OR) efficiency continues to be a high priority for hospitals. In this context the concept of benchmarking has gained increasing importance as a means to improve OR performance. The aim of this study was to investigate whether and how participation in a benchmarking and reporting program for surgical process data was associated with a change in OR efficiency, measured through raw utilization, turnover times, and first-case tardiness. The main analysis is based on panel data from 202 surgical departments in German hospitals, which were derived from the largest database for surgical process data in Germany. Panel regression modelling was applied. Results revealed no clear and univocal trend of participation in a benchmarking and reporting program for surgical process data. The largest trend was observed for first-case tardiness. In contrast to expectations, turnover times showed a generally increasing trend during participation. For raw utilization no clear and statistically significant trend could be evidenced. Subgroup analyses revealed differences in effects across different hospital types and department specialties. Participation in a benchmarking and reporting program and thus the availability of reliable, timely and detailed analysis tools to support the OR management seemed to be correlated especially with an increase in the timeliness of staff members regarding first-case starts. The increasing trend in turnover time revealed the absence of effective strategies to improve this aspect of OR efficiency in German hospitals and could have meaningful consequences for the medium- and long-run capacity planning in the OR.

  4. Benchmarking the evaluated proton differential cross sections suitable for the EBS analysis of natSi and 16O

    NASA Astrophysics Data System (ADS)

    Kokkoris, M.; Dede, S.; Kantre, K.; Lagoyannis, A.; Ntemou, E.; Paneta, V.; Preketes-Sigalas, K.; Provatas, G.; Vlastou, R.; Bogdanović-Radović, I.; Siketić, Z.; Obajdin, N.

    2017-08-01

    The evaluated proton differential cross sections suitable for the Elastic Backscattering Spectroscopy (EBS) analysis of natSi and 16O, as obtained from SigmaCalc 2.0, have been benchmarked over a wide energy and angular range at two different accelerator laboratories, namely at N.C.S.R. 'Demokritos', Athens, Greece and at Ruđer Bošković Institute (RBI), Zagreb, Croatia, using a variety of high-purity thick targets of known stoichiometry. The results are presented in graphical and tabular forms, while the observed discrepancies, as well as the limits in accuracy of the benchmarking procedure, along with target-related effects, are thoroughly discussed and analysed. In the case of oxygen, the agreement between simulated and experimental spectra was generally good, while for silicon serious discrepancies were observed above Ep,lab = 2.5 MeV, suggesting that a further tuning of the appropriate nuclear model parameters in the evaluated differential cross-section datasets is required.

  5. Benchmarking comparison and validation of MCNP photon interaction data

    NASA Astrophysics Data System (ADS)

    Colling, Bethany; Kodeli, I.; Lilley, S.; Packer, L. W.

    2017-09-01

    The objective of the research was to test available photoatomic data libraries for fusion-relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion-relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6 and 84p if using MCNP-5.

  6. Accelerating finite-rate chemical kinetics with coprocessors: Comparing vectorization methods on GPUs, MICs, and CPUs

    NASA Astrophysics Data System (ADS)

    Stone, Christopher P.; Alferman, Andrew T.; Niemeyer, Kyle E.

    2018-05-01

    Accurate and efficient methods for solving stiff ordinary differential equations (ODEs) are a critical component of turbulent combustion simulations with finite-rate chemistry. The ODEs governing the chemical kinetics at each mesh point are decoupled by operator-splitting allowing each to be solved concurrently. An efficient ODE solver must then take into account the available thread and instruction-level parallelism of the underlying hardware, especially on many-core coprocessors, as well as the numerical efficiency. A stiff Rosenbrock and a nonstiff Runge-Kutta ODE solver are both implemented using the single instruction, multiple thread (SIMT) and single instruction, multiple data (SIMD) paradigms within OpenCL. Both methods solve multiple ODEs concurrently within the same instruction stream. The performance of these parallel implementations was measured on three chemical kinetic models of increasing size across several multicore and many-core platforms. Two separate benchmarks were conducted to clearly determine any performance advantage offered by either method. The first benchmark measured the run-time of evaluating the right-hand-side source terms in parallel and the second benchmark integrated a series of constant-pressure, homogeneous reactors using the Rosenbrock and Runge-Kutta solvers. The right-hand-side evaluations with SIMD parallelism on the host multicore Xeon CPU and many-core Xeon Phi co-processor performed approximately three times faster than the baseline multithreaded C++ code. The SIMT parallel model on the host and Phi was 13%-35% slower than the baseline while the SIMT model on the NVIDIA Kepler GPU provided approximately the same performance as the SIMD model on the Phi. The runtimes for both ODE solvers decreased significantly with the SIMD implementations on the host CPU (2.5-2.7 ×) and Xeon Phi coprocessor (4.7-4.9 ×) compared to the baseline parallel code. The SIMT implementations on the GPU ran 1.5-1.6 times faster than the baseline multithreaded CPU code; however, this was significantly slower than the SIMD versions on the host CPU or the Xeon Phi. The performance difference between the three platforms was attributed to thread divergence caused by the adaptive step-sizes within the ODE integrators. Analysis showed that the wider vector width of the GPU incurs a higher level of divergence than the narrower Sandy Bridge or Xeon Phi. The significant performance improvement provided by the SIMD parallel strategy motivates further research into more ODE solver methods that are both SIMD-friendly and computationally efficient.
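
    The data-parallel idea behind both the SIMD and SIMT variants, advancing many independent reactor ODE systems in lock-step within one instruction stream, can be sketched with NumPy vectorisation. This is a fixed-step RK4 on a toy decay system; the paper's solvers are adaptive Rosenbrock and Runge-Kutta methods on real chemical kinetics, which this sketch does not reproduce:

        import numpy as np

        def rk4_batch(f, y0, t0, t1, n_steps):
            """Integrate dy/dt = f(t, y) for a whole batch of systems at once.
            y0 has shape (n_systems, n_species); every system takes the same
            fixed steps, mimicking lock-step SIMD/SIMT execution."""
            y, t = y0.astype(float).copy(), t0
            h = (t1 - t0) / n_steps
            for _ in range(n_steps):
                k1 = f(t, y)
                k2 = f(t + h / 2, y + h / 2 * k1)
                k3 = f(t + h / 2, y + h / 2 * k2)
                k4 = f(t + h, y + h * k3)
                y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
                t += h
            return y

        # toy "kinetics": exponential decay with a different rate per reactor
        rates = np.linspace(0.5, 5.0, 1024).reshape(-1, 1)
        y_final = rk4_batch(lambda t, y: -rates * y, np.ones((1024, 1)), 0.0, 1.0, 200)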

  7. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
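
    The group-collapsing step mentioned above is, at its core, flux weighting; a minimal sketch assuming a fine-group cross-section vector, a fine-group flux spectrum and a map from fine to coarse groups (nothing here reflects WIMSD4M internals such as the resonance treatment or transport correction):

        import numpy as np

        def collapse_xs(sigma_fine, flux_fine, group_map):
            """Flux-weighted collapse: sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g)
            for each coarse group G, where group_map[g] is the coarse group of
            fine group g."""
            n_coarse = int(np.max(group_map)) + 1
            sigma_coarse = np.zeros(n_coarse)
            for G in range(n_coarse):
                in_G = (group_map == G)
                sigma_coarse[G] = (np.sum(sigma_fine[in_G] * flux_fine[in_G])
                                   / np.sum(flux_fine[in_G]))
            return sigma_coarse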

  8. Investigating the impact of the cielo cray XE6 architecture on scientific application codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajan, Mahesh; Barrett, Richard; Pedretti, Kevin Thomas Tauke

    2010-12-01

    Cielo, a Cray XE6, is the Department of Energy NNSA Advanced Simulation and Computing (ASC) campaign's newest capability machine. Rated at 1.37 PFLOPS, it consists of 8,944 dual-socket oct-core AMD Magny-Cours compute nodes, linked using Cray's Gemini interconnect. Its primary mission objective is to enable a suite of the ASC applications implemented using MPI to scale to tens of thousands of cores. Cielo is an evolutionary improvement to a successful architecture previously available to many of our codes, thus enabling a basis for understanding the capabilities of this new architecture. Using three codes strategically important to the ASC campaign, and supplemented with some micro-benchmarks that expose the fundamental capabilities of the XE6, we report on the performance characteristics and capabilities of Cielo.

  9. Examining national trends in worker health with the National Health Interview Survey.

    PubMed

    Luckhaupt, Sara E; Sestito, John P

    2013-12-01

    To describe data from the National Health Interview Survey (NHIS), both the annual core survey and periodic occupational health supplements (OHSs), available for examining national trends in worker health. The NHIS is an annual in-person household survey with a cross-sectional multistage clustered sample design to produce nationally representative health data. The 2010 NHIS included an OHS. Prevalence rates of various health conditions and health behaviors among workers based on multiple years of NHIS core data are available. In addition, the 2010 NHIS-OHS data provide prevalence rates of selected health conditions, work organization factors, and occupational exposures among US workers by industry and occupation. The publicly available NHIS data can be used to identify areas of concern for various industries and for benchmarking data from specific worker groups against national averages.

  10. Parameters of Higher Education Quality Assessment System at Universities

    ERIC Educational Resources Information Center

    Savickiene, Izabela

    2005-01-01

    The article analyses the system of institutional quality assessment at universities and lays foundation to its functional, morphological and processual parameters. It also presents the concept of the system and discusses the distribution of systems into groups, defines information, accountability, improvement and benchmarking functions of higher…

  11. Development of risk-based nanomaterial groups for occupational exposure control

    NASA Astrophysics Data System (ADS)

    Kuempel, E. D.; Castranova, V.; Geraci, C. L.; Schulte, P. A.

    2012-09-01

    Given the almost limitless variety of nanomaterials, it will be virtually impossible to assess the possible occupational health hazard of each nanomaterial individually. The development of science-based hazard and risk categories for nanomaterials is needed for decision-making about exposure control practices in the workplace. A possible strategy would be to select representative (benchmark) materials from various mode of action (MOA) classes, evaluate the hazard and develop risk estimates, and then apply a systematic comparison of new nanomaterials with the benchmark materials in the same MOA class. Poorly soluble particles are used here as an example to illustrate quantitative risk assessment methods for possible benchmark particles and occupational exposure control groups, given mode of action and relative toxicity. Linking such benchmark particles to specific exposure control bands would facilitate the translation of health hazard and quantitative risk information to the development of effective exposure control practices in the workplace. A key challenge is obtaining sufficient dose-response data, based on standard testing, to systematically evaluate the nanomaterials' physical-chemical factors influencing their biological activity. Categorization processes involve both science-based analyses and default assumptions in the absence of substance-specific information. Utilizing data and information from related materials may facilitate initial determinations of exposure control systems for nanomaterials.

  12. RETRAN03 benchmarks for Beaver Valley plant transients and FSAR analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaumont, E.T.; Feltus, M.A.

    1993-01-01

    Any best-estimate code (e.g., RETRAN03) results must be validated against plant data and final safety analysis report (FSAR) predictions. Two independent means of benchmarking are necessary to ensure that the results are not biased toward a particular data set and to establish a certain degree of accuracy. The code results need to be compared with previous results and show improvements over previous code results. Ideally, the two best means of benchmarking a thermal-hydraulics code are comparing results from previous versions of the same code along with actual plant data. This paper describes RETRAN03 benchmarks against RETRAN02 results, actual plant data, and FSAR predictions. RETRAN03, the Electric Power Research Institute's latest version of the RETRAN thermal-hydraulic analysis codes, offers several upgrades over its predecessor, RETRAN02 Mod5. RETRAN03 can use either implicit or semi-implicit numerics, whereas RETRAN02 Mod5 uses only semi-implicit numerics. Another major upgrade deals with slip model options. RETRAN03 added several new models, including a five-equation model for more accurate modeling of two-phase flow. RETRAN02 Mod5 should give similar but slightly more conservative results than RETRAN03 when executed with RETRAN02 Mod5 options.

  13. Using data and quality monitoring to enhance maternity outcomes: a qualitative study of risk managers' perspectives.

    PubMed

    Simms, Rebecca A; Yelland, Andrew; Ping, Helen; Beringer, Antonia J; Draycott, Timothy J; Fox, Robert

    2014-06-01

    Risk management is a core part of healthcare practice, especially within maternity services, where litigation and societal costs are high. There has been little investigation into the experiences and opinions of those staff directly involved in risk management: lead obstetricians and specialist risk midwives, who are ideally placed to identify how current implementation of risk management strategies can be improved. A qualitative study of consultant-led maternity units in an English region. Semistructured interviews were conducted with the obstetric and midwifery risk management leads for each unit. We explored their approach to risk management, particularly their opinions regarding quality monitoring and related barriers/issues. Interviews were recorded, transcribed and thematically analysed. Twenty-seven staff from 12/15 maternity units participated. Key issues identified included: concern for the accuracy and validity of their local data, potential difficulties related to data collation, the negative impact of external interference by national regulatory bodies on local clinical priorities, the influence of the local culture of the maternity unit on levels of engagement in the risk management process, and scepticism about the value of benchmarking of maternity units without adjustment for population characteristics. Local maternity risk managers may provide valuable, clinically relevant insights into current issues in clinical data monitoring. Improvements should focus on the accuracy and ease of data collation with a need for an agreed maternity indicators set, populated from validated databases, and not reliant on data collection systems that distract clinicians from patient activity and quality improvement. It is clear that working relationships between risk managers, their own clinical teams and external national bodies require improvement and alignment. Further discussion regarding benchmarking between maternity units is required prior to implementation. These findings are likely to be relevant to other clinical specialties. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  14. Benchmarking GPU and CPU codes for Heisenberg spin glass over-relaxation

    NASA Astrophysics Data System (ADS)

    Bernaschi, M.; Parisi, G.; Parisi, L.

    2011-06-01

    We present a set of possible implementations for Graphics Processing Units (GPU) of the Over-relaxation technique applied to the 3D Heisenberg spin glass model. The results show that a carefully tuned code can achieve more than 100 GFlops/s of sustained performance and update a single spin in about 0.6 nanoseconds. A multi-hit technique that exploits the GPU shared memory further reduces this time. Such results are compared with those obtained by means of a highly-tuned vector-parallel code on latest generation multi-core CPUs.
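
    The over-relaxation move itself is an energy-preserving reflection of each spin about its local molecular field. A scalar Python sketch of one lattice sweep follows; the sequential update order, the absorption of the couplings into the local field and the array layout are simplifications relative to the GPU kernels discussed in the paper:

        import numpy as np

        def overrelax_sweep(spins, local_field):
            """One over-relaxation sweep for Heisenberg spins.
            spins: (N, 3) unit vectors; local_field(i) returns the molecular
            field H_i (e.g. the coupling-weighted sum of neighbour spins).
            Each spin is reflected about H_i, which leaves the energy unchanged:
            S' = 2 (S.H) H / |H|^2 - S."""
            for i in range(len(spins)):
                h = local_field(i)
                h2 = np.dot(h, h)
                if h2 > 0.0:
                    spins[i] = 2.0 * np.dot(spins[i], h) / h2 * h - spins[i]
            return spins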

  15. CERN Computing in Commercial Clouds

    NASA Astrophysics Data System (ADS)

    Cordeiro, C.; Field, L.; Garrido Bear, B.; Giordano, D.; Jones, B.; Keeble, O.; Manzi, A.; Martelli, E.; McCance, G.; Moreno-García, D.; Traylen, S.

    2017-10-01

    By the end of 2016 more than 10 Million core-hours of computing resources have been delivered by several commercial cloud providers to the four LHC experiments to run their production workloads, from simulation to full chain processing. In this paper we describe the experience gained at CERN in procuring and exploiting commercial cloud resources for the computing needs of the LHC experiments. The mechanisms used for provisioning, monitoring, accounting, alarming and benchmarking will be discussed, as well as the involvement of the LHC collaborations in terms of managing the workflows of the experiments within a multicloud environment.

  16. Accreditation of University Undergraduate Programs in Nigeria from 2001-2012: Implications for Graduates Employability

    ERIC Educational Resources Information Center

    Dada, M. S.; Imam, Hauwa

    2015-01-01

    This study analysed accreditation exercises of university undergraduate programs in Nigeria from 2001-2013. Accreditation is a quality assurance mechanism to ensure that undergraduate programs offered in Nigeria satisfy benchmark minimum academic standards for producing graduates with the requisite skills for employability. The study adopted the…

  17. 7 CFR 1717.1204 - Policies and conditions applicable to settlements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and action plans by the members to change their operations, management, and organizational structure... to meet its financial obligations will be based on analyses and documentation by RUS of the borrower... based on comparisons with benchmark electric utilities; and (H) The accuracy and completeness of the...

  18. Core competencies for pharmaceutical physicians and drug development scientists

    PubMed Central

    Silva, Honorio; Stonier, Peter; Buhler, Fritz; Deslypere, Jean-Paul; Criscuolo, Domenico; Nell, Gerfried; Massud, Joao; Geary, Stewart; Schenk, Johanna; Kerpel-Fronius, Sandor; Koski, Greg; Clemens, Norbert; Klingmann, Ingrid; Kesselring, Gustavo; van Olden, Rudolf; Dubois, Dominique

    2013-01-01

    Professional groups, such as IFAPP (International Federation of Pharmaceutical Physicians and Pharmaceutical Medicine), are expected to produce the defined core competencies to orient the discipline and the academic programs for the development of future competent professionals and to advance the profession. On the other hand, PharmaTrain, an Innovative Medicines Initiative project, has become the largest public-private partnership in biomedicine in the European Continent and aims to provide postgraduate courses that are designed to meet the needs of professionals working in medicines development. A working group was formed within IFAPP including representatives from PharmaTrain, academic institutions and national member associations, with special interest and experience on Quality Improvement through education. The objectives were: to define a set of core competencies for pharmaceutical physicians and drug development scientists, to be summarized in a Statement of Competence and to benchmark and align these identified core competencies with the Learning Outcomes (LO) of the PharmaTrain Base Course. The objectives were successfully achieved. Seven domains and 60 core competencies were identified and aligned accordingly. The effective implementation of training programs using the competencies or the PharmaTrain LO anywhere in the world may transform the drug development process to an efficient and integrated process for better and safer medicines. The PharmaTrain Base Course might provide the cognitive framework to achieve the desired Statement of Competence for Pharmaceutical Physicians and Drug Development Scientists worldwide. PMID:23986704

  19. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    NASA Astrophysics Data System (ADS)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
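
    Neighbour search is where the memory-access patterns discussed above differ most between the CUDA and OpenMP targets; the cell-linked list that confines a particle's neighbour candidates to its own cell and the 26 adjacent cells is sketched below in serial Python (a cell edge of 2h for a kernel support radius of 2h is an assumption, and the paper's per-chipset data layouts are not reproduced):

        import numpy as np
        from collections import defaultdict

        def build_cell_list(positions, h):
            """Bin particle indices into cubic cells of edge 2h."""
            cells = defaultdict(list)
            keys = np.floor(positions / (2.0 * h)).astype(int)
            for idx, key in enumerate(map(tuple, keys)):
                cells[key].append(idx)
            return cells

        def neighbour_candidates(i, positions, cells, h):
            """Indices of particles in the cell of particle i and its 26 neighbours."""
            cx, cy, cz = np.floor(positions[i] / (2.0 * h)).astype(int)
            out = []
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        out.extend(cells.get((cx + dx, cy + dy, cz + dz), []))
            return out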

  20. Soft-core processor study for node-based architectures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Houten, Jonathan Roger; Jarosz, Jason P.; Welch, Benjamin James

    2008-09-01

    Node-based architecture (NBA) designs for future satellite projects hold the promise of decreasing system development time and costs, size, weight, and power and positioning the laboratory to address other emerging mission opportunities quickly. Reconfigurable Field Programmable Gate Array (FPGA) based modules will comprise the core of several of the NBA nodes. Microprocessing capabilities will be necessary with varying degrees of mission-specific performance requirements on these nodes. To enable the flexibility of these reconfigurable nodes, it is advantageous to incorporate the microprocessor into the FPGA itself, either as a hard-core processor built into the FPGA or as a soft-core processor built out of FPGA elements. This document describes the evaluation of three reconfigurable FPGA-based processors for use in future NBA systems: two soft cores (MicroBlaze and non-fault-tolerant LEON) and one hard core (PowerPC 405). Two standard performance benchmark applications were developed for each processor. The first, Dhrystone, is a fixed-point operation metric. The second, Whetstone, is a floating-point operation metric. Several trials were run at varying code locations, loop counts, processor speeds, and cache configurations. FPGA resource utilization was recorded for each configuration. Cache configurations impacted the results greatly; for optimal processor efficiency it is necessary to enable caches on the processors. Processor caches carry a penalty; cache error mitigation is necessary when operating in a radiation environment.
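
    Dhrystone results for soft and hard cores are conventionally reported as DMIPS and DMIPS/MHz, using the VAX 11/780 reference rate of 1757 Dhrystones per second; a small helper with purely illustrative numbers (the report's own scoring convention is not restated here):

        def dmips(dhrystones_per_second):
            """Dhrystone MIPS: score relative to the VAX 11/780 (1757 Dhrystones/s)."""
            return dhrystones_per_second / 1757.0

        def dmips_per_mhz(dhrystones_per_second, clock_mhz):
            """Clock-normalised score, so cores running at different
            frequencies can be compared."""
            return dmips(dhrystones_per_second) / clock_mhz

        # illustrative numbers only, not measurements from the study
        print(dmips_per_mhz(150_000, 100.0))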

  1. Coupled-cluster based approach for core-level states in condensed phase: Theory and application to different protonated forms of aqueous glycine

    DOE PAGES

    Sadybekov, Arman; Krylov, Anna I.

    2017-07-07

    A theoretical approach for calculating core-level states in condensed phase is presented. The approach is based on equation-of-motion coupled-cluster theory (EOMCC) and effective fragment potential (EFP) method. By introducing an approximate treatment of double excitations in the EOM-CCSD (EOM-CC with single and double substitutions) ansatz, we address poor convergence issues that are encountered for the core-level states and significantly reduce computational costs. While the approximations introduce relatively large errors in the absolute values of transition energies, the errors are systematic. Consequently, chemical shifts, changes in ionization energies relative to reference systems, are reproduced reasonably well. By using different protonation forms of solvated glycine as a benchmark system, we show that our protocol is capable of reproducing the experimental chemical shifts with a quantitative accuracy. The results demonstrate that chemical shifts are very sensitive to the solvent interactions and that explicit treatment of solvent, such as EFP, is essential for achieving quantitative accuracy.
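
    The chemical shifts referred to here are differences of core-level ionization energies taken against a reference system, so the systematic part of the EOM-CC error largely cancels:

        \Delta\mathrm{IE} = \mathrm{IE}_{\text{core}}(\text{system}) - \mathrm{IE}_{\text{core}}(\text{reference})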

  2. Accelerating 3D Hall MHD Magnetosphere Simulations with Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Bard, C.; Dorelli, J.

    2017-12-01

    The resolution required to simulate planetary magnetospheres with Hall magnetohydrodynamics results in problem sizes approaching several hundred million grid cells. These would take years to run on a single computational core and require hundreds or thousands of computational cores to complete in a reasonable time. However, this requires access to the largest supercomputers. Graphics processing units (GPUs) provide a viable alternative: one GPU can do the work of roughly 100 cores, bringing Hall MHD simulations of Ganymede within reach of modest GPU clusters (~8 GPUs). We report our progress in developing a GPU-accelerated, three-dimensional Hall magnetohydrodynamic code and present Hall MHD simulation results for both Ganymede (run on 8 GPUs) and Mercury (56 GPUs). We benchmark our Ganymede simulation against previous results for the Galileo G8 flyby, namely that adding the Hall term to ideal MHD simulations changes the global convection pattern within the magnetosphere. Additionally, we present new results for the G1 flyby as well as initial results from Hall MHD simulations of Mercury and compare them with the corresponding ideal MHD runs.
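
    For orientation, the Hall term that distinguishes these runs from ideal MHD enters the induction equation as follows (standard resistivity-free form with the electron pressure term omitted; the code's exact normalisation is not given here):

        \frac{\partial \mathbf{B}}{\partial t} = \nabla \times \left( \mathbf{u} \times \mathbf{B} - \frac{\mathbf{J} \times \mathbf{B}}{n e} \right), \qquad \mathbf{J} = \frac{\nabla \times \mathbf{B}}{\mu_0}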

  3. Targeted proteomics coming of age - SRM, PRM and DIA performance evaluated from a core facility perspective.

    PubMed

    Kockmann, Tobias; Trachsel, Christian; Panse, Christian; Wahlander, Asa; Selevsek, Nathalie; Grossmann, Jonas; Wolski, Witold E; Schlapbach, Ralph

    2016-08-01

    Quantitative mass spectrometry is a rapidly evolving methodology applied in a large number of omics-type research projects. During the past years, new designs of mass spectrometers have been developed and launched as commercial systems while in parallel new data acquisition schemes and data analysis paradigms have been introduced. Core facilities provide access to such technologies, but also actively support the researchers in finding and applying the best-suited analytical approach. In order to implement a solid fundament for this decision making process, core facilities need to constantly compare and benchmark the various approaches. In this article we compare the quantitative accuracy and precision of the current state-of-the-art targeted proteomics approaches, selected reaction monitoring (SRM), parallel reaction monitoring (PRM) and data-independent acquisition (DIA), across multiple liquid chromatography mass spectrometry (LC-MS) platforms, using a readily available commercial standard sample. All workflows are able to reproducibly generate accurate quantitative data. However, SRM and PRM workflows show higher accuracy and precision compared to DIA approaches, especially when analyzing analytes at low concentrations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
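
    The two figures of merit compared across SRM, PRM and DIA reduce to simple replicate statistics once the expected concentration of each standard analyte is known; a sketch with illustrative names and numbers:

        import numpy as np

        def accuracy_and_precision(measured, expected):
            """measured: replicate quantities for one analyte; expected: spiked value.
            Returns mean relative error (%) and coefficient of variation (%)."""
            m = np.asarray(measured, dtype=float)
            accuracy = 100.0 * (m.mean() - expected) / expected
            cv = 100.0 * m.std(ddof=1) / m.mean()
            return accuracy, cv

        print(accuracy_and_precision([9.8, 10.4, 10.1], 10.0))   # approx. (+1.0 %, 3 %)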

  4. Coupled-cluster based approach for core-level states in condensed phase: Theory and application to different protonated forms of aqueous glycine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadybekov, Arman; Krylov, Anna I.

    A theoretical approach for calculating core-level states in condensed phase is presented. The approach is based on equation-of-motion coupled-cluster theory (EOMCC) and effective fragment potential (EFP) method. By introducing an approximate treatment of double excitations in the EOM-CCSD (EOM-CC with single and double substitutions) ansatz, we address poor convergence issues that are encountered for the core-level states and significantly reduce computational costs. While the approximations introduce relatively large errors in the absolute values of transition energies, the errors are systematic. Consequently, chemical shifts, changes in ionization energies relative to reference systems, are reproduced reasonably well. By using different protonation forms of solvated glycine as a benchmark system, we show that our protocol is capable of reproducing the experimental chemical shifts with a quantitative accuracy. The results demonstrate that chemical shifts are very sensitive to the solvent interactions and that explicit treatment of solvent, such as EFP, is essential for achieving quantitative accuracy.

  5. Comparison of the PHISICS/RELAP5-3D ring and block model results for phase I of the OECD/NEA MHTGR-350 benchmark

    DOE PAGES

    Strydom, G.; Epiney, A. S.; Alfonsi, Andrea; ...

    2015-12-02

    The PHISICS code system has been under development at INL since 2010. It consists of several modules providing improved coupled core simulation capability: INSTANT (3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and modules performing criticality searches, fuel shuffling and generalized perturbation. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D was finalized in 2013, and as part of the verification and validation effort the first phase of the OECD/NEA MHTGR-350 Benchmark has now been completed. The theoretical basis and latest development status of the coupled PHISICS/RELAP5-3D tool are described in more detail in a concurrent paper. This paper provides an overview of the OECD/NEA MHTGR-350 Benchmark and presents the results of Exercises 2 and 3 defined for Phase I. Exercise 2 required the modelling of a stand-alone thermal fluids solution at End of Equilibrium Cycle for the Modular High Temperature Reactor (MHTGR). The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 required a coupled neutronics and thermal fluids solution, and the PHISICS/RELAP5-3D code suite was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of results obtained with the traditional RELAP5-3D “ring” model approach against a much more detailed model that includes kinetics feedback on the individual block level and thermal feedback on a triangular sub-mesh. The higher fidelity that can be obtained by this “block” model is illustrated with comparison results on the temperature, power density and flux distributions. Furthermore, it is shown that the ring model leads to significantly lower fuel temperatures (up to 10%) when compared with the higher fidelity block model, and that the additional model development and run-time efforts are worth the gains obtained in the improved spatial temperature and flux distributions.

  6. Using in-situ observations of atmospheric water vapor isotopes to benchmark and isotope-enabled General Circulation Models and improve ice core paleo-climate reconstruction

    NASA Astrophysics Data System (ADS)

    Steen-Larsen, Hans Christian; Sveinbjörnsdottir, Arny; Masson-Delmotte, Valerie; Werner, Martin; Risi, Camille; Yoshimura, Kei

    2016-04-01

    Since 2010 we have carried out in-situ continuous water vapor isotope observations on top of the Greenland Ice Sheet (3 seasons at NEEM), in Svalbard (1 year), in Iceland (4 years), and in Bermuda (4 years). This expansive dataset of high-accuracy, high-precision measurements of δ18O, δD, and the d-excess allows us to validate and benchmark the treatment of the atmospheric hydrological cycle's processes in General Circulation Models using simulations nudged to reanalysis products. Recent findings from both Antarctica and Greenland have documented strong interaction between the snow surface isotopes and the near-surface atmospheric water vapor isotopes on diurnal to synoptic time scales. In fact, it has been shown that the snow surface isotopes take up the synoptically driven atmospheric water vapor isotopic signal in between precipitation events, erasing the precipitation isotope signal in the surface snow. This highlights the importance of using General or Regional Climate Models that are able to accurately simulate the atmospheric water vapor isotopic composition in order to understand and interpret the ice core isotope signal. With this in mind we have used three isotope-enabled General Circulation Models (isoGSM, ECHAM5-wiso, and LMDZiso) nudged to reanalysis products. We have compared the simulations of daily mean isotope values directly with our in-situ observations. This has allowed us to characterize the variability of the isotopic composition in the models and compare it to our observations. We have specifically focused on the d-excess in order to understand why both its mean and its variability are significantly lower in the models than in our observations. We argue that using water vapor isotopes to benchmark General Circulation Models offers an excellent tool for improving the treatment and parameterization of the atmospheric hydrological cycle. Recent studies have documented a very large inter-model dispersion in the treatment of the Arctic water cycle under a future global warming and greenhouse gas emission scenario. Our results call for action to create an international pan-Arctic water vapor isotope monitoring network in order to improve future projections of Arctic climate.
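
    Since the comparison above revolves around δ18O, δD and the d-excess, a small worked example may help. The Python sketch below uses the standard definition d-excess = δD - 8·δ18O and hypothetical daily mean values (not the NEEM/Svalbard/Iceland/Bermuda data) to compute the model-minus-observation bias and the variability that the abstract reports as too low in the models.

      # Hypothetical numbers only: compute d-excess = dD - 8*d18O (per mil) for paired
      # daily means and compare a nudged simulation against observations via bias and
      # standard deviation, the two quantities discussed for the model-data mismatch.
      import statistics

      def d_excess(delta_D, delta_18O):
          return delta_D - 8.0 * delta_18O

      obs_pairs = [(-150.0, -20.5), (-158.0, -21.7), (-146.0, -19.85)]   # (dD, d18O), hypothetical
      mod_pairs = [(-151.0, -20.0), (-154.0, -20.4), (-149.0, -19.75)]   # (dD, d18O), hypothetical

      obs = [d_excess(dD, d18O) for dD, d18O in obs_pairs]
      mod = [d_excess(dD, d18O) for dD, d18O in mod_pairs]

      bias = statistics.mean(m - o for m, o in zip(mod, obs))
      print(f"observed d-excess variability (std): {statistics.pstdev(obs):.2f} per mil")
      print(f"modeled  d-excess variability (std): {statistics.pstdev(mod):.2f} per mil")
      print(f"model-minus-observation bias:        {bias:.2f} per mil")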

  7. Publications - GMC 426 | Alaska Division of Geological & Geophysical

    Science.gov Websites

    DGGS GMC 426 Publication Details. Title: 40Ar/39Ar step heat analyses of core from the N. Kalikpik Test Well #1. Bibliographic Reference: Layer, P.W., 2014, 40Ar/39Ar step heat analyses of core from the N. Kalikpik Test Well #1: Alaska

  8. Historical trends in organochlorine compounds in river basins identified using sediment cores from reservoirs

    USGS Publications Warehouse

    Van Metre, P.C.; Callender, E.; Fuller, C.C.

    1997-01-01

    This study used chemical analyses of dated sediment cores from reservoirs to define historical trends in water quality in the influent river basins. This work applies techniques from paleolimnology to reservoirs, and in the process highlights differences between sediment-core interpretations for reservoirs and natural lakes. Sediment cores were collected from six reservoirs in the central and southeastern United States, sectioned, and analyzed for 137Cs and organochlorine compounds. 137Cs analyses were used to demonstrate limited post-depositional mixing, to indicate sediment deposition dates, and to estimate sediment focusing factors. Relative lack of mixing, high sedimentation rates, and high focusing factors distinguish reservoir sediment cores from cores collected in natural lakes. Temporal trends in concentrations of PCBs, total DDT (DDT + DDD + DDE), and chlordane reflect historical use and regulation of these compounds and differences in land use between reservoir drainages. PCB and total DDT core burdens, normalized for sediment focusing, greatly exceed reported cumulative regional atmospheric fallout of PCBs and total DDT estimated using cores from peat bogs and natural lakes, indicating the dominance of fluvial inputs of both groups of compounds to the reservoirs.
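
    The focusing-factor normalization mentioned above can be illustrated with a short calculation. In the hedged Python sketch below, all inventories and burdens are hypothetical placeholders (not the study's measurements): the focusing factor is estimated as the ratio of the 137Cs inventory in the core to the regional atmospheric 137Cs fallout, and the contaminant burden is divided by that factor before being compared with the cumulative atmospheric fallout.

      # Hypothetical values only: normalize a contaminant core burden by a sediment
      # focusing factor estimated from the 137Cs inventory, then compare the result
      # with the cumulative regional atmospheric fallout, as described in the study.
      cs137_core_inventory = 45.0      # dpm/cm^2 measured in the core (hypothetical)
      cs137_regional_fallout = 15.0    # dpm/cm^2 expected from direct fallout (hypothetical)
      focusing_factor = cs137_core_inventory / cs137_regional_fallout

      pcb_core_burden = 3.0            # ug/cm^2 in the core (hypothetical)
      pcb_atmospheric_fallout = 0.2    # ug/cm^2 cumulative regional fallout (hypothetical)

      pcb_normalized = pcb_core_burden / focusing_factor
      fluvial_excess = pcb_normalized / pcb_atmospheric_fallout
      print(f"focusing factor = {focusing_factor:.1f}")
      print(f"focusing-normalized PCB burden exceeds atmospheric fallout by {fluvial_excess:.0f}x")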

  9. Analyses of water, core material, and elutriate samples collected near Buras, Louisiana (New Orleans to Venice, Louisiana, Hurricane Protection Project)

    USGS Publications Warehouse

    Leone, Harold A.

    1977-01-01

    Eight core-material-sampling sites were chosen by the U.S. Army Corps of Engineers as possible borrow areas for fill material to be used in levee construction near Buras, La. Eleven receiving-water sites also were selected to represent the water that will contact the proposed levees. Analyses of selected nutrients, metals, pesticides, and other organic constituents were performed upon these bed-material and native-water samples as well as upon elutriate samples of specific core material-receiving water systems. The results of these analyses are presented without interpretation. (Woodard-USGS)

  10. Are catenas relevant to soil maps and pedology in Iowa in the twenty-first century?

    NASA Astrophysics Data System (ADS)

    Richter, Jennifer; Burras, C. Lee

    2014-05-01

    The modern intensity of agriculture brings to question whether anthropogenic impacts on soil profiles and catenas in agricultural areas are minor or dominant pedogenic influences. Answering this question is crucial to evaluating the modern relevance of historic soil maps, which use the traditional catena model as their foundation. This study quantifies the magnitude of change within the soil profile and across the landscape that results from decadal-scale agriculture. Four benchmark catenas located on the Des Moines Lobe in Iowa, USA, were re-examined to determine the changes that occurred in the soils over the intervening years. The first site was initially studied by Walker and Ruhe in the mid-1960s. Burras and Scholtes initially examined the second catena in the early 1980s, while the remaining two catenas were first studied in the early 1990s by Steinwand and Fenton, and the late 1990s by Konen. Thus, the catenas were re-sampled for this study roughly 50, 30, 20, and 15 years, respectively, after the initial study. In this part of Iowa, continuous row crop agriculture (primarily Zea mays and Glycine max) and extensive subsurface drainage are very common. All study sites are closed-basin catenas located within 40 km of each other with a parent material of Late Wisconsinan glacial till. Soil cores to a depth of approximately two meters were taken with a truck-mounted Giddings hydraulic soil sampler at 27 to 30 meter intervals along one transect for each of the four catenas, resulting in a total of forty-eight cores. The soil cores were then brought to the laboratory where soil descriptions and laboratory analyses are being completed. Soil descriptions include information about horizon type and depth, Munsell color, texture, rock fragments, structure, consistence, clay films, roots, pores, presence of carbonates, and redoximorphic features. Laboratory analyses include bulk density, particle size, total carbon and nitrogen content, cation exchange capacity, stable aggregate content, and pH. The resulting data are being analyzed and compared to historic data and models of pedogenesis. Preliminary and anticipated results indicate that soil properties such as bulk density, pH, geometric mean particle size, structure, A-horizon thickness, carbon distribution, depth to carbonates, and redoximorphic features have been altered by agricultural land use over the past 50 years. This indicates that anthropogenic impacts due to agriculture are a significant pedogenic influence, which is decreasing the scientific value of historic soil maps.

  11. Whole-rock analyses of core samples from the 1988 drilling of Kilauea Iki lava lake, Hawaii

    USGS Publications Warehouse

    Helz, Rosalind Tuthill; Taggart, Joseph E.

    2010-01-01

    This report presents and evaluates 64 major-element analyses of previously unanalyzed Kilauea Iki drill core, plus three samples from the 1959 and 1960 eruptions of Kilauea, obtained by X-ray fluorescence (XRF) analysis during the period 1992 to 1995. All earlier major-element analyses of Kilauea Iki core, obtained by classical (gravimetric) analysis, were reported and evaluated in Helz and others (1994). In order to assess how well the newer data compare with this earlier suite of analyses, a subset of 24 samples, which had been analyzed by classical analysis, was reanalyzed using the XRF technique; those results are presented and evaluated in this report also. The XRF analyses have not been published previously. This report also provides an overview of how the chemical variations observed in these new data fit in with the chemical zonation patterns and petrologic processes inferred in earlier studies of Kilauea Iki.

  12. Flowing gas, non-nuclear experiments on the gas core reactor

    NASA Technical Reports Server (NTRS)

    Kunze, J. F.; Suckling, D. H.; Copper, C. G.

    1972-01-01

    Flow tests were conducted on models of the gas core (cavity) reactor. Variations in cavity wall and injection configurations were aimed at establishing flow patterns that give a maximum of the nuclear criticality eigenvalue. Correlation with the nuclear effect was made using multigroup diffusion theory normalized by previous benchmark critical experiments. Air was used to simulate the hydrogen propellant in the flow tests, and smoked air, argon, or freon to simulate the central nuclear fuel gas. All tests were run in the down-firing direction so that gravitational effects simulated the acceleration effect of a rocket. Results show that acceptable flow patterns with high volume fraction for the simulated nuclear fuel gas and high flow rate ratios of propellant to fuel can be obtained. Using a point injector for the fuel, good flow patterns are obtained by directing the outer gas at high velocity along the cavity wall, using louvered or oblique-angle-honeycomb injection schemes.

  13. Recommendations for Training in Pediatric Psychology: Defining Core Competencies Across Training Levels

    PubMed Central

    Janicke, David M.; McQuaid, Elizabeth L.; Mullins, Larry L.; Robins, Paul M.; Wu, Yelena P.

    2014-01-01

    Objective As a field, pediatric psychology has focused considerable efforts on the education and training of students and practitioners. Alongside a broader movement toward competency attainment in professional psychology and within the health professions, the Society of Pediatric Psychology commissioned a Task Force to establish core competencies in pediatric psychology and address the need for contemporary training recommendations. Methods The Task Force adapted the framework proposed by the Competency Benchmarks Work Group on preparing psychologists for health service practice and defined competencies applicable across training levels ranging from initial practicum training to entry into the professional workforce in pediatric psychology. Results Competencies within 6 cluster areas, including science, professionalism, interpersonal, application, education, and systems, and 1 crosscutting cluster, crosscutting knowledge competencies in pediatric psychology, are presented in this report. Conclusions Recommendations for the use of, and the further refinement of, these suggested competencies are discussed. PMID:24719239

  14. Standards for vision science libraries: 2014 revision.

    PubMed

    Motte, Kristin; Caldwell, C Brooke; Lamson, Karen S; Ferimer, Suzanne; Nims, J Chris

    2014-10-01

    This Association of Vision Science Librarians revision of the "Standards for Vision Science Libraries" aspires to provide benchmarks to address the needs for the services and resources of modern vision science libraries (academic, medical or hospital, pharmaceutical, and so on), which share a core mission, are varied by type, and are located throughout the world. Through multiple meeting discussions, member surveys, and a collaborative revision process, the standards have been updated for the first time in over a decade. While the range of types of libraries supporting vision science services, education, and research is wide, all libraries, regardless of type, share core attributes, which the standards address. The current standards can and should be used to help develop new vision science libraries or to expand the growth of existing libraries, as well as to support vision science librarians in their work to better provide services and resources to their respective users.

  15. Standards for vision science libraries: 2014 revision

    PubMed Central

    Motte, Kristin; Caldwell, C. Brooke; Lamson, Karen S.; Ferimer, Suzanne; Nims, J. Chris

    2014-01-01

    Objective: This Association of Vision Science Librarians revision of the “Standards for Vision Science Libraries” aspires to provide benchmarks to address the needs for the services and resources of modern vision science libraries (academic, medical or hospital, pharmaceutical, and so on), which share a core mission, are varied by type, and are located throughout the world. Methods: Through multiple meeting discussions, member surveys, and a collaborative revision process, the standards have been updated for the first time in over a decade. Results: While the range of types of libraries supporting vision science services, education, and research is wide, all libraries, regardless of type, share core attributes, which the standards address. Conclusions: The current standards can and should be used to help develop new vision science libraries or to expand the growth of existing libraries, as well as to support vision science librarians in their work to better provide services and resources to their respective users. PMID:25349547

  16. Reversibility of Pt-Skin and Pt-Skeleton Nanostructures in Acidic Media.

    PubMed

    Durst, Julien; Lopez-Haro, Miguel; Dubau, Laetitia; Chatenet, Marian; Soldo-Olivier, Yvonne; Guétaz, Laure; Bayle-Guillemaud, Pascale; Maillard, Frédéric

    2014-02-06

    Following a well-defined series of acid and heat treatments on a benchmark Pt3Co/C sample, three different nanostructures of interest for the electrocatalysis of the oxygen reduction reaction were tailored. These nanostructures could be sorted into the "Pt-skin" structure, made of one pure Pt overlayer, and the "Pt-skeleton" structure, made of 2-3 Pt overlayers surrounding the Pt-Co alloy core. Using a unique combination of high-resolution aberration-corrected STEM-EELS, XRD, EXAFS, and XANES measurements, we provide atomically resolved pictures of these different nanostructures, including measurement of the Pt-shell thickness forming in acidic media and the resulting changes of the bulk and core chemical composition. It is shown that the Pt-skin is reverted toward the Pt-skeleton upon contact with acid electrolyte. This change in structure causes strong variations of the chemical composition.

  17. Network evolution model for supply chain with manufactures as the core.

    PubMed

    Fang, Haiyang; Jiang, Dali; Yang, Tinghong; Fang, Ling; Yang, Jian; Li, Wu; Zhao, Jing

    2018-01-01

    Building an evolution model of supply chain networks can help in understanding their development laws. However, specific characteristics and attributes of real supply chains are often neglected in existing evolution models. This work proposes a new evolution model of supply chains with manufactures as the core, based on external market demand and internal competition-cooperation. The evolution model assumes the external market environment is relatively stable and considers several factors, including the specific topology of the supply chain, external market demand, ecological growth and flow conservation. The simulation results suggest that the networks evolved by our model have similar structures to real supply chains. Meanwhile, the influences of external market demand and internal competition-cooperation on network evolution are analyzed. Additionally, 38 benchmark data sets are applied to validate the rationality of our evolution model, in which nine manufacturing supply chains match the features of the networks constructed by our model.

  18. Network evolution model for supply chain with manufactures as the core

    PubMed Central

    Jiang, Dali; Fang, Ling; Yang, Jian; Li, Wu; Zhao, Jing

    2018-01-01

    Building an evolution model of supply chain networks can help in understanding their development laws. However, specific characteristics and attributes of real supply chains are often neglected in existing evolution models. This work proposes a new evolution model of supply chains with manufactures as the core, based on external market demand and internal competition-cooperation. The evolution model assumes the external market environment is relatively stable and considers several factors, including the specific topology of the supply chain, external market demand, ecological growth and flow conservation. The simulation results suggest that the networks evolved by our model have similar structures to real supply chains. Meanwhile, the influences of external market demand and internal competition-cooperation on network evolution are analyzed. Additionally, 38 benchmark data sets are applied to validate the rationality of our evolution model, in which nine manufacturing supply chains match the features of the networks constructed by our model. PMID:29370201

  19. Computational Performance of a Parallelized Three-Dimensional High-Order Spectral Element Toolbox

    NASA Astrophysics Data System (ADS)

    Bosshard, Christoph; Bouffanais, Roland; Clémençon, Christian; Deville, Michel O.; Fiétier, Nicolas; Gruber, Ralf; Kehtari, Sohrab; Keller, Vincent; Latt, Jonas

    In this paper, a comprehensive performance review of an MPI-based high-order three-dimensional spectral element method C++ toolbox is presented. The focus is on the evaluation of several performance aspects, with a particular emphasis on parallel efficiency. The performance evaluation is analyzed with the help of a time prediction model based on a parameterization of the application and the hardware resources. A tailor-made CFD computation benchmark case is introduced and used to carry out this review, with particular attention to clusters with up to 8192 cores. Some problems in the parallel implementation have been detected and corrected. The theoretical complexities with respect to the number of elements, to the polynomial degree, and to communication needs are correctly reproduced. It is concluded that this type of code has a nearly perfect speed-up on machines with thousands of cores, and is ready to make the step to next-generation petaflop machines.
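
    For reference, the speed-up and parallel-efficiency figures reported in such a review follow directly from wall-clock timings. The Python sketch below uses hypothetical timings (not the toolbox's measurements) to show the calculation.

      # Hypothetical wall-clock timings (seconds) for a fixed benchmark case.
      timings = {1: 1000.0, 512: 2.05, 2048: 0.55, 8192: 0.16}   # cores -> time, illustrative

      t1 = timings[1]
      for cores, t in sorted(timings.items()):
          speedup = t1 / t
          efficiency = speedup / cores
          print(f"{cores:>5} cores: speed-up = {speedup:8.1f}, parallel efficiency = {efficiency:4.2f}")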

  20. Interactive high-resolution isosurface ray casting on multicore processors.

    PubMed

    Wang, Qin; JaJa, Joseph

    2008-01-01

    We present a new method for the interactive rendering of isosurfaces using ray casting on multi-core processors. This method consists of a combination of an object-order traversal that coarsely identifies possible candidate 3D data blocks for each small set of contiguous pixels, and an isosurface ray casting strategy tailored for the resulting limited-size lists of candidate 3D data blocks. While static screen partitioning is widely used in the literature, our scheme performs dynamic allocation of groups of ray casting tasks to ensure almost equal loads among the different threads running on multi-cores while maintaining spatial locality. We also make careful use of the memory management environment commonly present in multi-core processors. We test our system on a two-processor Clovertown platform, each processor consisting of a Quad-Core 1.86-GHz Intel Xeon Processor, for a number of widely different benchmarks. The detailed experimental results show that our system is efficient and scalable, and achieves high cache performance and excellent load balancing, resulting in an overall performance that is superior to any of the previous algorithms. In fact, we achieve interactive isosurface rendering on a 1024x1024 screen for all the datasets tested, up to the maximum size of the main memory of our platform.
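
    The dynamic allocation scheme described above can be sketched independently of any rendering code. The Python example below is only an illustration of the scheduling idea, not the authors' implementation: worker threads repeatedly grab the next small group of screen tiles from a shared counter, so faster threads simply claim more groups, and the per-tile ray-casting work is replaced by a placeholder function.

      # Minimal sketch of dynamic load balancing over screen tiles: workers pull the
      # next group of tiles from a shared counter instead of using a static screen
      # partition. The per-tile work below is a stand-in for the actual ray casting.
      import threading

      TILES = [(tx, ty) for ty in range(32) for tx in range(32)]   # 32x32 tile grid
      GROUP = 4                                                    # tiles handed out per request
      lock = threading.Lock()
      next_index = 0

      def render_tile(tile):
          tx, ty = tile
          return sum((tx * 31 + ty * 17 + k) % 7 for k in range(1000))  # fake, uneven work

      def worker(results):
          global next_index
          while True:
              with lock:                   # atomically claim the next group of tiles
                  start = next_index
                  next_index += GROUP
              if start >= len(TILES):
                  return
              for tile in TILES[start:start + GROUP]:
                  results[tile] = render_tile(tile)

      results = {}
      threads = [threading.Thread(target=worker, args=(results,)) for _ in range(8)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      print(f"rendered {len(results)} tiles with 8 worker threads")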

  1. Deploy Nalu/Kokkos algorithmic infrastructure with performance benchmarking.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Domino, Stefan P.; Ananthan, Shreyas; Knaus, Robert C.

    The former Nalu interior heterogeneous algorithm design, which was originally designed to manage matrix assembly operations over all elemental topology types, has been modified to operate over homogeneous collections of mesh entities. This newly templated kernel design allows for removal of workset variable resize operations that were formerly required at each loop over a Sierra ToolKit (STK) bucket (nominally, 512 entities in size). Extensive usage of the Standard Template Library (STL) std::vector has been removed in favor of intrinsic Kokkos memory views. In this milestone effort, the transition to Kokkos as the underlying infrastructure to support performance and portability on many-core architectures has been deployed for key matrix algorithmic kernels. A unit-test driven design effort has developed a homogeneous entity algorithm that employs a team-based thread parallelism construct. The STK Single Instruction Multiple Data (SIMD) infrastructure is used to interleave data for improved vectorization. The collective algorithm design, which allows for concurrent threading and SIMD management, has been deployed for the core low-Mach element-based algorithm. Several tests to ascertain SIMD performance on Intel KNL and Haswell architectures have been carried out. The performance test matrix includes evaluation of both low- and higher-order methods. The higher-order low-Mach methodology builds on polynomial promotion of the core low-order control volume finite element method (CVFEM). Performance testing of the Kokkos-view/SIMD design indicates low-order matrix assembly kernel speed-up ranging between two and four times depending on mesh loading and node count. Better speedups are observed for higher-order meshes (currently only P=2 has been tested) especially on KNL. The increased workload per element on higher-order meshes benefits from the wide SIMD width on KNL machines. Combining multiple threads with SIMD on KNL achieves a 4.6x speedup over the baseline, with assembly timings faster than that observed on Haswell architecture. The computational workload of higher-order meshes, therefore, seems ideally suited for the many-core architecture and justifies further exploration of higher-order on NGP platforms. A Trilinos/Tpetra-based multi-threaded GMRES preconditioned by symmetric Gauss Seidel (SGS) represents the core solver infrastructure for the low-Mach advection/diffusion implicit solves. The threaded solver stack has been tested on small problems on NREL's Peregrine system using the newly developed and deployed Kokkos-view/SIMD kernels. Efforts are underway to deploy the Tpetra-based solver stack on the NERSC Cori system to benchmark its performance at scale on KNL machines.

  2. Compression Behavior of Fluted-Core Composite Panels

    NASA Technical Reports Server (NTRS)

    Schultz, Marc R.; Oremont, Leonard; Guzman, J. Carlos; McCarville, Douglas; Rose, Cheryl A.; Hilburger, Mark W.

    2011-01-01

    In recent years, fiber-reinforced composites have become more accepted for aerospace applications. Specifically, during NASA's recent efforts to develop new launch vehicles, composite materials were considered and baselined for a number of structures. Because of mass and stiffness requirements, sandwich composites are often selected for many applications. However, there are a number of manufacturing and in-service concerns associated with traditional honeycomb-core sandwich composites that in certain instances may be alleviated through the use of other core materials or construction methods. Fluted-core, which consists of integral angled web members with structural radius fillers spaced between laminate face sheets, is one such construction alternative and is considered herein. Two different fluted-core designs were considered: a subscale design and a full-scale design sized for a heavy-lift-launch-vehicle interstage. In particular, axial compression of fluted-core composites was evaluated with experiments and finite-element analyses (FEA); axial compression is the primary loading condition in dry launch-vehicle barrel sections. Detailed finite-element models were developed to represent all components of the fluted-core construction, and geometrically nonlinear analyses were conducted to predict both buckling and material failures. Good agreement was obtained between test data and analyses, for both local buckling and ultimate material failure. Though the local buckling events are not catastrophic, the resulting deformations contribute to material failures. Consequently, an important observation is that the material failure loads and modes would not be captured by either linear analyses or nonlinear smeared-shell analyses. Compression-after-impact (CAI) performance of fluted core composites was also investigated by experimentally testing samples impacted with 6 ft.-lb. impact energies. It was found that such impacts reduced the ultimate load carrying capability by approximately 40% on the subscale test articles and by less than 20% on the full-scale test articles. Nondestructive inspection of the damage zones indicated that the detectable damage was limited to no more than one flute on either side of any given impact. More study is needed, but this may indicate that an inherent damage-arrest capability of fluted core could provide benefits over traditional sandwich designs in certain weight-critical applications.

  3. SpaceCubeX: A Framework for Evaluating Hybrid Multi-Core CPU FPGA DSP Architectures

    NASA Technical Reports Server (NTRS)

    Schmidt, Andrew G.; Weisz, Gabriel; French, Matthew; Flatley, Thomas; Villalpando, Carlos Y.

    2017-01-01

    The SpaceCubeX project is motivated by the need for high performance, modular, and scalable on-board processing to help scientists answer critical 21st century questions about global climate change, air quality, ocean health, and ecosystem dynamics, while adding new capabilities such as low-latency data products for extreme event warnings. These goals translate into on-board processing throughput requirements that are on the order of 100-1,000 times more than those of previous Earth Science missions for standard processing, compression, storage, and downlink operations. To study possible future architectures to achieve these performance requirements, the SpaceCubeX project provides an evolvable testbed and framework that enables a focused design space exploration of candidate hybrid CPU/FPGA/DSP processing architectures. The framework includes ArchGen, an architecture generator tool populated with candidate architecture components, performance models, and IP cores, that allows an end user to specify the type, number, and connectivity of a hybrid architecture. The framework requires minimal extensions to integrate new processors, such as the anticipated High Performance Spaceflight Computer (HPSC), reducing time to initiate benchmarking by months. To evaluate the framework, we leverage a wide suite of high performance embedded computing benchmarks and Earth science scenarios to ensure robust architecture characterization. We report on our project's Year 1 efforts and demonstrate the capabilities across four simulation testbed models, a baseline SpaceCube 2.0 system, a dual ARM A9 processor system, a hybrid quad ARM A53 and FPGA system, and a hybrid quad ARM A53 and DSP system.

  4. Disaster metrics: quantitative benchmarking of hospital surge capacity in trauma-related multiple casualty events.

    PubMed

    Bayram, Jamil D; Zuabi, Shawki; Subbarao, Italo

    2011-06-01

    Hospital surge capacity in multiple casualty events (MCE) is the core of hospital medical response, and an integral part of the total medical capacity of the community affected. To date, however, there has been no consensus regarding the definition or quantification of hospital surge capacity. The first objective of this study was to quantitatively benchmark the various components of hospital surge capacity pertaining to the care of critically and moderately injured patients in trauma-related MCE. The second objective was to illustrate the applications of those quantitative parameters in local, regional, national, and international disaster planning; in the distribution of patients to various hospitals by prehospital medical services; and in the decision-making process for ambulance diversion. A 2-step approach was adopted in the methodology of this study. First, an extensive literature search was performed, followed by mathematical modeling. Quantitative studies on hospital surge capacity for trauma injuries were used as the framework for our model. The North Atlantic Treaty Organization triage categories (T1-T4) were used in the modeling process for simplicity purposes. Hospital Acute Care Surge Capacity (HACSC) was defined as the maximum number of critical (T1) and moderate (T2) casualties a hospital can adequately care for per hour, after recruiting all possible additional medical assets. HACSC was modeled to be equal to the number of emergency department beds (#EDB), divided by the emergency department time (EDT); HACSC = #EDB/EDT. In trauma-related MCE, the EDT was quantitatively benchmarked to be 2.5 (hours). Because most of the critical and moderate casualties arrive at hospitals within a 6-hour period requiring admission (by definition), the hospital bed surge capacity must match the HACSC at 6 hours to ensure coordinated care, and it was mathematically benchmarked to be 18% of the staffed hospital bed capacity. Defining and quantitatively benchmarking the different components of hospital surge capacity is vital to hospital preparedness in MCE. Prospective studies of our mathematical model are needed to verify its applicability, generalizability, and validity.
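
    The two benchmarked quantities defined above translate into a one-line calculation each. The Python sketch below applies HACSC = #EDB / EDT with the benchmarked EDT of 2.5 hours and the 18% bed surge capacity figure; the hospital size used is hypothetical.

      # Worked example of the benchmarks quoted above (hospital size is hypothetical).
      ED_BEDS = 40                 # number of emergency department beds (#EDB), hypothetical
      ED_TIME_HOURS = 2.5          # benchmarked emergency department time (EDT) in trauma MCE
      STAFFED_BEDS = 500           # staffed hospital bed capacity, hypothetical

      hacsc = ED_BEDS / ED_TIME_HOURS            # T1+T2 casualties the hospital can absorb per hour
      bed_surge_capacity = 0.18 * STAFFED_BEDS   # beds needed to match HACSC over ~6 hours

      print(f"HACSC = {hacsc:.1f} critical/moderate casualties per hour")
      print(f"Required bed surge capacity = {bed_surge_capacity:.0f} beds (18% of staffed beds)")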

  5. WarpEngine, a Flexible Platform for Distributed Computing Implemented in the VEGA Program and Specially Targeted for Virtual Screening Studies.

    PubMed

    Pedretti, Alessandro; Mazzolari, Angelica; Vistoli, Giulio

    2018-05-21

    The manuscript describes WarpEngine, a novel platform implemented within the VEGA ZZ suite of software for performing distributed simulations both in local and wide area networks. Despite being tailored for structure-based virtual screening campaigns, WarpEngine possesses the required flexibility to carry out distributed calculations utilizing various pieces of software, which can be easily encapsulated within this platform without changing their source codes. WarpEngine takes advantage of all cheminformatics features implemented in the VEGA ZZ program as well as of its largely customizable scripting architecture, thus allowing an efficient distribution of various time-demanding simulations. To offer an example of the WarpEngine potentials, the manuscript includes a set of virtual screening campaigns based on the ACE data set of the DUD-E collections using PLANTS as the docking application. Benchmarking analyses revealed a satisfactory linearity of the WarpEngine performances, the speed-up values being roughly equal to the number of utilized cores. Again, the computed scalability values emphasized that a vast majority (i.e., >90%) of the performed simulations benefit from the distributed platform presented here. WarpEngine can be freely downloaded along with the VEGA ZZ program at www.vegazz.net.

  6. QuickProbs—A Fast Multiple Sequence Alignment Algorithm Designed for Graphics Processors

    PubMed Central

    Gudyś, Adam; Deorowicz, Sebastian

    2014-01-01

    Multiple sequence alignment is a crucial task in a number of biological analyses like secondary structure prediction, domain searching, phylogeny, etc. MSAProbs is currently the most accurate alignment algorithm, but its effectiveness is obtained at the expense of computational time. In the paper we present QuickProbs, the variant of MSAProbs customised for graphics processors. We selected the two most time consuming stages of MSAProbs to be redesigned for GPU execution: the posterior matrices calculation and the consistency transformation. Experiments on three popular benchmarks (BAliBASE, PREFAB, OXBench-X) on quad-core PC equipped with high-end graphics card show QuickProbs to be 5.7 to 9.7 times faster than original CPU-parallel MSAProbs. Additional tests performed on several protein families from Pfam database give overall speed-up of 6.7. Compared to other algorithms like MAFFT, MUSCLE, or ClustalW, QuickProbs proved to be much more accurate at similar speed. Additionally we introduce a tuned variant of QuickProbs which is significantly more accurate on sets of distantly related sequences than MSAProbs without exceeding its computation time. The GPU part of QuickProbs was implemented in OpenCL, thus the package is suitable for graphics processors produced by all major vendors. PMID:24586435

  7. Publications - GMC 367 | Alaska Division of Geological & Geophysical

    Science.gov Websites

    Publication Date: Aug 2009. Publisher: Alaska Division of Geological & Geophysical Surveys. Bibliographic Reference: U.S. Minerals Management Service, and Core Laboratories, 2009, Sidewall core analyses

  8. Benchmark datasets for phylogenomic pipeline validation, applications for foodborne pathogen surveillance

    PubMed Central

    Rand, Hugh; Shumway, Martin; Trees, Eija K.; Simmons, Mustafa; Agarwala, Richa; Davis, Steven; Tillman, Glenn E.; Defibaugh-Chavez, Stephanie; Carleton, Heather A.; Klimke, William A.; Katz, Lee S.

    2017-01-01

    Background As next generation sequence technology has advanced, there have been parallel advances in genome-scale analysis programs for determining evolutionary relationships as proxies for epidemiological relationship in public health. Most new programs skip traditional steps of ortholog determination and multi-gene alignment, instead identifying variants across a set of genomes, then summarizing results in a matrix of single-nucleotide polymorphisms or alleles for standard phylogenetic analysis. However, public health authorities need to document the performance of these methods with appropriate and comprehensive datasets so they can be validated for specific purposes, e.g., outbreak surveillance. Here we propose a set of benchmark datasets to be used for comparison and validation of phylogenomic pipelines. Methods We identified four well-documented foodborne pathogen events in which the epidemiology was concordant with routine phylogenomic analyses (reference-based SNP and wgMLST approaches). These are ideal benchmark datasets, as the trees, WGS data, and epidemiological data for each are all in agreement. We have placed these sequence data, sample metadata, and “known” phylogenetic trees in publicly-accessible databases and developed a standard descriptive spreadsheet format describing each dataset. To facilitate easy downloading of these benchmarks, we developed an automated script that uses the standard descriptive spreadsheet format. Results Our “outbreak” benchmark datasets represent the four major foodborne bacterial pathogens (Listeria monocytogenes, Salmonella enterica, Escherichia coli, and Campylobacter jejuni) and one simulated dataset where the “known tree” can be accurately called the “true tree”. The downloading script and associated table files are available on GitHub: https://github.com/WGS-standards-and-analysis/datasets. Discussion These five benchmark datasets will help standardize comparison of current and future phylogenomic pipelines, and facilitate important cross-institutional collaborations. Our work is part of a global effort to provide collaborative infrastructure for sequence data and analytic tools—we welcome additional benchmark datasets in our recommended format, and, if relevant, we will add these on our GitHub site. Together, these datasets, dataset format, and the underlying GitHub infrastructure present a recommended path for worldwide standardization of phylogenomic pipelines. PMID:29372115

  9. Benchmark datasets for phylogenomic pipeline validation, applications for foodborne pathogen surveillance.

    PubMed

    Timme, Ruth E; Rand, Hugh; Shumway, Martin; Trees, Eija K; Simmons, Mustafa; Agarwala, Richa; Davis, Steven; Tillman, Glenn E; Defibaugh-Chavez, Stephanie; Carleton, Heather A; Klimke, William A; Katz, Lee S

    2017-01-01

    As next generation sequence technology has advanced, there have been parallel advances in genome-scale analysis programs for determining evolutionary relationships as proxies for epidemiological relationship in public health. Most new programs skip traditional steps of ortholog determination and multi-gene alignment, instead identifying variants across a set of genomes, then summarizing results in a matrix of single-nucleotide polymorphisms or alleles for standard phylogenetic analysis. However, public health authorities need to document the performance of these methods with appropriate and comprehensive datasets so they can be validated for specific purposes, e.g., outbreak surveillance. Here we propose a set of benchmark datasets to be used for comparison and validation of phylogenomic pipelines. We identified four well-documented foodborne pathogen events in which the epidemiology was concordant with routine phylogenomic analyses (reference-based SNP and wgMLST approaches). These are ideal benchmark datasets, as the trees, WGS data, and epidemiological data for each are all in agreement. We have placed these sequence data, sample metadata, and "known" phylogenetic trees in publicly-accessible databases and developed a standard descriptive spreadsheet format describing each dataset. To facilitate easy downloading of these benchmarks, we developed an automated script that uses the standard descriptive spreadsheet format. Our "outbreak" benchmark datasets represent the four major foodborne bacterial pathogens ( Listeria monocytogenes , Salmonella enterica , Escherichia coli , and Campylobacter jejuni ) and one simulated dataset where the "known tree" can be accurately called the "true tree". The downloading script and associated table files are available on GitHub: https://github.com/WGS-standards-and-analysis/datasets. These five benchmark datasets will help standardize comparison of current and future phylogenomic pipelines, and facilitate important cross-institutional collaborations. Our work is part of a global effort to provide collaborative infrastructure for sequence data and analytic tools-we welcome additional benchmark datasets in our recommended format, and, if relevant, we will add these on our GitHub site. Together, these datasets, dataset format, and the underlying GitHub infrastructure present a recommended path for worldwide standardization of phylogenomic pipelines.

  10. Deriving Criteria-supporting Benchmark Values from Empirical Response Relationships: Comparison of Statistical Techniques and Effect of Log-transforming the Nutrient Variable

    EPA Science Inventory

    In analyses supporting the development of numeric nutrient criteria, multiple statistical techniques can be used to extract critical values from stressor response relationships. However there is little guidance for choosing among techniques, and the extent to which log-transfor...

  11. Credit Cards on Campus: Academic Inquiry, Objective Empiricism, or Advocacy Research?

    ERIC Educational Resources Information Center

    Manning, Robert D.; Kirshak, Ray

    2005-01-01

    Professors John M. Barron and Michael E. Staten's article in Vol. 34, No. 3 of this journal, "Usage of Credit Cards Received through College Student-Marketing Programs," purports to "provide benchmark measures of college student credit card usage." Based on empirical analyses of proprietary industry data, they conclude that "There is no…

  12. Reverse Engineering Course at Philadelphia University in Jordan

    ERIC Educational Resources Information Center

    Younis, M. Bani; Tutunji, T.

    2012-01-01

    Reverse engineering (RE) is the process of testing and analysing a system or a device in order to identify, understand and document its functionality. RE is an efficient tool in industrial benchmarking where competitors' products are dissected and evaluated for performance and costs. RE can play an important role in the re-configuration and…

  13. Comparing Marital Status and Divorce Status in Civilian and Military Populations

    ERIC Educational Resources Information Center

    Karney, Benjamin R.; Loughran, David S.; Pollard, Michael S.

    2012-01-01

    Since military operations began in Afghanistan and Iraq, lengthy deployments have led to concerns about the vulnerability of military marriages. Yet evaluating military marriages requires some benchmark against which marital outcomes in the military may be compared. These analyses drew from personnel records from the entire male population of the…

  14. Benchmarking Operations to Promote Learning: An Internal Supply Chain Perspective

    ERIC Educational Resources Information Center

    Benton, Helen; Binder, Mario; Egel-Hess, Wolfgang

    2007-01-01

    Despite the widespread discussion of organisational learning, there is little scholarly contribution on promoting learning through the practical application of management tools. This is especially true in a complex internal supply chain context of an organisation. This paper seeks to address this gap by exploring and analysing the capability of…

  15. IRIS, Gender, and Student Achievement at University of Genova

    ERIC Educational Resources Information Center

    Bonfa, Antonella; Freddano, Michela

    2012-01-01

    The article analyses the gender effects on student achievement at University of Genova and it is a part of the research performed by the University of Genova called "Benchmarks interfaculty students: Development of a gender perspective to find strategies to understand what leads students to success in their studies", financed by the…

  16. Introduction to the IWA task group on biofilm modeling.

    PubMed

    Noguera, D R; Morgenroth, E

    2004-01-01

    An International Water Association (IWA) Task Group on Biofilm Modeling was created with the purpose of comparatively evaluating different biofilm modeling approaches. The task group developed three benchmark problems for this comparison, and used a diversity of modeling techniques that included analytical, pseudo-analytical, and numerical solutions to the biofilm problems. Models in one, two, and three dimensional domains were also compared. The first benchmark problem (BM1) described a monospecies biofilm growing in a completely mixed reactor environment and had the purpose of comparing the ability of the models to predict substrate fluxes and concentrations for a biofilm system of fixed total biomass and fixed biomass density. The second problem (BM2) represented a situation in which substrate mass transport by convection was influenced by the hydrodynamic conditions of the liquid in contact with the biofilm. The third problem (BM3) was designed to compare the ability of the models to simulate multispecies and multisubstrate biofilms. These three benchmark problems allowed identification of the specific advantages and disadvantages of each modeling approach. A detailed presentation of the comparative analyses for each problem is provided elsewhere in these proceedings.
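
    As a rough illustration of what a BM1-type calculation involves, the Python sketch below solves a generic steady-state diffusion-reaction problem for a single substrate in a flat biofilm of fixed thickness with Monod kinetics, using a simple finite-difference Gauss-Seidel iteration, and reports the substrate flux into the biofilm. All parameter values are placeholders and are not the task group's benchmark specification.

      # Generic BM1-style calculation (placeholder parameters, not the IWA benchmark
      # specification): steady-state diffusion-reaction of one substrate in a flat
      # biofilm of fixed thickness with Monod kinetics, solved by finite differences
      # with Gauss-Seidel iteration. Units: cm, s, and arbitrary concentration units.
      D      = 2.0e-5    # substrate diffusivity in the biofilm (placeholder)
      L      = 0.05      # biofilm thickness (placeholder)
      S_bulk = 5.0e-3    # substrate concentration at the biofilm surface (placeholder)
      q_max  = 1.0e-4    # maximum specific uptake rate (placeholder)
      X      = 10.0      # biomass density (placeholder)
      K      = 1.0e-3    # Monod half-saturation constant (placeholder)

      N  = 100                      # grid cells across the biofilm
      dz = L / N
      S  = [S_bulk] * (N + 1)       # node 0 = substratum, node N = biofilm surface

      for _ in range(20000):        # sweep until effectively converged
          S[0] = S[1]               # no-flux boundary at the substratum
          for i in range(1, N):
              sink = q_max * X / (K + S[i])     # Monod term, linearized about current S
              S[i] = (D * (S[i - 1] + S[i + 1]) / dz**2) / (2.0 * D / dz**2 + sink)
          S[N] = S_bulk             # fixed concentration at the biofilm surface

      flux = D * (S[N] - S[N - 1]) / dz         # substrate flux into the biofilm
      print(f"substrate flux into biofilm: {flux:.3e}")
      print(f"substrate concentration at the substratum: {S[0]:.3e}")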

  17. Progression-free survival as primary endpoint in randomized clinical trials of targeted agents for advanced renal cell carcinoma. Correlation with overall survival, benchmarking and power analysis.

    PubMed

    Bria, Emilio; Massari, Francesco; Maines, Francesca; Pilotto, Sara; Bonomi, Maria; Porta, Camillo; Bracarda, Sergio; Heng, Daniel; Santini, Daniele; Sperduti, Isabella; Giannarelli, Diana; Cognetti, Francesco; Tortora, Giampaolo; Milella, Michele

    2015-01-01

    A correlation, power and benchmarking analysis between progression-free and overall survival (PFS, OS) of randomized trials with targeted agents or immunotherapy for advanced renal cell carcinoma (RCC) was performed to provide a practical tool for clinical trial design. For the 1st line of treatment, a significant correlation was observed between 6-month PFS and 12-month OS, between 3-month PFS and 9-month OS, and between the distributions of the cumulative PFS and OS estimates. According to the regression equation derived for 1st-line targeted agents, 7859, 2873, 712, and 190 patients would be required to determine a 3%, 5%, 10% and 20% PFS advantage at 6 months, corresponding to an absolute increase in 12-month OS rates of 2%, 3%, 6% and 11%, respectively. These data support PFS as a reliable endpoint for advanced RCC patients receiving up-front therapies. Benchmarking and power analyses, on the basis of the updated survival expectations, may represent practical tools for future trial design. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
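
    The sample-size side of such a power analysis can be illustrated with the standard two-proportion formula; the Python sketch below is a generic calculation, not the regression-based method of the paper, and the baseline 6-month PFS rate, alpha and power are assumptions chosen only for illustration.

      # Generic power-analysis sketch (not the paper's exact regression-based method):
      # patients per arm needed to detect an absolute improvement in the 6-month PFS
      # rate, using the standard two-proportion sample-size formula.
      from math import sqrt
      from statistics import NormalDist

      def n_per_arm(p_control, p_experimental, alpha=0.05, power=0.80):
          z_a = NormalDist().inv_cdf(1 - alpha / 2)
          z_b = NormalDist().inv_cdf(power)
          p_bar = (p_control + p_experimental) / 2
          num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p_control * (1 - p_control)
                              + p_experimental * (1 - p_experimental))) ** 2
          return num / (p_control - p_experimental) ** 2

      baseline_6mo_pfs = 0.40                      # hypothetical control-arm rate
      for delta in (0.03, 0.05, 0.10, 0.20):       # absolute PFS advantages considered above
          n = n_per_arm(baseline_6mo_pfs, baseline_6mo_pfs + delta)
          print(f"+{delta:.0%} PFS at 6 months: ~{n:.0f} patients per arm")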

  18. Ice cores and SeaRISE: What we do (and don't) know

    NASA Technical Reports Server (NTRS)

    Alley, Richard B.

    1991-01-01

    Ice core analyses are needed in SeaRISE to learn what the West Antarctic ice sheet and other marine ice sheets were like in the past, what climate changes led to their present states, and how they behave. The major results of interest to SeaRISE from previous ice core analyses in West Antarctica are that the end of the last ice age caused temperature and accumulation rate increases in inland regions, leading to ice sheet thickening followed by thinning to the present.

  19. Roofline model toolkit: A practical tool for architectural and program analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lo, Yu Jung; Williams, Samuel; Van Straalen, Brian

    We present preliminary results of the Roofline Toolkit for multicore, many core, and accelerated architectures. This paper focuses on the processor architecture characterization engine, a collection of portable instrumented micro benchmarks implemented with Message Passing Interface (MPI), and OpenMP used to express thread-level parallelism. These benchmarks are specialized to quantify the behavior of different architectural features. Compared to previous work on performance characterization, these microbenchmarks focus on capturing the performance of each level of the memory hierarchy, along with thread-level parallelism, instruction-level parallelism and explicit SIMD parallelism, measured in the context of the compilers and run-time environments. We also measure sustained PCIe throughput with four GPU memory managed mechanisms. By combining results from the architecture characterization with the Roofline model based solely on architectural specifications, this work offers insights for performance prediction of current and future architectures and their software systems. To that end, we instrument three applications and plot their resultant performance on the corresponding Roofline model when run on a Blue Gene/Q architecture.
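
    The Roofline bound itself is a one-line formula: attainable performance is the minimum of the peak compute rate and the product of arithmetic intensity and sustained memory bandwidth. The Python sketch below evaluates that bound for a few arithmetic intensities using illustrative machine numbers, not values measured by the toolkit.

      # Minimal sketch of the basic Roofline bound (machine numbers are illustrative,
      # not toolkit measurements): attainable performance is the lesser of the peak
      # compute rate and arithmetic intensity times sustained memory bandwidth.
      PEAK_GFLOPS = 200.0        # peak floating-point rate, GFLOP/s (illustrative)
      PEAK_BW_GBS = 100.0        # sustained memory bandwidth, GB/s (illustrative)

      def roofline_gflops(arithmetic_intensity_flop_per_byte):
          return min(PEAK_GFLOPS, PEAK_BW_GBS * arithmetic_intensity_flop_per_byte)

      for ai in (0.25, 1.0, 2.0, 8.0):   # FLOPs per byte moved, e.g. from application counters
          print(f"AI = {ai:>4} flop/byte -> bound = {roofline_gflops(ai):6.1f} GFLOP/s")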

  20. Recovery of time evolution of Grad-Shafranov equilibria from single-spacecraft data: Benchmarking and application to a flux transfer event

    NASA Astrophysics Data System (ADS)

    Hasegawa, Hiroshi; Sonnerup, Bengt U. Ö.; Nakamura, Takuma K. M.

    2010-11-01

    First results are presented of a method, developed by Sonnerup and Hasegawa (2010), for analyzing time evolution of magnetohydrostatic Grad-Shafranov (GS) equilibria, using data recorded by an observing probe as it traverses a quasi-static, two-dimensional (2D), magnetic-field/plasma structure. The method recovers spatial initial values used in the classical GS reconstruction for an interval before and after the time of actual measurements, by advancing them backward and forward in time based on a set of equations for an incompressible plasma; the consequence is generation of multiple GS maps or a movie of the 2D field structure. The method is successfully benchmarked by use of a 2D magnetohydrodynamic simulation of time-dependent magnetic reconnection, and then is applied to a flux transfer event (FTE) seen by the Cluster spacecraft at the dayside high-latitude magnetopause. The application shows that the field lines constituting the FTE flux rope were contracting toward its center as a result of modest convective flow in the region around the core of the flux rope.

  1. Tracking Student Progression through the Core Curriculum. CCRC Analytics

    ERIC Educational Resources Information Center

    Hodara, Michelle; Rodriguez, Olga

    2013-01-01

    This report demonstrates useful methods for examining student progression through the core curriculum. The authors carry out analyses at two colleges in two different states, illustrating students' overall progression through the core curriculum and the relationship of this "core" progression to their college outcomes. By means of this analysis,…

  2. Characterizing Subcore Heterogeneity: A New Analytical Model and Technique to Observe the Spatial Variation of Transverse Dispersion

    NASA Astrophysics Data System (ADS)

    Boon, Maartje; Niu, Ben; Krevor, Sam

    2015-04-01

    Transverse dispersion, the lateral spread of chemical components in an aqueous solution caused by small heterogeneities in a rock, plays an important role in spreading, mixing and reaction during flow through porous media. Conventionally, transverse dispersion has been determined with the use of an annular core device and concentration measurements of the effluent (Blackwell, 1962; Hassinger and Von Rosenberg, 1968) or concentration measurements at probe locations along the core (Han et al., 1985; Harleman and Rumer, 1963). Both methods were designed around an analytical model of the transport equations that assumes a single constant transverse dispersion coefficient, which is then used to analyse the experimental data. We have developed a new core flood test with the aim of characterising chemical transport and dispersion directly in three dimensions, (1) to produce higher-precision observations of transverse dispersion than have been possible before and (2) to observe the effects of rock heterogeneity on transport and summarise them using statistical descriptions, allowing a more nuanced picture of transport than a description with a single transverse dispersion coefficient. The dispersion of a NaI aqueous solution injected into a Berea sandstone rock core was visualised in 3D with the use of a medical x-ray CT scanner. A device consisting of three annular regions was used for injection: water was injected into the centre and outer annular regions, and a NaI aqueous solution was injected into the middle annular region. An analytical solution to the flow and transport equations for this new inlet configuration was derived to design the tests. The Berea sandstone core was 20 cm long and had a diameter of 7.62 cm. The core flood experiments were carried out at Peclet numbers of 0.5 and 2. At steady state, x-ray images were taken every 0.2 cm along the core. This resulted in a high-quality 3D digital data set of the concentration distribution of the NaI aqueous solution at steady state for the different Peclet numbers. The average transverse dispersion coefficient (Dt) was calculated from the change in variance of the transverse distance travelled by the NaI solution along the core. A Dt of 2.396 × 10⁻⁴ cm²/min was obtained for Peclet number 0.5 and a Dt of 4.771 × 10⁻⁴ cm²/min for Peclet number 2. These values coincide precisely with the Dt calculated from the pore-scale modelling of Berea sandstone by Bijeljic and Blunt (2007) and serve as a benchmark demonstrating the utility and repeatability of the technique. This new technique shows promise for use in characterising average transport characteristics and analysing the impacts of natural rock heterogeneity. Acknowledgement: This work was carried out as part of the Qatar Carbonates and Carbon Storage Research Centre (QCCSRC). The authors gratefully acknowledge the funding of QCCSRC provided jointly by Qatar Petroleum, Shell, and the Qatar Science & Technology Park, their support of the present project, and permission to present this research. References: 1. Blackwell, 1962 - Laboratory studies of microscopic dispersion phenomena. Society of Petroleum Engineers Journal 2, no. 1: 1-8. 2. Bijeljic, B., and M. J. Blunt (2007), Pore-scale modeling of transverse dispersion in porous media, Water Resour. Res., 43, W12S11, doi:10.1029/2006WR005700. 3. Han, N.W., Bhakta, J., and Carbonell, R.G., 1985 - Longitudinal and lateral dispersion in packed beds: Effect of column length and particle size distribution. AIChE Journal 31, no. 2: 277-288. 4. Harleman, D.R., and Rumer, R.R., 1963 - Longitudinal and lateral dispersion in an isotropic porous medium. Journal of Fluid Mechanics 16, no. 2: 385-394. 5. Hassinger, R.C., and Von Rosenberg, D.U., 1968 - A mathematical and experimental examination of transverse dispersion coefficients. Society of Petroleum Engineers Journal 8, no. 1: 195-204.
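
    The averaging step described above, extracting Dt from the growth of the transverse variance of the plume along the core, can be illustrated with a short sketch. This is a minimal illustration, not the authors' processing code: the variance data, the interstitial velocity, and the linear-growth relation sigma^2(x) = sigma0^2 + 2*Dt*x/v at steady state are assumptions for the example.

    ```python
    import numpy as np

    # Hypothetical steady-state data: distance along the core (cm) and the
    # transverse variance of the NaI plume (cm^2) measured from the CT slices.
    x = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0, 18.0])       # cm
    sigma2 = np.array([0.052, 0.061, 0.070, 0.080, 0.089, 0.099,
                       0.108, 0.118, 0.127])                                # cm^2
    v = 0.05  # assumed interstitial velocity, cm/min

    # For steady-state transverse spreading, sigma^2(x) ~ sigma0^2 + 2*Dt*x/v,
    # so Dt follows from the slope of a linear fit of sigma^2 against x.
    slope, intercept = np.polyfit(x, sigma2, 1)
    Dt = 0.5 * slope * v
    print(f"Transverse dispersion coefficient Dt = {Dt:.3e} cm^2/min")
    ```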

  3. Coupled Monte Carlo neutronics and thermal hydraulics for power reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernnat, W.; Buck, M.; Mattes, M.

    The availability of high performance computing resources increasingly enables the use of detailed Monte Carlo models even for full core power reactors. The detailed structure of the core can be described by lattices, modeled by so-called repeated structures, e.g., in Monte Carlo codes such as MCNP5 or MCNPX. For cores with mainly uniform material compositions and fuel and moderator temperatures, there is no problem in constructing core models. However, when the material composition and the temperatures vary strongly, a huge number of different material cells must be described, which complicates the input and in many cases exceeds code or memory limits. The second problem arises with the preparation of corresponding temperature-dependent cross sections and thermal scattering laws. Only if these problems can be solved is a realistic coupling of Monte Carlo neutronics with an appropriate thermal-hydraulics model possible. In this paper a method for the treatment of detailed material and temperature distributions in MCNP5 is described, based on user-specified internal functions which assign distinct elements of the core cells to material specifications (e.g., water density) and temperatures from a thermal-hydraulics code. The core grid itself can be described with a uniform material specification. The temperature dependency of cross sections and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Applications will be shown for the stationary part of the Purdue PWR benchmark using ATHLET for thermal hydraulics and for a generic Modular High Temperature Reactor using THERMIX for thermal hydraulics. (authors)
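
    The interpolation step described above, evaluating a temperature-dependent cross section from a small set of libraries pre-generated at fixed temperatures, can be sketched as follows. This is a minimal illustration, not the authors' MCNP5 implementation; the square-root-of-temperature weighting between the two bracketing library temperatures and the sample data are assumptions for the example.

    ```python
    import numpy as np

    # Hypothetical cross-section values (barns) pre-generated at a coarse
    # temperature grid (K) for one nuclide and one energy point.
    grid_T = np.array([300.0, 600.0, 900.0, 1200.0])
    grid_xs = np.array([12.4, 11.1, 10.3, 9.8])

    def interp_xs(T):
        """Interpolate the cross section to temperature T using a sqrt(T)
        weighting between the two bracketing library temperatures (a common
        choice for Doppler-broadened data; assumed here, not taken from the paper)."""
        i = np.searchsorted(grid_T, T) - 1
        i = np.clip(i, 0, len(grid_T) - 2)
        w = (np.sqrt(T) - np.sqrt(grid_T[i])) / (np.sqrt(grid_T[i + 1]) - np.sqrt(grid_T[i]))
        return (1.0 - w) * grid_xs[i] + w * grid_xs[i + 1]

    # Temperature field supplied by the thermal-hydraulics code for a few cells:
    for T in (450.0, 750.0, 1050.0):
        print(f"T = {T:6.1f} K  ->  sigma = {interp_xs(T):.3f} b")
    ```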

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunett, A. J.; Fei, T.; Strons, P. S.

    The Transient Reactor Test Facility (TREAT), located at Idaho National Laboratory (INL), is a test facility designed to evaluate the performance of reactor fuels and materials under transient accident conditions. The facility, an air-cooled, graphite-moderated reactor designed to utilize fuel containing high-enriched uranium (HEU), has been in non-operational standby status since 1994. Currently, in support of the missions of the Department of Energy (DOE) National Nuclear Security Administration (NNSA) Material Management and Minimization (M3) Reactor Conversion Program, a new core design is being developed for TREAT that will utilize low-enriched uranium (LEU). The primary objective of this conversion effort is to design an LEU core that is capable of meeting the performance characteristics of the existing HEU core. Minimal, if any, changes are anticipated for the supporting systems (e.g. reactor trip system, filtration/cooling system, etc.); therefore, the LEU core must also be able to function with the existing supporting systems, and must also satisfy acceptable safety limits. In support of the LEU conversion effort, a range of ancillary safety analyses are required to evaluate the LEU core operation relative to that of the existing facility. These analyses cover neutronics, shielding, and thermal hydraulic topics that have been identified as having the potential to have reduced safety margins due to conversion to LEU fuel, or are required to support the required safety analyses documentation. The majority of these ancillary tasks have been identified in [1] and [2]. The purpose of this report is to document the ancillary safety analyses that have been performed at Argonne National Laboratory during the early stages of the LEU design effort, and to describe ongoing and anticipated analyses. For all analyses presented in this report, methodologies are utilized that are consistent with, or improved from, those used in analyses for the HEU Final Safety Analysis Report (FSAR) [3]. Depending on the availability of historical data derived from HEU TREAT operation, results calculated for the LEU core are compared to measurements obtained from HEU TREAT operation. While all analyses in this report are largely considered complete and have been reviewed for technical content, it is important to note that all topics will be revisited once the LEU design approaches its final stages of maturity. For most safety significant issues, it is expected that the analyses presented here will be bounding, but additional calculations will be performed as necessary to support safety analyses and safety documentation. It should also be noted that these analyses were completed as the LEU design evolved, and therefore utilized different LEU reference designs. Preliminary shielding, neutronic, and thermal hydraulic analyses have been completed and have generally demonstrated that the various LEU core designs will satisfy existing safety limits and standards also satisfied by the existing HEU core. These analyses include the assessment of the dose rate in the hodoscope room, near a loaded fuel transfer cask, above the fuel storage area, and near the HEPA filters. The potential change in the concentration of tramp uranium and change in neutron flux reaching instrumentation has also been assessed. Safety-significant thermal hydraulic items addressed in this report include thermally-induced mechanical distortion of the grid plate, and heating in the radial reflector.

  5. Direct potable reuse microbial risk assessment methodology: Sensitivity analysis and application to State log credit allocations.

    PubMed

    Soller, Jeffrey A; Eftim, Sorina E; Nappier, Sharon P

    2018-01-01

    Understanding pathogen risks is a critically important consideration in the design of water treatment, particularly for potable reuse projects. As an extension to our published microbial risk assessment methodology to estimate infection risks associated with Direct Potable Reuse (DPR) treatment train unit process combinations, herein, we (1) provide an updated compilation of pathogen density data in raw wastewater and dose-response models; (2) conduct a series of sensitivity analyses to consider potential risk implications using updated data; (3) evaluate the risks associated with log credit allocations in the United States; and (4) identify reference pathogen reductions needed to consistently meet currently applied benchmark risk levels. Sensitivity analyses illustrated changes in cumulative annual risk estimates, the significance of which depends on the pathogen group driving the risk for a given treatment train. For example, updates to norovirus (NoV) raw wastewater values and use of a NoV dose-response approach capturing the full range of uncertainty increased risks associated with one of the treatment trains evaluated, but not the other. Additionally, compared to traditional log-credit allocation approaches, our results indicate that the risk methodology provides more nuanced information about how consistently public health benchmarks are achieved. Our results indicate that viruses need to be reduced by 14 logs or more to consistently achieve currently applied benchmark levels of protection associated with DPR. The refined methodology, updated model inputs, and log credit allocation comparisons will be useful to regulators considering DPR projects and to design engineers as they consider which unit treatment processes should be employed for particular projects. Published by Elsevier Ltd.
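
    The kind of screening calculation behind such log-credit comparisons, propagating a raw-wastewater pathogen density through a treatment-train log-reduction value (LRV) to an annual infection risk, can be sketched as follows. This is a simplified illustration, not the study's methodology: the exponential dose-response form, the parameter values, and the daily consumption volume are placeholder assumptions (the published assessment uses pathogen-specific dose-response models and Monte Carlo sampling of the inputs).

    ```python
    import numpy as np

    # Placeholder inputs (illustrative only)
    raw_density = 1.0e5     # organisms per litre in raw wastewater
    lrv = 14.0              # total log10 reduction credited to the treatment train
    volume_l = 2.0          # daily drinking-water consumption, litres
    r = 0.02                # exponential dose-response parameter (assumed)

    dose = raw_density * 10.0 ** (-lrv) * volume_l          # daily ingested dose
    p_daily = 1.0 - np.exp(-r * dose)                        # daily infection probability
    p_annual = 1.0 - (1.0 - p_daily) ** 365                  # annual infection probability

    print(f"daily dose            : {dose:.3e} organisms")
    print(f"annual infection risk : {p_annual:.3e}")
    # Compare against a benchmark such as 1e-4 infections per person per year.
    ```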

  6. Global Futures: a multithreaded execution model for Global Arrays-based applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Krishnamoorthy, Sriram; Vishnu, Abhinav

    2012-05-31

    We present Global Futures (GF), an execution model extension to Global Arrays, which is based on a PGAS-compatible Active Message-based paradigm. We describe the design and implementation of Global Futures and illustrate its use in a computational chemistry application benchmark (Hartree-Fock matrix construction using the Self-Consistent Field method). Our results show how we used GF to increase the scalability of the Hartree-Fock matrix build to up to 6,144 cores of an Infiniband cluster. We also show how GF's multithreaded execution has comparable performance to the traditional process-based SPMD model.

  7. Development and Applications of Orthogonality Constrained Density Functional Theory for the Accurate Simulation of X-Ray Absorption Spectroscopy

    NASA Astrophysics Data System (ADS)

    Derricotte, Wallace D.

    The aim of this dissertation is to address the theoretical challenges of calculating core-excited states within the framework of orthogonality constrained density functional theory (OCDFT). OCDFT is a well-established variational, time-independent formulation of DFT for the computation of electronic excited states. In this work, the theory is first extended to compute core-excited states and generalized to calculate multiple excited state solutions. An initial benchmark is performed on a set of 40 unique core excitations, highlighting that OCDFT excitation energies have a mean absolute error of 1.0 eV. Next, a novel implementation of the spin-free exact-two-component (X2C) one-electron treatment of scalar relativistic effects is presented and combined with OCDFT in an effort to calculate core-excited states of transition metal complexes. The X2C-OCDFT spectra of three organotitanium complexes (TiCl4, TiCpCl3, and TiCp2Cl2) are shown to be in good agreement with experimental results and show a maximum absolute error of 5-6 eV. Next, the issue of assigning core-excited states is addressed by introducing an automated approach to analyzing the excited-state MO by quantifying its local contributions using a unique orbital basis known as localized intrinsic valence virtual orbitals (LIVVOs). The utility of this approach is highlighted by studying sulfur core excitations in ethanethiol and benzenethiol, as well as the hydrogen bonding in the water dimer. Finally, an approach to selectively target specific core-excited states in OCDFT, based on atomic orbital subspace projection, is presented in an effort to target core-excited states of chemisorbed organic molecules. The core excitation spectrum of pyrazine chemisorbed on Si(100) is calculated using OCDFT and further characterized using the LIVVO approach.

  8. Benchmarking of trauma care worldwide: the potential value of an International Trauma Data Bank (ITDB).

    PubMed

    Haider, Adil H; Hashmi, Zain G; Gupta, Sonia; Zafar, Syed Nabeel; David, Jean-Stephane; Efron, David T; Stevens, Kent A; Zafar, Hasnain; Schneider, Eric B; Voiglio, Eric; Coimbra, Raul; Haut, Elliott R

    2014-08-01

    National trauma registries have helped improve patient outcomes across the world. Recently, the idea of an International Trauma Data Bank (ITDB) has been suggested to establish global comparative assessments of trauma outcomes. The objective of this study was to determine whether global trauma data could be combined to perform international outcomes benchmarking. We used observed/expected (O/E) mortality ratios to compare two trauma centers [European high-income country (HIC) and Asian lower-middle income country (LMIC)] with centers in the North American National Trauma Data Bank (NTDB). Patients (≥16 years) with blunt/penetrating injuries were included. Multivariable logistic regression, adjusting for known predictors of trauma mortality, was performed. Estimates were used to predict the expected deaths at each center and to calculate O/E mortality ratios for benchmarking. A total of 375,433 patients from 301 centers were included from the NTDB (2002-2010). The LMIC trauma center had 806 patients (2002-2010), whereas the HIC reported 1,003 patients (2002-2004). The most important known predictors of trauma mortality were adequately recorded in all datasets. Mortality benchmarking revealed that the HIC center performed similarly to the NTDB centers [O/E = 1.11 (95% confidence interval (CI) 0.92-1.35)], whereas the LMIC center showed significantly worse survival [O/E = 1.52 (1.23-1.88)]. Subset analyses of patients with blunt or penetrating injury showed similar results. Using only a few key covariates, aggregated global trauma data can be used to adequately perform international trauma center benchmarking. The creation of the ITDB is feasible and recommended as it may be a pivotal step towards improving global trauma outcomes.
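
    The benchmarking statistic used above, an observed/expected (O/E) mortality ratio in which the expected count is the sum of case-level predicted death probabilities from a regression fitted to the reference registry, can be illustrated with a short sketch. This is a minimal illustration under assumed data, not the study's model; the confidence-interval formula (Byar's approximation, treating observed deaths as Poisson) is likewise an assumption made for the example.

    ```python
    import numpy as np

    # Hypothetical per-patient predicted death probabilities for one trauma centre,
    # taken from a logistic model fitted to the reference registry (not shown here).
    predicted_p = np.random.default_rng(0).uniform(0.01, 0.30, size=800)
    observed_deaths = 95

    expected_deaths = predicted_p.sum()
    oe = observed_deaths / expected_deaths

    # Byar's approximation for a 95% CI on a Poisson count, applied to O/E.
    z = 1.96
    o = observed_deaths
    lower = o * (1 - 1 / (9 * o) - z / (3 * np.sqrt(o))) ** 3 / expected_deaths
    upper = (o + 1) * (1 - 1 / (9 * (o + 1)) + z / (3 * np.sqrt(o + 1))) ** 3 / expected_deaths

    print(f"O/E = {oe:.2f}  (95% CI {lower:.2f}-{upper:.2f})")
    ```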

  9. The Royal Australian and New Zealand College of Radiologists (RANZCR) relative value unit workload model, its limitations and the evolution to a safety, quality and performance framework.

    PubMed

    Pitman, A; Jones, D N; Stuart, D; Lloydhope, K; Mallitt, K; O'Rourke, P

    2009-10-01

    The study reports on the evolution of the Australian radiologist relative value unit (RVU) model of measuring radiologist reporting workloads in teaching hospital departments, and aims to outline a way forward for the development of a broad national safety, quality and performance framework that enables value mapping, measurement and benchmarking. The Radiology International Benchmarking Project of Queensland Health provided a suitable high-level national forum where the existing Pitman-Jones RVU model was applied to contemporaneous data, and its shortcomings and potential avenues for future development were analysed. Application of the Pitman-Jones model to Queensland data and also to a Victorian benchmark showed that the original recommendation of 40,000 crude RVU per full-time equivalent consultant radiologist (the 1997-98 baseline level) has risen only moderately, to now lie at around 45,000 crude RVU per full-time equivalent. Notwithstanding this, the model has a number of weaknesses and is becoming outdated, as it cannot capture newer time-consuming examinations, particularly in CT. A significant re-evaluation of the value of medical imaging is required, and is now occurring. We must rethink how we measure, benchmark, display and continually improve medical imaging safety, quality and performance, throughout the imaging care cycle and beyond. It will be necessary to ensure alignment with patient needs, as well as clinical and organisational objectives. Clear recommendations for the development of an updated national reporting workload RVU system are available, and an opportunity now exists for developing a much broader national model. A more sophisticated and balanced multidimensional safety, quality and performance framework that enables measurement and benchmarking of all important elements of health-care service is needed.

  10. Preliminary Evidence on the Effectiveness of Psychological Treatments Delivered at a University Counseling Center

    ERIC Educational Resources Information Center

    Minami, Takuya; Davies, D. Robert; Tierney, Sandra Callen; Bettmann, Joanna E.; McAward, Scott M.; Averill, Lynnette A.; Huebner, Lois A.; Weitzman, Lauren M.; Benbrook, Amy R.; Serlin, Ronald C.; Wampold, Bruce E.

    2009-01-01

    Treatment data from a university counseling center (UCC) that utilized the Outcome Questionnaire-45.2 (OQ-45; M. J. Lambert et al., 2004), a self-report general clinical symptom measure, was compared against treatment efficacy benchmarks from clinical trials of adult major depression that utilized similar measures. Statistical analyses suggested…

  11. Professional Development for Sessional Staff in Higher Education: A Review of Current Evidence

    ERIC Educational Resources Information Center

    Hitch, Danielle; Mahoney, Paige; Macfarlane, Susie

    2018-01-01

    The aim of this study was to provide an integrated review of evidence published in the past decade around professional development for sessional staff in higher education. Using the Integrating Theory, Evidence and Action method, the review analysed recent evidence using the three principles of the Benchmarking Leadership and Advancement of…

  12. Benchmark studies of induced radioactivity produced in LHC materials, Part II: Remanent dose rates.

    PubMed

    Brugger, M; Khater, H; Mayer, S; Prinz, A; Roesler, S; Ulrici, L; Vincke, H

    2005-01-01

    A new method to estimate remanent dose rates, to be used with the Monte Carlo code FLUKA, was benchmarked against measurements from an experiment that was performed at the CERN-EU high-energy reference field facility. An extensive collection of samples of different materials was placed downstream of, and laterally to, a copper target intercepting a positively charged mixed hadron beam with a momentum of 120 GeV c⁻¹. Emphasis was put on the reduction of uncertainties by taking measures such as careful monitoring of the irradiation parameters, using different instruments to measure dose rates, adopting detailed elemental analyses of the irradiated materials and making detailed simulations of the irradiation experiment. The measured and calculated dose rates are in good agreement.

  13. Benchmark Credentialing Results for NRG-BR001: The First National Cancer Institute-Sponsored Trial of Stereotactic Body Radiation Therapy for Multiple Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Hallaq, Hania A., E-mail: halhallaq@radonc.uchicago.edu; Chmura, Steven J.; Salama, Joseph K.

    Purpose: The NRG-BR001 trial is the first National Cancer Institute–sponsored trial to treat multiple (range 2-4) extracranial metastases with stereotactic body radiation therapy. Benchmark credentialing is required to ensure adherence to this complex protocol, in particular, for metastases in close proximity. The present report summarizes the dosimetric results and approval rates. Methods and Materials: The benchmark used anonymized data from a patient with bilateral adrenal metastases, separated by <5 cm of normal tissue. Because the planning target volume (PTV) overlaps with organs at risk (OARs), institutions must use the planning priority guidelines to balance PTV coverage (45 Gy in 3 fractions) against OAR sparing. Submitted plans were processed by the Imaging and Radiation Oncology Core and assessed by the protocol co-chairs by comparing the doses to targets, OARs, and conformity metrics using nonparametric tests. Results: Of 63 benchmarks submitted through October 2015, 94% were approved, with 51% approved at the first attempt. Most used volumetric arc therapy (VMAT) (78%), a single plan for both PTVs (90%), and prioritized the PTV over the stomach (75%). The median dose to 95% of the volume was 44.8 ± 1.0 Gy and 44.9 ± 1.0 Gy for the right and left PTV, respectively. The median dose to 0.03 cm³ was 14.2 ± 2.2 Gy to the spinal cord and 46.5 ± 3.1 Gy to the stomach. Plans that spared the stomach significantly reduced the dose to the left PTV and stomach. Conformity metrics were significantly better for single plans that simultaneously treated both PTVs with VMAT, intensity modulated radiation therapy, or 3-dimensional conformal radiation therapy compared with separate plans. No significant differences existed in the dose at 2 cm from the PTVs. Conclusions: Although most plans used VMAT, the range of conformity and dose falloff was large. The decision to prioritize either OARs or PTV coverage varied considerably, suggesting that the toxicity outcomes in the trial could be affected. Several benchmarks met the dose-volume histogram metrics but produced unacceptable plans owing to low conformity. Dissemination of a frequently-asked-questions document improved the approval rate at the first attempt. Benchmark credentialing was found to be a valuable tool for educating institutions about the protocol requirements.

  14. Benchmark Credentialing Results for NRG-BR001: The First National Cancer Institute-Sponsored Trial of Stereotactic Body Radiation Therapy for Multiple Metastases.

    PubMed

    Al-Hallaq, Hania A; Chmura, Steven J; Salama, Joseph K; Lowenstein, Jessica R; McNulty, Susan; Galvin, James M; Followill, David S; Robinson, Clifford G; Pisansky, Thomas M; Winter, Kathryn A; White, Julia R; Xiao, Ying; Matuszak, Martha M

    2017-01-01

    The NRG-BR001 trial is the first National Cancer Institute-sponsored trial to treat multiple (range 2-4) extracranial metastases with stereotactic body radiation therapy. Benchmark credentialing is required to ensure adherence to this complex protocol, in particular, for metastases in close proximity. The present report summarizes the dosimetric results and approval rates. The benchmark used anonymized data from a patient with bilateral adrenal metastases, separated by <5 cm of normal tissue. Because the planning target volume (PTV) overlaps with organs at risk (OARs), institutions must use the planning priority guidelines to balance PTV coverage (45 Gy in 3 fractions) against OAR sparing. Submitted plans were processed by the Imaging and Radiation Oncology Core and assessed by the protocol co-chairs by comparing the doses to targets, OARs, and conformity metrics using nonparametric tests. Of 63 benchmarks submitted through October 2015, 94% were approved, with 51% approved at the first attempt. Most used volumetric arc therapy (VMAT) (78%), a single plan for both PTVs (90%), and prioritized the PTV over the stomach (75%). The median dose to 95% of the volume was 44.8 ± 1.0 Gy and 44.9 ± 1.0 Gy for the right and left PTV, respectively. The median dose to 0.03 cm³ was 14.2 ± 2.2 Gy to the spinal cord and 46.5 ± 3.1 Gy to the stomach. Plans that spared the stomach significantly reduced the dose to the left PTV and stomach. Conformity metrics were significantly better for single plans that simultaneously treated both PTVs with VMAT, intensity modulated radiation therapy, or 3-dimensional conformal radiation therapy compared with separate plans. No significant differences existed in the dose at 2 cm from the PTVs. Although most plans used VMAT, the range of conformity and dose falloff was large. The decision to prioritize either OARs or PTV coverage varied considerably, suggesting that the toxicity outcomes in the trial could be affected. Several benchmarks met the dose-volume histogram metrics but produced unacceptable plans owing to low conformity. Dissemination of a frequently-asked-questions document improved the approval rate at the first attempt. Benchmark credentialing was found to be a valuable tool for educating institutions about the protocol requirements. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Molecular diffusion of stable water isotopes in polar firn as a proxy for past temperatures

    NASA Astrophysics Data System (ADS)

    Holme, Christian; Gkinis, Vasileios; Vinther, Bo M.

    2018-03-01

    Polar precipitation archived in ice caps contains information on past temperature conditions. Such information can be retrieved by measuring the water isotopic signals of δ18O and δD in ice cores. These signals have been attenuated during densification due to molecular diffusion in the firn column, where the magnitude of the diffusion is isotopologue-specific and temperature dependent. By utilizing the differential diffusion signal, dual isotope measurements of δ18O and δD enable multiple temperature reconstruction techniques. This study assesses how well six different methods can be used to reconstruct past surface temperatures from the diffusion-based temperature proxies. Two of the methods are based on the single diffusion lengths of δ18O and δD, three of the methods employ the differential diffusion signal, while the last uses the ratio between the single diffusion lengths. All techniques are tested on synthetic data in order to evaluate their accuracy and precision. We perform a benchmark test on thirteen high-resolution Holocene data sets from Greenland and Antarctica, which represent a broad range of mean annual surface temperatures and accumulation rates. Based on the benchmark test, we comment on the accuracy and precision of the methods. Both the benchmark test and the synthetic data test demonstrate that the most precise reconstructions are obtained when using the single isotope diffusion lengths, with precisions of approximately 1.0 °C. In the benchmark test, the single isotope diffusion lengths are also found to reconstruct consistent temperatures, with a root-mean-square deviation of 0.7 °C. The techniques employing the differential diffusion signal are more uncertain; the most precise of them has a precision of 1.9 °C. The diffusion length ratio method is the least precise, with a precision of 13.7 °C. The absolute temperature estimates from this method are also shown to be highly sensitive to the choice of fractionation factor parameterization.
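
    A common way to obtain the diffusion lengths that such methods start from is to fit the high-frequency attenuation of the isotope record's power spectrum, which for diffusive smoothing decays as exp(-k²σ²). The sketch below shows that fitting step on synthetic data; it is a simplified illustration (white initial signal, no noise floor, ordinary least squares on the log spectrum) and is not the authors' estimation procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    dz = 0.05                      # sample spacing, m
    n = 2048
    sigma_true = 0.08              # diffusion length, m (assumed for the synthetic test)

    # Synthetic isotope record: white signal smoothed by a Gaussian diffusion kernel.
    signal = rng.normal(size=n)
    k = 2 * np.pi * np.fft.rfftfreq(n, d=dz)            # angular wavenumber, rad/m
    smoothed = np.fft.irfft(np.fft.rfft(signal) * np.exp(-0.5 * k**2 * sigma_true**2), n)

    # Power spectrum; diffusion multiplies it by exp(-k^2 sigma^2), so the log
    # spectrum is linear in k^2 with slope -sigma^2.
    power = np.abs(np.fft.rfft(smoothed))**2
    use = (k > 0) & (power > 1e-20)
    slope, _ = np.polyfit(k[use]**2, np.log(power[use]), 1)
    sigma_est = np.sqrt(-slope)

    print(f"true sigma = {sigma_true:.3f} m, estimated sigma = {sigma_est:.3f} m")
    ```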

  16. Polymerizable Molecular Silsesquioxane Cage Armored Hybrid Microcapsules with In Situ Shell Functionalization.

    PubMed

    Xing, Yuxiu; Peng, Jun; Xu, Kai; Lin, Weihong; Gao, Shuxi; Ren, Yuanyuan; Gui, Xuefeng; Liang, Shengyuan; Chen, Mingcai

    2016-02-01

    We prepared core-shell polymer-silsesquioxane hybrid microcapsules from cage-like methacryloxypropyl silsesquioxanes (CMSQs) and styrene (St). The presence of CMSQ can moderately reduce the interfacial tension between St and water and help to emulsify the monomer prior to polymerization. Dynamic light scattering (DLS) and TEM analysis demonstrated that uniform core-shell latex particles were achieved. The polymer latex particles were subsequently transformed into well-defined hollow nanospheres by removing the polystyrene (PS) core with 1:1 ethanol/cyclohexane. High-resolution TEM and nitrogen adsorption-desorption analysis showed that the final nanospheres possessed hollow cavities and had porous shells; the pore size was approximately 2-3 nm. The nanospheres exhibited large surface areas (up to 486 m² g⁻¹) and preferential adsorption, and they demonstrated the highest reported methylene blue adsorption capacity (95.1 mg g⁻¹). Moreover, the uniform distribution of the methacryloyl moiety on the hollow nanospheres endowed them with further functional potential. These results could provide a new benchmark for preparing hollow microspheres by a facile one-step template-free method for various applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, P. T.; Shadid, J. N.; Hu, J. J.

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.

  18. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE PAGES

    Lin, P. T.; Shadid, J. N.; Hu, J. J.; ...

    2017-11-06

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.
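
    The core numerical kernel discussed above, a Krylov iteration whose scalability hinges on an algebraic multigrid preconditioner, can be illustrated on a small Poisson problem. The sketch below uses the PyAMG library purely as a stand-in to show an AMG-preconditioned solve; the paper's implementation is the 64-bit second-generation Trilinos stack, and the problem size, tolerance, and smoothed-aggregation setup here are assumptions for the example.

    ```python
    import numpy as np
    import pyamg

    # 2D Poisson problem on a 200 x 200 grid (5-point stencil), CSR format.
    A = pyamg.gallery.poisson((200, 200), format='csr')
    b = np.random.default_rng(0).standard_normal(A.shape[0])

    # Smoothed-aggregation AMG hierarchy used to accelerate a CG iteration.
    ml = pyamg.smoothed_aggregation_solver(A)
    residuals = []
    x = ml.solve(b, tol=1e-8, accel='cg', residuals=residuals)

    print(ml)                                   # prints the multigrid hierarchy
    print(f"iterations: {len(residuals) - 1}, "
          f"final relative residual: {residuals[-1] / residuals[0]:.2e}")
    ```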

  19. Some conservation issues for the dynamical cores of NWP and climate models

    NASA Astrophysics Data System (ADS)

    Thuburn, J.

    2008-03-01

    The rationale for designing atmospheric numerical model dynamical cores with certain conservation properties is reviewed. The conceptual difficulties associated with the multiscale nature of realistic atmospheric flow, and its lack of time-reversibility, are highlighted. A distinction is made between robust invariants, which are conserved or nearly conserved in the adiabatic and frictionless limit, and non-robust invariants, which are not conserved in the limit even though they are conserved by exactly adiabatic frictionless flow. For non-robust invariants, a further distinction is made between processes that directly transfer some quantity from large to small scales, and processes involving a cascade through a continuous range of scales; such cascades may either be explicitly parameterized, or handled implicitly by the dynamical core numerics, accepting the implied non-conservation. An attempt is made to estimate the relative importance of different conservation laws. It is argued that satisfactory model performance requires spurious sources of a conservable quantity to be much smaller than any true physical sources; for several conservable quantities the magnitudes of the physical sources are estimated in order to provide benchmarks against which any spurious sources may be measured.

  20. Reengineering of waste management at the Oak Ridge National Laboratory. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myrick, T.E.

    1997-08-01

    A reengineering evaluation of the waste management program at the Oak Ridge National Laboratory (ORNL) was conducted during the months of February through July 1997. The goal of the reengineering was to identify ways in which the waste management process could be streamlined and improved to reduce costs while maintaining full compliance and customer satisfaction. A Core Team conducted preliminary evaluations and determined that eight particular aspects of the ORNL waste management program warranted focused investigations during the reengineering. The eight areas included Pollution Prevention, Waste Characterization, Waste Certification/Verification, Hazardous/Mixed Waste Stream, Generator/WM Teaming, Reporting/Records, Disposal End Points, and On-Site Treatment/Storage. The Core Team commissioned and assembled Process Teams to conduct in-depth evaluations of each of these eight areas. The Core Team then evaluated the Process Team results and consolidated the 80 process-specific recommendations into 15 overall recommendations. Benchmarking of a commercial nuclear facility, a commercial research facility, and a DOE research facility was conducted to both validate the efficacy of these findings and seek additional ideas for improvement. The outcome of this evaluation is represented by the 15 final recommendations that are described in this report.

  1. Initial Coupling of the RELAP-7 and PRONGHORN Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Ortensi; D. Andrs; A.A. Bingham

    2012-10-01

    Modern nuclear reactor safety codes require the ability to solve detailed coupled neutronic-thermal fluids problems. For larger cores, this implies fully coupled higher dimensionality spatial dynamics with appropriate feedback models that can provide enough resolution to accurately compute core heat generation and removal during steady and unsteady conditions. The reactor analysis code PRONGHORN is being coupled to RELAP-7 as a first step to extend RELAP’s current capabilities. This report details the mathematical models, the type of coupling, and the testing results from the integrated system. RELAP-7 is a MOOSE-based application that solves the continuity, momentum, and energy equations in 1-D for a compressible fluid. The pipe and joint capabilities enable it to model parts of the power conversion unit. The PRONGHORN application, also developed on the MOOSE infrastructure, solves the coupled equations that define the neutron diffusion, fluid flow, and heat transfer in a full core model. The two systems are loosely coupled to simplify the transition towards a more complex infrastructure. The integration is tested on a simplified version of the OECD/NEA MHTGR-350 Coupled Neutronics-Thermal Fluids benchmark model.
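
    The loose coupling described above, alternating a neutronics solve and a thermal-fluids solve and exchanging power and temperature fields until they stop changing, can be written schematically as a Picard iteration. The sketch below is a conceptual illustration only: the two solver functions are hypothetical stand-ins, not RELAP-7 or PRONGHORN calls, and the relaxation factor and convergence tolerance are assumptions.

    ```python
    import numpy as np

    def solve_neutronics(temperature):
        """Hypothetical stand-in: return a core power field that depends weakly
        on the fuel temperature (Doppler-like negative feedback)."""
        return 1.0e6 * (1.0 - 1.0e-4 * (temperature - 600.0))

    def solve_thermal_fluids(power):
        """Hypothetical stand-in: return a fuel temperature field for a given
        power field (linear heat-up above the coolant temperature)."""
        return 500.0 + 2.0e-4 * power

    # Picard (fixed-point) iteration with under-relaxation.
    temperature = np.full(10, 600.0)          # initial guess, K, for 10 axial nodes
    relax, tol = 0.5, 1.0e-6
    for it in range(100):
        power = solve_neutronics(temperature)
        t_new = solve_thermal_fluids(power)
        change = np.max(np.abs(t_new - temperature)) / np.max(np.abs(t_new))
        temperature = (1.0 - relax) * temperature + relax * t_new
        if change < tol:
            print(f"converged after {it + 1} coupled iterations")
            break
    print("fuel temperature (K):", np.round(temperature, 2))
    ```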

  2. Core-Noise

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.

    2010-01-01

    This presentation is a technical progress report and near-term outlook for NASA-internal and NASA-sponsored external work on core (combustor and turbine) noise funded by the Fundamental Aeronautics Program Subsonic Fixed Wing (SFW) Project. Sections of the presentation cover: the SFW system level noise metrics for the 2015, 2020, and 2025 timeframes; the emerging importance of core noise and its relevance to the SFW Reduced-Noise-Aircraft Technical Challenge; the current research activities in the core-noise area, with some additional details given about the development of a high-fidelity combustion-noise prediction capability; the need for a core-noise diagnostic capability to generate benchmark data for validation of both high-fidelity work and improved models, as well as testing of future noise-reduction technologies; relevant existing core-noise tests using real engines and auxiliary power units; and examples of possible scenarios for a future diagnostic facility. The NASA Fundamental Aeronautics Program has the principal objective of overcoming today's national challenges in air transportation. The SFW Reduced-Noise-Aircraft Technical Challenge aims to enable concepts and technologies to dramatically reduce the perceived aircraft noise outside of airport boundaries. This reduction of aircraft noise is critical for enabling the anticipated large increase in future air traffic. Noise generated in the jet engine core, by sources such as the compressor, combustor, and turbine, can be a significant contribution to the overall noise signature at low-power conditions, typical of approach flight. At high engine power during takeoff, jet and fan noise have traditionally dominated over core noise. However, current design trends and expected technological advances in engine-cycle design as well as noise-reduction methods are likely to reduce non-core noise even at engine-power points higher than approach. In addition, future low-emission combustor designs could increase the combustion-noise component. The trend towards high-power-density cores also means that the noise generated in the low-pressure turbine will likely increase. Consequently, the combined result from these emerging changes will be to elevate the overall importance of turbomachinery core noise, which will need to be addressed in order to meet future noise goals.

  3. Preliminary report on the geology, geophysics and hydrology of USBM/AEC Colorado core hole No. 2, Piceance Creek Basin, Rio Blanco County, Colorado

    USGS Publications Warehouse

    Ege, J.R.; Carroll, R.D.; Welder, F.A.

    1967-01-01

    Approximately 1,400 feet of continuous core was taken between 800 and 2,214 feet in depth from USBM/AEC Colorado core hole No. 2. The drill site is located in the Piceance Creek basin, Rio Blanco County, Colorado. From ground surface the drill hole penetrated 1,120 feet of the Evacuation Creek Member and 1,094 feet of oil shale in the Parachute Creek Member of the Green River Formation. Oil shale yielding more than 20 gallons per ton occurs between 1,260 and 2,214 feet in depth. A gas explosion near the bottom of the hole resulted in abandonment of the exploratory hole, which was still in oil shale. The top of the nahcolite zone is at 1,693 feet. Below this depth the core contains common to abundant amounts of sodium bicarbonate salt intermixed with oil shale. The core is divided into seven structural zones that reflect changes in joint intensity, core loss, and broken core due to natural causes. The zone of poor core recovery is in the interval between 1,300 and 1,450 feet. Results of preliminary geophysical log analyses indicate that oil yields determined by Fischer assay compare favorably with yields determined by geophysical log analyses. There is strong evidence that analyses of complete core data from Colorado core holes No. 1 and No. 2 reveal a reliable relationship between geophysical log response and oil yield. The quality of the logs is poor in the rich shale section, and the possibility of repeating the logging program should be considered. Observations during drilling, coring, and hydrologic testing of USBM/AEC Colorado core hole No. 2 reveal that the Parachute Creek Member of the Green River Formation is the principal aquifer; water in the Parachute Creek Member is under artesian pressure. The upper part of the aquifer has a higher hydrostatic head than, and is hydrologically separated from, the lower part of the aquifer. The transmissibility of the aquifer is about 3,500 gpd per foot. The maximum water yield of the core hole during testing was about 500 gpm. Chemical analyses of water samples indicate that the content of dissolved solids is low, the principal ions being sodium and bicarbonate. Although the hole was originally cored to a depth of 2,214 feet, the present depth is about 2,100 feet. This report presents a preliminary evaluation of core examination, geophysical log interpretation, and hydrologic tests from USBM/AEC Colorado core hole No. 2. The cooperation of the U.S. Bureau of Mines is gratefully acknowledged. The reader is referred to Carroll and others (1967) for comparison of USBM/AEC Colorado core hole No. 1 with USBM/AEC Colorado core hole No. 2.

  4. Computation of the free energy due to electron density fluctuation of a solute in solution: A QM/MM method with perturbation approach combined with a theory of solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuoka, Daiki; Takahashi, Hideaki, E-mail: hideaki@m.tohoku.ac.jp; Morita, Akihiro

    2014-04-07

    We developed a perturbation approach to compute solvation free energy Δμ within the framework of the QM (quantum mechanical)/MM (molecular mechanical) method combined with a theory of energy representation (QM/MM-ER). The energy shift η of the whole system due to the electronic polarization of the solute is evaluated using second-order perturbation theory (PT2), where the electric field formed by surrounding solvent molecules is treated as the perturbation to the electronic Hamiltonian of the isolated solute. The point of our approach is that the energy shift η, thus obtained, is adopted as a novel energy coordinate of the distribution functions which serve as fundamental variables in the free energy functional developed in our previous work. The most time-consuming part of the QM/MM-ER simulation can thus be avoided without serious loss of accuracy. For our benchmark set of molecules, it is demonstrated that the PT2 approach coupled with QM/MM-ER gives hydration free energies in excellent agreement with those given by the conventional method utilizing the Kohn-Sham SCF procedure, except for a few molecules in the benchmark set. A variant of the approach is also proposed to deal with the difficulties associated with these problematic systems. The present approach is also advantageous for parallel implementations. We examined the parallel efficiency of our PT2 code on multi-core processors and found that the speedup increases almost linearly with respect to the number of cores. Thus, it was demonstrated that QM/MM-ER coupled with PT2 deserves practical applications to systems of interest.
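
    The second-order estimate referred to above has the standard Rayleigh-Schrödinger form. Written generically (the notation below is introduced here for illustration and is not taken from the paper), the energy shift of the solute, with the solvent electric field treated as a perturbation to the gas-phase Hamiltonian, is

    \eta \;\approx\; \langle \Psi_0 | \hat{V} | \Psi_0 \rangle
    \;+\; \sum_{n \neq 0} \frac{\left| \langle \Psi_n | \hat{V} | \Psi_0 \rangle \right|^2}{E_0 - E_n},

    where \Psi_n and E_n are the eigenstates and eigenvalues of the isolated solute and \hat{V} is the solvent-induced perturbation; the second term is the PT2 contribution that approximates the electronic polarization without a self-consistent QM/MM cycle.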

  5. Tracking millennial-scale Holocene glacial advance and retreat using osmium isotopes: Insights from the Greenland ice sheet

    USGS Publications Warehouse

    Rooney, Alan D.; Selby, David; Llyod, Jeremy M.; Roberts, David H.; Luckge, Andreas; Sageman, Bradley B.; Prouty, Nancy G.

    2016-01-01

    High-resolution Os isotope stratigraphy can aid in reconstructing Pleistocene ice sheet fluctuation and elucidating the role of local and regional weathering fluxes on the marine Os residence time. This paper presents new Os isotope data from ocean cores adjacent to the West Greenland ice sheet that have excellent chronological controls. Cores MSM-520 and DA00-06 represent distal to proximal sites adjacent to two West Greenland ice streams. Core MSM-520 has a steadily decreasing Os signal over the last 10 kyr (187Os/188Os = 1.35–0.81). In contrast, Os isotopes from core DA00-06 (proximal to the calving front of Jakobshavn Isbræ) highlight four stages of ice stream retreat and advance over the past 10 kyr (187Os/188Os = 2.31; 1.68; 2.09; 1.47). Our high-resolution chemostratigraphic records provide vital benchmarks for ice-sheet modelers as we attempt to better constrain the future response of major ice sheets to climate change. Variations in Os isotope composition from sediment and macro-algae (seaweed) sourced from regional and global settings serve to emphasize the overwhelming effect weathering sources have on seawater Os isotope composition. Further, these findings demonstrate that the residence time of Os is shorter than previous estimates of ∼104 yr.

  6. A Composite Depth Scale for Sediments from Crevice Lake, Montana

    USGS Publications Warehouse

    Rosenbaum, J.G.; Skipp, G.; Honke, J.; Chapman, C.

    2010-01-01

    As part of a study to derive records of past environmental change from lake sediments in the western United States, a set of cores was collected from Crevice Lake, Montana, in late February and early March 2001. Crevice Lake (latitude 45.000N, longitude 110.578W, elevation 1,713 meters) lies adjacent to the Yellowstone River at the north edge of Yellowstone National Park. The lake is more than 31 meters deep and has a surface area of 7.76 hectares. The combination of small surface area and significant depth promote anoxic bottom-water conditions that preserve annual laminations (varves) in the sediment. Three types of cores were collected through the ice. The uppermost sediments were obtained in freeze cores that preserved the sediment water interface. Two sites were cored with a 5-centimeter diameter corer. Five cores were taken with a 2-meter-long percussion piston corer. The percussion core uses a plastic core liner with an inside diameter of 9 centimeters. Coring was done at two sites. Because of the relatively large diameter of the percussion cores, samples from these cores were used for a variety of analyses including pollen, charcoal, diatoms, stable isotopes, organic and inorganic carbon, elemental analyses, and magnetic properties.

  7. Gains in efficiency and scientific potential of continental climate reconstruction provided by the LRC LacCore Facility, University of Minnesota

    NASA Astrophysics Data System (ADS)

    Noren, A.; Brady, K.; Myrbo, A.; Ito, E.

    2007-12-01

    Lacustrine sediment cores comprise an integral archive for the determination of continental paleoclimate, for their potentially high temporal resolution and for their ability to resolve spatial variability in climate across vast sections of the globe. Researchers studying these archives now have a large, nationally-funded, public facility dedicated to the support of their efforts. The LRC LacCore Facility, funded by NSF and the University of Minnesota, provides free or low-cost assistance to any portion of research projects, depending on the specific needs of the project. A large collection of field equipment (site survey equipment, coring devices, boats/platforms, water sampling devices) for nearly any lacustrine setting is available for rental, and Livingstone-type corers and drive rods may be purchased. LacCore staff can accompany field expeditions to operate these devices and curate samples, or provide training prior to device rental. The Facility maintains strong connections to experienced shipping agents and customs brokers, which vastly improves transport and importation of samples. In the lab, high-end instrumentation (e.g., multisensor loggers, high-resolution digital linescan cameras) provides a baseline of fundamental analyses before any sample material is consumed. LacCore staff provide support and training in lithological description, including smear-slide, XRD, and SEM analyses. The LRC botanical macrofossil reference collection is a valuable resource for both core description and detailed macrofossil analysis. Dedicated equipment and space for various subsample analyses streamlines these endeavors; subsamples for several analyses may be submitted for preparation or analysis by Facility technicians for a fee (e.g., carbon and sulfur coulometry, grain size, pollen sample preparation and analysis, charcoal, biogenic silica, LOI, freeze drying). The National Lacustrine Core Repository now curates ~9km of sediment cores from expeditions around the world, and stores metadata and analytical data for all cores processed at the facility. Any researcher may submit sample requests for material in archived cores. Supplies for field (e.g., polycarbonate pipe, endcaps), lab (e.g., sample containers, pollen sample spike), and curation (e.g., D-tubes) are sold at cost. In collaboration with facility users, staff continually develop new equipment, supplies, and procedures as needed in order to provide the best and most comprehensive set of services to the research community.

  8. Trace-element analyses of core samples from the 1967-1988 drillings of Kilauea Iki lava lake, Hawaii

    USGS Publications Warehouse

    Helz, Rosalind Tuthill

    2012-01-01

    This report presents previously unpublished analyses of trace elements in drill core samples from Kilauea Iki lava lake and from the 1959 eruption that fed the lava lake. The two types of data presented were obtained by instrumental neutron-activation analysis (INAA) and energy-dispersive X-ray fluorescence analysis (EDXRF). The analyses were performed in U.S. Geological Survey (USGS) laboratories from 1989 to 1994. This report contains 93 INAA analyses on 84 samples and 68 EDXRF analyses on 68 samples. The purpose of the study was to document trace-element variation during chemical differentiation, especially during the closed-system differentiation of Kilauea Iki lava lake.

  9. Paritaprevir and Ritonavir Liver Concentrations in Rats as Assessed by Different Liver Sampling Techniques

    PubMed Central

    Venuto, Charles S.; Markatou, Marianthi; Woolwine-Cunningham, Yvonne; Furlage, Rosemary; Ocque, Andrew J.; DiFrancesco, Robin; Dumas, Emily O.; Wallace, Paul K.; Morse, Gene D.

    2017-01-01

    The liver is crucial to pharmacology, yet substantial knowledge gaps exist in the understanding of its basic pharmacologic processes. An improved understanding for humans requires reliable and reproducible liver sampling methods. We compared liver concentrations of paritaprevir and ritonavir in rats by using samples collected by fine-needle aspiration (FNA), core needle biopsy (CNB), and surgical resection. Thirteen Sprague-Dawley rats were evaluated, nine of which received paritaprevir/ritonavir at 30/20 mg/kg of body weight by oral gavage daily for 4 or 5 days. Drug concentrations were measured using liquid chromatography-tandem mass spectrometry on samples collected via FNA (21G needle) with 1, 3, or 5 passes (FNA1, FNA3, and FNA5); via CNB (16G needle); and via surgical resection. Drug concentrations in plasma were also assessed. Analyses included noncompartmental pharmacokinetic analysis and use of Bland-Altman techniques. All liver tissue samples had higher paritaprevir and ritonavir concentrations than those in plasma. Resected samples, considered the benchmark measure, resulted in estimations of the highest values for the pharmacokinetic parameters of exposure (maximum concentration of drug in serum [Cmax] and area under the concentration-time curve from 0 to 24 h [AUC0–24]) for paritaprevir and ritonavir. Bland-Altman analyses showed that the best agreement occurred between tissue resection and CNB, with 15% bias, followed by FNA3 and FNA5, with 18% bias, and FNA1 and FNA3, with a 22% bias for paritaprevir. Paritaprevir and ritonavir are highly concentrated in rat liver. Further research is needed to validate FNA sampling for humans, with the possible derivation and application of correction factors for drug concentration measurements. PMID:28264852

  10. Paritaprevir and Ritonavir Liver Concentrations in Rats as Assessed by Different Liver Sampling Techniques.

    PubMed

    Venuto, Charles S; Markatou, Marianthi; Woolwine-Cunningham, Yvonne; Furlage, Rosemary; Ocque, Andrew J; DiFrancesco, Robin; Dumas, Emily O; Wallace, Paul K; Morse, Gene D; Talal, Andrew H

    2017-05-01

    The liver is crucial to pharmacology, yet substantial knowledge gaps exist in the understanding of its basic pharmacologic processes. An improved understanding for humans requires reliable and reproducible liver sampling methods. We compared liver concentrations of paritaprevir and ritonavir in rats by using samples collected by fine-needle aspiration (FNA), core needle biopsy (CNB), and surgical resection. Thirteen Sprague-Dawley rats were evaluated, nine of which received paritaprevir/ritonavir at 30/20 mg/kg of body weight by oral gavage daily for 4 or 5 days. Drug concentrations were measured using liquid chromatography-tandem mass spectrometry on samples collected via FNA (21G needle) with 1, 3, or 5 passes (FNA1, FNA3, and FNA5); via CNB (16G needle); and via surgical resection. Drug concentrations in plasma were also assessed. Analyses included noncompartmental pharmacokinetic analysis and use of Bland-Altman techniques. All liver tissue samples had higher paritaprevir and ritonavir concentrations than those in plasma. Resected samples, considered the benchmark measure, resulted in estimations of the highest values for the pharmacokinetic parameters of exposure (maximum concentration of drug in serum [Cmax] and area under the concentration-time curve from 0 to 24 h [AUC0-24]) for paritaprevir and ritonavir. Bland-Altman analyses showed that the best agreement occurred between tissue resection and CNB, with 15% bias, followed by FNA3 and FNA5, with 18% bias, and FNA1 and FNA3, with a 22% bias for paritaprevir. Paritaprevir and ritonavir are highly concentrated in rat liver. Further research is needed to validate FNA sampling for humans, with the possible derivation and application of correction factors for drug concentration measurements. Copyright © 2017 American Society for Microbiology.
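
    The agreement statistic quoted above comes from a Bland-Altman analysis: paired measurements from two sampling methods are compared via the mean difference (bias) and limits of agreement. The sketch below illustrates that calculation on made-up paired concentrations; the data, the log-transformation, and the percent-bias convention are assumptions for the example, not the study's numbers.

    ```python
    import numpy as np

    # Hypothetical paired liver concentrations (ng/g) from two sampling methods
    # in the same animals, e.g. surgical resection vs. core needle biopsy.
    resection = np.array([5200.0, 6100.0, 4800.0, 7300.0, 5600.0, 6900.0])
    cnb       = np.array([4700.0, 5400.0, 4300.0, 6200.0, 5100.0, 5900.0])

    # Bland-Altman on the log scale: differences of log-concentrations correspond
    # to ratios, so the bias can be reported as a percentage.
    diff = np.log(resection) - np.log(cnb)
    bias = diff.mean()
    loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

    print(f"bias: {100 * (np.exp(bias) - 1):.1f}%")
    print(f"95% limits of agreement: "
          f"{100 * (np.exp(loa[0]) - 1):.1f}% to {100 * (np.exp(loa[1]) - 1):.1f}%")
    ```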

  11. Hardware accelerated high performance neutron transport computation based on AGENT methodology

    NASA Astrophysics Data System (ADS)

    Xiao, Shanjie

    The spatial heterogeneity of next-generation Gen-IV nuclear reactor core designs brings challenges to neutron transport analysis. The Arbitrary Geometry Neutron Transport (AGENT) code is a three-dimensional neutron transport analysis code being developed at the Laboratory for Neutronics and Geometry Computation (NEGE) at Purdue University. It can accurately describe spatial heterogeneity in a hierarchical structure through the R-function solid modeler. The previous version of AGENT coupled the 2D transport MOC solver and the 1D diffusion NEM solver to solve the three-dimensional Boltzmann transport equation. In this research, the 2D/1D coupling methodology was expanded to couple two transport solvers, the radial 2D MOC solver and the axial 1D MOC solver, for better accuracy. The expansion was benchmarked with the widely applied C5G7 benchmark models and two fast breeder reactor models, and showed good agreement with the reference Monte Carlo results. In practice, accurate neutron transport analysis for a full reactor core is still time-consuming, which limits its application. Therefore, another part of my research focuses on designing specific hardware, based on the reconfigurable computing technique, to accelerate AGENT computations. This is the first time an application of this type has been used in reactor physics and neutron transport for reactor design. The most time-consuming part of the AGENT algorithm was identified, and the architecture of the AGENT acceleration system was designed based on that analysis. Through parallel computation on the specially designed, highly efficient architecture, the FPGA-based acceleration design achieves high performance at a much lower working frequency than CPUs. Whole-design simulations show that the acceleration design would be able to speed up large-scale AGENT computations by about 20 times. The high-performance AGENT acceleration system will drastically shorten the computation time for 3D full-core neutron transport analysis, making the AGENT methodology unique and advantageous, and thus opens the possibility of extending the application range of neutron transport analysis in both industrial engineering and academic research.

  12. Structural Analyses of Stirling Power Convertor Heater Head for Long-Term Reliability, Durability, and Performance

    NASA Technical Reports Server (NTRS)

    Halford, Gary R.; Shah, Ashwin; Arya, Vinod K.; Krause, David L.; Bartolotta, Paul A.

    2002-01-01

    Deep-space missions require onboard electric power systems with reliable design lifetimes of up to 10 yr and beyond. A high-efficiency Stirling radioisotope power system is a likely candidate for future deep-space missions and Mars rover applications. To ensure ample durability, the structurally critical heater head of the Stirling power convertor has undergone extensive computational analyses of operating temperatures (up to 650 C), stresses, and creep resistance of the thin-walled Inconel 718 bill of material. Durability predictions are presented in terms of the probability of survival. A benchmark structural testing program has commenced to support the analyses. This report presents the current status of durability assessments.

  13. Characterization of Hepatitis C Virus Core Protein Multimerization and Membrane Envelopment: Revelation of a Cascade of Core-Membrane Interactions ▿

    PubMed Central

    Ai, Li-Shuang; Lee, Yu-Wen; Chen, Steve S.-L.

    2009-01-01

    The molecular basis underlying hepatitis C virus (HCV) core protein maturation and morphogenesis remains elusive. We characterized the concerted events associated with core protein multimerization and interaction with membranes. Analyses of core proteins expressed from a subgenomic system showed that the signal sequence located between the core and envelope glycoprotein E1 is critical for core association with endoplasmic reticula (ER)/late endosomes and the core's envelopment by membranes, which was judged by the core's acquisition of resistance to proteinase K digestion. Despite exerting an inhibitory effect on the core's association with membranes, (Z-LL)2-ketone, a specific inhibitor of signal peptide peptidase (SPP), did not affect core multimeric complex formation, suggesting that oligomeric core complex formation proceeds prior to or upon core attachment to membranes. Protease-resistant core complexes that contained both innate and processed proteins were detected in the presence of (Z-LL)2-ketone, implying that core envelopment occurs after intramembrane cleavage. Mutations of the core that prevent signal peptide cleavage or coexpression with an SPP loss-of-function D219A mutant decreased the core's envelopment, demonstrating that SPP-mediated cleavage is required for core envelopment. Analyses of core mutants with a deletion in domain I revealed that this domain contains sequences crucial for core envelopment. The core proteins expressed by infectious JFH1 and Jc1 RNAs in Huh7 cells also assembled into a multimeric complex, associated with ER/late-endosomal membranes, and were enveloped by membranes. Treatment with (Z-LL)2-ketone or coexpression with D219A mutant SPP interfered with both core envelopment and infectious HCV production, indicating a critical role of core envelopment in HCV morphogenesis. The results provide mechanistic insights into the sequential and coordinated processes during the association of the HCV core protein with membranes in the early phase of virus maturation and morphogenesis. PMID:19605478

  14. District Heating Systems Performance Analyses. Heat Energy Tariff

    NASA Astrophysics Data System (ADS)

    Ziemele, Jelena; Vigants, Girts; Vitolins, Valdis; Blumberga, Dagnija; Veidenbergs, Ivars

    2014-12-01

    The paper addresses an important element of the European energy sector: the evaluation of district heating (DH) system operations from the standpoint of increasing energy efficiency and the use of renewable energy resources. This has been done by developing a new methodology for evaluating the heat tariff. The paper presents an algorithm for this methodology, which includes not only a database and systems of calculation equations but also an integrated multi-criteria analysis module using MADM/MCDM (Multi-Attribute Decision Making / Multi-Criteria Decision Making) based on TOPSIS (Technique for Order Preference by Similarity to Ideal Solution). The results of the multi-criteria analysis are used to set the tariff benchmarks. The evaluation methodology has been tested on Latvian heat tariffs, and the results show that only half of the heating companies reach the benchmark value of 0.5 for the closeness-to-ideal-solution efficiency indicator. This means that the proposed methodology would allow companies not only to determine how they perform against the proposed benchmark, but also to identify their need to restructure so that they may reach the level of a low-carbon business.
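
    As an illustration of the TOPSIS step used in the tariff methodology, the minimal Python sketch below ranks alternatives by their closeness to the ideal solution. The decision matrix, criterion weights, and benefit/cost directions are invented for illustration and are not the Latvian tariff data.

      import numpy as np

      # Minimal TOPSIS sketch: rank alternatives by closeness to the ideal solution.
      # The decision matrix, weights and criterion directions are made-up examples.

      def topsis(matrix, weights, benefit):
          """matrix: alternatives x criteria; weights sum to 1;
          benefit[j] is True if larger values of criterion j are better."""
          m = np.asarray(matrix, dtype=float)
          norm = m / np.linalg.norm(m, axis=0)          # vector normalization
          v = norm * np.asarray(weights)                # weighted normalized matrix
          benefit = np.asarray(benefit)
          ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
          anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
          d_plus = np.linalg.norm(v - ideal, axis=1)    # distance to ideal solution
          d_minus = np.linalg.norm(v - anti, axis=1)    # distance to anti-ideal solution
          return d_minus / (d_plus + d_minus)           # closeness coefficient in [0, 1]

      # Three hypothetical district-heating companies scored on efficiency,
      # renewable share and heat tariff (a lower tariff is better).
      scores = topsis([[0.85, 0.40, 52.0],
                       [0.78, 0.10, 61.0],
                       [0.90, 0.55, 58.0]],
                      weights=[0.4, 0.3, 0.3],
                      benefit=[True, True, False])
      print(scores)  # companies with closeness >= 0.5 would meet the benchmark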

  15. Preliminary Assessment of the Impact on Reactor Vessel dpa Rates Due to Installation of a Proposed Low Enriched Uranium (LEU) Core in the High Flux Isotope Reactor (HFIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daily, Charles R.

    2015-10-01

    An assessment of the impact on the High Flux Isotope Reactor (HFIR) reactor vessel (RV) displacements-per-atom (dpa) rates due to operations with the proposed low-enriched uranium (LEU) core described by Ilas and Primm has been performed and is presented herein. The analyses documented herein support the conclusion that conversion of HFIR to LEU core operations using the LEU core design of Ilas and Primm will have no negative impact on HFIR RV dpa rates. Since its inception, HFIR has been operated with highly enriched uranium (HEU) cores. As part of an effort sponsored by the National Nuclear Security Administration (NNSA), conversion to LEU cores is being considered for future HFIR operations. The HFIR LEU configurations analyzed are consistent with the LEU core models used by Ilas and Primm and the HEU balance-of-plant models used by Risner and Blakeman in the latest analyses performed to support the HFIR materials surveillance program. The Risner and Blakeman analyses, as well as the studies documented herein, are the first to apply the hybrid transport methods available in the Automated Variance reduction Generator (ADVANTG) code to HFIR RV dpa rate calculations. These calculations were performed on the Oak Ridge National Laboratory (ORNL) Institutional Cluster (OIC) with version 1.60 of the Monte Carlo N-Particle 5 (MCNP5) computer code.

  16. International Space Station Alpha (ISSA) Integrated Traffic Model

    NASA Technical Reports Server (NTRS)

    Gates, R. E.

    1995-01-01

    The paper discusses the development process of the International Space Station Alpha (ISSA) Integrated Traffic Model, a subsystem analysis tool utilized in the ISSA design analysis cycles. Fast-track prototyping of the detailed relationships between daily crew and station consumables, propellant needs, maintenance requirements, and crew rotation via spreadsheets provides adequate benchmarks to assess cargo vehicle design and performance characteristics.

  17. Technical Adequacy of the easyCBM[R] Mathematics Measures: Grades 3-8, 2009-2010 Version. Technical Report #1007

    ERIC Educational Resources Information Center

    Nese, Joseph F. T.; Lai, Cheng-Fei; Anderson, Daniel; Jamgochian, Elisa M.; Kamata, Akihito; Saez, Leilani; Park, Bitnara J.; Alonzo, Julie; Tindal, Gerald

    2010-01-01

    In this technical report, data are presented on the practical utility, reliability, and validity of the easyCBM[R] mathematics (2009-2010 version) measures for students in grades 3-8 within four districts in two states. Analyses include: minimum acceptable within-year growth; minimum acceptable year-end benchmark performance; internal and…

  18. Research Assessment Exercise Results and Research Funding in the United Kingdom: A Comparative Analysis

    ERIC Educational Resources Information Center

    Chatterji, Monojit; Seaman, Paul

    2006-01-01

    A considerable sum of money is allocated to UK universities on the basis of Research Assessment Exercise performance. In this paper we analyse the two main funding models used in the United Kingdom and discuss their strengths and weaknesses. We suggest that the benchmarking used by the two main models have significant weaknesses, and propose an…

  19. Career Readiness in the United States 2015. ACT Insights in Education and Work

    ERIC Educational Resources Information Center

    LeFebvre, Mary

    2015-01-01

    ACT has conducted over 20,000 job analyses for occupations across a diverse array of industries since 1993. This report highlights the levels of career readiness for various subgroups of ACT WorkKeys® examinees in the United States and provides career readiness benchmarks for selected ACT WorkKeys cognitive skills by career…

  20. Bioelectrochemical Systems Workshop:Standardized Analyses, Design Benchmarks, and Reporting

    DTIC Science & Technology

    2012-01-01

    related to the exoelectrogenic biofilm activity, and to investigate whether the community structure is a function of design and operational parameters...where should biofilm samples be collected? The most prevalent methods of community characterization in BES studies have entailed phylogenetic ...of function associated with this genetic marker, and in methods that involve polymerase chain reaction (PCR) amplification the quantitative

  1. Show me the data: advances in multi-model benchmarking, assimilation, and forecasting

    NASA Astrophysics Data System (ADS)

    Dietze, M.; Raiho, A.; Fer, I.; Cowdery, E.; Kooper, R.; Kelly, R.; Shiklomanov, A. N.; Desai, A. R.; Simkins, J.; Gardella, A.; Serbin, S.

    2016-12-01

    Researchers want their data to inform carbon cycle predictions, but there are considerable bottlenecks between data collection and the use of data to calibrate and validate earth system models and inform predictions. This talk highlights recent advancements in the PEcAn project aimed at making it easier for individual researchers to confront models with their own data: (1) The development of an easily extensible site-scale benchmarking system aimed at ensuring that models capture process rather than just reproducing pattern; (2) Efficient emulator-based Bayesian parameter data assimilation to constrain model parameters; (3) A novel, generalized approach to ensemble data assimilation to estimate carbon pools and fluxes and quantify process error; (4) Automated processing and downscaling of CMIP climate scenarios to support forecasts that include driver uncertainty; (5) A large expansion in the number of models supported, with new tools for conducting multi-model and multi-site analyses; and (6) A network-based architecture that allows analyses to be shared with model developers and other collaborators. Application of these methods is illustrated with data across a wide range of time scales, from eddy-covariance to forest inventories to tree rings to paleoecological pollen proxies.

  2. Mechanism-based risk assessment strategy for drug-induced cholestasis using the transcriptional benchmark dose derived by toxicogenomics.

    PubMed

    Kawamoto, Taisuke; Ito, Yuichi; Morita, Osamu; Honda, Hiroshi

    2017-01-01

    Cholestasis is one of the major causes of drug-induced liver injury (DILI), which can result in withdrawal of approved drugs from the market. Early identification of cholestatic drugs is difficult due to the complex mechanisms involved. In order to develop a strategy for mechanism-based risk assessment of cholestatic drugs, we analyzed gene expression data obtained from the livers of rats that had been orally administered 12 known cholestatic compounds repeatedly for 28 days at three dose levels. Qualitative analyses were performed using two statistical approaches (hierarchical clustering and principal component analysis), in addition to pathway analysis. The transcriptional benchmark dose (tBMD) and tBMD 95% lower limit (tBMDL) were used for quantitative analyses, which revealed three compound sub-groups that produced different types of differential gene expression; these groups of genes were mainly involved in inflammation, cholesterol biosynthesis, and oxidative stress. Furthermore, the tBMDL values for each test compound were in good agreement with the relevant no-observed-adverse-effect level. These results indicate that our novel strategy for drug safety evaluation using mechanism-based classification and tBMDL would facilitate the application of toxicogenomics for risk assessment of cholestatic DILI.
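
    For illustration, the benchmark-dose idea behind the tBMD can be sketched by fitting a dose-response model to one gene's expression change and solving for the dose that produces a preset benchmark response. The Hill model, data points, and 10% benchmark response in the Python sketch below are hypothetical assumptions, and the confidence-limit calculation needed for a tBMDL is not shown.

      import numpy as np
      from scipy.optimize import curve_fit, brentq

      # Illustrative benchmark-dose sketch: fit a Hill dose-response curve to one gene's
      # fold-change data and solve for the dose giving a 10% benchmark response (BMR).
      # The data points and model choice are made up for illustration only.

      def hill(dose, top, ec50, n):
          return top * dose**n / (ec50**n + dose**n)

      doses = np.array([0.0, 3.0, 10.0, 30.0, 100.0, 300.0])     # mg/kg/day (hypothetical)
      response = np.array([0.0, 0.02, 0.08, 0.25, 0.55, 0.70])   # |log2 fold change|

      params, _ = curve_fit(hill, doses, response, p0=[0.8, 50.0, 1.5], maxfev=10000)
      bmr = 0.10                                                  # benchmark response level
      tbmd = brentq(lambda d: hill(d, *params) - bmr, 1e-6, doses.max())
      print(f"transcriptional BMD for this gene: {tbmd:.1f} mg/kg/day")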

  3. Validation and Verification of Operational Land Analysis Activities at the Air Force Weather Agency

    NASA Technical Reports Server (NTRS)

    Shaw, Michael; Kumar, Sujay V.; Peters-Lidard, Christa D.; Cetola, Jeffrey

    2012-01-01

    The NASA-developed Land Information System (LIS) is the Air Force Weather Agency's (AFWA) operational Land Data Assimilation System (LDAS), combining real-time precipitation observations and analyses, global forecast model data, and vegetation, terrain, and soil parameters with the community Noah land surface model, along with other hydrology module options, to generate profile analyses of global soil moisture, soil temperature, and other important land surface characteristics. The land analysis products are generated from a range of satellite data products and surface observations, at global 1/4-degree spatial resolution, with model analyses produced at 3-hour intervals. AFWA recognizes the importance of operational benchmarking and uncertainty characterization for land surface modeling and is developing standard methods, software, and metrics to verify and/or validate LIS output products. To facilitate this and other needs for land analysis activities at AFWA, the Model Evaluation Toolkit (MET) -- a joint product of the National Center for Atmospheric Research Developmental Testbed Center (NCAR DTC), AFWA, and the user community -- and the Land surface Verification Toolkit (LVT), developed at the Goddard Space Flight Center (GSFC), have been adapted to the operational benchmarking needs of AFWA's land characterization activities.

  4. Benchmarked analyses of gamma skyshine using MORSE-CGA-PC and the DABL69 cross-section set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reichert, P.T.; Golshani, M.

    1991-01-01

    Design for gamma-ray skyshine is a common consideration for a variety of nuclear and accelerator facilities. Many of these designs can benefit from a more accurate and complete treatment than can be provided by simple skyshine analysis tools. Those methods typically require a number of conservative, simplifying assumptions in modeling the radiation source and shielding geometry. This paper considers the benchmarking of one analytical option. The MORSE-CGA Monte Carlo radiation transport code system provides the capability for detailed treatment of virtually any source and shielding geometry. Unfortunately, the mainframe computer costs of MORSE-CGA analyses can prevent cost-effective application to small projects. For this reason, the MORSE-CGA system was converted to run on IBM personal computer (PC)-compatible computers using the Intel 80386 or 80486 microprocessors. The DLC-130/DABL69 cross-section set (46n, 23g) was chosen as the most suitable, readily available, broad-group library. The most important reason is its relatively high (P5) Legendre order of expansion for the angular distribution, which is likely to be beneficial in the deep-penetration conditions modeled in some skyshine problems.

  5. Initial Neutronics Analyses for HEU to LEU Fuel Conversion of the Transient Reactor Test Facility (TREAT) at the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kontogeorgakos, D.; Derstine, K.; Wright, A.

    2013-06-01

    The purpose of the TREAT reactor is to generate large transient neutron pulses in test samples, without overheating the core, to simulate fuel assembly accident conditions. The power transients in the present HEU core are inherently self-limiting, such that the core prevents itself from overheating even in the event of a reactivity insertion accident. The objective of this study was to support the assessment of the feasibility of the TREAT core conversion based on the present reactor performance metrics and the technical specifications of the HEU core. The LEU fuel assembly studied had the same overall design, materials (UO₂ particles finely dispersed in graphite), and impurity content as the HEU fuel assembly. The Monte Carlo N-Particle code (MCNP) and the point kinetics code TREKIN were used in the analyses.

  6. Motivational Interviewing Support for a Behavioral Health Internet Intervention for Drivers with Type 1 Diabetes

    PubMed Central

    Ingersoll, Karen S.; Banton, Thomas; Gorlin, Eugenia; Vajda, Karen; Singh, Harsimran; Peterson, Ninoska; Gonder-Frederick, Linda; Cox, Daniel J.

    2015-01-01

    While Internet interventions can improve health behaviors, their impact is limited by program adherence. Supporting program adherence through telephone counseling may be useful, but there have been few direct tests of the impact of support. We describe a Telephone Motivational Interviewing (MI) intervention targeting adherence to an Internet intervention for drivers with Type 1 Diabetes, DD.com, and compare completion of intervention benchmarks by those randomized to DD.com plus MI vs. DD.com only. The goal of the pre-intervention MI session was to increase the participant's motivation to complete the Internet intervention and all its assignments, while the goal of the post-treatment MI session was to plan for maintaining changes made during the intervention. Sessions were semi-structured and partially scripted to maximize consistency. MI Fidelity was coded using a standard coding system, the MITI. We examined the effects of MI support vs. no support on number of days from enrollment to program benchmarks. Results show that MI sessions were provided with good fidelity. Users who received MI support completed some program benchmarks such as Core 4 (t(176) = -2.25, p < .03) and 11 of 12 monthly driving diaries significantly sooner, but support did not significantly affect time to intervention completion (t(177) = -1.69, p < .10) or rates of completion. These data suggest that there is little benefit to therapist guidance for Internet interventions including automated email prompts and other automated minimal supports, but that a booster MI session may enhance collection of follow-up data. PMID:25774342

  7. Policy choices in dementia care-An exploratory analysis of the Alberta continuing care system (ACCS) using system dynamics.

    PubMed

    Cepoiu-Martin, Monica; Bischak, Diane P

    2018-02-01

    The increase in the incidence of dementia in the aging population and the decrease in the availability of informal caregivers put pressure on continuing care systems to care for a growing number of people with disabilities. Policy changes in the continuing care system need to address this shift in the population structure. One of the most effective tools for assessing policies in complex systems is system dynamics. Nevertheless, this method is underused in continuing care capacity planning. A system dynamics model of the Alberta Continuing Care System was developed using stylized data. Sensitivity analyses and policy evaluations were conducted to demonstrate the use of system dynamics modelling in this area of public health planning. We focused our policy exploration on introducing staff/resident benchmarks in both supportive living and long-term care (LTC). The sensitivity analyses presented in this paper help identify leverage points in the system that need to be acknowledged when policy decisions are made. Our policy explorations showed that the deficits of staff increase dramatically when benchmarks are introduced, as expected, but at the end of the simulation period the deficits of both nurses and health care aides are similar between the two scenarios tested. Modifying the benchmarks in LTC only versus in both supportive living and LTC has similar effects on staff deficits in the long term, under the assumptions of this particular model. The continuing care system dynamics model can be used to test various policy scenarios, allowing decision makers to visualize the effect of a certain policy choice on different system variables and to compare different policy options. Our exploration illustrates the use of system dynamics models for policy making in complex health care systems. © 2017 John Wiley & Sons, Ltd.
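
    For illustration, a system dynamics model of this kind is essentially a set of stocks updated by flows at each time step. The toy Python sketch below (an aging chain from supportive living into long-term care with a staffing benchmark) is a hypothetical stand-in for the approach; none of the rates, stocks, or the benchmark value come from the Alberta model.

      # Toy stock-and-flow sketch of a continuing-care system, integrated with Euler steps.
      # All rates, initial stocks and the staffing benchmark are invented for illustration.

      dt = 0.25                         # years per step
      years = 15
      supportive_living = 4000.0        # residents (stock)
      long_term_care = 6000.0           # residents (stock)
      staff = 5000.0                    # nurses + health care aides (stock)

      admission_rate = 900.0            # new supportive-living residents per year
      transfer_frac = 0.15              # fraction moving to LTC per year
      exit_frac_ltc = 0.20              # discharges/deaths per year in LTC
      hiring_rate = 250.0               # staff hired per year
      attrition_frac = 0.07             # staff leaving per year
      benchmark = 0.65                  # required staff per LTC resident (policy lever)

      for step in range(int(years / dt)):
          transfer = transfer_frac * supportive_living
          supportive_living += dt * (admission_rate - transfer)
          long_term_care += dt * (transfer - exit_frac_ltc * long_term_care)
          staff += dt * (hiring_rate - attrition_frac * staff)

      required_staff = benchmark * long_term_care
      print(f"LTC residents: {long_term_care:.0f}, staff: {staff:.0f}, "
            f"staff deficit vs benchmark: {max(0.0, required_staff - staff):.0f}")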

  8. Indicators of AEI applied to the Delaware Estuary.

    PubMed

    Barnthouse, Lawrence W; Heimbuch, Douglas G; Anthony, Vaughn C; Hilborn, Ray W; Myers, Ransom A

    2002-05-18

    We evaluated the impacts of entrainment and impingement at the Salem Generating Station on fish populations and communities in the Delaware Estuary. In the absence of an agreed-upon regulatory definition of "adverse environmental impact" (AEI), we developed three independent benchmarks of AEI based on observed or predicted changes that could threaten the sustainability of a population or the integrity of a community. Our benchmarks of AEI included: (1) disruption of the balanced indigenous community of fish in the vicinity of Salem (the "BIC" analysis); (2) a continued downward trend in the abundance of one or more susceptible fish species (the "Trends" analysis); and (3) occurrence of entrainment/impingement mortality sufficient, in combination with fishing mortality, to jeopardize the future sustainability of one or more populations (the "Stock Jeopardy" analysis). The BIC analysis utilized nearly 30 years of species presence/absence data collected in the immediate vicinity of Salem. The Trends analysis examined three independent data sets that document trends in the abundance of juvenile fish throughout the estuary over the past 20 years. The Stock Jeopardy analysis used two different assessment models to quantify potential long-term impacts of entrainment and impingement on susceptible fish populations. For one of these models, the compensatory capacities of the modeled species were quantified through meta-analysis of spawner-recruit data available for several hundred fish stocks. All three analyses indicated that the fish populations and communities of the Delaware Estuary are healthy and show no evidence of an adverse impact due to Salem. Although the specific models and analyses used at Salem are not applicable to every facility, we believe that a weight of evidence approach that evaluates multiple benchmarks of AEI using both retrospective and predictive methods is the best approach for assessing entrainment and impingement impacts at existing facilities.

  9. Core Noise - Increasing Importance

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.

    2011-01-01

    This presentation is a technical summary of and outlook for NASA-internal and NASA-sponsored external research on core (combustor and turbine) noise funded by the Fundamental Aeronautics Program Subsonic Fixed Wing (SFW) Project. Sections of the presentation cover: the SFW system-level noise metrics for the 2015, 2020, and 2025 timeframes; turbofan design trends and their aeroacoustic implications; the emerging importance of core noise and its relevance to the SFW Reduced-Perceived-Noise Technical Challenge; and the current research activities in the core-noise area, with additional details given about the development of a high-fidelity combustor-noise prediction capability as well as activities supporting the development of improved reduced-order, physics-based models for combustor-noise prediction. The need for benchmark data for validation of high-fidelity and modeling work and the value of a potential future diagnostic facility for testing of core-noise-reduction concepts are indicated. The NASA Fundamental Aeronautics Program has the principal objective of overcoming today's national challenges in air transportation. The SFW Reduced-Perceived-Noise Technical Challenge aims to develop concepts and technologies to dramatically reduce the perceived aircraft noise outside of airport boundaries. This reduction of aircraft noise is critical to enabling the anticipated large increase in future air traffic. Noise generated in the jet engine core, by sources such as the compressor, combustor, and turbine, can be a significant contribution to the overall noise signature at low-power conditions, typical of approach flight. At high engine power during takeoff, jet and fan noise have traditionally dominated over core noise. However, current design trends and expected technological advances in engine-cycle design as well as noise-reduction methods are likely to reduce non-core noise even at engine-power points higher than approach. In addition, future low-emission combustor designs could increase the combustion-noise component. The trend towards high-power-density cores also means that the noise generated in the low-pressure turbine will likely increase. Consequently, the combined result from these emerging changes will be to elevate the overall importance of turbomachinery core noise, which will need to be addressed in order to meet future noise goals.

  10. Block-Parallel Data Analysis with DIY2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Peterka, Tom

    DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
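
    The block-structured data parallelism described above can be illustrated with a minimal sketch: decompose the data into blocks, assign blocks to processing elements, express the computation as iteration over owned blocks, and combine per-block results with a reduction. The Python sketch below is a simplified stand-in for the idea and does not use DIY2's actual C++ API.

      import numpy as np

      # Minimal sketch of block-structured data parallelism in the spirit of DIY2:
      # the global domain is decomposed into blocks, blocks are assigned round-robin
      # to processing elements, computation is a foreach over local blocks, and a
      # reduction combines per-block results. Everything here is a simplified stand-in.

      GLOBAL_N = 1_000_000
      NUM_BLOCKS = 16
      NUM_PROCS = 4          # pretend processing elements (ranks or threads)

      # Decompose: each block owns a contiguous slice of the global index space.
      edges = np.linspace(0, GLOBAL_N, NUM_BLOCKS + 1, dtype=int)
      blocks = [np.arange(edges[b], edges[b + 1], dtype=np.float64)
                for b in range(NUM_BLOCKS)]

      # Assign blocks to processing elements (DIY2 supports richer assignments).
      assignment = {b: b % NUM_PROCS for b in range(NUM_BLOCKS)}

      def foreach_local_blocks(proc):
          """Computation expressed as iteration over the blocks owned by one element."""
          return [blocks[b].sum() for b in range(NUM_BLOCKS) if assignment[b] == proc]

      # Reduction pattern: combine per-block partial results into a global result.
      partials = [s for proc in range(NUM_PROCS) for s in foreach_local_blocks(proc)]
      print(sum(partials), GLOBAL_N * (GLOBAL_N - 1) / 2)   # both give the same global sum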

  11. Characterizing complexity in socio-technical systems: a case study of a SAMU Medical Regulation Center.

    PubMed

    Righi, Angela Weber; Wachs, Priscila; Saurin, Tarcísio Abreu

    2012-01-01

    Complexity theory has been adopted by a number of studies as a benchmark to investigate the performance of socio-technical systems, especially those characterized by relevant cognitive work. However, there is little guidance on how to assess, systematically, the extent to which a system is complex. The main objective of this study is to carry out a systematic analysis of a SAMU (Mobile Emergency Medical Service) Medical Regulation Center in Brazil, based on the core characteristics of complex systems presented by previous studies. The assessment was based on direct observations and nine interviews: three with the medical doctors who regulate emergencies, three with radio operators, and three with telephone attendants. The results indicated that, to a great extent, the core characteristics of complexity are magnified due to basic shortcomings in the design of the work system. Thus, some recommendations are put forward with a view to reducing the unnecessary complexity that hinders the performance of the socio-technical system.

  12. The Italian corporate system in a network perspective (1952-1983)

    NASA Astrophysics Data System (ADS)

    Bargigli, L.; Giannetti, R.

    2018-03-01

    We study the Italian network of boards in four benchmark years covering different decades, when important economic structural shifts occurred. We find that the latter did not significantly disturb its structure as a small world. At the same time, we do not find a strong peculiarity of the Italian variety of capitalism and its corporate governance system. Typical properties of small world networks are at levels which are not dissimilar from those of other developed economies. Even the steady decrease of density that we observe is recurrent in many other national systems. The composition of the core of the most connected boards remains also quite stable over time. Among the most central boards we always find those of banks and insurances, as well as those of State Owned Enterprises (SOEs). At the same time, the system underwent two significant dynamic adjustments in the Sixties (nationalization of electrical industry) and Seventies (financial restructuring after the "big inflation") which are revealed by modifications in the core and in the community structure.
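
    For illustration, the small-world character of an interlock network is commonly judged by comparing its clustering coefficient and average path length against a random graph of the same size and density. The Python sketch below does this for a synthetic network with networkx; the generated graph is a hypothetical stand-in, not the Italian board data.

      import networkx as nx

      # Illustrative small-world check on a synthetic interlock-like network: a small
      # world shows clustering much higher than a same-density random graph while
      # keeping a comparably short average path length.

      g = nx.connected_watts_strogatz_graph(n=300, k=8, p=0.1, seed=1)   # stand-in network
      rand = nx.gnm_random_graph(n=g.number_of_nodes(), m=g.number_of_edges(), seed=1)

      def metrics(graph):
          giant = graph.subgraph(max(nx.connected_components(graph), key=len))
          return nx.average_clustering(graph), nx.average_shortest_path_length(giant)

      c_obs, l_obs = metrics(g)
      c_rnd, l_rnd = metrics(rand)
      print(f"clustering ratio C/C_rand = {c_obs / c_rnd:.1f}, "
            f"path-length ratio L/L_rand = {l_obs / l_rnd:.2f}")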

  13. Cancer cell profiling by barcoding allows multiplexed protein analysis in fine-needle aspirates.

    PubMed

    Ullal, Adeeti V; Peterson, Vanessa; Agasti, Sarit S; Tuang, Suan; Juric, Dejan; Castro, Cesar M; Weissleder, Ralph

    2014-01-15

    Immunohistochemistry-based clinical diagnoses require invasive core biopsies and use a limited number of protein stains to identify and classify cancers. We introduce a technology that allows analysis of hundreds of proteins from minimally invasive fine-needle aspirates (FNAs), which contain much smaller numbers of cells than core biopsies. The method capitalizes on DNA-barcoded antibody sensing, where barcodes can be photocleaved and digitally detected without any amplification steps. After extensive benchmarking in cell lines, this method showed high reproducibility and achieved single-cell sensitivity. We used this approach to profile ~90 proteins in cells from FNAs and subsequently map patient heterogeneity at the protein level. Additionally, we demonstrate how the method could be used as a clinical tool to identify pathway responses to molecularly targeted drugs and to predict drug response in patient samples. This technique combines specificity with ease of use to offer a new tool for understanding human cancers and designing future clinical trials.

  14. Cancer cell profiling by barcoding allows multiplexed protein analysis in fine needle aspirates

    PubMed Central

    Ullal, Adeeti V.; Peterson, Vanessa; Agasti, Sarit S.; Tuang, Suan; Juric, Dejan; Castro, Cesar M.; Weissleder, Ralph

    2014-01-01

    Immunohistochemistry-based clinical diagnoses require invasive core biopsies and use a limited number of protein stains to identify and classify cancers. Here, we introduce a technology that allows analysis of hundreds of proteins from minimally invasive fine needle aspirates (FNA), which contain much smaller numbers of cells than core biopsies. The method capitalizes on DNA-barcoded antibody sensing where barcodes can be photo-cleaved and digitally detected without any amplification steps. Following extensive benchmarking in cell lines, this method showed high reproducibility and achieved single cell sensitivity. We used this approach to profile ~90 proteins in cells from FNAs and subsequently map patient heterogeneity at the protein level. Additionally, we demonstrate how the method could be used as a clinical tool to identify pathway responses to molecularly targeted drugs and to predict drug response in patient samples. This technique combines specificity with ease of use to offer a new tool for understanding human cancers and designing future clinical trials. PMID:24431113

  15. Spectral Element Method for the Simulation of Unsteady Compressible Flows

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo Tibor; Murman, Scott M.

    2013-01-01

    This work uses a discontinuous-Galerkin spectral-element method (DGSEM) to solve the compressible Navier-Stokes equations [1-3]. The inviscid flux is computed using the approximate Riemann solver of Roe [4]. The viscous fluxes are computed using the second form of Bassi and Rebay (BR2) [5] in a manner consistent with the spectral-element approximation. The method of lines with the classical 4th-order explicit Runge-Kutta scheme is used for time integration. Results for polynomial orders up to p = 15 (16th order) are presented. The code is parallelized using the Message Passing Interface (MPI). The computations presented in this work are performed using the Sandy Bridge nodes of the NASA Pleiades supercomputer at NASA Ames Research Center. Each Sandy Bridge node consists of 2 eight-core Intel Xeon E5-2670 processors with a clock speed of 2.6 GHz and 2 GB of memory per core. On a Sandy Bridge node the Tau Benchmark [6] runs in 7.6 s.
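
    For illustration, the classical fourth-order explicit Runge-Kutta scheme used for the method-of-lines time integration can be written in a few lines. The Python sketch below advances a generic semi-discrete system du/dt = R(u); the linear right-hand side is only a simple stand-in for the DGSEM spatial residual.

      import numpy as np

      # Classical 4th-order explicit Runge-Kutta step for a semi-discrete system
      # du/dt = R(u), as used in method-of-lines time integration. The right-hand
      # side below is a simple linear stand-in for a DGSEM spatial residual.

      def rk4_step(residual, u, dt):
          k1 = residual(u)
          k2 = residual(u + 0.5 * dt * k1)
          k3 = residual(u + 0.5 * dt * k2)
          k4 = residual(u + dt * k3)
          return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

      A = np.array([[0.0, 1.0], [-1.0, 0.0]])       # harmonic oscillator: u'' = -u
      residual = lambda u: A @ u

      u = np.array([1.0, 0.0])
      dt, t_final = 0.01, 2.0 * np.pi
      for _ in range(int(round(t_final / dt))):
          u = rk4_step(residual, u, dt)
      print(u)   # close to the initial state [1, 0] after one full period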

  16. Numerical Simulations of Close and Contact Binary Systems Having Bipolytropic Equation of State

    NASA Astrophysics Data System (ADS)

    Kadam, Kundan; Clayton, Geoffrey C.; Motl, Patrick M.; Marcello, Dominic; Frank, Juhan

    2017-01-01

    I present the results of numerical simulations of mass transfer in close and contact binary systems with both stars having a bipolytropic (composite polytropic) equation of state. The initial binary systems are obtained by modifying Hachisu's self-consistent field technique. Both stars have fully resolved cores with a molecular weight jump at the core-envelope interface. The initial properties of these simulations are chosen such that they satisfy the mass-radius relation, composition, and period of a late W-type contact binary system. The simulations are carried out using two different Eulerian hydrocodes: Flow-ER, with a fixed cylindrical grid, and Octo-tiger, with an AMR-capable Cartesian grid. A detailed comparison of the simulations shows agreement between the results obtained from the two codes at different resolutions. The set of simulations can be treated as a benchmark, enabling us to reliably simulate mass transfer and merger scenarios of binary systems involving bipolytropic components.

  17. Simulation of X-ray absorption spectra with orthogonality constrained density functional theory.

    PubMed

    Derricotte, Wallace D; Evangelista, Francesco A

    2015-06-14

    Orthogonality constrained density functional theory (OCDFT) [F. A. Evangelista, P. Shushkov and J. C. Tully, J. Phys. Chem. A, 2013, 117, 7378] is a variational time-independent approach for the computation of electronic excited states. In this work we extend OCDFT to compute core-excited states and generalize the original formalism to determine multiple excited states. Benchmark computations on a set of 13 small molecules and 40 excited states show that unshifted OCDFT/B3LYP excitation energies have a mean absolute error of 1.0 eV. Contrary to time-dependent DFT, OCDFT excitation energies for first- and second-row elements are computed with near-uniform accuracy. OCDFT core excitation energies are insensitive to the choice of the functional and the amount of Hartree-Fock exchange. We show that OCDFT is a powerful tool for the assignment of X-ray absorption spectra of large molecules by simulating the gas-phase near-edge spectrum of adenine and thymine.

  18. Coulomb Excitation of Neutron-Rich Zn Isotopes: First Observation of the 2₁⁺ State in ⁸⁰Zn

    NASA Astrophysics Data System (ADS)

    van de Walle, J.; Aksouh, F.; Ames, F.; Behrens, T.; Bildstein, V.; Blazhev, A.; Cederkäll, J.; Clément, E.; Cocolios, T. E.; Davinson, T.; Delahaye, P.; Eberth, J.; Ekström, A.; Fedorov, D. V.; Fedosseev, V. N.; Fraile, L. M.; Franchoo, S.; Gernhauser, R.; Georgiev, G.; Habs, D.; Heyde, K.; Huber, G.; Huyse, M.; Ibrahim, F.; Ivanov, O.; Iwanicki, J.; Jolie, J.; Kester, O.; Köster, U.; Kröll, T.; Krücken, R.; Lauer, M.; Lisetskiy, A. F.; Lutter, R.; Marsh, B. A.; Mayet, P.; Niedermaier, O.; Nilsson, T.; Pantea, M.; Perru, O.; Raabe, R.; Reiter, P.; Sawicka, M.; Scheit, H.; Schrieder, G.; Schwalm, D.; Seliverstov, M. D.; Sieber, T.; Sletten, G.; Smirnova, N.; Stanoiu, M.; Stefanescu, I.; Thomas, J.-C.; Valiente-Dobón, J. J.; van Duppen, P.; Verney, D.; Voulot, D.; Warr, N.; Weisshaar, D.; Wenander, F.; Wolf, B. H.; Zielińska, M.

    2007-10-01

    Neutron-rich, radioactive Zn isotopes were investigated at the Radioactive Ion Beam facility REX-ISOLDE (CERN) using low-energy Coulomb excitation. The energy of the 2₁⁺ state in ⁷⁸Zn could be firmly established, and for the first time the 2₁⁺→0₁⁺ transition in ⁸⁰Zn was observed, at 1492(1) keV. B(E2; 2₁⁺→0₁⁺) values were extracted for ⁷⁴,⁷⁶,⁷⁸,⁸⁰Zn and compared to large-scale shell model calculations. With only two protons outside the Z=28 proton core, ⁸⁰Zn is the lightest N=50 isotone for which spectroscopic information has been obtained to date. Two sets of advanced shell model calculations reproduce the observed B(E2) systematics. The results for the N=50 isotones indicate a good N=50 shell closure and a strong Z=28 proton core polarization. The new results serve as benchmarks for theoretical models predicting the nuclear properties of the doubly magic nucleus ⁷⁸Ni.

  19. A phylo-functional core of gut microbiota in healthy young Chinese cohorts across lifestyles, geography and ethnicities.

    PubMed

    Zhang, Jiachao; Guo, Zhuang; Xue, Zhengsheng; Sun, Zhihong; Zhang, Menghui; Wang, Lifeng; Wang, Guoyang; Wang, Fang; Xu, Jie; Cao, Hongfang; Xu, Haiyan; Lv, Qiang; Zhong, Zhi; Chen, Yongfu; Qimuge, Sudu; Menghe, Bilige; Zheng, Yi; Zhao, Liping; Chen, Wei; Zhang, Heping

    2015-09-01

    Structural profiling of healthy human gut microbiota across heterogeneous populations is necessary for benchmarking and characterizing the potential ecosystem services provided by particular gut symbionts for maintaining the health of their hosts. Here we performed a large structural survey of fecal microbiota in 314 healthy young adults, covering 20 rural and urban cohorts from 7 ethnic groups living in 9 provinces throughout China. Canonical analysis of unweighted UniFrac principal coordinates clustered the subjects mainly by their ethnicities/geography and less so by lifestyles. Nine predominant genera, all of which are known to contain short-chain fatty acid producers, co-occurred in all individuals and collectively represented nearly half of the total sequences. Interestingly, species-level compositional profiles within these nine genera still discriminated the subjects according to their ethnicities/geography and lifestyles. Therefore, a phylogenetically diverse core of gut microbiota at the genus level may be commonly shared by distinctive healthy populations as functionally indispensable ecosystem service providers for the hosts.

  20. A phylo-functional core of gut microbiota in healthy young Chinese cohorts across lifestyles, geography and ethnicities

    PubMed Central

    Zhang, Jiachao; Guo, Zhuang; Xue, Zhengsheng; Sun, Zhihong; Zhang, Menghui; Wang, Lifeng; Wang, Guoyang; Wang, Fang; Xu, Jie; Cao, Hongfang; Xu, Haiyan; Lv, Qiang; Zhong, Zhi; Chen, Yongfu; Qimuge, Sudu; Menghe, Bilige; Zheng, Yi; Zhao, Liping; Chen, Wei; Zhang, Heping

    2015-01-01

    Structural profiling of healthy human gut microbiota across heterogeneous populations is necessary for benchmarking and characterizing the potential ecosystem services provided by particular gut symbionts for maintaining the health of their hosts. Here we performed a large structural survey of fecal microbiota in 314 healthy young adults, covering 20 rural and urban cohorts from 7 ethnic groups living in 9 provinces throughout China. Canonical analysis of unweighted UniFrac principal coordinates clustered the subjects mainly by their ethnicities/geography and less so by lifestyles. Nine predominant genera, all of which are known to contain short-chain fatty acid producers, co-occurred in all individuals and collectively represented nearly half of the total sequences. Interestingly, species-level compositional profiles within these nine genera still discriminated the subjects according to their ethnicities/geography and lifestyles. Therefore, a phylogenetically diverse core of gut microbiota at the genus level may be commonly shared by distinctive healthy populations as functionally indispensable ecosystem service providers for the hosts. PMID:25647347

  1. In-core flux sensor evaluations at the ATR critical facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Troy Unruh; Benjamin Chase; Joy Rempe

    2014-09-01

    Flux detector evaluations were completed as part of a joint Idaho State University (ISU) / Idaho National Laboratory (INL) / French Atomic Energy Commission (CEA) ATR National Scientific User Facility (ATR NSUF) project to compare the accuracy, response time, and long-duration performance of several flux detectors. Special fixturing developed by INL allows real-time flux detectors to be inserted into various ATRC core positions to perform lobe power measurements, axial flux profile measurements, and detector cross-calibrations. Detectors initially evaluated in this program include miniature fission chambers developed by CEA; specialized self-powered neutron detectors (SPNDs) developed by the Argentinean National Energy Commission (CNEA); and specially developed commercial SPNDs from Argonne National Laboratory. As shown in this article, data obtained from this program provide important insights related to flux detector accuracy and resolution for subsequent ATR and CEA experiments, as well as flux data required for benchmarking models in the ATR V&V Upgrade Initiative.

  2. Game playing.

    PubMed

    Rosin, Christopher D

    2014-03-01

    Game playing has been a core domain of artificial intelligence research since the beginnings of the field. Game playing provides clearly defined arenas within which computational approaches can be readily compared to human expertise through head-to-head competition and other benchmarks. Game playing research has identified several simple core algorithms that provide successful foundations, with development focused on the challenges of defeating human experts in specific games. Key developments include minimax search in chess, machine learning from self-play in backgammon, and Monte Carlo tree search in Go. These approaches have generalized successfully to additional games. While computers have surpassed human expertise in a wide variety of games, open challenges remain and research focuses on identifying and developing new successful algorithmic foundations. WIREs Cogn Sci 2014, 5:193-205. doi: 10.1002/wcs.1278. © 2014 John Wiley & Sons, Ltd.
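
    For illustration, the minimax search mentioned above can be stated compactly as depth-limited negamax with alpha-beta pruning. The Python sketch below runs on a tiny game of Nim purely to be self-contained; the game interface and evaluation are illustrative assumptions, not any specific engine's design.

      import math
      from dataclasses import dataclass

      # Depth-limited negamax with alpha-beta pruning, the classical minimax core of
      # game-playing programs. The tiny Nim game below exists only to make the sketch
      # runnable; real engines plug in their own move generation and evaluation.

      def negamax(state, depth, alpha=-math.inf, beta=math.inf):
          if depth == 0 or state.is_terminal():
              return state.evaluate()        # score from the mover's point of view
          best = -math.inf
          for move in state.legal_moves():
              score = -negamax(state.play(move), depth - 1, -beta, -alpha)
              best = max(best, score)
              alpha = max(alpha, score)
              if alpha >= beta:              # cutoff: opponent will never allow this line
                  break
          return best

      @dataclass(frozen=True)
      class Nim:
          stones: int                        # the last player able to move wins
          def is_terminal(self):  return self.stones == 0
          def evaluate(self):     return -1 if self.stones == 0 else 0
          def legal_moves(self):  return [t for t in (1, 2, 3) if t <= self.stones]
          def play(self, take):   return Nim(self.stones - take)

      print(negamax(Nim(7), depth=10))   # prints 1: the side to move wins 7-stone Nim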

  3. A "Common" Vision of Instruction? An Analysis of English/Language Arts Professional Development Materials Related to the Common Core State Standards

    ERIC Educational Resources Information Center

    Hodge, Emily; Benko, Susanna L.

    2014-01-01

    The purpose of this article is to describe the stances put forward by a selection of professional development resources interpreting the Common Core State Standards for English Language Arts (ELA) teachers, and to analyse where these resources stand in relation to research in ELA. Specifically, we analyse resources written by English educators…

  4. A Review of Flood Loss Models as Basis for Harmonization and Benchmarking

    PubMed Central

    Kreibich, Heidi; Franco, Guillermo; Marechal, David

    2016-01-01

    Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models as it affects prioritization and investment decision in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Due to a lack of detailed and reliable flood loss data, first order validations are difficult to accomplish, so that model comparisons in terms of benchmarking are essential. It is checked if the models are informed by existing data and knowledge and if the assumptions made in the models are aligned with the existing knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before these benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss–or flood vulnerability–relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date containing nearly a thousand vulnerability functions. These functions are highly heterogeneous and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal. This paper exemplarily presents an approach for a quantitative comparison of disparate models via the reduction to the joint input variables of all models. Harmonization of models for benchmarking and comparison requires profound insight into the model structures, mechanisms and underlying assumptions. Possibilities and challenges are discussed that exist in model harmonization and the application of the inventory in a benchmarking framework. PMID:27454604

  5. A Review of Flood Loss Models as Basis for Harmonization and Benchmarking.

    PubMed

    Gerl, Tina; Kreibich, Heidi; Franco, Guillermo; Marechal, David; Schröter, Kai

    2016-01-01

    Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models as it affects prioritization and investment decision in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Due to a lack of detailed and reliable flood loss data, first order validations are difficult to accomplish, so that model comparisons in terms of benchmarking are essential. It is checked if the models are informed by existing data and knowledge and if the assumptions made in the models are aligned with the existing knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before these benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss-or flood vulnerability-relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date containing nearly a thousand vulnerability functions. These functions are highly heterogeneous and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal. This paper exemplarily presents an approach for a quantitative comparison of disparate models via the reduction to the joint input variables of all models. Harmonization of models for benchmarking and comparison requires profound insight into the model structures, mechanisms and underlying assumptions. Possibilities and challenges are discussed that exist in model harmonization and the application of the inventory in a benchmarking framework.
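
    Most of the vulnerability functions surveyed above reduce to a depth-damage relationship: a curve mapping inundation depth to a relative loss fraction that is then multiplied by the exposed asset value. The Python sketch below compares two hypothetical curves on a common depth grid to illustrate the kind of harmonized comparison discussed; the curve shapes, depths, and asset value are invented.

      import numpy as np

      # Two hypothetical depth-damage curves evaluated on a common set of input
      # depths, illustrating how disparate flood loss models can be compared once
      # they are reduced to the same input variable (inundation depth in metres).

      def model_a(depth_m):
          """Piecewise-linear curve: no damage at 0 m, total damage at 5 m."""
          return np.clip(depth_m / 5.0, 0.0, 1.0)

      def model_b(depth_m):
          """Saturating exponential curve with a faster initial rise."""
          return 1.0 - np.exp(-0.8 * np.clip(depth_m, 0.0, None))

      depths = np.linspace(0.0, 4.0, 9)                  # m
      asset_value = 200_000.0                            # EUR, hypothetical building value
      for d, a, b in zip(depths, model_a(depths), model_b(depths)):
          print(f"depth {d:3.1f} m  loss A {a * asset_value:9.0f}  loss B {b * asset_value:9.0f}")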

  6. Continuous flame aerosol synthesis of carbon-coated nano-LiFePO4 for Li-ion batteries

    PubMed Central

    Waser, Oliver; Büchel, Robert; Hintennach, Andreas; Novák, Petr; Pratsinis, Sotiris E.

    2013-01-01

    Core-shell, nanosized LiFePO4-carbon particles were made in one step by scalable flame aerosol technology at 7 g/h. Core LiFePO4 particles were made in an enclosed flame spray pyrolysis (FSP) unit and were coated in-situ downstream by auto thermal carbonization (pyrolysis) of swirl-fed C2H2 in an O2-controlled atmosphere. The formation of acetylene carbon black (ACB) shell was investigated as a function of the process fuel-oxidant equivalence ratio (EQR). The core-shell morphology was obtained at slightly fuel-rich conditions (1.0 < EQR < 1.07) whereas segregated ACB and LiFePO4 particles were formed at fuel-lean conditions (0.8 < EQR < 1). Post-annealing of core-shell particles in reducing environment (5 vol% H2 in argon) at 700 °C for up to 4 hours established phase pure, monocrystalline LiFePO4 with a crystal size of 65 nm and 30 wt% ACB content. Uncoated LiFePO4 or segregated LiFePO4-ACB grew to 250 nm at these conditions. Annealing at 800 °C induced carbothermal reduction of LiFePO4 to Fe2P by ACB shell consumption that resulted in cavities between carbon shell and core LiFePO4 and even slight LiFePO4 crystal growth but better electrochemical performance. The present carbon-coated LiFePO4 showed superior cycle stability and higher rate capability than the benchmark, commercially available LiFePO4. PMID:23407817

  7. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, net-books and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application for various template matching tasks such as face-recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
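
    For illustration, the basic image correlation kernel benchmarked above corresponds to normalized cross-correlation template matching, which on the CPU/ARM side maps directly onto OpenCV's matchTemplate. The Python sketch below uses a synthetic image so it runs stand-alone; the image size and template location are arbitrary placeholders.

      import cv2
      import numpy as np

      # Normalized cross-correlation template matching, the basic image-correlation
      # kernel described above, expressed with OpenCV. The image and template are
      # synthetic so the sketch runs without any input files.

      rng = np.random.default_rng(0)
      image = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
      template = image[200:232, 300:348].copy()          # 32x48 patch cut from the image

      scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
      _, max_val, _, max_loc = cv2.minMaxLoc(scores)
      print(f"best match at (x={max_loc[0]}, y={max_loc[1]}) with score {max_val:.3f}")
      # Expected: best match at (x=300, y=200) with a score near 1.0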

  8. Benchmark Simulations of the Thermal-Hydraulic Responses during EBR-II Inherent Safety Tests using SAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui; Sumner, Tyler S.

    2016-04-17

    An advanced system analysis tool, SAM, is being developed at Argonne National Laboratory for fast-running, improved-fidelity, whole-plant transient analyses under DOE-NE's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. As an important part of code development, companion validation activities are being conducted to ensure the performance and validity of the SAM code. This paper presents benchmark simulations of two EBR-II tests, SHRT-45R and BOP-302R, whose data are available through the support of DOE-NE's Advanced Reactor Technology (ART) program. The code predictions of major primary coolant system parameters are compared with the test results. Additionally, SAS4A/SASSYS-1 code simulation results are included for a code-to-code comparison.

  9. Radiochemical analyses of surface water from U.S. Geological Survey hydrologic bench-mark stations

    USGS Publications Warehouse

    Janzer, V.J.; Saindon, L.G.

    1972-01-01

    The U.S. Geological Survey's program for collecting and analyzing surface-water samples for radiochemical constituents at hydrologic bench-mark stations is described. Analytical methods used during the study are described briefly and data obtained from 55 of the network stations in the United States during the period from 1967 to 1971 are given in tabular form.Concentration values are reported for dissolved uranium, radium, gross alpha and gross beta radioactivity. Values are also given for suspended gross alpha radioactivity in terms of natural uranium. Suspended gross beta radioactivity is expressed both as the equilibrium mixture of strontium-90/yttrium-90 and as cesium-137.Other physical parameters reported which describe the samples include the concentrations of dissolved and suspended solids, the water temperature and stream discharge at the time of the sample collection.

  10. Porting a Hall MHD Code to a Graphic Processing Unit

    NASA Technical Reports Server (NTRS)

    Dorelli, John C.

    2011-01-01

    We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.
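
    For illustration, the HLL approximate Riemann solver reduces to a single closed-form interface flux once left and right wave-speed estimates are available. The Python sketch below writes it for a generic 1-D conservation law; the scalar linear-advection closure exists only to make the sketch runnable and is not the Hall MHD flux used in the code.

      # HLL approximate Riemann flux for a generic 1-D conservation law u_t + f(u)_x = 0.
      # The scalar linear-advection example (f(u) = a*u, a single wave speed a) is only
      # there to make the sketch runnable; an MHD code supplies its own flux function
      # and fast-wave speed estimates.

      def hll_flux(u_left, u_right, f_left, f_right, s_left, s_right):
          if s_left >= 0.0:
              return f_left                  # all waves move right: upwind from the left
          if s_right <= 0.0:
              return f_right                 # all waves move left: upwind from the right
          return (s_right * f_left - s_left * f_right
                  + s_left * s_right * (u_right - u_left)) / (s_right - s_left)

      a = 1.0                                # advection speed
      u_l, u_r = 2.0, 0.5
      s_l, s_r = min(a, 0.0), max(a, 0.0)    # simple wave-speed estimates
      print(hll_flux(u_l, u_r, a * u_l, a * u_r, s_l, s_r))   # 2.0, the exact upwind flux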

  11. Anisn-Dort Neutron-Gamma Flux Intercomparison Exercise for a Simple Testing Model

    NASA Astrophysics Data System (ADS)

    Boehmer, B.; Konheiser, J.; Borodkin, G.; Brodkin, E.; Egorov, A.; Kozhevnikov, A.; Zaritsky, S.; Manturov, G.; Voloschenko, A.

    2003-06-01

    The ability of transport codes ANISN, DORT, ROZ-6, MCNP and TRAMO, as well as nuclear data libraries BUGLE-96, ABBN-93, VITAMIN-B6 and ENDF/B-6 to deliver consistent gamma and neutron flux results was tested in the calculation of a one-dimensional cylindrical model consisting of a homogeneous core and an outer zone with a single material. Model variants with H2O, Fe, Cr and Ni in the outer zones were investigated. The results are compared with MCNP-ENDF/B-6 results. Discrepancies are discussed. The specified test model is proposed as a computational benchmark for testing calculation codes and data libraries.

  12. Mercury contamination of riverine sediments in the vicinity of a mercury cell chlor-alkali plant in Sagua River, Cuba.

    PubMed

    Bolaños-Álvarez, Yoelvis; Alonso-Hernández, Carlos Manuel; Morabito, Roberto; Díaz-Asencio, Misael; Pinto, Valentina; Gómez-Batista, Miguel

    2016-06-01

    Sediment is a good indicator for assessing coastal mercury contamination. The objective of this study was to assess the magnitude of mercury pollution in the sediments of the Sagua River, Cuba, where a mercury-cell chlor-alkali plant has operated since the beginning of the 1980s. Surface sediments and a sediment core were collected in the Sagua River and analyzed for mercury using an Advanced Mercury Analyser (LECO AMA-254). Total mercury concentrations in surface sediments ranged from 0.165 to 97 μg g⁻¹ dry weight. The Enrichment Factor (EF), Index of Geoaccumulation (Igeo), and Sediment Quality Guidelines were applied to assess the degree of sediment contamination. The EF showed the significant role of anthropogenic mercury inputs in sediments of the Sagua River. The results also showed that at all stations downstream from the chlor-alkali plant effluents, the mercury concentrations in the sediments were higher than the Probable Effect Level, indicating a high potential for adverse biological effects. The Igeo index indicated that the sediments in the Sagua River are evaluated as heavily polluted to extremely contaminated and should be remediated as a hazardous material. This study could provide the latest benchmark of mercury pollution and prove beneficial to future pollution studies in relation to monitoring work in sediments from tropical rivers and estuaries. Copyright © 2016 Elsevier Ltd. All rights reserved.
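
    The two contamination indices used above have simple closed forms: the enrichment factor normalizes the metal to a conservative reference element in both sample and background, and the geoaccumulation index compares the measured concentration with 1.5 times the background value on a log2 scale. The Python sketch below applies both; the mercury, aluminium, and background concentrations are hypothetical, not the Sagua River data.

      import math

      # Enrichment factor (EF) and geoaccumulation index (Igeo) for a sediment sample.
      # The concentrations below are hypothetical illustrations, not the study's data.

      def enrichment_factor(c_metal, c_ref, c_metal_bg, c_ref_bg):
          """EF = (metal/reference)_sample / (metal/reference)_background."""
          return (c_metal / c_ref) / (c_metal_bg / c_ref_bg)

      def igeo(c_metal, c_metal_bg):
          """Igeo = log2( C / (1.5 * background) ); values > 5 are 'extremely contaminated'."""
          return math.log2(c_metal / (1.5 * c_metal_bg))

      hg_sample, al_sample = 25.0, 60000.0          # ug/g Hg and Al in the sample (hypothetical)
      hg_background, al_background = 0.05, 80000.0  # background values (hypothetical)

      print(f"EF   = {enrichment_factor(hg_sample, al_sample, hg_background, al_background):.0f}")
      print(f"Igeo = {igeo(hg_sample, hg_background):.1f}")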

  13. [The health economics of attention deficit hyperactivity disorder in Germany. Part 2: Therapeutic options and their cost-effectiveness].

    PubMed

    Schlander, M; Trott, G-E; Schwarz, O

    2010-03-01

    Attention deficit hyperactivity disorder (ADHD) has been associated with a continuous increase in health care utilization and thus expenditures. This raises the issue of the cost-effectiveness of health care provided for patients with ADHD. Comparative health economic evaluations generate relevant insights here; the results of such cost-effectiveness analyses (CEAs) are typically reported as incremental cost-effectiveness ratios (ICERs) of alternatives versus an established standard. International evaluations, as well as specific adaptations to Germany, indicate an acceptable to attractive cost-effectiveness--according to currently used international benchmarks--of an intense medication management strategy based on stimulants, primarily methylphenidate, with ICERs ranging from 20,000 EUR to 37,000 EUR per quality-adjusted life year (QALY) gained. Economic modeling studies also suggest cost-effectiveness of long-acting modified-release preparations of methylphenidate, owing to the improved treatment compliance associated with simplified once-daily administration schemes. Atomoxetine, in contrast, appears economically inferior to long-acting stimulants, given its higher acquisition costs and at best equal clinical effectiveness. There are currently no data supporting the cost-effectiveness of psychotherapeutic or behavioral interventions. The economic evaluations published to date are generally limited by time horizons of up to 1 year and by their prevailing focus on improvement of ADHD core symptoms only. Therefore, further research into the cost-effectiveness of ADHD treatment strategies seems warranted.
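
    For illustration, an ICER is simply the incremental cost of a strategy divided by its incremental effectiveness relative to a comparator. The Python sketch below works one example; the cost and QALY figures are hypothetical and are not taken from the German adaptations discussed above.

      # Incremental cost-effectiveness ratio (ICER) of a treatment strategy versus a
      # comparator: incremental cost per quality-adjusted life year (QALY) gained.
      # The cost and QALY figures are hypothetical, not values from this study.

      def icer(cost_new, qaly_new, cost_old, qaly_old):
          return (cost_new - cost_old) / (qaly_new - qaly_old)

      ratio = icer(cost_new=4800.0, qaly_new=0.91,    # intensive medication management
                   cost_old=3000.0, qaly_old=0.85)    # comparator strategy
      print(f"ICER = {ratio:.0f} EUR per QALY gained")  # 30000 EUR/QALY in this example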

  14. Analyzing the Core Flight Software (CFS) with SAVE

    NASA Technical Reports Server (NTRS)

    Ganesan, Dharmalingam; Lindvall, Mikael; McComas, David

    2008-01-01

    This viewgraph presentation describes the SAVE tool and its application to the Core Flight Software (CFS). The contents include: 1) Fraunhofer - a short intro; 2) Context of this Collaboration; 3) CFS - Core Flight Software?; 4) The SAVE Tool; 5) Applying SAVE to CFS - a few example analyses; and 6) Goals.

  15. Structural and Sequence Similarity Makes a Significant Impact on Machine-Learning-Based Scoring Functions for Protein-Ligand Interactions.

    PubMed

    Li, Yang; Yang, Jianyi

    2017-04-24

    The prediction of protein-ligand binding affinity has recently been improved remarkably by machine-learning-based scoring functions. For example, using a set of simple descriptors representing atomic distance counts, RF-Score improves the Pearson correlation coefficient to about 0.8 on the core set of the PDBbind 2007 database, which is significantly higher than the performance of any conventional scoring function on the same benchmark. A few studies have discussed the performance of machine-learning-based methods, but the reason for this improvement remains unclear. In this study, by systematically controlling the structural and sequence similarity between the training and test proteins of the PDBbind benchmark, we demonstrate that protein structural and sequence similarity has a significant impact on machine-learning-based methods. After removal of training proteins that are highly similar to the test proteins, as identified by structure alignment and sequence alignment, machine-learning-based methods trained on the new training sets no longer outperform the conventional scoring functions. In contrast, the performance of conventional functions such as X-Score is relatively stable no matter what training data are used to fit the weights of their energy terms.
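    A hedged sketch of the kind of filtering described above: drop training complexes whose protein is too similar to any test protein before refitting a model. The similarity function and the 30% cut-off are placeholders, not the exact protocol of the study.

    ```python
    # Filter a training set by protein similarity to a held-out test set.
    def filter_training_set(train, test, similarity, cutoff=0.30):
        """Keep training entries whose similarity to every test protein is below cutoff."""
        kept = []
        for t in train:
            if all(similarity(t["protein"], q["protein"]) < cutoff for q in test):
                kept.append(t)
        return kept

    # Toy similarity: fraction of identical residues at aligned positions.
    def toy_identity(a, b):
        n = min(len(a), len(b))
        return sum(x == y for x, y in zip(a[:n], b[:n])) / n if n else 0.0

    train = [{"protein": "ACDEFGHIKL"}, {"protein": "MNPQRSTVWY"}]
    test = [{"protein": "ACDEFGHIKV"}]
    print(len(filter_training_set(train, test, toy_identity)))  # only the dissimilar entry survives
    ```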

  16. Development of a Computing Cluster At the University of Richmond

    NASA Astrophysics Data System (ADS)

    Carbonneau, J.; Gilfoyle, G. P.; Bunn, E. F.

    2010-11-01

    The University of Richmond has developed a computing cluster to support the massive simulation and data analysis requirements of programs in intermediate-energy nuclear physics and cosmology. It is a 20-node, 240-core system running Red Hat Enterprise Linux 5. We have built and installed the physics software packages (Geant4, gemc, MADmap...) and developed shell and Perl scripts for running those programs on the remote nodes. The system has a theoretical processing peak of about 2500 GFLOPS. Testing with the High Performance Linpack (HPL) benchmarking program (one of the standard benchmarks used by the TOP500 list of fastest supercomputers) resulted in speeds of over 900 GFLOPS. The difference between the theoretical and measured speeds is due to limitations in the communication speed among the nodes, which creates a bottleneck for large-memory problems. As HPL sends data between nodes, the gigabit Ethernet connection cannot keep up with the processing power. We will show how both the theoretical and actual performance of the cluster compare with other current and past clusters, as well as the cost per GFLOP. We will also examine how performance scales as work is distributed to increasing numbers of nodes.
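    A back-of-envelope sketch of how a theoretical peak of roughly 2500 GFLOPS arises; the clock speed and FLOPs/cycle below are assumptions chosen only to be consistent with the quoted numbers, not specifications from the abstract.

    ```python
    # Theoretical peak = cores x clock x FLOPs per cycle; HPL efficiency = measured / peak.
    cores = 240                  # 20 nodes x 12 cores
    clock_ghz = 2.6              # assumed core clock
    flops_per_cycle = 4          # assumed double-precision FLOPs per cycle per core
    peak_gflops = cores * clock_ghz * flops_per_cycle
    measured_gflops = 900.0      # HPL result quoted above
    print(f"theoretical peak ~ {peak_gflops:.0f} GFLOPS")
    print(f"HPL efficiency   ~ {measured_gflops / peak_gflops:.0%}")  # ~36%, limited by gigabit Ethernet
    ```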

  17. Vertical profile, source apportionment, and toxicity of PAHs in sediment cores of a wharf near the coal-based steel refining industrial zone in Kaohsiung, Taiwan.

    PubMed

    Chen, Chih-Feng; Chen, Chiu-Wen; Ju, Yun-Ru; Dong, Cheng-Di

    2016-03-01

    Three sediment cores were collected from a wharf near a coal-based steel refining industrial zone in Kaohsiung, Taiwan. Analyses for 16 polycyclic aromatic hydrocarbons (PAHs) of the US Environmental Protection Agency priority list in the core sediment samples were conducted using gas chromatography-mass spectrometry. The vertical profiles of PAHs in the core sediments were assessed, possible sources and their apportionment were identified, and the toxicity risk of the core sediments was determined. The results of the sediment analyses showed that total concentrations of the 16 PAHs varied from 11774 ± 4244 to 16755 ± 4593 ng/g dry weight (dw). Generally, the vertical profiles of the PAHs exhibited a decreasing trend from the top to the lower levels of the S1 core and an increasing trend from the top to the lower levels of the S2 and S3 cores. Among the core sediment samples, the five- and six-ring PAHs predominated in the S1 core, accounting for 42 to 54 %, whereas the composition of the PAHs in the S2 and S3 cores was distributed equally across three groups: two- and three-ring, four-ring, and five- and six-ring PAHs. These results indicated that the PAH contamination at the site of the S1 core had a different source. Molecular indices and principal component analysis with multivariate linear regression were used to determine the source contributions; the results showed that the contributions of coal, oil-related, and vehicle sources were 38.6, 35.9, and 25.5 %, respectively. A PAH toxicity assessment using the mean effect range-median quotient (m-ERM-q, 0.59-0.79), benzo[a]pyrene toxicity equivalent (TEQ(carc), 1466-1954 ng TEQ/g dw), and dioxin toxicity equivalent (TEQ(fish), 3036-4174 pg TEQ/g dw) identified the wharf as the most affected area. These results can be used for regular monitoring, and future pollution prevention and management should target the coal-based industries in this region for pollution reduction.
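    A minimal sketch of the two toxicity screening quantities used above, with assumed standard formulas; the ERM guideline values and TEF weights below are placeholders, not the tables used by the authors.

    ```python
    # Mean ERM quotient and a benzo[a]pyrene toxicity-equivalent (TEQ) sum.
    def mean_erm_quotient(concentrations, erm_values):
        """m-ERM-q = (1/n) * sum(C_i / ERM_i) over the PAHs considered."""
        ratios = [c / erm for c, erm in zip(concentrations, erm_values)]
        return sum(ratios) / len(ratios)

    def bap_teq(concentrations, tef):
        """TEQ = sum(C_i * TEF_i), expressed in ng TEQ/g dry weight."""
        return sum(c * f for c, f in zip(concentrations, tef))

    pah_ng_g = [1200.0, 800.0, 450.0]      # hypothetical concentrations, ng/g dw
    erm_ng_g = [1600.0, 2600.0, 1600.0]    # hypothetical ERM guideline values
    tef = [0.1, 0.01, 1.0]                 # hypothetical TEF weights (BaP = 1)
    print(round(mean_erm_quotient(pah_ng_g, erm_ng_g), 2))
    print(round(bap_teq(pah_ng_g, tef), 1), "ng TEQ/g dw")
    ```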

  18. Protein kinases responsible for the phosphorylation of the nuclear egress core complex of human cytomegalovirus.

    PubMed

    Sonntag, Eric; Milbradt, Jens; Svrlanska, Adriana; Strojan, Hanife; Häge, Sigrun; Kraut, Alexandra; Hesse, Anne-Marie; Amin, Bushra; Sonnewald, Uwe; Couté, Yohann; Marschall, Manfred

    2017-10-01

    Nuclear egress of herpesvirus capsids is mediated by a multi-component nuclear egress complex (NEC) assembled by a heterodimer of two essential viral core egress proteins. In the case of human cytomegalovirus (HCMV), this core NEC is defined by the interaction between the membrane-anchored pUL50 and its nuclear cofactor, pUL53. NEC protein phosphorylation is considered to be an important regulatory step, so this study focused on the respective role of viral and cellular protein kinases. Multiply phosphorylated pUL50 varieties were detected by Western blot and Phos-tag analyses as resulting from both viral and cellular kinase activities. In vitro kinase analyses demonstrated that pUL50 is a substrate of both PKCα and CDK1, while pUL53 can also be moderately phosphorylated by CDK1. The use of kinase inhibitors further illustrated the importance of distinct kinases for core NEC phosphorylation. Importantly, mass spectrometry-based proteomic analyses identified five major and nine minor sites of pUL50 phosphorylation. The functional relevance of core NEC phosphorylation was confirmed by various experimental settings, including kinase knock-down/knock-out and confocal imaging, in which it was found that (i) HCMV core NEC proteins are not phosphorylated solely by viral pUL97, but also by cellular kinases; (ii) both PKC and CDK1 phosphorylation are detectable for pUL50; (iii) no impact of PKC phosphorylation on NEC functionality has been identified so far; (iv) nonetheless, CDK1-specific phosphorylation appears to be required for functional core NEC interaction. In summary, our findings provide the first evidence that the HCMV core NEC is phosphorylated by cellular kinases, and that the complex pattern of NEC phosphorylation has functional relevance.

  19. Hydrologic Observatories: Design, Operation, and the Neuse Basin Prototype

    NASA Astrophysics Data System (ADS)

    Reckhow, K.; Band, L.

    2003-12-01

    Hydrologic observatories are conceived as major research facilities that will be available to the full hydrologic community to facilitate the comprehensive, cross-disciplinary and multi-scale measurements necessary to address the current and next generation of critical science and management issues. A network of hydrologic observatories is proposed that both develops nationally comparable, multidisciplinary data sets and provides study areas that allow scientists, through their own creativity, to make scientific breakthroughs that would be impossible without the proposed observatories. The core objective of an observatory is to improve predictive understanding of the flow paths, fluxes, and residence times of water, sediment and nutrients (the "core data") across a range of spatial and temporal scales and across 'interfaces'. To assess attainment of this objective, a benchmark will be established in the first year and evaluated periodically. The benchmark should provide an estimate of prediction uncertainty at points in the stream across scales; the general principle is that predictive understanding must be demonstrated internal to the catchment as well as at its outlet. The core data will be needed for practically any hydrologic study, yet the absence of these data has been a barrier to larger scale studies in the past. Advancement of hydrologic science facilitated by the network of hydrologic observatories is expected to focus on a set of science drivers, drawn from the major scientific questions posed by the set of NRC reports and refined into CUAHSI themes. These hypotheses will be tested at all observatories and will be used in the design to ensure the sufficiency of the data set. To make the observatories a national (and international) resource, a key aspect of the operation is the support of remote PIs. This support will include a resident staff of scientists and technicians on the order of 10 FTEs, availability of dormitory, laboratory, and workshop space for all scientists, and the awarding of travel support out of observatory funds. The conflicting goals of supporting a PI-designed observatory and a network of community-available observatories will be reconciled by allocating resources so that both goals are met. It is proposed that these resources be divided into three pools: a core data pool (data to be collected by the observatory PIs and staff and, where possible, augmented by existing (e.g., USGS) data collection); a design pool (available to support the designs of observatory PIs); and a community pool (available to non-PI scientists to test cross-observatory hypotheses). Application of these design and operation concepts to the design of the Neuse basin prototype hydrologic observatory is briefly discussed.

  20. Descriptions and preliminary report on sediment cores from the southwest coastal area, Everglades National Park, Florida

    USGS Publications Warehouse

    Wingard, G. Lynn; Cronin, Thomas M.; Holmes, Charles W.; Willard, Debra A.; Budet, Carlos A.; Ortiz, Ruth E.

    2005-01-01

    Sediment cores were collected from five locations in the southwest coastal area of Everglades National Park, Florida, in May 2004 for the purpose of determining the ecosystem history of the area and the impacts of changes in flow through the Shark River Slough. An understanding of natural cycles of change prior to significant human disturbance allows land managers to set realistic performance measures and targets for salinity and other water quality and quantity measures. Preliminary examination of the cores indicates significant changes have taken place over the last 1000-2000 years. The cores collected from the inner bays - the most landward bays - are distinctly different from other estuarine sediment cores examined in Florida Bay and Biscayne Bay. Peats in the inner-bay cores from Big Lostmans Bay, Broad River Bay, and Tarpon Bay were deposited at least 1000 years before present (BP) based on radiocarbon analyses. The peats are overlain by poorly sorted organic muds and sands containing species indicative of deposition in a freshwater to very low salinity environment. The Alligator Bay core, the most northern inner-bay core, is almost entirely sand; no detailed faunal analyses or radiometric dating have been completed on this core. The Roberts River core, taken from the mouth of the river where it empties into Whitewater Bay, is lithologically and faunally similar to previously examined cores from Biscayne and Florida Bays; however, the basal unit was deposited ~2000 years before present based on radiocarbon analyses. A definite trend of increasing salinity over time is seen in the Roberts River core, from sediments representing a terrestrially dominated freshwater environment at the bottom of the core to those representing an estuarine environment with a strong freshwater influence at the top. The changes seen at Roberts River could represent a combination of factors, including rising sea level and changes in freshwater supply, but the timing and extent of the changes need to be determined. The preliminary information on the cores collected in 2004 will be combined with data from cores collected in July 2005. The 2005 cores were collected along transects moving from the inner bays out toward the coast. These transects, combining information from the 2004 and 2005 cores, will allow us to examine long-term trends in freshwater supply, sea-level rise, and potentially the impact of storms on the coastal ecosystem.

  1. Matching Up to the Information Society: An Evaluation of the EU, the EU Accession Countries, Switzerland and the United States. Summary

    ERIC Educational Resources Information Center

    Graafland-Essers, Irma; Cremonini, Leon; Ettedgui, Emile; Botterman, Maarten

    2003-01-01

    This report presents the current understanding of the advancement of the Information Society within the European Union and countries that are up for accession in 2004, and is based on the SIBIS (Statistical Indicators Benchmarking the Information Society) surveys and analyses per SIBIS theme and country. The report is unique in its coherent and…

  2. Matching Up to the Information Society: An Evaluation of the EU, the EU Accession Countries, Switzerland and the United States

    ERIC Educational Resources Information Center

    Graafland-Essers, Irma; Cremonini, Leon; Ettedgui, Emile; Botterman, Maarten

    2003-01-01

    This report presents the current understanding of the advancement of the Information Society within the European Union and countries that are up for accession in 2004, and is based on the SIBIS (Statistical Indicators Benchmarking the Information Society) surveys and analyses per SIBIS theme and country. The report is unique in its coherent and…

  3. Cycle 0(CY1991) NLS trade studies and analyses report. Book 1: Structures and core vehicle

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This report (SR-1: Structures, Trades, and Analysis) documents the core tankage trades and analyses performed in support of the National Launch System (NLS) Cycle 0 preliminary design activities. The report covers trades that were conducted on the Vehicle Assembly, Fwd Skirt, LO2 Tank, Intertank, LH2 Tank, and Aft Skirt of the NLS core tankage. For each trade study, a two-page executive summary and the detailed trade study are provided. The trade studies contain study results, recommended changes to the Cycle 0 baselines, and suggested follow-on tasks to be performed during Cycle 1.

  4. Coupled thermo-chemical boundary conditions in double-diffusive geodynamo models at arbitrary Lewis numbers.

    NASA Astrophysics Data System (ADS)

    Bouffard, M.

    2016-12-01

    Convection in the Earth's outer core is driven by the combination of two buoyancy sources: a thermal source directly related to the Earth's secular cooling, the release of latent heat and possibly the heat generated by radioactive decay, and a compositional source due to the crystallization of the growing inner core, which releases light elements into the liquid outer core. Because the dynamics of melting/crystallization depend on the heat flux distribution, the thermochemical boundary conditions are coupled at the inner core boundary, which may affect the dynamo in various ways, particularly if heterogeneous conditions are imposed at one boundary. In addition, the thermal and compositional molecular diffusivities differ by three orders of magnitude. This can produce significant differences in the convective dynamics compared to pure thermal or compositional convection, due to the potential occurrence of double-diffusive phenomena. Traditionally, temperature and composition have been combined into a single variable called codensity, under the assumption that turbulence mixes all physical properties at an "eddy-diffusion" rate. This description does not allow for a proper treatment of the thermochemical coupling and is certainly incorrect within stratified layers, in which double-diffusive phenomena can be expected. For a more general and rigorous approach, two distinct transport equations should therefore be solved for temperature and composition. However, the weak compositional diffusivity is technically difficult to handle in current geodynamo codes and requires the use of a semi-Lagrangian description to minimize numerical diffusion. We implemented a "particle-in-cell" method in a geodynamo code to properly describe the compositional field. The code is suitable for highly parallel computing architectures and was successfully tested on two benchmarks. Following the work of Aubert et al. (2008), we use this new tool to perform dynamo simulations that include thermochemical coupling at the inner core boundary, as well as an exploration of the infinite-Lewis-number limit, to study the effect of a heterogeneous core-mantle boundary heat flow on inner core growth.
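    A very reduced sketch of the particle-in-cell idea mentioned above: tracer particles carry the compositional field and are advected without grid diffusion, and the grid value is recovered by averaging the particles in each cell. One dimension, a prescribed constant velocity and a periodic domain are assumptions for illustration only; this is not the geodynamo implementation itself.

    ```python
    import numpy as np

    nx, npart, L, dt, nsteps = 64, 64 * 16, 1.0, 0.002, 200
    u = 0.8                                              # constant advection velocity (assumed)
    x = np.random.default_rng(0).uniform(0, L, npart)    # particle positions
    comp = (x < 0.5).astype(float)                       # compositional step carried by particles

    for _ in range(nsteps):
        x = (x + u * dt) % L                             # advect particles (semi-Lagrangian step)

    # Deposit onto the grid: mean composition of the particles in each cell.
    cell = np.minimum((x / (L / nx)).astype(int), nx - 1)
    grid = np.zeros(nx)
    counts = np.zeros(nx)
    np.add.at(grid, cell, comp)
    np.add.at(counts, cell, 1.0)
    grid = np.divide(grid, counts, out=np.zeros(nx), where=counts > 0)
    print(grid.round(2))          # the step is translated with its sharp edges preserved
    ```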

  5. Preliminary Physical Stratigraphy and Geophysical Data of the USGS Hope Plantation Core (BE-110), Bertie County, North Carolina

    USGS Publications Warehouse

    Weems, Robert E.; Seefelt, Ellen L.; Wrege, Beth M.; Self-Trail, Jean M.; Prowell, David C.; Durand, Colleen; Cobbs, Eugene F.; McKinney, Kevin C.

    2007-01-01

    Introduction: In March and April 2004, the U.S. Geological Survey (USGS), in cooperation with the North Carolina Geological Survey (NCGS) and the Raleigh Water Resources Discipline (WRD), drilled a stratigraphic test hole and well in Bertie County, North Carolina (fig. 1). The Hope Plantation test hole (BE-110-2004) was cored on the property of Hope Plantation near Windsor, North Carolina. The drill site is located on the Republican 7.5-minute quadrangle at lat 36°01'58"N., long 78°01'09"W. (decimal degrees 36.0329 and 77.0192) (fig. 2). The altitude of the site is 48 ft above mean sea level as determined by Paulin Precise altimeter. This test hole was continuously cored by Eugene F. Cobbs, III and Kevin C. McKinney (USGS) to a total depth of 1094.5 ft. Later, a ground water observation well was installed with a screened interval between 315 and 329 feet below land surface (fig. 3). Upper Triassic, Lower Cretaceous, Upper Cretaceous, Tertiary, and Quaternary sediments were recovered from the site. The core is stored at the NCGS Coastal Plain core storage facility in Raleigh, North Carolina. In this report, we provide the initial lithostratigraphic summary recorded at the drill site along with site core photographs, data from the geophysical logger, calcareous nannofossil biostratigraphic correlations (Table 1), and initial hydrogeologic interpretations. The lithostratigraphy from this core can be compared to previous investigations of the Elizabethtown corehole, near Elizabethtown, North Carolina, in Bladen County (Self-Trail, Wrege, and others, 2004), the Kure Beach corehole, near Wilmington, North Carolina, in New Hanover County (Self-Trail, Prowell, and Christopher, 2004), the Esso #1, Esso #2, Mobil #1 and Mobil #2 cores in the Albemarle and Pamlico Sounds (Zarra, 1989), and the Cape Fear River outcrops in Bladen County (Farrell, 1998; Farrell and others, 2001). This core is the third in a series of planned benchmark coreholes that will be used to elucidate the physical stratigraphy, facies, thickness, and hydrogeology of the Tertiary and Cretaceous Coastal Plain sediments of North Carolina.

  6. Experimental and Theoretical Investigations on Viscosity of Fe-Ni-C Liquids at High Pressures

    NASA Astrophysics Data System (ADS)

    Chen, B.; Lai, X.; Wang, J.; Zhu, F.; Liu, J.; Kono, Y.

    2016-12-01

    Understanding and modeling Earth's core processes, such as the geodynamo and heat flow via convection in the liquid outer core, hinge on the viscosity of candidate liquid iron alloys under core conditions. Viscosity estimates for the metallic liquid of the outer core obtained by various methods, however, span up to 12 orders of magnitude. Due to experimental challenges, viscosity measurements of iron liquids alloyed with lighter elements are scarce and have been conducted at conditions far below those expected for the outer core. In this study, we adopt a synergistic approach by integrating experiments at experimentally achievable conditions with computations up to core conditions. We performed viscosity measurements based on the modified Stokes' floating sphere viscometry method for the Fe-Ni-C liquids at high pressures in a Paris-Edinburgh press at Sector 16 of the Advanced Photon Source, Argonne National Laboratory. Our results show that the addition of 3-5 wt.% carbon to iron-nickel liquids has a negligible effect on viscosity at pressures lower than 5 GPa. The viscosity of the Fe-Ni-C liquids, however, becomes notably higher and increases by a factor of 3 at 5-8 GPa. Similarly, our first-principles molecular dynamics calculations up to Earth's core pressures show a viscosity change in Fe-Ni-C liquids at 5 GPa. The significant change in the viscosity is likely due to a liquid structural transition of the Fe-Ni-C liquids, as revealed by our X-ray diffraction measurements and first-principles molecular dynamics calculations. The observed correlation between the structure and physical properties of liquids permits stringent benchmark tests of computational liquid models and contributes to a more comprehensive understanding of liquid properties under high pressures. The interplay between experiments and first-principles-based modeling is shown to be a practical and effective methodology for studying liquid properties under outer core conditions that are difficult to reach with current static high-pressure capabilities. The new viscosity data from experiments and computations provide new insights into the internal dynamics of the outer core.
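    As a sketch of the classical Stokes falling/floating-sphere relation on which the modified viscometry builds (the sphere radius, density contrast and terminal velocity below are made-up illustration values, not measurements from these experiments):

    ```python
    # Stokes' law: eta = 2 r^2 * delta_rho * g / (9 * v), giving dynamic viscosity in Pa s.
    def stokes_viscosity(radius_m, delta_rho_kg_m3, velocity_m_s, g=9.81):
        """Viscosity inferred from the terminal velocity of a small sphere."""
        return 2.0 * radius_m**2 * delta_rho_kg_m3 * g / (9.0 * velocity_m_s)

    eta = stokes_viscosity(radius_m=2.5e-4, delta_rho_kg_m3=1500.0, velocity_m_s=2.0e-2)
    print(f"eta ~ {eta * 1e3:.1f} mPa s")   # ~10 mPa s for these illustrative inputs
    ```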

  7. Benchmarking reference services: step by step.

    PubMed

    Buchanan, H S; Marshall, J G

    1996-01-01

    This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.

  8. Baseline sediment trace metals investigation: Steinhatchee River estuary, Florida, Northeast Gulf of Mexico

    USGS Publications Warehouse

    Trimble, C.A.; Hoenstine, R.W.; Highley, A.B.; Donoghue, J.F.; Ragland, P.C.

    1999-01-01

    This Florida Geological Survey/U.S. Department of the Interior, Minerals Management Service Cooperative Study provides baseline data for major and trace metal concentrations in the sediments of the Steinhatchee River estuary. These data are intended to provide a benchmark for comparison with future metal concentration data measurements. The Steinhatchee River estuary is a relatively pristine bay located within the Big Bend Wildlife Management Area on the North Central Florida Gulf of Mexico coastline. The river flows 55 km through woodlands and planted pines before emptying into the Gulf at Deadman Harbor. Water quality in the estuary is excellent at present. There is minimal development within the watershed. The estuary is part of an extensive system of marshes that formed along the Florida Gulf coast during the Holocene marine transgression. Sediment accretion rate measurements range from 1.4 to 4.1 mm/yr on the basis of lead-210 measurements. Seventy-nine short cores were collected from 66 sample locations, representing four lithofacies: clay- and organic-rich sands, organic-rich sands, clean quartz sands, and oyster bioherms. Samples were analyzed for texture, total organic matter, total carbon, total nitrogen, clay mineralogy, and major and trace-metal content. Following these analyses, metal concentrations were normalized against geochemical reference elements (aluminum and iron) and against total weight percent organic matter. Metals were also normalized granulometrically against total weight percent fines (<0.062 mm). Concentrations were determined by inductively coupled plasma-atomic emission spectrometry (ICP-AES) for all metals except mercury. Mercury concentrations were determined by cold-flameless atomic absorption spectrometry (AAS). Granulometric measurements were made by sieve and pipette analyses. Organic matter was determined by two methods: weight loss upon ignition and elemental analysis (by Carlo-Erba Furnace) of carbon and nitrogen. X-ray diffraction was used to determine clay mineralogy. Trace-metal concentrations were best correlated when normalized with respect to sediment aluminum concentrations. Normalizations indicate that most major and trace-metal concentrations fall within 95% prediction limits of the expected value. This finding suggests that little significant metal contamination occurred within this system prior to 1994 sediment sampling. Exceptions include lead, mercury, copper, zinc, potassium, and phosphorous. Lead and mercury are elements that generally enter this watershed through atmospheric deposition; thus, anomalous levels of these metals are not necessarily associated with activities within the watershed of the Steinhatchee River estuary. Anomalous concentrations of other metals such as zinc, copper, and phosphorous probably do originate within the Steinhatchee watershed. Copper failed to correlate well with any geochemical or granulometric normalizer, and this condition was not limited to a single facies or area within the estuary. This finding may indicate copper contamination in the system. Increased zinc and copper levels may be attributed to marine paints. Phosphorous levels also appeared to be elevated in a few locations in the two marsh facies sampled. 
This may be due to nutrient loading from two small communities, Jena and Steinhatchee, or from the application of this element in fertilizer to reduce moisture stress to young planted pines on tree farms within the watershed.
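    A hedged sketch of the normalisation step described above: regress a trace metal on a geochemical reference element (aluminium) and flag samples falling outside the 95% prediction limits of that regression. The data below are synthetic, not Steinhatchee measurements.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    al = rng.uniform(1.0, 8.0, 40)                   # wt% Al (synthetic)
    zn = 12.0 * al + rng.normal(0.0, 5.0, 40)        # ppm Zn, linear in Al plus noise
    zn[5] += 60.0                                    # one artificially "enriched" sample

    slope, intercept, *_ = stats.linregress(al, zn)
    pred = intercept + slope * al
    resid = zn - pred
    n = len(al)
    s = np.sqrt(np.sum(resid**2) / (n - 2))          # residual standard error
    se = s * np.sqrt(1 + 1/n + (al - al.mean())**2 / np.sum((al - al.mean())**2))
    tcrit = stats.t.ppf(0.975, n - 2)
    outside = np.abs(resid) > tcrit * se             # beyond the 95% prediction limits
    print("samples flagged as possibly contaminated:", np.where(outside)[0])
    ```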

  9. Estimation of an optimal chemotherapy utilisation rate for cancer: setting an evidence-based benchmark for quality cancer care.

    PubMed

    Jacob, S A; Ng, W L; Do, V

    2015-02-01

    There is wide variation in the proportion of newly diagnosed cancer patients who receive chemotherapy, indicating the need for a benchmark rate of chemotherapy utilisation. This study describes an evidence-based model that estimates the proportion of new cancer patients in whom chemotherapy is indicated at least once (defined as the optimal chemotherapy utilisation rate). The optimal chemotherapy utilisation rate can act as a benchmark for measuring and improving the quality of care. Models of optimal chemotherapy utilisation were constructed for each cancer site based on indications for chemotherapy identified from evidence-based treatment guidelines. Data on the proportion of patient- and tumour-related attributes for which chemotherapy was indicated were obtained, using population-based data where possible. Treatment indications and epidemiological data were merged to calculate the optimal chemotherapy utilisation rate. Monte Carlo simulations and sensitivity analyses were used to assess the effect of controversial chemotherapy indications and variations in epidemiological data on our model. Chemotherapy is indicated at least once in 49.1% (95% confidence interval 48.8-49.6%) of all new cancer patients in Australia. The optimal chemotherapy utilisation rates for individual tumour sites ranged from a low of 13% in thyroid cancers to a high of 94% in myeloma. The optimal chemotherapy utilisation rate can serve as a benchmark for planning chemotherapy services on a population basis. The model can be used to evaluate service delivery by comparing the benchmark rate with patterns of care data. The overall estimate for other countries can be obtained by substituting the relevant distribution of cancer types. It can also be used to predict future chemotherapy workload and can be easily modified to take into account future changes in cancer incidence, presentation stage or chemotherapy indications. Copyright © 2014 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
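    An illustrative sketch of the modelling approach described above: combine the incidence share of each cancer type with the proportion of patients in whom chemotherapy is indicated, then propagate uncertainty with Monte Carlo draws. All numbers below are invented placeholders, not the Australian data or guideline indications.

    ```python
    import random

    # (share of new cancer cases, indicated fraction, +/- uncertainty on that fraction)
    tumour_sites = {
        "site A": (0.30, 0.60, 0.05),
        "site B": (0.25, 0.90, 0.03),
        "site C": (0.45, 0.20, 0.04),
    }

    def draw_rate():
        total = 0.0
        for share, indicated, unc in tumour_sites.values():
            total += share * min(max(random.gauss(indicated, unc), 0.0), 1.0)
        return total

    random.seed(0)
    draws = sorted(draw_rate() for _ in range(10_000))
    print(f"optimal utilisation ~ {sum(draws) / len(draws):.1%} "
          f"(95% interval {draws[250]:.1%}-{draws[9750]:.1%})")
    ```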

  10. Nuclear Data Needs for Generation IV Nuclear Energy Systems

    NASA Astrophysics Data System (ADS)

    Rullhusen, Peter

    2006-04-01

    Nuclear data needs for generation IV systems. Future of nuclear energy and the role of nuclear data / P. Finck. Nuclear data needs for generation IV nuclear energy systems-summary of U.S. workshop / T. A. Taiwo, H. S. Khalil. Nuclear data needs for the assessment of gen. IV systems / G. Rimpault. Nuclear data needs for generation IV-lessons from benchmarks / S. C. van der Marck, A. Hogenbirk, M. C. Duijvestijn. Core design issues of the supercritical water fast reactor / M. Mori ... [et al.]. GFR core neutronics studies at CEA / J. C. Bosq ... [et al.]. Comparative study on different phonon frequency spectra of graphite in GCR / Young-Sik Cho ... [et al.]. Innovative fuel types for minor actinides transmutation / D. Haas, A. Fernandez, J. Somers. The importance of nuclear data in modeling and designing generation IV fast reactors / K. D. Weaver. The GIF and Mexico-"everything is possible" / C. Arrenondo Sánchez -- Benchmarks, sensitivity calculations, uncertainties. Sensitivity of advanced reactor and fuel cycle performance parameters to nuclear data uncertainties / G. Aliberti ... [et al.]. Sensitivity and uncertainty study for thermal molten salt reactors / A. Biduad ... [et al.]. Integral reactor physics benchmarks - The International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPHEP) / J. B. Briggs, D. W. Nigg, E. Sartori. Computer model of an error propagation through micro-campaign of fast neutron gas cooled nuclear reactor / E. Ivanov. Combining differential and integral experiments on [symbol] for reducing uncertainties in nuclear data applications / T. Kawano ... [et al.]. Sensitivity of activation cross sections of the Hafnium, Tantalum and Tungsten stable isotopes to nuclear reaction mechanisms / V. Avrigeanu ... [et al.]. Generating covariance data with nuclear models / A. J. Koning. Sensitivity of Candu-SCWR reactors physics calculations to nuclear data files / K. S. Kozier, G. R. Dyck. The lead cooled fast reactor benchmark BREST-300: analysis with sensitivity method / V. Smirnov ... [et al.]. Sensitivity analysis of neutron cross-sections considered for design and safety studies of LFR and SFR generation IV systems / K. Tucek, J. Carlsson, H. Wider -- Experiments. INL capabilities for nuclear data measurements using the Argonne intense pulsed neutron source facility / J. D. Cole ... [et al.]. Cross-section measurements in the fast neutron energy range / A. Plompen. Recent measurements of neutron capture cross sections for minor actinides by a JNC and Kyoto University Group / H. Harada ... [et al.]. Determination of minor actinides fission cross sections by means of transfer reactions / M. Aiche ... [et al.] -- Evaluated data libraries. Nuclear data services from the NEA / H. Henriksson, Y. Rugama. Nuclear databases for energy applications: an IAEA perspective / R. Capote Noy, A. L. Nichols, A. Trkov. Nuclear data evaluation for generation IV / G. Noguère ... [et al.]. Improved evaluations of neutron-induced reactions on americium isotopes / P. Talou ... [et al.]. Using improved ENDF-based nuclear data for candu reactor calculations / J. Prodea. A comparative study on the graphite-moderated reactors using different evaluated nuclear data / Do Heon Kim ... [et al.].

  11. Trace element fluxes during the last 100 years in sediment near a nuclear power plant

    NASA Astrophysics Data System (ADS)

    Bojórquez-Sánchez, S.; Marmolejo-Rodríguez, A. J.; Ruiz-Fernández, A. C.; Sánchez-González, A.; Sánchez-Cabeza, J. A.; Bojórquez-Leyva, H.; Pérez-Bernal, L. H.

    2017-11-01

    The Salada coastal lagoon is located in Veracruz (Mexico) near the Laguna Verde Nuclear Power Plant (LVNPP). Currently, the lagoon receives the cooling waters used in the LVNPP. To evaluate the fluxes and mobilization of trace elements due to human activities in the area, two sediment cores from the coastal flood plains of the Salada Lagoon were analysed. Cores were collected using PVC tubes. The sediment cores were analysed every centimetre for dating (210Pb by alpha detection) and for trace metals using ICP mass spectrometry. The dating of both sediment cores covers the period from 1900 to 2013, which includes the construction of the LVNPP (1970s). The normalized enrichment factor shows enrichment of Ag, As and Cr in both sediment cores. These enrichments correspond to the extent of mining activity (which reached a maximum in the 1900s) and to the geological setting of the coastal zone. The profiles of the element fluxes in both sediment cores reflected the construction and operation of the LVNPP; however, the element contents did not show evidence of pollution coming from the LVNPP.
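    A hedged sketch of a simple 210Pb age model (CF:CS: constant flux, constant sedimentation) of the kind commonly used with alpha-counted 210Pb profiles, followed by a trace-element flux estimate. The profile, bulk density and concentration values are synthetic; the abstract does not specify which dating model the authors used.

    ```python
    import numpy as np

    lam = np.log(2) / 22.3                       # 210Pb decay constant, 1/yr
    depth_cm = np.arange(1, 11, dtype=float)     # 1-cm slices
    noise = 1 + 0.05 * np.random.default_rng(2).normal(size=10)
    excess_pb210 = 80.0 * np.exp(-0.25 * depth_cm) * noise   # synthetic unsupported 210Pb

    slope, intercept = np.polyfit(depth_cm, np.log(excess_pb210), 1)
    sed_rate_cm_yr = -lam / slope                # CF:CS sedimentation rate
    print(f"sedimentation rate ~ {sed_rate_cm_yr:.3f} cm/yr")
    print(f"age at {depth_cm[-1]:.0f} cm ~ {depth_cm[-1] / sed_rate_cm_yr:.0f} yr")

    dry_bulk_density = 0.8                       # g/cm3 (assumed)
    mar = sed_rate_cm_yr * dry_bulk_density      # mass accumulation rate, g/cm2/yr
    as_conc_ug_g = 12.0                          # hypothetical As concentration, ug/g
    print(f"As flux ~ {as_conc_ug_g * mar:.2f} ug/cm2/yr")
    ```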

  12. Alignment methods: strategies, challenges, benchmarking, and comparative overview.

    PubMed

    Löytynoja, Ari

    2012-01-01

    Comparative evolutionary analyses of molecular sequences are solely based on the identities and differences detected between homologous characters. Errors in this homology statement, that is errors in the alignment of the sequences, are likely to lead to errors in the downstream analyses. Sequence alignment and phylogenetic inference are tightly connected and many popular alignment programs use the phylogeny to divide the alignment problem into smaller tasks. They then neglect the phylogenetic tree, however, and produce alignments that are not evolutionarily meaningful. The use of phylogeny-aware methods reduces the error but the resulting alignments, with evolutionarily correct representation of homology, can challenge the existing practices and methods for viewing and visualising the sequences. The inter-dependency of alignment and phylogeny can be resolved by joint estimation of the two; methods based on statistical models allow for inferring the alignment parameters from the data and correctly take into account the uncertainty of the solution but remain computationally challenging. Widely used alignment methods are based on heuristic algorithms and unlikely to find globally optimal solutions. The whole concept of one correct alignment for the sequences is questionable, however, as there typically exist vast numbers of alternative, roughly equally good alignments that should also be considered. This uncertainty is hidden by many popular alignment programs and is rarely correctly taken into account in the downstream analyses. The quest for finding and improving the alignment solution is complicated by the lack of suitable measures of alignment goodness. The difficulty of comparing alternative solutions also affects benchmarks of alignment methods and the results strongly depend on the measure used. As the effects of alignment error cannot be predicted, comparing the alignments' performance in downstream analyses is recommended.

  13. Parallel Agent-Based Simulations on Clusters of GPUs and Multi-Core Processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaby, Brandon G; Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    An effective latency-hiding mechanism is presented in the parallelization of agent-based model simulations (ABMS) with millions of agents. The mechanism is designed to accommodate the hierarchical organization as well as heterogeneity of current state-of-the-art parallel computing platforms. We use it to explore the computation vs. communication trade-off continuum available with the deep computational and memory hierarchies of extant platforms and present a novel analytical model of the tradeoff. We describe our implementation and report preliminary performance results on two distinct parallel platforms suitable for ABMS: CUDA threads on multiple, networked graphical processing units (GPUs), and pthreads on multi-core processors. Message Passing Interface (MPI) is used for inter-GPU as well as inter-socket communication on a cluster of multiple GPUs and multi-core processors. Results indicate the benefits of our latency-hiding scheme, delivering as much as over 100-fold improvement in runtime for certain benchmark ABMS application scenarios with several million agents. This speed improvement is obtained on our system that is already two to three orders of magnitude faster on one GPU than an equivalent CPU-based execution in a popular simulator in Java. Thus, the overall execution of our current work is over four orders of magnitude faster when executed on multiple GPUs.

  14. Taming Wild Horses: The Need for Virtual Time-based Scheduling of VMs in Network Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J

    2012-01-01

    The next generation of scalable network simulators employs virtual machines (VMs) to act as high-fidelity models of traffic producer/consumer nodes in simulated networks. However, network simulations could be inaccurate if VMs are not scheduled according to virtual time, especially when many VMs are hosted per simulator core in a multi-core simulator environment. Since VMs are by default free-running, at the outset it is not clear if, and to what extent, their untamed execution affects the results in simulated scenarios. Here, we provide the first quantitative basis for establishing the need for generalized virtual-time scheduling of VMs in network simulators, based on actual prototyped implementations. To exercise breadth, our system is tested with multiple disparate applications: (a) a set of message-passing parallel programs, (b) a computer worm propagation phenomenon, and (c) a mobile ad-hoc wireless network simulation. We define and use error metrics and benchmarks in scaled tests to empirically report the poor match of traditional, fairness-based VM scheduling to VM-based network simulation, and also to clearly show the better performance of our simulation-specific scheduler, with up to 64 VMs hosted on a 12-core simulator node.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, J. Grant, E-mail: grant.hill@sheffield.ac.uk, E-mail: kipeters@wsu.edu; Peterson, Kirk A., E-mail: grant.hill@sheffield.ac.uk, E-mail: kipeters@wsu.edu

    New correlation consistent basis sets, cc-pVnZ-PP-F12 (n = D, T, Q), for all the post-d main group elements Ga–Rn have been optimized for use in explicitly correlated F12 calculations. The new sets, which include not only orbital basis sets but also the matching auxiliary sets required for density fitting both conventional and F12 integrals, are designed for correlation of the valence sp as well as the outer-core d electrons. The basis sets are constructed for use with the previously published small-core relativistic pseudopotentials of the Stuttgart-Cologne variety. Benchmark explicitly correlated coupled-cluster singles and doubles with perturbative triples [CCSD(T)-F12b] calculations of the spectroscopic properties of numerous diatomic molecules involving 4p, 5p, and 6p elements have been carried out and compared to the analogous conventional CCSD(T) results. In general, the F12 results obtained with an n-zeta F12 basis set were comparable to conventional aug-cc-pVxZ-PP or aug-cc-pwCVxZ-PP basis set calculations obtained with x = n + 1 or even x = n + 2. The new sets used in CCSD(T)-F12b calculations are particularly efficient at accurately recovering the large correlation effects of the outer-core d electrons.

  16. Constraining axion-like-particles with hard X-ray emission from magnetars

    NASA Astrophysics Data System (ADS)

    Fortin, Jean-François; Sinha, Kuver

    2018-06-01

    Axion-like particles (ALPs) produced in the core of a magnetar will convert to photons in the magnetosphere, leading to possible signatures in the hard X-ray band. We perform a detailed calculation of the ALP-to-photon conversion probability in the magnetosphere, recasting the coupled differential equations that describe ALP-photon propagation into a form that is efficient for large scale numerical scans. We show the dependence of the conversion probability on the ALP energy, mass, ALP-photon coupling, magnetar radius, surface magnetic field, and the angle between the magnetic field and direction of propagation. Along the way, we develop an analytic formalism to perform similar calculations in more general n-state oscillation systems. Assuming ALP emission rates from the core that are just subdominant to neutrino emission, we calculate the resulting constraints on the ALP mass versus ALP-photon coupling space, taking SGR 1806-20 as an example. In particular, we take benchmark values for the magnetar radius and core temperature, and constrain the ALP parameter space by the requirement that the luminosity from ALP-to-photon conversion should not exceed the total observed luminosity from the magnetar. The resulting constraints are competitive with constraints from helioscope experiments in the relevant part of ALP parameter space.
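    A schematic numerical version of the two-state ALP-photon mixing mentioned above: evolve the amplitudes with psi(z) = exp(-i M z) psi(0) for a locally constant mixing matrix M and read off the conversion probability. The Delta coefficients below are arbitrary placeholders in units of inverse length, not magnetar parameters from the paper, and the uniform-field assumption is a simplification of the full calculation.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def conversion_probability(delta_par, delta_a, delta_ag, z):
        """P(a -> photon) after propagating a distance z through a uniform field region."""
        M = np.array([[delta_par, delta_ag],
                      [delta_ag,  delta_a]], dtype=complex)
        U = expm(-1j * M * z)          # evolution operator for i d/dz psi = M psi
        a_state = np.array([0.0, 1.0], dtype=complex)   # start as a pure ALP
        photon_amp = (U @ a_state)[0]
        return abs(photon_amp) ** 2

    # Placeholder numbers; in the weak-mixing limit this tends to (delta_ag * z)^2.
    print(conversion_probability(delta_par=1.0e-3, delta_a=-2.0e-3, delta_ag=5.0e-5, z=100.0))
    ```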

  17. Trailing Vortex Measurements in the Wake of a Hovering Rotor Blade with Various Tip Shapes

    NASA Technical Reports Server (NTRS)

    Martin, Preston B.; Leishman, J. Gordon

    2003-01-01

    This work examined the wake aerodynamics of a single helicopter rotor blade with several tip shapes operating on a hover test stand. Velocity field measurements were conducted using three-component laser Doppler velocimetry (LDV). The objective of these measurements was to document the vortex velocity profiles and then extract the core properties, such as the core radius, peak swirl velocity, and axial velocity. The measured test cases covered a wide range of wake-ages and several tip shapes, including rectangular, tapered, swept, and a subwing tip. One of the primary differences shown by the change in tip shape was the wake geometry. The effect of blade taper reduced the initial peak swirl velocity by a significant fraction. It appears that this is accomplished by decreasing the vortex strength for a given blade loading. The subwing measurements showed that the interaction and merging of the subwing and primary vortices created a less coherent vortical structure. A source of vortex core instability is shown to be the ratio of the peak swirl velocity to the axial velocity deficit. The results show that if there is a turbulence producing region of the vortex structure, it will be outside of the core boundary. The LDV measurements were supported by laser light-sheet flow visualization. The results provide several benchmark test cases for future validation of theoretical vortex models, numerical free-wake models, and computational fluid dynamics results.
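    As a hedged sketch of one common way to reduce LDV swirl-velocity data like that described above: fit a Lamb-Oseen profile to extract the core radius and peak swirl velocity. The "measurements" here are synthetic, and the authors' actual reduction procedure may differ.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lamb_oseen(r, gamma, rc):
        """V_theta(r) = Gamma/(2 pi r) * (1 - exp(-alpha r^2/rc^2)); alpha puts the peak at r = rc."""
        alpha = 1.25643
        return gamma / (2.0 * np.pi * r) * (1.0 - np.exp(-alpha * r**2 / rc**2))

    rng = np.random.default_rng(3)
    r = np.linspace(0.01, 0.5, 60)                       # radial positions (m)
    v = lamb_oseen(r, gamma=1.2, rc=0.04) + rng.normal(0.0, 0.05, r.size)

    (gamma_fit, rc_fit), _ = curve_fit(lamb_oseen, r, v, p0=(1.0, 0.05))
    v_peak = lamb_oseen(rc_fit, gamma_fit, rc_fit)       # peak swirl occurs at the core radius
    print(f"core radius ~ {rc_fit:.3f} m, peak swirl ~ {v_peak:.2f} m/s")
    ```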

  18. Virtualizing Super-Computation On-Board Uas

    NASA Astrophysics Data System (ADS)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the gathered data to be downlinked to the ground in real time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on board UAS as a method to ease operations by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware must satisfy size, power and weight constraints. Several technologies with promising results for high-performance computing on unmanned platforms are appearing, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on board includes benchmarking for hardware selection, the software architecture and the communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or similar target-detection applications. Results are obtained for payload image-processing algorithms and determine in real time the data snapshot to gather and transfer to the ground according to the needs of the mission, the processing time, and the consumed watts.

  19. Experimental detailed power distribution in a fast spectrum thermionic reactor fuel element at the core/BeO reflector interface region

    NASA Technical Reports Server (NTRS)

    Klann, P. G.; Lantz, E.

    1973-01-01

    A zero-power critical assembly was designed, constructed, and operated for the purpose of conducting a series of benchmark experiments dealing with the physics characteristics of a UN-fueled, Li-7-cooled, Mo-reflected, drum-controlled compact fast reactor for use with a space-power conversion system. The critical assembly was modified to simulate a fast spectrum advanced thermionics reactor by: (1) using BeO as a reflector in place of some of the existing molybdenum, (2) substituting Nb-1Zr tubing for some of the existing Ta tubing, and (3) inserting four full-scale mockups of thermionic-type fuel elements near the core and BeO reflector boundary. These mockups were surrounded by a buffer zone having the equivalent thermionic core composition. In addition to measuring the critical mass of this thermionic configuration, a detailed power distribution in one of the thermionic element stages in the mixed spectrum region was measured. A peak-to-average power ratio of two was observed for this fuel stage at the midplane of the core and adjacent to the reflector. Also, the power on the outer surface adjacent to the BeO was slightly more than a factor of two larger than the power on the inside surface of a 5.08 cm (2.0 in.) high annular fuel segment with a 2.52 cm (0.993 in.) o.d. and a 1.86 cm (0.731 in.) i.d.

  20. Clustering of Pan- and Core-genome of Lactobacillus provides Novel Evolutionary Insights for Differentiation.

    PubMed

    Inglin, Raffael C; Meile, Leo; Stevens, Marc J A

    2018-04-24

    Bacterial taxonomy aims to classify bacteria based on true evolutionary events and relies on a polyphasic approach that includes phenotypic, genotypic and chemotaxonomic analyses. Until now, complete genomes are largely ignored in taxonomy. The genus Lactobacillus consists of 173 species and many genomes are available to study taxonomy and evolutionary events. We analyzed and clustered 98 completely sequenced genomes of the genus Lactobacillus and 234 draft genomes of 5 different Lactobacillus species, i.e. L. reuteri, L. delbrueckii, L. plantarum, L. rhamnosus and L. helveticus. The core-genome of the genus Lactobacillus contains 266 genes and the pan-genome 20'800 genes. Clustering of the Lactobacillus pan- and core-genome resulted in two highly similar trees. This shows that evolutionary history is traceable in the core-genome and that clustering of the core-genome is sufficient to explore relationships. Clustering of core- and pan-genomes at species' level resulted in similar trees as well. Detailed analyses of the core-genomes showed that the functional class "genetic information processing" is conserved in the core-genome but that "signaling and cellular processes" is not. The latter class encodes functions that are involved in environmental interactions. Evolution of lactobacilli seems therefore directed by the environment. The type species L. delbrueckii was analyzed in detail and its pan-genome based tree contained two major clades whose members contained different genes yet identical functions. In addition, evidence for horizontal gene transfer between strains of L. delbrueckii, L. plantarum, and L. rhamnosus, and between species of the genus Lactobacillus is presented. Our data provide evidence for evolution of some lactobacilli according to a parapatric-like model for species differentiation. Core-genome trees are useful to detect evolutionary relationships in lactobacilli and might be useful in taxonomic analyses. Lactobacillus' evolution is directed by the environment and HGT.
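    A toy sketch of the core/pan-genome bookkeeping behind trees like those described above: the pan-genome is the union of gene families over all strains, the core genome their intersection. The gene and strain names are invented.

    ```python
    genomes = {
        "strain_1": {"dnaA", "rpoB", "lacZ", "gene_x"},
        "strain_2": {"dnaA", "rpoB", "lacZ", "gene_y"},
        "strain_3": {"dnaA", "rpoB", "gene_y", "gene_z"},
    }

    pan_genome = set().union(*genomes.values())
    core_genome = set.intersection(*genomes.values())
    print("pan-genome size:", len(pan_genome))    # 6 gene families
    print("core-genome size:", len(core_genome))  # 2 shared by every strain

    # A simple distance for gene-content clustering: 1 - Jaccard similarity.
    def gene_content_distance(a, b):
        return 1.0 - len(a & b) / len(a | b)

    print(round(gene_content_distance(genomes["strain_1"], genomes["strain_3"]), 2))
    ```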

  1. Fast Neutron Spectrum Potassium Worth for Space Power Reactor Design Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Marshall, Margaret A.; Briggs, J. Blair

    2015-03-01

    A variety of critical experiments were constructed of enriched uranium metal (oralloy) during the 1960s and 1970s at the Oak Ridge Critical Experiments Facility (ORCEF) in support of criticality safety operations at the Y-12 Plant. The purposes of these experiments included the evaluation of storage, casting, and handling limits for the Y-12 Plant and providing data for verification of calculation methods and cross-sections for nuclear criticality safety applications. These included solid cylinders of various diameters, annuli of various inner and outer diameters, two and three interacting cylinders of various diameters, and graphite and polyethylene reflected cylinders and annuli. Of the hundreds of delayed critical experiments, one was performed that consisted of uranium metal annuli surrounding a potassium-filled, stainless steel can. The outer diameter of the annuli was approximately 13 inches (33.02 cm) with an inner diameter of 7 inches (17.78 cm). The diameter of the stainless steel can was 7 inches (17.78 cm). The critical height of the configurations was approximately 5.6 inches (14.224 cm). The uranium annulus consisted of multiple stacked rings, each with radial thicknesses of 1 inch (2.54 cm) and varying heights. A companion measurement was performed using empty stainless steel cans; the primary purpose of these experiments was to test the fast neutron cross sections of potassium, as it was a candidate coolant in some early space power reactor designs. The experimental measurements were performed on July 11, 1963, by J. T. Mihalczo and M. S. Wyatt (Ref. 1), with additional information in the corresponding logbook. Unreflected and unmoderated experiments with the same set of highly enriched uranium metal parts were performed at the Oak Ridge Critical Experiments Facility in the 1960s and are evaluated in the International Handbook for Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) with the identifier HEU-MET-FAST-051. Thin graphite-reflected (2 inches or less) experiments also using the same set of highly enriched uranium metal parts are evaluated in HEU-MET-FAST-071. Polyethylene-reflected configurations are evaluated in HEU-MET-FAST-076. A stack of highly enriched metal discs with a thick beryllium top reflector is evaluated in HEU-MET-FAST-069, and two additional highly enriched uranium annuli with beryllium cores are evaluated in HEU-MET-FAST-059. Both detailed and simplified model specifications are provided in this evaluation. Both of these fast neutron spectrum assemblies were determined to be acceptable benchmark experiments. The calculated eigenvalues for both the detailed and the simple benchmark models are within ~0.26 % of the benchmark values for Configuration 1 (calculations performed using MCNP6 with ENDF/B-VII.1 neutron cross section data), but under-calculate the benchmark values by ~7σ because the uncertainty in the benchmark is very small: ~0.0004 (1σ); for Configuration 2, the under-calculation is ~0.31 % and ~8σ. Comparison of detailed and simple model calculations for the potassium worth measurement and potassium mass coefficient yields results approximately 70 – 80 % lower (~6σ to 10σ) than the benchmark values for the various nuclear data libraries utilized. Both the potassium worth and mass coefficient are also deemed to be acceptable benchmark experiment measurements.

  2. Core OCD Symptoms: Exploration of Specificity and Relations with Psychopathology

    PubMed Central

    Stasik, Sara M.; Naragon-Gainey, Kristin; Chmielewski, Michael; Watson, David

    2012-01-01

    Obsessive-compulsive disorder (OCD) is a heterogeneous condition, comprised of multiple symptom domains. This study used aggregate composite scales representing three core OCD dimensions (Checking, Cleaning, Rituals), as well as Hoarding, to examine the discriminant validity, diagnostic specificity, and predictive ability of OCD symptom scales. The core OCD scales demonstrated strong patterns of convergent and discriminant validity – suggesting that these dimensions are distinct from other self-reported symptoms – whereas hoarding symptoms correlated just as strongly with OCD and non-OCD symptoms in most analyses. Across analyses, our results indicated that Checking is a particularly strong, specific marker of OCD diagnosis, whereas the specificity of Cleaning and Hoarding to OCD was less strong. Finally, the OCD Checking scale was the only significant predictor of OCD diagnosis in logistic regression analyses. Results are discussed with regard to the importance of assessing OCD symptom dimensions separately and implications for classification. PMID:23026094

  3. Tank 241-T-204, core 188 analytical results for the final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuzum, J.L.

    TANK 241-T-204, CORE 188, ANALYTICAL RESULTS FOR THE FINAL REPORT. This document is the final laboratory report for Tank 241-T-204. Push mode core segments were removed from Riser 3 between March 27, 1997, and April 11, 1997. Segments were received and extruded at the 222-S Laboratory. Analyses were performed in accordance with the Tank 241-T-204 Push Mode Core Sampling and Analysis Plan (TSAP) (Winkleman, 1997), the Letter of Instruction for Core Sample Analysis of Tanks 241-T-201, 241-T-202, 241-T-203, and 241-T-204 (LOI) (Bell, 1997), and the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995). None of the subsamples submitted for total alpha activity (AT) or differential scanning calorimetry (DSC) analyses exceeded the notification limits stated in the DQO. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group and are not considered in this report.

  4. High Resolution Continuous Flow Analysis System for Polar Ice Cores

    NASA Astrophysics Data System (ADS)

    Dallmayr, Remi; Azuma, Kumiko; Yamada, Hironobu; Kjær, Helle Astrid; Vallelonga, Paul; Azuma, Nobuhiko; Takata, Morimasa

    2014-05-01

    In the last decades, Continuous Flow Analysis (CFA) technology for ice core analyses has been developed to reconstruct the past changes of the climate system 1), 2). Compared with traditional analyses of discrete samples, a CFA system offers much faster and higher depth resolution analyses. It also generates a decontaminated sample stream without time-consuming sample processing procedure by using the inner area of an ice-core sample.. The CFA system that we have been developing is currently able to continuously measure stable water isotopes 3) and electrolytic conductivity, as well as to collect discrete samples for the both inner and outer areas with variable depth resolutions. Chemistry analyses4) and methane-gas analysis 5) are planned to be added using the continuous water stream system 5). In order to optimize the resolution of the current system with minimal sample volumes necessary for different analyses, our CFA system typically melts an ice core at 1.6 cm/min. Instead of using a wire position encoder with typical 1mm positioning resolution 6), we decided to use a high-accuracy CCD Laser displacement sensor (LKG-G505, Keyence). At the 1.6 cm/min melt rate, the positioning resolution was increased to 0.27mm. Also, the mixing volume that occurs in our open split debubbler is regulated using its weight. The overflow pumping rate is smoothly PID controlled to maintain the weight as low as possible, while keeping a safety buffer of water to avoid air bubbles downstream. To evaluate the system's depth-resolution, we will present the preliminary data of electrolytic conductivity obtained by melting 12 bags of the North Greenland Eemian Ice Drilling (NEEM) ice core. The samples correspond to different climate intervals (Greenland Stadial 21, 22, Greenland Stadial 5, Greenland Interstadial 5, Greenland Interstadial 7, Greenland Stadial 8). We will present results for the Greenland Stadial -8, whose depths and ages are between 1723.7 and 1724.8 meters, and 35.520 to 35.636 kyr b2k 7), respectively. The results show the conductivity measured upstream and downstream of the debubbler. We will calculate the depth resolution of our system and compare it with earlier studies. 1) Bigler at al, "Optimization of High-Resolution Continuous Flow Analysis For Transient Climate Signals in Ice Cores". Environ. Sci. Technol. 2011, 45, 4483-4489 2) Kaufmann et al, "An Improved Continuous Flow Analysis System for High Resolution Field Measurements on Ice Cores". Environmental Environ. Sci. Technol. 2008, 42, 8044-8050 3) Gkinis, V., T. J. Popp, S. J. Johnsen and T, Blunier, 2010: A continuous stream flash evaporator for the calibration of an IR cavity ring down spectrometer for the isotopic analysis of water. Isotopes in Environmental and Health Studies, 46(4), 463-475. 4) McConnell et al, "Continuous ice-core chemical analyses using inductively coupled plasma mass spectrometry. Environ. Sci. Technol. 2002, 36, 7-11 5) Rhodes et al, "Continuous methane measurements from a late Holocene Greenland ice core : Atmospheric and in-situ signals" Earth and Planetary Science Letters. 2013, 368, 9-19 6) Breton et al, "Quantifying Signal Dispersion in a Hybrid Ice Core Melting System". Environ. Sci. Technol. 2012, 46, 11922-11928 7) Rasmussen et al, " A first chronology for the NEEM ice core". Climate of the Past. 2013, 9, 2967--3013
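    A minimal sketch of the weight-feedback idea described above: a proportional-integral loop (the derivative term is omitted for brevity) drives the overflow pump so that the debubbler weight settles at a small setpoint while melt water flows in. The gains, rates and the toy "plant" below are invented for illustration only, not the parameters of the NEEM system.

    ```python
    kp, ki = 0.5, 0.05                            # assumed controller gains
    setpoint_g, inflow_g_s = 5.0, 0.8             # target buffer weight and melt inflow (toy values)
    weight_g, integral, dt = 12.0, 0.0, 1.0

    for _ in range(200):
        error = weight_g - setpoint_g             # positive when the buffer is too heavy
        integral += error * dt
        pump_g_s = min(max(kp * error + ki * integral, 0.0), 2.5)  # pump rate clamped to its range
        weight_g += (inflow_g_s - pump_g_s) * dt  # simple mass balance of the debubbler
    print(round(weight_g, 2))                     # converges close to the 5 g setpoint
    ```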

  5. FDNS CFD Code Benchmark for RBCC Ejector Mode Operation

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Ruf, Joe

    1999-01-01

    Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.

  6. Thermodynamic analyses of a biomass-coal co-gasification power generation system.

    PubMed

    Yan, Linbo; Yue, Guangxi; He, Boshu

    2016-04-01

    A novel chemical looping power generation system based on biomass-coal co-gasification with steam is presented. The effects of key operation parameters, including the biomass mass fraction (Rb), steam to carbon mole ratio (Rsc), gasification temperature (Tg) and iron to fuel mole ratio (Rif), on system performance metrics such as energy efficiency (ηe), total energy efficiency (ηte), exergy efficiency (ηex), total exergy efficiency (ηtex) and carbon capture rate (ηcc) are analyzed. A benchmark condition is set, under which ηte, ηtex and ηcc are found to be 39.9%, 37.6% and 96.0%, respectively. Furthermore, a detailed energy Sankey diagram and an exergy Grassmann diagram are drawn for the entire system operating under the benchmark condition. The energy and exergy efficiencies of the units composing the system are also predicted. Copyright © 2016 Elsevier Ltd. All rights reserved.
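
    As a point of reference for the metrics named above, the following sketch spells out common textbook definitions of energy efficiency, exergy efficiency and carbon capture rate; the paper's exact formulations are not given in the abstract, so these definitions and all numbers are assumptions for illustration only.

        # Illustrative only: common definitions of the efficiency metrics named in the
        # abstract; the paper's exact formulations may differ. Numbers are made up.

        def energy_efficiency(net_power_mw, fuel_energy_input_mw):
            """eta_e: net electrical output divided by fuel energy input."""
            return net_power_mw / fuel_energy_input_mw

        def exergy_efficiency(net_power_mw, fuel_exergy_input_mw):
            """eta_ex: net electrical output divided by fuel exergy input."""
            return net_power_mw / fuel_exergy_input_mw

        def carbon_capture_rate(captured_co2_kg_s, generated_co2_kg_s):
            """eta_cc: fraction of the generated CO2 that is captured."""
            return captured_co2_kg_s / generated_co2_kg_s

        print(round(energy_efficiency(400.0, 1000.0), 3))    # 0.4
        print(round(exergy_efficiency(400.0, 1050.0), 3))    # 0.381
        print(round(carbon_capture_rate(96.0, 100.0), 3))    # 0.96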

  7. Genetic diversity and structure of core collection of winter mushroom (Flammulina velutipes) developed by genomic SSR markers.

    PubMed

    Liu, Xiao Bin; Li, Jing; Yang, Zhu L

    2018-01-01

    A core collection is a subset of an entire collection that represents as much of the genetic diversity of the entire collection as possible. The establishment of a core collection for crops is practical for efficient management and use of germplasm. However, the establishment of core collections for mushrooms is still in its infancy, and no core collection of the economically important species Flammulina velutipes has been reported. We established the first core collection of F. velutipes, containing 32 strains selected from 81 genetically different F. velutipes strains. The allele retention proportion of the core collection relative to the entire collection was 100%. Moreover, the genetic diversity parameters (the effective number of alleles, Nei's expected heterozygosity, observed heterozygosity, and Shannon's information index) of the core collection showed no significant differences from the entire collection (p > 0.01). Thus, the core collection is representative of the genetic diversity of the entire collection. Genetic structure analyses of the core collection revealed that the 32 strains could be clustered into 6 groups, among which groups 1 to 3 were cultivars and groups 4 to 6 were wild strains. The wild strains from different locations harbor their own specific alleles and were clustered strictly in accordance with their geographic origins. Genetic diversity analyses of the core collection revealed that the wild strains possessed greater genetic diversity than the cultivars. We established the first core collection of F. velutipes in China, which is an important platform for efficient breeding of this mushroom in the future. In addition, the wild strains in the core collection possess favorable agronomic characters and produce unique bioactive compounds, adding value to the platform. More attention should be paid to wild strains in further strain breeding.
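
    The diversity statistics compared above are standard population-genetics quantities; the sketch below shows how they could be computed per SSR locus from allele counts. The allele lists are toy data, not the study's genotypes, and the per-locus formulas are the usual textbook ones rather than anything quoted from the paper.

        # Sketch of the diversity statistics named in the abstract, computed for one
        # SSR locus from a list of observed alleles. Input data are toy examples.
        from collections import Counter
        from math import log

        def diversity_stats(alleles):
            n = len(alleles)
            freqs = [c / n for c in Counter(alleles).values()]
            he = 1.0 - sum(p * p for p in freqs)        # Nei's expected heterozygosity
            ne = 1.0 / sum(p * p for p in freqs)        # effective number of alleles
            shannon = -sum(p * log(p) for p in freqs)   # Shannon's information index
            return {"He": round(he, 3), "Ne": round(ne, 3), "I": round(shannon, 3)}

        entire = ["a1", "a1", "a2", "a3", "a3", "a3", "a4", "a2"]   # entire collection (toy)
        core = ["a1", "a2", "a3", "a3", "a4"]                       # core subset (toy)
        retention = len(set(core)) / len(set(entire))               # allele retention proportion
        print(diversity_stats(entire), diversity_stats(core), retention)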

  8. Benchmarks for health expenditures, services and outcomes in Africa during the 1990s.

    PubMed Central

    Peters, D. H.; Elmendorf, A. E.; Kandola, K.; Chellaraj, G.

    2000-01-01

    There is limited information on national health expenditures, services, and outcomes in African countries during the 1990s. We intend to make statistical information available for national level comparisons. National level data were collected from numerous international databases, and supplemented by national household surveys and World Bank expenditure reviews. The results were tabulated and analysed in an exploratory fashion to provide benchmarks for groupings of African countries and individual country comparison. There is wide variation in scale and outcome of health care spending between African countries, with poorer countries tending to do worse than wealthier ones. From 1990-96, the median annual per capita government expenditure on health was nearly US$ 6, but averaged US$ 3 in the lowest-income countries, compared to US$ 72 in middle-income countries. Similar trends were found for health services and outcomes. Results from individual countries (particularly Ethiopia, Ghana, Côte d'Ivoire and Gabon) are used to indicate how the data can be used to identify areas of improvement in health system performance. Serious gaps in data, particularly concerning private sector delivery and financing, health service utilization, equity and efficiency measures, hinder more effective health management. Nonetheless, the data are useful for providing benchmarks for performance and for crudely identifying problem areas in health systems for individual countries. PMID:10916913

  9. Development and Validation of a High-Quality Composite Real-World Mortality Endpoint.

    PubMed

    Curtis, Melissa D; Griffith, Sandra D; Tucker, Melisa; Taylor, Michael D; Capra, William B; Carrigan, Gillis; Holzman, Ben; Torres, Aracelis Z; You, Paul; Arnieri, Brandon; Abernethy, Amy P

    2018-05-14

    To create a high-quality electronic health record (EHR)-derived mortality dataset for retrospective and prospective real-world evidence generation. Oncology EHR data, supplemented with external commercial and US Social Security Death Index data, were benchmarked to the National Death Index (NDI). We developed a recent, linkable, high-quality mortality variable amalgamated from multiple data sources to supplement EHR data, benchmarked against the most complete source of US mortality data, the NDI. Data quality of the mortality variable version 2.0 is reported here. For advanced non-small-cell lung cancer, sensitivity of mortality information improved from 66 percent in EHR structured data to 91 percent in the composite dataset, with high date agreement compared to the NDI. For advanced melanoma, metastatic colorectal cancer, and metastatic breast cancer, sensitivity of the final variable was 85 to 88 percent. Kaplan-Meier survival analyses showed that improving mortality data completeness minimized overestimation of survival relative to NDI-based estimates. For EHR-derived data to yield reliable real-world evidence, they need to be of known and sufficiently high quality. Given the impact of mortality data completeness on survival endpoints, we highlight the importance of data quality assessment and advocate benchmarking to the NDI. © 2018 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
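
    The sensitivity and date-agreement figures quoted above come from comparing the composite variable against a gold-standard death file. A minimal sketch of that comparison is given below; the record structure, identifiers, dates and the 15-day agreement window are assumptions, not details taken from the study.

        # Hypothetical sketch: sensitivity of a composite mortality variable against a
        # gold-standard (NDI-like) file, plus death-date agreement within a tolerance.
        from datetime import date

        gold = {"p1": date(2016, 3, 2), "p2": date(2016, 7, 15), "p3": date(2017, 1, 9)}
        composite = {"p1": date(2016, 3, 5), "p3": date(2017, 1, 9)}   # p2 is missed

        captured = [pid for pid in gold if pid in composite]
        sensitivity = len(captured) / len(gold)
        agree_15d = sum(abs((composite[p] - gold[p]).days) <= 15 for p in captured) / len(captured)

        print(f"sensitivity = {sensitivity:.0%}, date agreement within 15 days = {agree_15d:.0%}")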

  10. Listening to the occupants: a Web-based indoor environmental quality survey.

    PubMed

    Zagreus, Leah; Huizenga, Charlie; Arens, Edward; Lehrer, David

    2004-01-01

    Building occupants are a rich source of information about indoor environmental quality and its effect on comfort and productivity. The Center for the Built Environment has developed a Web-based survey and accompanying online reporting tools to quickly and inexpensively gather, process and present this information. The core questions assess occupant satisfaction with the following IEQ areas: office layout, office furnishings, thermal comfort, indoor air quality, lighting, acoustics, and building cleanliness and maintenance. The survey can be used to assess the performance of a building, identify areas needing improvement, and provide useful feedback to designers and operators about specific aspects of building design features and operating strategies. The survey has been extensively tested and refined and has been conducted in more than 70 buildings, creating a rapidly growing database of standardized survey data that is used for benchmarking. We present three case studies that demonstrate different applications of the survey: a pre/post analysis of occupants moving to a new building, a survey used in conjunction with physical measurements to determine how environmental factors affect occupants' perceived comfort and productivity levels, and a benchmarking example of using the survey to establish how new buildings are meeting a client's design objectives. In addition to its use in benchmarking a building's performance against other buildings, the CBE survey can be used as a diagnostic tool to identify specific problems and their sources. Whenever a respondent indicates dissatisfaction with an aspect of building performance, a branching page follows with more detailed questions about the nature of the problem. This systematically collected information provides a good resource for solving indoor environmental problems in the building. By repeating the survey after a problem has been corrected it is also possible to assess the effectiveness of the solution.

  11. Interaction of the core protein of classical swine fever virus with the cellular IQGAP1 protein appears essential for virulence in swine

    USDA-ARS?s Scientific Manuscript database

    Here we show that IQGAP1, a cellular protein that plays a pivotal role as a regulator of the cytoskeleton affecting cell adhesion, polarization and migration, interacts with Classical Swine Fever Virus (CSFV) Core protein. Sequence analyses identified a defined set of residues within CSFV Core prote...

  12. What Is the Canon in American Politics? Analyses of Core Graduate Syllabi

    ERIC Educational Resources Information Center

    Diament, Sean M.; Howat, Adam J.; Lacombe, Matthew J.

    2017-01-01

    Many core graduate-level seminars claim to expose students to their discipline's "canon." The contents of this canon, however, can and do differ across departments and instructors. This project employs a survey of core American politics PhD seminar syllabi at highly ranked universities to construct a systematic account of the American…

  13. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  14. Palaeointensity, core thermal conductivity and the unknown age of the inner core

    NASA Astrophysics Data System (ADS)

    Smirnov, Aleksey V.; Tarduno, John A.; Kulakov, Evgeniy V.; McEnroe, Suzanne A.; Bono, Richard K.

    2016-05-01

    Data on the evolution of Earth's magnetic field intensity are important for understanding the geodynamo and planetary evolution. However, the paleomagnetic record in rocks may be adversely affected by many physical processes, which must be taken into account when analysing the palaeointensity database. This is especially important in the light of an ongoing debate regarding core thermal conductivity values, and how these relate to the Precambrian geodynamo. Here, we demonstrate that several data sets in the Precambrian palaeointensity database overestimate the true paleofield strength due to the presence of non-ideal carriers of palaeointensity signals and/or viscous re-magnetizations. When the palaeointensity overestimates are removed, the Precambrian database does not indicate a robust change in geomagnetic field intensity during the Mesoproterozoic. These findings call into question the recent claim that the solid inner core formed in the Mesoproterozoic, hence constraining the thermal conductivity in the core to `moderate' values. Instead, our analyses indicate that the presently available palaeointensity data are insufficient in number and quality to constrain the timing of solid inner core formation, or the outstanding problem of core thermal conductivity. Very young or very old inner core ages (and attendant high or low core thermal conductivity values) are consistent with the presently known history of Earth's field strength. More promising available data sets that reflect long-term core structure are geomagnetic reversal rate and field morphology. The latter suggests changes that may reflect differences in Archean to Proterozoic core stratification, whereas the former suggest an interval of geodynamo hyperactivity at ca. 550 Ma.

  15. Paleoarchean and Cambrian observations of the geodynamo in light of new estimates of core thermal conductivity

    NASA Astrophysics Data System (ADS)

    Tarduno, John; Bono, Richard; Cottrell, Rory

    2015-04-01

    Recent estimates of core thermal conductivity are larger than prior values by a factor of approximately three. These new estimates suggest that the inner core is a relatively young feature, perhaps as young as 500 million years old, and that the core-mantle heat flux required to drive the early dynamo was greater than previously assumed (Nimmo, 2015). Here, we focus on paleomagnetic studies of two key time intervals important for understanding core evolution in light of the revisions of core conductivity values. 1. Hadean to Paleoarchean (4.4-3.4 Ga). Single silicate crystal paleointensity analyses suggest a relatively strong magnetic field at 3.4-3.45 Ga (Tarduno et al., 2010). Paleointensity data from zircons of the Jack Hills (Western Australia) further suggest the presence of a geodynamo between 3.5 and 3.6 Ga (Tarduno and Cottrell, 2014). We will discuss our efforts to test for the absence/presence of the geodynamo in older Eoarchean and Hadean times. 2. Ediacaran to Early Cambrian (~635-530 Ma). Disparate directions seen in some paleomagnetic studies from this time interval have been interpreted as recording inertial interchange true polar wander (IITPW). Recent single silicate paleomagnetic analyses fail to find evidence for IITPW; instead, a reversing field overprinted by secondary magnetizations is defined (Bono and Tarduno, 2015). Preliminary analyses suggest the field may have been unusually weak. We will discuss our on-going tests of the hypothesis that this interval represents the time of onset of inner core growth. References: Bono, R.K. & Tarduno, J.A., Geology, in press (2015); Nimmo, F., Treatise Geophys., in press (2015); Tarduno, J.A., et al., Science (2010); Tarduno, J.A. & Cottrell, R.D., AGU Fall Meeting (2014).

  16. Biological and geochemical data of gravity cores from Mobile Bay, Alabama

    USGS Publications Warehouse

    Richwine, Kathryn A.; Marot, Marci; Smith, Christopher G.; Osterman, Lisa E.; Adams, C. Scott

    2013-01-01

    A study was conducted to understand the marine-influenced environments of Mobile Bay, Alabama, by collecting a series of box cores and gravity cores. One gravity core in particular provides a long record of changing paleoenvironmental parameters in Mobile Bay. Because of the low abundance and (or) low diversity of foraminifers, the benthic foraminiferal data for two of the three gravity cores are not included in the results. The benthic foraminiferal data and geochemical analyses collected in this study provide a baseline for recent changes in the bay.

  17. An Ice Core Melter System for Continuous Major and Trace Chemical Analyses of a New Mt. Logan Summit Ice Core

    NASA Astrophysics Data System (ADS)

    Osterberg, E. C.; Handley, M. J.; Sneed, S. D.; Mayewski, P. A.; Kreutz, K. J.; Fisher, D. A.

    2004-12-01

    The ice core melter system at the University of Maine Climate Change Institute has been recently modified and updated to allow high-resolution (<1-2 cm ice/sample), continuous and coregistered sampling of ice cores, most notably the 2001 Mt. Logan summit ice core (187 m to bedrock), for analyses of 34 trace elements (Sr, Cd, Sb, Cs, Ba, Pb, Bi, U, As, Al, S, Ca, Ti, V, Cr, Mn, Fe, Co, Cu, Zn, REE suite) by inductively coupled plasma mass spectrometry (ICP-MS), 8 major ions (Na+, Ca2+, Mg2+, K+, Cl-, SO42-, NO3-, MSA) by ion chromatography (IC), stable water isotopes (δ 18O, δ D, d) and volcanic tephra. The UMaine continuous melter (UMCoM) system is housed in a dedicated clean room with HEPA filtered air. Standard clean room procedures are employed during melting. A Wagenbach-style continuous melter system has been modified to include a pure Nickel melthead that can be easily dismantled for thorough cleaning. The system allows melting of both ice and firn without wicking of the meltwater into unmelted core. Contrary to ice core melter systems in which the meltwater is directly channeled to online instruments for continuous flow analyses, the UMCoM system collects discrete samples for each chemical analysis under ultraclean conditions. Meltwater from the pristine innermost section of the ice core is split between one fraction collector that accumulates ICP-MS samples in acid pre-cleaned polypropylene vials under a class-100 HEPA clean bench, and a second fraction collector that accumulates IC samples. A third fraction collector accumulates isotope and tephra samples from the potentially contaminated outer portion of the core. This method is advantageous because an archive of each sample remains for subsequent analyses (including trace element isotope ratios), and ICP-MS analytes are scanned for longer intervals and in replicate. Method detection limits, calculated from de-ionized water blanks passed through the entire UMCoM system, are below 10% of average Mt. Logan values. A strong correlation (R2>0.9) between Ca and S concentrations measured on different fractions of the same sample by IC and ICP-MS validates sample coregistration. Preliminary analyses of data from the 2001 Mt. Logan summit ice core confirm subannual resolution sampling and annual scale variability of major and trace elements. Accumulation rate models and isotope data suggest that annual resolution will be possible to 1000-2000 y.b.p., with multi-annual to centennial resolution for the remainder of the Holocene and possibly including the last deglaciation. Dust proxy elements, including REEs, strongly co-vary in time-series and reveal concentration ratio fluctuations interpreted as source region changes. Volcanic eruptions are characterized by elevated concentrations of S, SO42-, Cu, Sb, Zn and other trace elements. Concentrations of potential anthropogenic contaminants are also discussed.
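
    Two of the quality checks mentioned above, blank-based method detection limits and the IC versus ICP-MS cross-check, can be illustrated with a short sketch. The 3-sigma definition of the detection limit is a common convention assumed here (the abstract does not state which definition was used), and all concentrations are invented.

        # Sketch (assumptions noted in the lead-in): detection limit as 3x the standard
        # deviation of procedural blanks, and R^2 between co-registered IC and ICP-MS
        # measurements of the same element. All values below are made up.
        import statistics

        def mdl_3sigma(blank_values):
            return 3.0 * statistics.stdev(blank_values)

        def r_squared(x, y):
            n = len(x)
            mx, my = sum(x) / n, sum(y) / n
            sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
            sxx = sum((a - mx) ** 2 for a in x)
            syy = sum((b - my) ** 2 for b in y)
            return (sxy * sxy) / (sxx * syy)

        blanks_ca_ppb = [0.4, 0.6, 0.5, 0.7, 0.5]
        ic_ca = [12.0, 30.5, 8.2, 55.1, 20.3]    # Ca from the IC fraction
        icp_ca = [11.4, 31.2, 8.9, 53.0, 21.0]   # Ca from the ICP-MS fraction of the same samples

        print("MDL(Ca) ~", round(mdl_3sigma(blanks_ca_ppb), 2), "ppb")
        print("R^2 =", round(r_squared(ic_ca, icp_ca), 3))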

  18. A New Capability for Nuclear Thermal Propulsion Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amiri, Benjamin W.; Nuclear and Radiological Engineering Department, University of Florida, Gainesville, FL 32611; Kapernick, Richard J.

    2007-01-30

    This paper describes a new capability for Nuclear Thermal Propulsion (NTP) design that has been developed, and presents the results of some analyses performed with this design tool. The purpose of the tool is to design to specified mission and material limits while maximizing system thrust to weight. The head end of the design tool utilizes the ROCket Engine Transient Simulation (ROCETS) code to generate a system design and system design requirements as inputs to the core analysis. ROCETS is a modular system-level code which has been used extensively in the liquid rocket engine industry for many years. The core design tool performs high-fidelity reactor core nuclear and thermal-hydraulic design analysis. At the heart of this process are two codes, TMSS-NTP and NTPgen, which together greatly automate the analysis, providing the capability to rapidly produce designs that meet all specified requirements while minimizing mass. A PERL-based command script, called CORE DESIGNER, controls the execution of these two codes and checks for convergence throughout the process. TMSS-NTP is executed first, to produce a suite of core designs that meet the specified reactor core mechanical, thermal-hydraulic and structural requirements. The suite of designs consists of a set of core layouts and, for each core layout, specific designs that span a range of core fuel volumes. NTPgen generates MCNPX models for each of the core designs from TMSS-NTP. Iterative analyses are performed in NTPgen until a reactor design (fuel volume) is identified for each core layout that meets cold and hot operation reactivity requirements and that is zoned to meet a radial core power distribution requirement.
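
    The iteration that CORE DESIGNER is described as orchestrating can be pictured with a toy control loop. The sketch below is not the actual TMSS-NTP/NTPgen interface; the function names, the reactivity model and the tolerance are placeholders invented to show the shape of the workflow (thermal-structural candidates first, then a neutronics screen, then selection of the lightest acceptable design).

        # Hypothetical control-loop sketch of the workflow described in the abstract.
        # The functions stand in for TMSS-NTP and NTPgen/MCNPX; names, signatures,
        # the toy reactivity model and the tolerance are invented for illustration.

        def thermal_structural_designs(layouts, fuel_volumes):
            """Stand-in for TMSS-NTP: candidate designs meeting T/H and structural limits."""
            return [{"layout": l, "fuel_volume": v} for l in layouts for v in fuel_volumes]

        def excess_reactivity(design):
            """Stand-in for an NTPgen/MCNPX evaluation of hot excess reactivity."""
            return 0.002 * design["fuel_volume"] - 0.5   # toy linear model

        def lightest_converged_design(layouts, fuel_volumes, target=0.0, tol=0.05):
            acceptable = []
            for design in thermal_structural_designs(layouts, fuel_volumes):
                if abs(excess_reactivity(design) - target) <= tol:   # reactivity screen
                    acceptable.append(design)
            return min(acceptable, key=lambda d: d["fuel_volume"]) if acceptable else None

        print(lightest_converged_design(["hex-19", "hex-37"], range(200, 320, 10)))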

  19. Geochemistry of mercury and other constituents in subsurface sediment—Analyses from 2011 and 2012 coring campaigns, Cache Creek Settling Basin, Yolo County, California

    USGS Publications Warehouse

    Arias, Michelle R.; Alpers, Charles N.; Marvin-DiPasquale, Mark C.; Fuller, Christopher C.; Agee, Jennifer L.; Sneed, Michelle; Morita, Andrew Y.; Salas, Antonia

    2017-10-31

    Cache Creek Settling Basin was constructed in 1937 to trap sediment from Cache Creek before delivery to the Yolo Bypass, a flood conveyance for the Sacramento River system that is tributary to the Sacramento–San Joaquin Delta. Sediment management options being considered by stakeholders in the Cache Creek Settling Basin include sediment excavation; however, that could expose sediments containing elevated mercury concentrations from historical mercury mining in the watershed. In cooperation with the California Department of Water Resources, the U.S. Geological Survey undertook sediment coring campaigns in 2011–12 (1) to describe lateral and vertical distributions of mercury concentrations in deposits of sediment in the Cache Creek Settling Basin and (2) to improve constraint of estimates of the rate of sediment deposition in the basin. Sediment cores were collected in the Cache Creek Settling Basin, Yolo County, California, during October 2011 at 10 locations and during August 2012 at 5 other locations. Total core depths ranged from approximately 4.6 to 13.7 meters (15 to 45 feet), with penetration to about 9.1 meters (30 feet) at most locations. Unsplit cores were logged for two geophysical parameters (gamma bulk density and magnetic susceptibility); then, selected cores were split lengthwise. One half of each core was then photographed and archived, and the other half was subsampled. Initial subsamples from the cores (20-centimeter composite samples from five predetermined depths in each profile) were analyzed for total mercury, methylmercury, total reduced sulfur, iron speciation, organic content (as the percentage of weight loss on ignition), and grain-size distribution. Detailed follow-up subsampling (3-centimeter intervals) was done at six locations along an east-west transect in the southern part of the Cache Creek Settling Basin and at one location in the northern part of the basin for analyses of total mercury; organic content; and cesium-137, which was used for dating. This report documents site characteristics; field and laboratory methods; and results of the analyses of each core section and subsample of these sediment cores, including associated quality-assurance and quality-control data.

  20. Validation Data and Model Development for Fuel Assembly Response to Seismic Loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bardet, Philippe; Ricciardi, Guillaume

    2016-01-31

    Vibrations are inherently present in nuclear reactors, especially in cores and steam generators of pressurized water reactors (PWR). They can have significant effects on local heat transfer and wear and tear in the reactor and often set safety margins. The simulation of these multiphysics phenomena from first principles requires the coupling of several codes, which is one of the most challenging tasks in modern computer simulation. Here an ambitious multiphysics, multidisciplinary validation campaign is conducted. It relied on an integrated team of experimentalists and code developers to acquire benchmark and validation data for fluid-structure interaction codes. Data are focused on PWR fuel bundle behavior during seismic transients.

  1. Supercomputer simulations of structure formation in the Universe

    NASA Astrophysics Data System (ADS)

    Ishiyama, Tomoaki

    2017-06-01

    We describe the implementation and performance results of our massively parallel MPI/OpenMP hybrid TreePM code for large-scale cosmological N-body simulations. For domain decomposition, a recursive multi-section algorithm is used, and the sizes of the domains are automatically set so that the total calculation time is the same for all processes. We developed a highly tuned gravity kernel for short-range forces and a novel communication algorithm for long-range forces. For a two-trillion-particle benchmark simulation, the average performance on the full system of the K computer (82,944 nodes, 663,552 cores in total) is 5.8 Pflops, which corresponds to 55% of the peak speed.
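
    The quoted efficiency can be cross-checked from the abstract's own numbers; the short sketch below does only that arithmetic (the implied per-core rate is an inference from the stated totals, not a figure reported in the paper).

        # Back-of-the-envelope check using only the values stated in the abstract.
        sustained_pflops = 5.8
        fraction_of_peak = 0.55
        cores = 663_552

        implied_peak_pflops = sustained_pflops / fraction_of_peak   # ~10.5 Pflops
        per_core_gflops = sustained_pflops * 1e6 / cores            # ~8.7 Gflops sustained per core

        print(f"implied peak ~ {implied_peak_pflops:.1f} Pflops")
        print(f"sustained per core ~ {per_core_gflops:.1f} Gflops")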

  2. An efficient implementation of semi-numerical computation of the Hartree-Fock exchange on the Intel Phi processor

    NASA Astrophysics Data System (ADS)

    Liu, Fenglai; Kong, Jing

    2018-07-01

    Unique technical challenges and their solutions for implementing semi-numerical Hartree-Fock exchange on the Phi processor are discussed, especially concerning the single-instruction-multiple-data type of processing and the small cache size. Benchmark calculations on a series of buckyball molecules with various Gaussian basis sets on a Phi processor and a six-core CPU show that the Phi processor provides as much as a 12-fold speedup with large basis sets compared with the conventional four-center electron repulsion integration approach performed on the CPU. The accuracy of the semi-numerical scheme is also evaluated and found to be comparable to that of the resolution-of-identity approach.

  3. A study of the required Rayleigh number to sustain dynamo with various inner core radius

    NASA Astrophysics Data System (ADS)

    Nishida, Y.; Katoh, Y.; Matsui, H.; Kumamoto, A.

    2017-12-01

    It is widely accepted that the geomagnetic field is sustained by thermal and compositional convection of a liquid iron alloy in the outer core. The generation process of the geomagnetic field has been studied by a number of MHD dynamo simulations. Recent studies of the Earth's core evolution suggest that the ratio of the inner solid core radius ri to the outer liquid core radius ro changed from ri/ro = 0 to 0.35 during the last one billion years. There are some studies of the dynamo in the early Earth with a smaller inner core than at present. Heimpel et al. (2005) derived from simulations the Rayleigh number Ra at the onset of the dynamo process as a function of ri/ro, while paleomagnetic observations show that the geomagnetic field has been sustained for 3.5 billion years. While Heimpel and Evans (2013) studied dynamo processes taking into account the thermal history of the Earth's interior, there were few cases corresponding to the early Earth. Driscoll (2016) performed a series of dynamo simulations based on a thermal evolution model. Despite this number of dynamo simulations, the dynamo process occurring in the interior of the early Earth has not been fully understood, because the magnetic Prandtl numbers in these simulations are much larger than that of the actual outer core. In the present study, we performed thermally driven dynamo simulations with different aspect ratios ri/ro = 0.15, 0.25 and 0.35 to evaluate the critical Ra for thermal convection and the Ra required to maintain the dynamo. For this purpose, we performed simulations with various Ra and fixed the other control parameters such as the Ekman, Prandtl, and magnetic Prandtl numbers. For the initial and boundary conditions, we followed dynamo benchmark case 1 of Christensen et al. (2001). The results show that the critical Ra increases as the aspect ratio ri/ro decreases. It is confirmed that a larger buoyancy amplitude is required to maintain the dynamo when the inner core is smaller.

  4. Ranking of sabotage/tampering avoidance technology alternatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, W.B.; Tabatabai, A.S.; Powers, T.B.

    1986-01-01

    Pacific Northwest Laboratory conducted a study to evaluate alternatives to the design and operation of nuclear power plants, emphasizing a reduction of their vulnerability to sabotage. Estimates of core melt accident frequency during normal operations and from sabotage/tampering events were used to rank the alternatives. Core melt frequency for normal operations was estimated using sensitivity analysis of results of probabilistic risk assessments. Core melt frequency for sabotage/tampering was estimated by developing a model based on probabilistic risk analyses, historic data, engineering judgment, and safeguards analyses of plant locations where core melt events could be initiated. Results indicate the most effective alternatives focus on large areas of the plant, increase safety system redundancy, and reduce reliance on single locations for mitigation of transients. Less effective options focus on specific areas of the plant, reduce reliance on some plant areas for safe shutdown, and focus on less vulnerable targets.

  5. Zirconium(IV) oxide: New coating material for nanoresonators for shell-isolated nanoparticle-enhanced Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Krajczewski, Jan; Abdulrahman, Heman Burhanalden; Kołątaj, Karol; Kudelski, Andrzej

    2018-03-01

    One tool that can be used for determining the structure and composition of surfaces of various materials (even in in situ conditions) is shell-isolated nanoparticle-enhanced Raman spectroscopy (SHINERS). In SHINERS measurements, the surface under investigation is covered with a layer of surface-protected plasmonic nanoparticles, and then the Raman spectrum of the surface analysed is recorded. The plasmonic cores of the used core-shell structures act as electromagnetic nanoresonators, significantly locally enhancing the intensity of the electric field of the incident radiation, leading to a large increase in the efficiency of the generation of the Raman signal from molecules in the close proximity to the deposited SHINERS nanoresonators. A protective layer (from transparent dielectrics such as SiO2, Al2O3 or TiO2) prevents direct interaction between the plasmonic metal and the analysed surface (such interactions may lead to changes in the structure of the surface) and, in the case of plasmonic cores other than gold cores, the dielectric layer increases the chemical stability of the metal core. In this contribution, we show for the first time that core-shell nanoparticles having a silver core (both a solid and hollow one) and a shell of zirconium(IV) oxide are very efficient SHINERS nanoresonators that are significantly more stable in acidic and alkaline media than the silver-silica core-shell structures typically used for SHINERS experiments.

  6. Benchmarking specialty hospitals, a scoping review on theory and practice.

    PubMed

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or if quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design, and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardiner, W.W.; Barrows, E.S.; Antrim, L.D

    Buttermilk Channel was one of seven waterways that were sampled and evaluated for dredging and sediment disposal. Sediment samples were collected and analyses were conducted on sediment core samples. The evaluation of proposed dredged material from the channel included bulk sediment chemical analyses, chemical analyses of site water and elutriate, water column and benthic acute toxicity tests, and bioaccumulation studies. Individual sediment core samples were analyzed for grain size, moisture content, and total organic carbon. A composite sediment sample, representing the entire area proposed for dredging, was analyzed for bulk density, polynuclear aromatic hydrocarbons, and 1,4-dichlorobenzene. Site water and elutriate were analyzed for metals, pesticides, and PCBs.

  8. Analyses of water, bank material, bottom material, and elutriate samples collected near Belzoni, Mississippi (upper Yazoo projects)

    USGS Publications Warehouse

    Brightbill, David B.; Treadway, Joseph B.

    1980-01-01

    Four core-material-sampling sites and one bottom-material site were chosen by the U.S. Army Corps of Engineers to represent areas of proposed dredging activity along a 24.9-mile reach of the upper Yazoo River. Five receiving-water sites also were selected to represent the water that will contact the proposed dredged material. Chemical and physical analyses were performed on core or bottom material and native-water (receiving-water) samples from these sites, as well as on elutriate samples of the mixture of sediment and receiving water. The results of these analyses are presented without interpretation. (USGS)

  9. Results of chemical and isotopic analyses of sediment and water from alluvium of the Canadian River near a closed municipal landfill, Norman, Oklahoma

    USGS Publications Warehouse

    Breit, George N.; Tuttle, Michele L.W.; Cozzarelli, Isabelle M.; Christenson, Scott C.; Jaeschke, Jeanne B.; Fey, David L.; Berry, Cyrus J.

    2005-01-01

    Results of physical and chemical analyses of sediment and water collected near a closed municipal landfill at Norman, Oklahoma are presented in this report. Sediment analyses are from 40 samples obtained by freeze-shoe coring at 5 sites, and 14 shallow (depth <1.3 m) sediment samples. The sediment was analyzed to determine grain size, the abundance of extractable iron species and the abundances and isotopic compositions of forms of sulfur. Water samples included pore water from the freeze-shoe core, ground water, and surface water. Pore water from 23 intervals of the core was collected and analyzed for major and trace dissolved species. Thirteen ground-water samples obtained from wells within a few meters of the freeze-shoe core sites and one from the landfill were analyzed for major and trace elements as well as the sulfur and oxygen isotope composition of dissolved sulfate. Samples of surface water were collected at 10 sites along the Canadian River from New Mexico to central Oklahoma. These river-water samples were analyzed for major elements, trace elements, and the isotopic composition of dissolved sulfate.

  10. Development of Safety Analysis Code System of Beam Transport and Core for Accelerator Driven System

    NASA Astrophysics Data System (ADS)

    Aizawa, Naoto; Iwasaki, Tomohiko

    2014-06-01

    A safety analysis code system for the beam transport and core of an accelerator-driven system (ADS) has been developed for the analysis of beam transients such as changes in the shape and position of the incident beam. The code system consists of a beam transport analysis part and a core analysis part. TRACE 3-D is employed in the beam transport analysis part, and the shape and incident position of the beam at the target are calculated. In the core analysis part, the neutronics, thermal-hydraulics and cladding failure analyses are performed with the ADS dynamic calculation code ADSE, on the basis of the external source database calculated by PHITS and the cross-section database calculated by SRAC, together with programs for thermoelastic and creep cladding failure analysis. Using the code system, beam transient analyses were performed for the ADS proposed by the Japan Atomic Energy Agency. The cladding temperature rises rapidly and plastic deformation occurs within several seconds; in addition, the cladding is evaluated to fail by creep within a hundred seconds. These results show that such beam transients can cause cladding failure.
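
    The coupling sequence described above (a beam-transport result feeding an external-source core calculation, followed by a cladding check) can be sketched as a simple loop. The functions below are placeholders, not the TRACE 3-D/ADSE/PHITS interfaces, and the beam model, power peaking law, thermal response and temperature limit are all invented for illustration.

        # Hypothetical sketch of the coupling sequence in the abstract: beam transport
        # -> external-source core response -> cladding check. All models, coefficients
        # and limits are placeholders, not values from the paper.

        def beam_at_target(t_s):
            """Stand-in for the beam transport step: beam radius (cm) shrinking during a transient."""
            return {"radius_cm": max(0.5, 2.0 - 0.3 * t_s)}

        def core_power_mw(beam, nominal_mw=800.0):
            """Stand-in for the external-source core calculation: narrower beam -> higher peaking."""
            peaking = (2.0 / beam["radius_cm"]) ** 0.5
            return nominal_mw * peaking

        def cladding_check(power_mw, temp_limit_c=900.0):
            temp_c = 500.0 + 0.6 * (power_mw - 800.0)   # toy thermal response
            return temp_c <= temp_limit_c, temp_c

        for t in range(0, 6):
            power = core_power_mw(beam_at_target(t))
            ok, temp = cladding_check(power)
            print(f"t={t} s  power={power:7.1f} MW  clad T={temp:6.1f} C  {'OK' if ok else 'LIMIT EXCEEDED'}")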

  11. All inclusive benchmarking.

    PubMed

    Ellis, Judith

    2006-07-01

    The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles, in order to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement and to promote its acceptance as an integral and effective part of benchmarking activity in health services. Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being used effectively by some frontline staff. However, use is inconsistent, and the value of the tool kit, or the support that clinical practice benchmarking requires to be effective, is not always recognized or provided by National Health Service managers, who are absorbed with quantitative benchmarking approaches and the measurability of comparative performance data. This review of the published benchmarking literature was based on an ever-narrowing search strategy, commencing with benchmarking within the quality improvement literature, moving on to benchmarking activity in health services, and including not only published examples of benchmarking approaches and models but also consideration of web-based benchmarking data. This supported identification of how benchmarking approaches have developed and been used while remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative, and specifically performance, benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and then applied to the health service (Bullivant 1998). The literature is also mainly descriptive in its support of the effectiveness of benchmarking activity; although this does not seem to have restricted its popularity in quantitative activity, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks mean that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach which needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.

  12. Fe-FeO and Fe-Fe3C melting relations at Earth's core-mantle boundary conditions: Implications for a volatile-rich or oxygen-rich core

    NASA Astrophysics Data System (ADS)

    Morard, G.; Andrault, D.; Antonangeli, D.; Nakajima, Y.; Auzende, A. L.; Boulard, E.; Cervera, S.; Clark, A.; Lord, O. T.; Siebert, J.; Svitlyk, V.; Garbarino, G.; Mezouar, M.

    2017-09-01

    Eutectic melting temperatures in the Fe-FeO and Fe-Fe3C systems have been determined up to 150 GPa. Melting criteria include observation of a diffuse scattering signal by in situ X-ray diffraction and textural characterisation of recovered samples. In addition, compositions of eutectic liquids have been established by combining in situ Rietveld analyses with ex situ chemical analyses. Gathering these new results together with previous reports on the Fe-S and Fe-Si systems allows us to discuss the specific effect of each light element (Si, S, O, C) on the melting properties of the outer core. Crystallization temperatures of Si-rich core compositional models are too high to be compatible with the absence of extensive mantle melting at the core-mantle boundary (CMB), and significant amounts of volatile elements such as S and/or C (>5 at%, corresponding to >2 wt%), or a large amount of O (>15 at%, corresponding to ∼5 wt%), are required to reduce the crystallisation temperature of the core material below that of a peridotitic lower mantle.

  13. Optimizing the quality of breast cancer care at certified german breast centers: a benchmarking analysis for 2003-2009 with a particular focus on the interdisciplinary specialty of radiation oncology.

    PubMed

    Brucker, Sara Y; Wallwiener, Markus; Kreienberg, Rolf; Jonat, Walter; Beckmann, Matthias W; Bamberg, Michael; Wallwiener, Diethelm; Souchon, Rainer

    2011-02-01

    A voluntary, external, science-based benchmarking program was established in Germany in 2003 to analyze and improve the quality of breast cancer (BC) care. Based on recent data from 2009, we aim to show that such analyses can also be performed for individual interdisciplinary specialties, such as radiation oncology (RO). Breast centers were invited to participate in the benchmarking program. Nine guideline-based quality indicators (QIs) were initially defined, reviewed annually, and modified, expanded, or abandoned accordingly. QI changes over time were analyzed descriptively, with particular emphasis on relevance to radiation oncology. During the 2003-2009 study period, there were marked increases in breast center participation and postoperatively confirmed primary BCs. Starting from 9 process QIs, 15 QIs had been developed by 2009 as surrogate indicators of long-term outcome. During 2003-2009, 2 of the 7 RO-relevant QIs (radiotherapy after breast-conserving surgery or after mastectomy) showed considerable increases (from 20 to 85% and from 8 to 70%, respectively). Another three initially high QIs practically reached the required levels. The current data confirm proof of concept for the established benchmarking program, which allows participating institutions to be compared and changes in the quality of BC care to be tracked over time. Overall, the marked QI increases suggest that BC care in Germany improved from 2003 to 2009. Moreover, it has become possible for the first time to demonstrate improvements in the quality of BC care longitudinally for individual breast centers. In addition, subgroups of relevant QIs can be used to demonstrate the progress achieved, but also the need for further improvement, in specific interdisciplinary specialties.

  14. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leal, L.C.; Deen, J.R.; Woodruff, W.L.

    1995-02-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure to generate cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly-enriched heavy-water moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  15. A European benchmarking system to evaluate in-hospital mortality rates in acute coronary syndrome: the EURHOBOP project.

    PubMed

    Dégano, Irene R; Subirana, Isaac; Torre, Marina; Grau, María; Vila, Joan; Fusco, Danilo; Kirchberger, Inge; Ferrières, Jean; Malmivaara, Antti; Azevedo, Ana; Meisinger, Christa; Bongard, Vanina; Farmakis, Dimitros; Davoli, Marina; Häkkinen, Unto; Araújo, Carla; Lekakis, John; Elosua, Roberto; Marrugat, Jaume

    2015-03-01

    Hospital performance models in acute myocardial infarction (AMI) are useful to assess patient management. While models are available for individual countries, mainly the US, cross-European performance models are lacking. Thus, we aimed to develop a system to benchmark European hospitals in AMI and percutaneous coronary intervention (PCI), based on predicted in-hospital mortality. We used the EURopean HOspital Benchmarking by Outcomes in ACS Processes (EURHOBOP) cohort to develop the models, which included 11,631 AMI patients and 8276 acute coronary syndrome (ACS) patients who underwent PCI. Models were validated with a cohort of 55,955 European ACS patients. Multilevel logistic regression was used to predict in-hospital mortality in European hospitals for AMI and PCI. Administrative and clinical models were constructed with patient- and hospital-level covariates, as well as hospital- and country-based random effects. Internal cross-validation and external validation showed good discrimination at the patient level and good calibration at the hospital level, based on the C-index (0.736-0.819) and the concordance correlation coefficient (55.4%-80.3%). Mortality ratios (MRs) showed excellent concordance between administrative and clinical models (97.5% for AMI and 91.6% for PCI). Exclusion of transfers and hospital stays ≤1 day did not affect in-hospital mortality prediction in sensitivity analyses, as shown by MR concordance (80.9%-85.4%). Models were used to develop a benchmarking system to compare in-hospital mortality rates of European hospitals with similar characteristics. The developed system, based on the EURHOBOP models, is a simple and reliable tool to compare in-hospital mortality rates between European hospitals in AMI and PCI. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
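
    The discrimination statistic quoted above (the C-index) has a simple interpretation: the probability that a randomly chosen patient who died in hospital was assigned a higher predicted risk than a randomly chosen survivor. The sketch below computes it from scratch for a handful of made-up predictions; it is not the EURHOBOP model itself, whose multilevel structure and covariates are only summarised in the abstract.

        # Sketch of the C-index used to report discrimination in the abstract.
        # Predicted risks and outcomes below are made-up examples.
        from itertools import product

        def c_index(risk, died):
            pairs = [(i, j) for i, j in product(range(len(died)), repeat=2)
                     if died[i] == 1 and died[j] == 0]
            concordant = sum(risk[i] > risk[j] for i, j in pairs)
            ties = sum(risk[i] == risk[j] for i, j in pairs)
            return (concordant + 0.5 * ties) / len(pairs)

        risk = [0.02, 0.40, 0.35, 0.07, 0.60, 0.04]   # predicted in-hospital mortality
        died = [0, 0, 1, 0, 1, 0]
        print(round(c_index(risk, died), 3))          # 0.875 for this toy example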

  16. Seismic assessment of WSDOT bridges with prestressed hollow core piles : part II.

    DOT National Transportation Integrated Search

    2009-12-01

    This report investigates the seismic performance of a reinforced concrete : bridge with prestressed hollow core piles. Both nonlinear static and nonlinear dynamic : analyses were carried out. A three-dimensional spine model of the bridge was : ...

  17. A Qualitative Analysis of the Spontaneous Volunteer Response to the 2013 Sudan Floods: Changing the Paradigm.

    PubMed

    Albahari, Amin; Schultz, Carl H

    2017-06-01

    Introduction While the concept of community resilience is gaining traction, the role of spontaneous volunteers during the initial response to disasters remains controversial. In an attempt to resolve some of the debate, investigators examined the activities of a spontaneous volunteer group called Nafeer after the Sudan floods around the city of Khartoum in August of 2013. Hypothesis Can spontaneous volunteers successfully initiate, coordinate, and deliver sustained assistance immediately after a disaster? This retrospective, descriptive case study involved: (1) interviews with Nafeer members that participated in the disaster response to the Khartoum floods; (2) examination of documents generated during the event; and (3) subsequent benchmarking of their efforts with the Sphere Handbook. Members who agreed to participate were requested to provide all documents in their possession relating to Nafeer. The response by Nafeer was then benchmarked to the Sphere Handbook's six core standards, as well as the 11 minimum standards in essential health services. A total of 11 individuals were interviewed (six from leadership and five from active members). Nafeer's activities included: food provision; delivery of basic health care; environmental sanitation campaigns; efforts to raise awareness; and construction and strengthening of flood barricades. Its use of electronic platforms and social media to collect data and coordinate the organization's response was effective. Nafeer adopted a flat-management structure, dividing itself into 14 committees. A Coordination Committee was in charge of liaising between all committees. The Health and Sanitation Committee supervised two health days which included mobile medical and dentistry clinics supported by a mobile laboratory and pharmacy. The Engineering Committee managed to construct and maintain flood barricades. Nafeer used crowd-sourcing to fund its activities, receiving donations locally and internationally using supporters outside Sudan. Nafeer completely fulfilled three of Sphere's core standards and partially fulfilled the other three, but none of the essential health services standards were fulfilled. Even though the Sphere Handbook was chosen as the best available "gold standard" to benchmark Nafeer's efforts, it showed significant limitations in effectively measuring this group. It appears that independent spontaneous volunteer initiatives, like Nafeer, potentially can improve community resilience and play a significant role in the humanitarian response. Such organizations should be the subject of increased research activity. Relevant bodies should consider issuing separate guidelines supporting spontaneous volunteer organizations. Albahari A , Schultz CH . A qualitative analysis of the spontaneous volunteer response to the 2013 Sudan floods: changing the paradigm. Prehosp Disaster Med. 2017;32(3):240-248.

  18. Long-Term Changes In The Behaviour Of Jakobshavns Isbrae, West Greenland During The Late Quaternary-Holocene

    NASA Astrophysics Data System (ADS)

    O'Cofaigh, C.; Jennings, A.; Moros, M.; Andrews, J. T.; Kilfeather, A.; Dowdeswell, J. A.; Richter, T.

    2008-12-01

    This poster shows the initial results of a joint scientific project to reconstruct the Late Quaternary-Holocene behavior of Jakobshavns Isbrae in central west Greenland, one of the largest ice streams draining the modern Greenland Ice Sheet. The underlying rationale for this research is to determine if recent observed changes to the mass balance of the Greenland Ice Sheet are part of the natural variability in ice-sheet dynamics, or if they relate to anthropogenically-induced climate warming. Key to resolving this question is an understanding of long-term changes in ice sheet behavior during the Late Quaternary and the Holocene. This research will allow assessment of the links between deglaciation and internal and external environmental controls, such as the influence of inflowing Atlantic Water, and will facilitate modelling of the likely future behavior of the GIS. Currently, four marine sediment cores arrayed along a transect from the Disko Bugt Fan to Disko Bay are providing information on changes in sediment flux and sedimentation style, such as abrupt intervals of iceberg-rafting vs. "normal" hemipelagic sedimentation, as well as the paleoceanographic setting and ice sheet-ocean interactions. The cores are being analysed using a variety of proxies including IRD, mineralogy, oxygen isotopes, foraminiferal assemblages, lithofacies analysis and AMS radiocarbon dating. Data are presented from two piston cores from the continental slope at the trough-mouth fan collected during the HE0006 'shakedown' cruise to Baffin Bay and from two gravity cores recovered in 2007 during MS Merian cruise MSM 05/03 to West Greenland. Slope cores contain sequences of laminated facies interpreted as fine-grained turbidites and intervals of massive, bioturbated, hemipelagic mud. The two Merian cores, contributed to this project by the Baltic Sea Research Institute, were collected from the southern entrance to Disko Bugt and the Vaigat channel north of Disko. Radiocarbon dates from the Disko Bugt core show that it contains a full Holocene record of glacial activity and paleoceanography. The poster will present the initial analyses, including radiocarbon dating, XRF compositional data, magnetic susceptibility, lithofacies and IRD analyses determined from x-radiography, foraminiferal analyses and sediment mineralogy. Additional cores and seismic data for this project will be obtained from a cruise on the Canadian research vessel, CSS Hudson in September 2008, and on the British ship, the RRS James Clark Ross in 2009.

  19. Asthma-specific health-related quality of life of people in Great Britain: A national survey.

    PubMed

    Upton, Jane; Lewis, Carine; Humphreys, Emily; Price, David; Walker, Samantha

    2016-11-01

    Although the ultimate goal of asthma treatment is to improve asthma-specific Health-Related Quality-Of-Life (HRQOL), in the UK population this is insufficiently studied. National asthma-specific HRQOL data is needed to inform strategies to address this condition. To benchmark asthma-specific HRQOL in a national survey of adults with asthma, and explore differences in this measure within subsections of the population. We analysed answers to the Marks Asthma Quality-of-Life Questionnaire (AQLQ-M) from a representative sample of 658 adults with asthma. Respondents answered asthma-specific questions to assess control, previous hospital admissions, asthma attacks and an indicator of severity. Higher scores indicate poorer HRQOL (maximum = 60). The highest quintile formed a subgroup 'Poor HRQOL'. Data were weighted to correct for any biases caused by differential non-response. Chi-square analyses were used to determine differences between good and poor quality of life and regression analyses performed to determine what factors are associated with poor HRQOL. The response rate was 49%. AQLQ-M median (IQR) scores were 5 (2-13) for the total sample (poor HRQOL = 21, good HRQOL = 3). Significant differences between good and poor HRQOL were observed in smoking status, SES, employment status and co-morbidities, but no differences were found between age groups. Those with poorly controlled asthma were significantly more likely to have poor HRQOL, ≥1 breathing related hospital admission or ≥1 asthma attack. This article provides benchmarking data on asthma-specific HRQOL. Improved strategies are needed to target interventions towards people experiencing poor HRQOL.

  20. SPOC Benchmark Case: SNRE Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vishal Patel; Michael Eades; Claude Russel Joyner II

    The Small Nuclear Rocket Engine (SNRE) was modeled in the Center for Space Nuclear Research’s (CSNR) Space Propulsion Optimization Code (SPOC). SPOC aims to create nuclear thermal propulsion (NTP) geometries quickly to perform parametric studies on design spaces of historic and new NTP designs. The SNRE geometry was modeled in SPOC and a critical core with a reasonable amount of criticality margin was found. The fuel, tie-tubes, reflector, and control drum masses were predicted rather well. These are all very important for neutronics calculations so the active reactor geometries created with SPOC can continue to be trusted. Thermal calculations of the average and hot fuel channels agreed very well. The specific impulse calculations used historically and in SPOC disagree so mass flow rates and impulses differed. Modeling peripheral and power balance components that do not affect nuclear characteristics of the core is not a feature of SPOC and as such, these components should continue to be designed using other tools. A full paper detailing the available SNRE data and comparisons with SPOC outputs will be submitted as a follow-up to this abstract.

  1. Recommendations for training in pediatric psychology: defining core competencies across training levels.

    PubMed

    Palermo, Tonya M; Janicke, David M; McQuaid, Elizabeth L; Mullins, Larry L; Robins, Paul M; Wu, Yelena P

    2014-10-01

    As a field, pediatric psychology has focused considerable efforts on the education and training of students and practitioners. Alongside a broader movement toward competency attainment in professional psychology and within the health professions, the Society of Pediatric Psychology commissioned a Task Force to establish core competencies in pediatric psychology and address the need for contemporary training recommendations. The Task Force adapted the framework proposed by the Competency Benchmarks Work Group on preparing psychologists for health service practice and defined competencies applicable across training levels ranging from initial practicum training to entry into the professional workforce in pediatric psychology. Competencies within 6 cluster areas, including science, professionalism, interpersonal, application, education, and systems, and 1 crosscutting cluster, crosscutting knowledge competencies in pediatric psychology, are presented in this report. Recommendations for the use of, and the further refinement of, these suggested competencies are discussed.

  2. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swaminarayan, Sriram; Germann, Timothy C; Kadau, Kai

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/s per watt and approximately 3.69 MFlop/s per dollar. The authors demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
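
    For readers unfamiliar with the workload, the kernel being timed is the standard Lennard-Jones pair interaction. The serial NumPy sketch below shows that force/energy evaluation in its simplest O(N^2) form with a 2.5-sigma cutoff; it is illustrative only and is not the SPaSM implementation, which uses cell lists and runs the pair loop on the Cell SPUs.

```python
# Minimal O(N^2) Lennard-Jones energy/force kernel (illustration only).
import numpy as np

def lj_forces(pos, box, eps=1.0, sigma=1.0, rcut=2.5):
    n = len(pos)
    forces = np.zeros_like(pos)
    energy = 0.0
    for i in range(n - 1):
        # minimum-image displacement from particle i to all later particles
        dr = pos[i + 1:] - pos[i]
        dr -= box * np.round(dr / box)
        r2 = np.sum(dr * dr, axis=1)
        mask = r2 < rcut * rcut
        sr6 = (sigma * sigma / r2[mask]) ** 3
        energy += np.sum(4.0 * eps * (sr6 * sr6 - sr6))
        # pair force on particle j: 24*eps*(2*(s/r)^12 - (s/r)^6)/r^2 * dr
        fmag = 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r2[mask]
        fij = fmag[:, None] * dr[mask]
        forces[i + 1:][mask] += fij
        forces[i] -= fij.sum(axis=0)
    return energy, forces

# Example: 64 particles on a cubic lattice in a periodic box
box = np.array([8.0, 8.0, 8.0])
grid = np.arange(4) * 2.0
pos = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T
print(lj_forces(pos, box)[0])
```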

  3. M3D-K Simulations of Beam-Driven Alfven Eigenmodes in ASDEX-U

    NASA Astrophysics Data System (ADS)

    Wang, Ge; Fu, Guoyong; Lauber, Philipp; Schneller, Mirjam

    2013-10-01

    Core-localized Alfven eigenmodes are often observed in neutral-beam-heated plasmas in the ASDEX-U tokamak. In this work, hybrid simulations with the global kinetic/MHD hybrid code M3D-K have been carried out to investigate the linear stability and nonlinear dynamics of beam-driven Alfven eigenmodes using experimental parameters and profiles of an ASDEX-U discharge. The safety factor q profile is weakly reversed, with a minimum value qmin of about 3.0. The simulation results show that the n = 3 mode transitions from a reversed shear Alfven eigenmode (RSAE) to a core-localized toroidal Alfven eigenmode (TAE) as qmin drops from 3.0 to 2.79, consistent with results from the stability code NOVA as well as the experimental measurement. The M3D-K results are being compared with those of the linear gyrokinetic stability code LIGKA for benchmarking. The simulation results will also be compared with the measured mode frequency and mode structure. This work was funded by the Max-Planck/Princeton Center for Plasma Physics.

  4. The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS

    DOE PAGES

    Ward, Andrew; Downar, Thomas J.; Xu, Y.; ...

    2015-04-22

    The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. Lastly, the capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.
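
    For context on the drift-flux closure mentioned above, the textbook Zuber-Findlay relation expresses the void fraction in terms of the phase superficial velocities, a distribution parameter C0 and a drift velocity Vgj. The sketch below is a generic illustration of that relation with made-up parameter values, not the PATHS formulation or its correlations.

```python
# Generic Zuber-Findlay drift-flux void fraction (illustrative values only;
# PATHS uses its own three-equation formulation and correlations).
def void_fraction(j_g, j_f, c0=1.13, v_gj=0.2):
    """j_g, j_f: gas/liquid superficial velocities [m/s];
    c0: distribution parameter; v_gj: drift velocity [m/s]."""
    j = j_g + j_f                     # total volumetric flux
    return j_g / (c0 * j + v_gj)      # alpha = j_g / (C0*j + Vgj)

# Example channel conditions (made-up numbers)
print(f"void fraction = {void_fraction(j_g=1.5, j_f=2.0):.3f}")
```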

  5. Physics-based multiscale coupling for full core nuclear reactor simulation

    DOE PAGES

    Gaston, Derek R.; Permann, Cody J.; Peterson, John W.; ...

    2015-10-01

    Numerical simulation of nuclear reactors is a key technology in the quest for improvements in efficiency, safety, and reliability of both existing and future reactor designs. Historically, simulation of an entire reactor was accomplished by linking together multiple existing codes that each simulated a subset of the relevant multiphysics phenomena. Recent advances in the MOOSE (Multiphysics Object Oriented Simulation Environment) framework have enabled a new approach: multiple domain-specific applications, all built on the same software framework, are efficiently linked to create a cohesive application. This is accomplished with a flexible coupling capability that allows for a variety of different data exchanges to occur simultaneously on high performance parallel computational hardware. Examples based on the KAIST-3A benchmark core, as well as a simplified Westinghouse AP-1000 configuration, demonstrate the power of this new framework for tackling—in a coupled, multiscale manner—crucial reactor phenomena such as CRUD-induced power shift and fuel shuffle.
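
    The coupling idea described above can be illustrated with a toy Picard (fixed-point) iteration between a mock neutronics model and a mock thermal model exchanging two scalars. This is a conceptual sketch only; MOOSE couples full field data between applications through its framework rather than scalars, and the feedback models below are invented.

```python
# Toy Picard iteration between two "solvers" standing in for coupled physics
# applications. Both feedback models are invented for illustration.
def neutronics(fuel_temp):
    # power decreases with fuel temperature (Doppler-like feedback)
    return 1000.0 / (1.0 + 1.0e-3 * (fuel_temp - 600.0))

def thermal(power):
    # fuel temperature increases with power
    return 600.0 + 0.05 * power

power, temp = 1000.0, 600.0
for it in range(50):
    new_power = neutronics(temp)
    new_temp = thermal(new_power)
    converged = abs(new_power - power) < 1e-8 and abs(new_temp - temp) < 1e-8
    power, temp = new_power, new_temp
    if converged:
        break
print(f"converged after {it} iterations: power={power:.2f}, T_fuel={temp:.2f}")
```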

  6. Massively parallel algorithm and implementation of RI-MP2 energy calculation for peta-scale many-core supercomputers.

    PubMed

    Katouda, Michio; Naruse, Akira; Hirano, Yukihiko; Nakajima, Takahito

    2016-11-15

    A new parallel algorithm and its implementation for the RI-MP2 energy calculation utilizing peta-flop-class many-core supercomputers are presented. Some improvements from the previous algorithm (J. Chem. Theory Comput. 2013, 9, 5373) have been performed: (1) a dual-level hierarchical parallelization scheme that enables the use of more than 10,000 Message Passing Interface (MPI) processes and (2) a new data communication scheme that reduces network communication overhead. A multi-node and multi-GPU implementation of the present algorithm is presented for calculations on a central processing unit (CPU)/graphics processing unit (GPU) hybrid supercomputer. Benchmark results of the new algorithm and its implementation using the K computer (CPU clustering system) and TSUBAME 2.5 (CPU/GPU hybrid system) demonstrate high efficiency. The peak performance of 3.1 PFLOPS is attained using 80,199 nodes of the K computer. The peak performance of the multi-node and multi-GPU implementation is 514 TFLOPS using 1349 nodes and 4047 GPUs of TSUBAME 2.5.
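
    The dual-level hierarchical parallelization can be pictured as a communicator split: work is first distributed over groups of ranks, then over the ranks within each group. The mpi4py sketch below is a generic illustration of that pattern; the group size and work division are invented and do not reflect the cited implementation.

```python
# Generic two-level MPI work distribution (illustrative only).
# Assumes the total rank count is a multiple of group_size.
from mpi4py import MPI

world = MPI.COMM_WORLD
group_size = 4                                    # ranks per group (invented)
n_groups = world.size // group_size
color = world.rank // group_size                  # which group this rank joins
intra = world.Split(color=color, key=world.rank)  # communicator within a group

# Upper level: tasks striped over groups; lower level: over ranks in a group.
n_tasks = 1000
my_tasks = [t for t in range(n_tasks)
            if t % n_groups == color
            and (t // n_groups) % group_size == intra.rank]

total = world.allreduce(len(my_tasks), op=MPI.SUM)  # every task counted once
if world.rank == 0:
    print(f"{total} tasks distributed over {n_groups} groups of {group_size}")
```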

  7. Re-visiting the tympanic membrane vicinity as core body temperature measurement site

    PubMed Central

    Gan, Chee Wee; Liang, Wenyu

    2017-01-01

    Core body temperature (CBT) is an important and commonly used indicator of human health and endurance performance. A rise in baseline CBT can be attributed to an onset of flu, infection or even thermoregulatory failure when it becomes excessive. Sites which have been used for measurement of CBT include the pulmonary artery, the esophagus, the rectum and the tympanic membrane. Among them, the tympanic membrane is an attractive measurement site for CBT due to its unobtrusive nature and the ease of measurement it affords, especially when continuous CBT measurements are needed for monitoring, such as in military, occupational and sporting settings. However, to date, there are still polarizing views on the suitability of the tympanic membrane as a CBT site. This paper revisits a number of key unresolved issues in the literature and also presents, for the first time, a benchmark of the middle ear temperature against temperature measurements from other sites. Results from experiments carried out on human and primate subjects will be presented to draw a fresh set of insights against the backdrop of hypotheses and controversies. PMID:28414722

  8. Re-visiting the tympanic membrane vicinity as core body temperature measurement site.

    PubMed

    Yeoh, Wui Keat; Lee, Jason Kai Wei; Lim, Hsueh Yee; Gan, Chee Wee; Liang, Wenyu; Tan, Kok Kiong

    2017-01-01

    Core body temperature (CBT) is an important and commonly used indicator of human health and endurance performance. A rise in baseline CBT can be attributed to an onset of flu, infection or even thermoregulatory failure when it becomes excessive. Sites which have been used for measurement of CBT include the pulmonary artery, the esophagus, the rectum and the tympanic membrane. Among them, the tympanic membrane is an attractive measurement site for CBT due to its unobtrusive nature and the ease of measurement it affords, especially when continuous CBT measurements are needed for monitoring, such as in military, occupational and sporting settings. However, to date, there are still polarizing views on the suitability of the tympanic membrane as a CBT site. This paper revisits a number of key unresolved issues in the literature and also presents, for the first time, a benchmark of the middle ear temperature against temperature measurements from other sites. Results from experiments carried out on human and primate subjects will be presented to draw a fresh set of insights against the backdrop of hypotheses and controversies.

  9. An Evaluation of One-Sided and Two-Sided Communication Paradigms on Relaxed-Ordering Interconnect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Khaled Z.; Hargrove, Paul H.; Iancu, Costin

    The Cray Gemini interconnect hardware provides multiple transfer mechanisms and out-of-order message delivery to improve communication throughput. In this paper we quantify the performance of one-sided and two-sided communication paradigms with respect to: 1) the optimal available hardware transfer mechanism, 2) message ordering constraints, 3) per node and per core message concurrency. In addition to using Cray native communication APIs, we use UPC and MPI micro-benchmarks to capture one- and two-sided semantics respectively. Our results indicate that relaxing the message delivery order can improve performance up to 4.6x when compared with strict ordering. When hardware allows it, high-level one-sided programming models can already take advantage of message reordering. Enforcing the ordering semantics of two-sided communication comes with a performance penalty. Furthermore, we argue that exposing out-of-order delivery at the application level is required for the next-generation programming models. Any ordering constraints in the language specifications reduce communication performance for small messages and increase the number of active cores required for peak throughput.
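
    To make the two semantics concrete, the mpi4py sketch below pairs a two-sided Send/Recv with a one-sided Put into an RMA window. It is a minimal two-rank illustration, not the paper's micro-benchmarks, which used Cray native APIs, UPC and MPI on Gemini hardware.

```python
# Minimal two-rank contrast of two-sided vs one-sided transfers
# (run with "mpiexec -n 2 python demo.py"); illustrative only.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n = 1024
recv_buf = np.zeros(n, dtype='d')
payload = np.arange(n, dtype='d')

# Two-sided: a matching send and receive; MPI enforces message ordering.
if rank == 0:
    comm.Send([payload, MPI.DOUBLE], dest=1, tag=0)
elif rank == 1:
    comm.Recv([recv_buf, MPI.DOUBLE], source=0, tag=0)

# One-sided: rank 0 Puts directly into the window exposed by rank 1.
win = MPI.Win.Create(recv_buf, comm=comm)
win.Fence()                                  # open access/exposure epoch
if rank == 0:
    win.Put([payload, MPI.DOUBLE], target_rank=1)
win.Fence()                                  # close epoch; data visible on rank 1
win.Free()
```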

  10. Development of Subspace-based Hybrid Monte Carlo-Deterministic Algorithms for Reactor Physics Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Zhang, Qiong

    2014-05-20

    The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10³ to 10⁵ times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

  11. Interlaboratory comparison of immunohistochemical testing for HER2: results of the 2004 and 2005 College of American Pathologists HER2 Immunohistochemistry Tissue Microarray Survey.

    PubMed

    Fitzgibbons, Patrick L; Murphy, Douglas A; Dorfman, David M; Roche, Patrick C; Tubbs, Raymond R

    2006-10-01

    Correct assessment of human epidermal growth factor receptor 2 (HER2) status is essential in managing patients with invasive breast carcinoma, but few data are available on the accuracy of laboratories performing HER2 testing by immunohistochemistry (IHC). To review the results of the 2004 and 2005 College of American Pathologists HER2 Immunohistochemistry Tissue Microarray Survey. The HER2 survey is designed for laboratories performing immunohistochemical staining and interpretation for HER2. The survey uses tissue microarrays, each consisting of ten 3-mm tissue cores obtained from different invasive breast carcinomas. All cases are also analyzed by fluorescence in situ hybridization. Participants receive 8 tissue microarrays (80 cases) with instructions to perform immunostaining for HER2 using the laboratory's standard procedures. The laboratory interprets the stained slides and returns results to the College of American Pathologists for analysis. In 2004 and 2005, a core was considered "graded" when at least 90% of laboratories agreed on the result--negative (0, 1+) versus positive (2+, 3+). This interlaboratory comparison survey included 102 laboratories in 2004 and 141 laboratories in 2005. Of the 160 cases in both surveys, 111 (69%) achieved 90% consensus (graded). All 43 graded cores scored as IHC-positive were fluorescence in situ hybridization-positive, whereas all but 3 of the 68 IHC-negative graded cores were fluorescence in situ hybridization-negative. Ninety-seven (95%) of 102 laboratories in 2004 and 129 (91%) of 141 laboratories in 2005 correctly scored at least 90% of the graded cores. Performance among laboratories performing HER2 IHC in this tissue microarray-based survey was excellent. Cores found to be IHC-positive or IHC-negative by participant consensus can be used as validated benchmarks for interlaboratory comparison, allowing laboratories to assess their performance and determine if improvements are needed.
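
    The consensus rule described above is simple to state in code: a core is "graded" when at least 90% of participating laboratories agree on negative (0, 1+) versus positive (2+, 3+). The sketch below uses invented scores purely to illustrate the rule.

```python
# Consensus grading rule from the survey (scores below are invented).
def grade_core(scores, threshold=0.90):
    """scores: list of lab IHC scores in {0, 1, 2, 3} for one tissue core."""
    positive = sum(1 for s in scores if s >= 2) / len(scores)   # 2+/3+ calls
    if positive >= threshold:
        return "graded positive"
    if (1.0 - positive) >= threshold:
        return "graded negative"
    return "not graded (no 90% consensus)"

print(grade_core([3, 3, 2, 3, 3, 3, 3, 3, 3, 2]))   # graded positive
print(grade_core([0, 1, 0, 2, 1, 0, 0, 3, 1, 0]))   # not graded
```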

  12. Overview and Current Status of Analyses of Potential LEU Design Concepts for TREAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connaway, H. M.; Kontogeorgakos, D. C.; Papadias, D. D.

    2015-10-01

    Neutronic and thermal-hydraulic analyses have been performed to evaluate the performance of different low-enriched uranium (LEU) fuel design concepts for the conversion of the Transient Reactor Test Facility (TREAT) from its current high-enriched uranium (HEU) fuel. TREAT is an experimental reactor developed to generate high neutron flux transients for the testing of nuclear fuels. The goal of this work was to identify an LEU design which can maintain the performance of the existing HEU core while continuing to operate safely. A wide variety of design options were considered, with a focus on minimizing peak fuel temperatures and optimizing the power coupling between the TREAT core and test samples. Designs were also evaluated to ensure that they provide sufficient reactivity and shutdown margin for each control rod bank. Analyses were performed using the core loading and experiment configuration of historic M8 Power Calibration experiments (M8CAL). The Monte Carlo code MCNP was utilized for steady-state analyses, and transient calculations were performed with the point kinetics code TREKIN. Thermal analyses were performed with the COMSOL multi-physics code. Using the results of this study, a new LEU Baseline design concept is being established, which will be evaluated in detail in a future report.

  13. TomoPhantom, a software package to generate 2D-4D analytical phantoms for CT image reconstruction algorithm benchmarks

    NASA Astrophysics Data System (ADS)

    Kazantsev, Daniil; Pickalov, Valery; Nagella, Srikanth; Pasca, Edoardo; Withers, Philip J.

    2018-01-01

    In the field of computerized tomographic imaging, many novel reconstruction techniques are routinely tested using simplistic numerical phantoms, e.g. the well-known Shepp-Logan phantom. These phantoms cannot sufficiently cover the broad spectrum of applications in CT imaging where, for instance, smooth or piecewise-smooth 3D objects are common. TomoPhantom provides quick access to an external library of modular analytical 2D/3D phantoms with temporal extensions. In TomoPhantom, quite complex phantoms can be built using additive combinations of geometrical objects, such as Gaussians, parabolas, cones, ellipses, rectangles and volumetric extensions of them. Newly designed phantoms are better suited for benchmarking and testing of different image processing techniques. Specifically, tomographic reconstruction algorithms which employ 2D and 3D scanning geometries can be rigorously analyzed using the software. TomoPhantom also provides a capability of obtaining analytical tomographic projections, which further extends the applicability of the software towards more realistic testing, free from the "inverse crime". All core modules of the package are written in the C-OpenMP language and wrappers for Python and MATLAB are provided to enable easy access. Due to the C-based multi-threaded implementation, volumetric phantoms of high spatial resolution can be obtained with computational efficiency.
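
    The additive construction described above can be mimicked in a few lines of NumPy. The sketch below builds a small 2D phantom from a Gaussian and a tilted ellipse; it illustrates the idea only and does not use the TomoPhantom API or its phantom library files.

```python
# Generic additive 2D phantom from analytical objects (illustration only;
# not the TomoPhantom API).
import numpy as np

def make_phantom(n=256):
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    phantom = np.zeros((n, n))
    # smooth component: an off-centre Gaussian blob
    phantom += 0.7 * np.exp(-(((x - 0.2) ** 2 + (y + 0.1) ** 2) / 0.05))
    # piecewise-constant component: a tilted ellipse
    a, b, theta = 0.5, 0.3, np.deg2rad(30.0)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    phantom += 0.3 * (((xr / a) ** 2 + (yr / b) ** 2) <= 1.0)
    return phantom

ph = make_phantom()
print(ph.shape, float(ph.min()), float(ph.max()))
```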

  14. OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE - A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon Tibbitts; Arnis Judzis

    2001-04-01

    This document details the progress to date on the OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE -- A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING contract for the quarter starting January 2001 through March 2001. Accomplishments to date include the following: (1) On January 9th of 2001, details of the Mud Hammer Drilling Performance Testing Project were presented at a "kick-off" meeting held in Morgantown. (2) A preliminary test program was formulated and prepared for presentation at a meeting of the advisory board in Houston on the 8th of February. (3) The meeting was held with the advisory board reviewing the test program in detail. (4) Consensus was achieved and the approved test program was initiated after thorough discussion. (5) This new program outlined the details of the drilling tests as well as scheduling the test program for the weeks of 14th and 21st of May 2001. (6) All the tasks were initiated for a completion to coincide with the test schedule. (7) By the end of March the hardware had been designed and the majority was either being fabricated or completed. (8) The rock was received and cored into cylinders.

  15. Scalable Metropolis Monte Carlo for simulation of hard shapes

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Eric Irrgang, M.; Glotzer, Sharon C.

    2016-07-01

    We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.
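
    To make the Metropolis step for hard particles concrete, the sketch below runs one sweep of single-particle trial moves for hard disks in a periodic 2D box, rejecting any move that creates an overlap. It illustrates only the accept/reject logic; HPMC parallelizes trial moves on a checkerboard across CPUs/GPUs and supports the many shape classes listed above.

```python
# Tiny serial hard-disk Monte Carlo sweep (conceptual sketch of the
# accept/reject-on-overlap logic only).
import numpy as np

rng = np.random.default_rng(0)
L, n, diameter, dmax = 20.0, 100, 1.0, 0.3
pos = rng.uniform(0.0, L, size=(n, 2))          # random start (may overlap)

def overlaps(i, trial):
    dr = pos - trial
    dr -= L * np.round(dr / L)                  # minimum-image convention
    r2 = np.sum(dr * dr, axis=1)
    r2[i] = np.inf                              # ignore the moved particle
    return np.any(r2 < diameter ** 2)

accepted = 0
for _ in range(n):                              # one sweep = n trial moves
    i = rng.integers(n)
    trial = (pos[i] + rng.uniform(-dmax, dmax, size=2)) % L
    if not overlaps(i, trial):                  # hard particles: reject overlaps
        pos[i] = trial
        accepted += 1
print(f"acceptance ratio: {accepted / n:.2f}")
```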

  16. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking, and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts, a foundation and infrastructure have been built that allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  17. An Experimental Study of Characteristic Combustion-Driven Flow for CFD Validation

    NASA Technical Reports Server (NTRS)

    Santoro, Robert J.

    1997-01-01

    A series of uni-element rocket injector studies were completed to provide benchmark quality data needed to validate computational fluid dynamic models. A shear coaxial injector geometry was selected as the primary injector for study using gaseous hydrogen/oxygen and gaseous hydrogen/liquid oxygen propellants. Emphasis was placed on the use of nonintrusive diagnostic techniques to characterize the flowfields inside an optically-accessible rocket chamber. Measurements of the velocity and species fields were obtained using laser velocimetry and Raman spectroscopy, respectively. Qualitative flame shape information was also obtained using laser-induced fluorescence excited from OH radicals and laser light scattering studies of aluminum oxide particle seeded combusting flows. The gaseous hydrogen/liquid oxygen propellant studies for the shear coaxial injector focused on breakup mechanisms associated with the liquid oxygen jet under subcritical pressure conditions. Laser sheet illumination techniques were used to visualize the core region of the jet and a Phase Doppler Particle Analyzer was utilized for drop velocity, size and size distribution characterization. The results of these studies indicated that the shear coaxial geometry configuration was a relatively poor injector in terms of mixing. The oxygen core was observed to extend well downstream of the injector and a significant fraction of the mixing occurred in the near nozzle region where measurements were not possible to obtain. Detailed velocity and species measurements were obtained to allow CFD model validation and this set of benchmark data represents the most comprehensive data set available to date. As an extension of the investigation, a series of gas/gas injector studies were conducted in support of the X-33 Reusable Launch Vehicle program. A Gas/Gas Injector Technology team was formed consisting of the Marshall Space Flight Center, the NASA Lewis Research Center, Rocketdyne and Penn State. Injector geometries studied under this task included shear and swirl coaxial configurations as well as an impinging jet injector.

  18. An Experimental Study of Characteristic Combustion-Driven Flow for CFD Validation

    NASA Technical Reports Server (NTRS)

    Santoro, Robert J.

    1997-01-01

    A series of uni-element rocket injector studies were completed to provide benchmark quality data needed to validate computational fluid dynamic models. A shear coaxial injector geometry was selected as the primary injector for study using gaseous hydrogen/oxygen and gaseous hydrogen/liquid oxygen propellants. Emphasis was placed on the use of non-intrusive diagnostic techniques to characterize the flowfields inside an optically-accessible rocket chamber. Measurements of the velocity and species fields were obtained using laser velocimetry and Raman spectroscopy, respectively. Qualitative flame shape information was also obtained using laser-induced fluorescence excited from OH radicals and laser light scattering studies of aluminum oxide particle seeded combusting flows. The gaseous hydrogen/liquid oxygen propellant studies for the shear coaxial injector focused on breakup mechanisms associated with the liquid oxygen jet under sub-critical pressure conditions. Laser sheet illumination techniques were used to visualize the core region of the jet and a Phase Doppler Particle Analyzer was utilized for drop velocity, size and size distribution characterization. The results of these studies indicated that the shear coaxial geometry configuration was a relatively poor injector in terms of mixing. The oxygen core was observed to extend well downstream of the injector and a significant fraction of the mixing occurred in the near nozzle region where measurements were not possible to obtain. Detailed velocity and species measurements were obtained to allow CFD model validation and this set of benchmark data represents the most comprehensive data set available to date. As an extension of the investigation, a series of gas/gas injector studies were conducted in support of the X-33 Reusable Launch Vehicle program. A Gas/Gas Injector Technology team was formed consisting of the Marshall Space Flight Center, the NASA Lewis Research Center, Rocketdyne and Penn State. Injector geometries studied under this task included shear and swirl coaxial configurations as well as an impinging jet injector.

  19. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1996 revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Tsao, C.L.

    1996-06-01

    This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. This report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. Also included are updates of benchmark values where appropriate, new benchmark values, replacement of secondary sources with primary sources, and more complete documentation of the sources and derivation of all values.

  20. The impact of short prehospital times on trauma center performance benchmarking: An ecologic study.

    PubMed

    Byrne, James P; Mann, N Clay; Hoeft, Christopher J; Buick, Jason; Karanicolas, Paul; Rizoli, Sandro; Hunt, John P; Nathens, Avery B

    2016-04-01

    Emergency medical service (EMS) prehospital times vary between regions, yet the impact of local prehospital times on trauma center (TC) performance is unknown. To inform external benchmarking efforts, we explored the impact of EMS prehospital times on the risk-adjusted rate of emergency department (ED) death and overall hospital mortality at urban TCs across the United States. We used a novel ecologic study design, linking EMS data from the National EMS Information System to TCs participating in the American College of Surgeons' Trauma Quality Improvement Program (TQIP) by destination zip code. This approach provided EMS times for populations of injured patients transported to TQIP centers. We defined the exposure of interest as the 90th percentile total prehospital time (PHT) for each TC. TCs were then stratified by PHT quartile. Analyses were limited to adult patients with severe blunt or penetrating trauma, transported directly by land to urban TQIP centers. Random-intercept multilevel modeling was used to evaluate the risk-adjusted relationship between PHT quartile and the outcomes of ED death and overall hospital mortality. During the study period, 119,740 patients met inclusion criteria at 113 TCs. ED death occurred in 1% of patients, and overall mortality was 7.2%. Across all centers, the median PHT was 61 minutes (interquartile range, 53-71 minutes). After risk adjustment, TCs in regions with the shortest quartile of PHTs (<53 minutes) had significantly greater odds of ED death compared with those with the longest PHTs (odds ratio, 2.00; 95% confidence interval, 1.43-2.78). However, there was no association between PHT and overall TC mortality. At urban TCs, local EMS prehospital times are a significant predictor of ED death. However, no relationship exists between prehospital time and overall TC risk-adjusted mortality. Therefore, there is no evidence for the inclusion of EMS prehospital time in external benchmarking analyses.
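
    A hedged sketch of the exposure construction described above: compute each centre's 90th percentile prehospital time from the linked EMS data, stratify centres into quartiles, and relate quartile to ED death. File and column names are invented, and a plain logistic regression stands in for the study's random-intercept multilevel model.

```python
# Sketch of the PHT-quartile exposure and a simplified outcome model.
# File/column names are invented; an ordinary logistic regression replaces
# the random-intercept multilevel model used in the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm

ems = pd.read_csv("ems_runs.csv")        # one row per EMS transport
pts = pd.read_csv("tqip_patients.csv")   # one row per trauma patient

# Exposure: 90th percentile total prehospital time per trauma center's region.
pht = ems.groupby("center_id")["total_prehospital_min"].quantile(0.9)
quartile = pd.qcut(pht, 4, labels=["Q1", "Q2", "Q3", "Q4"])
quartile.name = "pht_quartile"

# Outcome model: ED death vs PHT quartile (Q1 = reference).
df = pts.join(quartile, on="center_id")
X = sm.add_constant(pd.get_dummies(df["pht_quartile"], drop_first=True)
                    .astype(float))
fit = sm.Logit(df["ed_death"].astype(float), X).fit()
print(np.exp(fit.params).round(2))       # odds ratios relative to Q1
```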
