Sample records for medium benchmark suite

  1. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.D.; Kornreich, D.E.

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation as modified in the second year renewal application includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) single medium searchlight problem (SLP) and (b) two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) point source in arbitrary geometry, (b) single medium SLP, and (c) two-adjacent half-space SLP. Task 3 on code verification includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.

  2. MoMaS reactive transport benchmark using PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Park, H.

    2017-12-01

    The MoMaS benchmark was developed to enhance numerical simulation capability for reactive transport modeling in porous media. The benchmark, published in late September 2009, is not taken from a real chemical system, but provides realistic and numerically challenging tests. PFLOTRAN is a state-of-the-art massively parallel subsurface flow and reactive transport code that is being used in multiple nuclear waste repository projects at Sandia National Laboratories, including the Waste Isolation Pilot Plant and Used Fuel Disposition. The MoMaS benchmark has three independent tests with easy, medium, and hard chemical complexity. This paper demonstrates how PFLOTRAN is applied to this benchmark exercise and shows results of the easy benchmark test case, which includes mixing of aqueous components and surface complexation. The surface complexations consist of monodentate and bidentate reactions, which introduce difficulty in defining the selectivity coefficient if the reaction applies to a bulk reference volume; the selectivity coefficient becomes porosity dependent for the bidentate reaction in heterogeneous porous media. The benchmark is solved by PFLOTRAN with minimal modification to address this issue, and unit conversions were made to suit PFLOTRAN.

  3. BioPreDyn-bench: a suite of benchmark problems for dynamic modelling in systems biology.

    PubMed

    Villaverde, Alejandro F; Henriques, David; Smallbone, Kieran; Bongard, Sophia; Schmid, Joachim; Cicin-Sain, Damjan; Crombach, Anton; Saez-Rodriguez, Julio; Mauch, Klaus; Balsa-Canto, Eva; Mendes, Pedro; Jaeger, Johannes; Banga, Julio R

    2015-02-20

    Dynamic modelling is one of the cornerstones of systems biology. Many research efforts are currently being invested in the development and exploitation of large-scale kinetic models. The associated problems of parameter estimation (model calibration) and optimal experimental design are particularly challenging. The community has already developed many methods and software packages which aim to facilitate these tasks. However, there is a lack of suitable benchmark problems which allow a fair and systematic evaluation and comparison of these contributions. Here we present BioPreDyn-bench, a set of challenging parameter estimation problems which aspire to serve as reference test cases in this area. This set comprises six problems including medium- and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The level of description includes metabolism, transcription, signal transduction, and development. For each problem we provide (i) a basic description and formulation, (ii) implementations ready-to-run in several formats, (iii) computational results obtained with specific solvers, and (iv) a basic analysis and interpretation. This suite of benchmark problems can be readily used to evaluate and compare parameter estimation methods. Further, it can also be used to build test problems for sensitivity and identifiability analysis, model reduction and optimal experimental design methods. The suite, including codes and documentation, can be freely downloaded from the BioPreDyn-bench website, https://sites.google.com/site/biopredynbenchmarks/.
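
    A minimal sketch (in Python, not part of BioPreDyn-bench) of the kind of calibration task the suite poses: fit the rate constants of a toy two-state kinetic ODE model to noisy synthetic observations. The model, parameter values, and noise level are invented for illustration.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import least_squares

      def rhs(t, y, k1, k2):
          a, b = y
          return [-k1 * a, k1 * a - k2 * b]   # A -> B -> degradation

      t_obs = np.linspace(0.0, 10.0, 25)
      true_k = (0.8, 0.3)
      sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], t_eval=t_obs, args=true_k)
      rng = np.random.default_rng(0)
      y_obs = sol.y + rng.normal(scale=0.02, size=sol.y.shape)   # synthetic "data"

      def residuals(k):
          fit = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], t_eval=t_obs, args=tuple(k))
          return (fit.y - y_obs).ravel()

      # model calibration posed as a bounded nonlinear least-squares problem
      est = least_squares(residuals, x0=[0.5, 0.5], bounds=(0.0, 5.0))
      print("estimated rate constants:", est.x)

    The actual benchmark problems are far larger, but they follow the same pattern: a dynamic model, observed data, and an objective minimized over bounded parameters.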

  4. Analysis of a benchmark suite to evaluate mixed numeric and symbolic processing

    NASA Technical Reports Server (NTRS)

    Raghavan, Bharathi; Galant, David

    1992-01-01

    The suite of programs that formed the benchmark for a proposed advanced computer is described and analyzed. The features of the processor and its operating system that are tested by the benchmark are discussed. The computer codes and the supporting data for the analysis are given as appendices.

  5. GraphBench

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R.; Hong, Seokyong; Lee, Sangkeun

    2016-06-01

    GraphBench is a benchmark suite for graph pattern mining and graph analysis systems. The benchmark suite is a significant addition to conducting apples-to-apples comparisons of graph analysis software (databases, in-memory tools, triple stores, etc.).

  6. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    PubMed

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
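
    A hedged sketch of how such a resource is typically driven from Python; it assumes the pmlb package's fetch_data() helper, its convention of a 'target' column, network access to download datasets, and example dataset names that may differ from the actual collection.

      from pmlb import fetch_data                      # assumed helper from the pmlb package
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      for name in ["iris", "wine_recognition"]:        # dataset names are assumptions
          df = fetch_data(name)                        # pandas DataFrame with a 'target' column
          X, y = df.drop(columns="target"), df["target"]
          for clf in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
              score = cross_val_score(clf, X, y, cv=5).mean()
              print(f"{name:20s} {type(clf).__name__:25s} {score:.3f}")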

  7. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; VanderWijngaart, Rob F.

    2003-01-01

    We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.
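
    An illustrative sketch (not NPB-MZ itself) of the coarse-grain pattern described above: each MPI rank advances its own zone independently and exchanges boundary values with its neighbours after every time step. The zone update is a trivial stand-in for the real solvers, and only the MPI level of the hybrid scheme is shown; it assumes mpi4py and would be launched with, e.g., mpirun -n 4 python zones.py.

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      zone = np.full(10, float(rank))       # this rank's "zone", with a ghost cell at each end

      for step in range(5):
          # independent local update (placeholder for the BT/SP/LU solve)
          zone[1:-1] += 0.1 * (zone[:-2] - 2.0 * zone[1:-1] + zone[2:])
          left, right = (rank - 1) % size, (rank + 1) % size
          # exchange boundary values with neighbouring zones after the time step
          zone[0] = comm.sendrecv(zone[-2], dest=right, source=left)
          zone[-1] = comm.sendrecv(zone[1], dest=left, source=right)

      print(rank, zone[:3])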

  8. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.

  9. Evaluation of the ACEC Benchmark Suite for Real-Time Applications

    DTIC Science & Technology

    1990-07-23

    The ACEC 1.0 benchmark suite was analyzed with respect to its measuring of Ada real-time features such as tasking, memory management, input/output, scheduling...and delay statement, Chapter 13 features, pragmas, interrupt handling, subprogram overhead, numeric computations, etc. For most of the features that...meant for programming real-time systems. The ACEC benchmarks have been analyzed extensively with respect to their measuring of Ada real-time features

  10. Spherical harmonic results for the 3D Kobayashi Benchmark suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, P N; Chang, B; Hanebutte, U R

    1999-03-02

    Spherical harmonic solutions are presented for the Kobayashi benchmark suite. The results were obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL.

  11. Algorithm and Architecture Independent Benchmarking with SEAK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  12. The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool

    EPA Pesticide Factsheets

    Provides quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model to produce benchmark dose and benchmark dose lower bound estimates.
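
    A hedged illustration of the model-averaging idea, not the EPA tool itself: two already-fitted quantal dose-response models are combined with Akaike weights and the benchmark dose is read off the averaged extra-risk curve. The model forms, parameter values, AIC values, and dose range below are hypothetical placeholders.

      import numpy as np
      from scipy.optimize import brentq
      from scipy.stats import norm

      def logistic(d, a, b):                 # P(response | dose d)
          return 1.0 / (1.0 + np.exp(-(a + b * d)))

      def log_probit(d, a, b):
          return norm.cdf(a + b * np.log(np.maximum(d, 1e-12)))

      fits = [                               # (model, params, AIC) -- placeholders, not real fit output
          (logistic, (-2.0, 0.15), 112.3),
          (log_probit, (-1.5, 0.60), 110.8),
      ]

      aic = np.array([a for _, _, a in fits])
      w = np.exp(-0.5 * (aic - aic.min()))
      w /= w.sum()                           # Akaike weights

      def averaged_extra_risk(d):
          p0 = sum(wi * m(0.0, *p) for wi, (m, p, _) in zip(w, fits))
          pd = sum(wi * m(d, *p) for wi, (m, p, _) in zip(w, fits))
          return (pd - p0) / (1.0 - p0)      # extra risk over background

      bmr = 0.10                             # 10% benchmark response
      bmd = brentq(lambda d: averaged_extra_risk(d) - bmr, 1e-6, 100.0)
      print(f"model-averaged BMD at {bmr:.0%} extra risk: {bmd:.2f}")

    The real tool also fits the candidate models to the dichotomous data and computes a lower confidence bound (BMDL); this sketch shows only the averaging and BMD lookup step.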

  13. EVA Health and Human Performance Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness-for-duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  14. PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality

    NASA Astrophysics Data System (ADS)

    Hammond, G. E.; Frederick, J. M.

    2016-12-01

    In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification determines whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively-parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations given by Oberkampf and Trucano (2007), which describe four essential elements in high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information. Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References: Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
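
    A minimal sketch of the verification pattern the abstract describes: run a numerical model, evaluate a closed-form analytical solution on the same grid, and check an error norm against a tolerance. The 1D diffusion problem, grid, and acceptance criterion are illustrative and are not one of PFLOTRAN's QA tests.

      import numpy as np
      from scipy.special import erfc

      D, L, T0, t_end = 1.0e-6, 1.0, 1.0, 3600.0    # diffusivity, domain length, boundary value, end time
      nx = 201
      x = np.linspace(0.0, L, nx)
      dx = x[1] - x[0]
      dt = 0.4 * dx**2 / D                          # explicit stability limit
      u = np.zeros(nx)
      u[0] = T0                                     # fixed boundary at x = 0, zero initial condition

      t = 0.0
      while t < t_end:
          u[1:-1] += D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
          t += dt

      exact = T0 * erfc(x / (2.0 * np.sqrt(D * t)))  # semi-infinite-domain analytical solution
      err = np.max(np.abs(u - exact))
      print(f"max abs error = {err:.2e}")
      assert err < 2.0e-2, "verification criterion not met"   # illustrative tolerance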

  15. FireHose Streaming Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Karl; Plimpton, Steve

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
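
    A toy Python illustration of the generator/analytic split described above; it is not one of the actual FireHose benchmarks. The generator emits keyed datums in a fixed text format, and the analytic flags datums whose values fall outside an expected range.

      import random

      def generator(n, anomaly_rate=0.001):
          """Yield datums as 'key,value' strings, occasionally injecting anomalies."""
          for _ in range(n):
              key = random.randrange(10_000)
              value = random.gauss(0.0, 1.0)
              if random.random() < anomaly_rate:
                  value += 50.0                       # injected anomaly
              yield f"{key},{value:.4f}"

      def analytic(stream, threshold=10.0):
          """Read the stream and report datums with out-of-range values."""
          flagged = []
          for datum in stream:
              key, value = datum.split(",")
              if abs(float(value)) > threshold:
                  flagged.append((int(key), float(value)))
          return flagged

      anomalies = analytic(generator(100_000))
      print(f"flagged {len(anomalies)} anomalous datums")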

  16. Implementation and verification of global optimization benchmark problems

    NASA Astrophysics Data System (ADS)

    Posypkin, Mikhail; Usov, Alexander

    2017-12-01

    The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, and the interval estimates of a function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
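
    A hedged sketch of the verification idea in Python rather than C++: for a benchmark with a documented box and global minimiser, check that the minimiser lies inside the box, that the documented objective value is reproduced, and that a finite-difference gradient vanishes there. The Rosenbrock function stands in for the benchmarks; it is not taken from the suite.

      import numpy as np

      def rosenbrock(x):
          return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

      def fd_gradient(f, x, h=1e-6):
          g = np.zeros_like(x)
          for i in range(len(x)):
              e = np.zeros_like(x)
              e[i] = h
              g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
          return g

      box_lo, box_hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
      x_star, f_star = np.array([1.0, 1.0]), 0.0      # documented minimiser and optimal value

      assert np.all((box_lo <= x_star) & (x_star <= box_hi)), "minimiser outside box"
      assert abs(rosenbrock(x_star) - f_star) < 1e-12, "documented optimal value is wrong"
      assert np.linalg.norm(fd_gradient(rosenbrock, x_star)) < 1e-4, "gradient not zero at minimiser"
      print("benchmark description passes the basic checks")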

  17. The Suite for Embedded Applications and Kernels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-05-10

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed SEAK, a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions to these bottlenecks; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  18. EVA Human Health and Performance Benchmarking Study Overview and Development of a Microgravity Protocol

    NASA Technical Reports Server (NTRS)

    Norcross, Jason; Jarvis, Sarah; Bekdash, Omar; Cupples, Scott; Abercromby, Andrew

    2017-01-01

    The primary objective of this study is to develop a protocol to reliably characterize human health and performance metrics for individuals working inside various EVA suits under realistic spaceflight conditions. Expected results and methodologies developed during this study will provide the baseline benchmarking data and protocols with which future EVA suits and suit configurations (e.g., varied pressure, mass, center of gravity [CG]) and different test subject populations (e.g., deconditioned crewmembers) may be reliably assessed and compared. Results may also be used, in conjunction with subsequent testing, to inform fitness-for-duty standards, as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  19. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  20. Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine

    2004-01-01

    We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.

  1. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  2. Machine characterization and benchmark performance prediction

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.

    1988-01-01

    From runs of standard benchmarks or benchmark suites, it is not possible to characterize the machine nor to predict the run time of other benchmarks which have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone and Sieve of Eratosthenes.
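
    A compact sketch of the prediction scheme described above: the machine characterization gives a time per source-language operation, the program characterization gives execution counts for the same operations, and the predicted run time is their inner product. The operation names and all numbers are invented for illustration.

      machine_profile = {        # seconds per operation (hypothetical machine analyzer output)
          "flop_add": 2.0e-9,
          "flop_mul": 3.0e-9,
          "mem_load": 5.0e-9,
          "branch": 1.0e-9,
      }
      program_profile = {        # executed operation counts (hypothetical program analyzer output)
          "flop_add": 4.0e9,
          "flop_mul": 3.5e9,
          "mem_load": 6.0e9,
          "branch": 1.2e9,
      }

      predicted = sum(program_profile[op] * machine_profile[op] for op in program_profile)
      print(f"predicted run time: {predicted:.2f} s")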

  3. Revel8or: Model Driven Capacity Planning Tool Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Liming; Liu, Yan; Bui, Ngoc B.

    2007-05-31

    Designing complex multi-tier applications that must meet strict performance requirements is a challenging software engineering problem. Ideally, the application architect could derive accurate performance predictions early in the project life-cycle, leveraging initial application design-level models and a description of the target software and hardware platforms. To this end, we have developed a capacity planning tool suite for component-based applications, called Revel8or. The tool adheres to the model driven development paradigm and supports benchmarking and performance prediction for J2EE, .Net and Web services platforms. The suite is composed of three different tools: MDAPerf, MDABench and DSLBench. MDAPerf allows annotation of design diagrams and derives performance analysis models. MDABench allows a customized benchmark application to be modeled in the UML 2.0 Testing Profile and automatically generates a deployable application, with measurement automatically conducted. DSLBench allows the same benchmark modeling and generation to be conducted using a simple performance engineering Domain Specific Language (DSL) in Microsoft Visual Studio. DSLBench integrates with Visual Studio and reuses its load testing infrastructure. Together, the tool suite can assist capacity planning across platforms in an automated fashion.

  4. Data Race Benchmark Collection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Chunhua; Lin, Pei-Hung; Asplund, Joshua

    2017-03-21

    This project is a benchmark suite of OpenMP parallel codes that have been checked for data races. The programs are marked to show which do and do not have races. This allows them to be leveraged while testing and developing race detection tools.

  5. ELAPSE - NASA AMES LISP AND ADA BENCHMARK SUITE: EFFICIENCY OF LISP AND ADA PROCESSING - A SYSTEM EVALUATION

    NASA Technical Reports Server (NTRS)

    Davis, G. J.

    1994-01-01

    One area of research of the Information Sciences Division at NASA Ames Research Center is devoted to the analysis and enhancement of processors and advanced computer architectures, specifically in support of automation and robotic systems. To compare systems' abilities to efficiently process Lisp and Ada, scientists at Ames Research Center have developed a suite of non-parallel benchmarks called ELAPSE. The benchmark suite was designed to test a single computer's efficiency as well as alternate machine comparisons on Lisp, and/or Ada languages. ELAPSE tests the efficiency with which a machine can execute the various routines in each environment. The sample routines are based on numeric and symbolic manipulations and include two-dimensional fast Fourier transformations, Cholesky decomposition and substitution, Gaussian elimination, high-level data processing, and symbol-list references. Also included is a routine based on a Bayesian classification program sorting data into optimized groups. The ELAPSE benchmarks are available for any computer with a validated Ada compiler and/or Common Lisp system. Of the 18 routines that comprise ELAPSE, provided within this package are 14 developed or translated at Ames. The others are readily available through literature. The benchmark that requires the most memory is CHOLESKY.ADA. Under VAX/VMS, CHOLESKY.ADA requires 760K of main memory. ELAPSE is available on either two 5.25 inch 360K MS-DOS format diskettes (standard distribution) or a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The ELAPSE benchmarks were written in 1990. VAX and VMS are trademarks of Digital Equipment Corporation. MS-DOS is a registered trademark of Microsoft Corporation.

  6. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.

  7. Benchmarking gate-based quantum computers

    NASA Astrophysics Data System (ADS)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
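
    A small numerical sketch of why identity circuits are sensitive to gate errors: compose a slightly miscalibrated single-qubit rotation with its ideal inverse and see how far the product is from the identity. This is a plain linear-algebra illustration with an assumed 1% over-rotation, not the benchmarking procedure run on the IBM Quantum Experience.

      import numpy as np

      def rx(theta):
          """Single-qubit rotation about the X axis."""
          c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
          return np.array([[c, -1j * s], [-1j * s, c]])

      theta, over_rotation = np.pi / 2.0, 0.01                  # assumed calibration error
      circuit = rx(-theta) @ rx(theta * (1.0 + over_rotation))  # ideally the identity

      # survival probability of |0> after the nominally trivial circuit
      p0 = abs(circuit[0, 0])**2
      print(f"deviation from ideal survival probability: {1.0 - p0:.2e}")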

  8. A note on bound constraints handling for the IEEE CEC'05 benchmark function suite.

    PubMed

    Liao, Tianjun; Molina, Daniel; de Oca, Marco A Montes; Stützle, Thomas

    2014-01-01

    The benchmark functions and some of the algorithms proposed for the special session on real parameter optimization of the 2005 IEEE Congress on Evolutionary Computation (CEC'05) have played and still play an important role in the assessment of the state of the art in continuous optimization. In this article, we show that if bound constraints are not enforced for the final reported solutions, state-of-the-art algorithms produce infeasible best candidate solutions for the majority of functions of the IEEE CEC'05 benchmark function suite. This occurs even though the optima of the CEC'05 functions are within the specified bounds. This phenomenon has important implications on algorithm comparisons, and therefore on algorithm designs. This article's goal is to draw the attention of the community to the fact that some authors might have drawn wrong conclusions from experiments using the CEC'05 problems.
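
    A minimal sketch of the feasibility check the article advocates: verify that reported best solutions respect the benchmark's bound constraints. The box limits and candidate solutions below are placeholders, not CEC'05 data.

      import numpy as np

      lower, upper = -100.0, 100.0                 # assumed search box
      reported = np.array([
          [12.3, -99.9, 54.0],
          [101.7, 3.2, -5.5],                      # first component violates the box
      ])

      violations = (reported < lower) | (reported > upper)
      for i, row in enumerate(violations):
          if row.any():
              print(f"solution {i} is infeasible in components {np.flatnonzero(row).tolist()}")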

  9. Processor Emulator with Benchmark Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  10. Benefits of e-Learning Benchmarks: Australian Case Studies

    ERIC Educational Resources Information Center

    Choy, Sarojni

    2007-01-01

    In 2004 the Australian Flexible Learning Framework developed a suite of quantitative and qualitative indicators on the uptake, use and impact of e-learning in the Vocational Education and Training (VET) sector. These indicators were used to design items for a survey to gather quantitative data for benchmarking. A series of four surveys gathered…

  11. Benchmark Lisp And Ada Programs

    NASA Technical Reports Server (NTRS)

    Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheeseman, P.; Taylor, W.

    1992-01-01

    Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing efficiency of computer processing via Lisp vs. Ada; comparing efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests the efficiency with which a computer executes routines in each language. Available for computer equipped with validated Ada compiler and/or Common Lisp system.

  12. Evaluation of Graph Pattern Matching Workloads in Graph Analysis Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Seokyong; Lee, Sangkeun; Lim, Seung-Hwan

    2016-01-01

    Graph analysis has emerged as a powerful method for data scientists to represent, integrate, query, and explore heterogeneous data sources. As a result, graph data management and mining became a popular area of research, and led to the development of a plethora of systems in recent years. Unfortunately, the number of emerging graph analysis systems and the wide range of applications, coupled with a lack of apples-to-apples comparisons, make it difficult to understand the trade-offs between different systems and the graph operations for which they are designed. A fair comparison of these systems is a challenging task for the following reasons: multiple data models, non-standardized serialization formats, various query interfaces to users, and diverse environments they operate in. To address these key challenges, in this paper we present a new benchmark suite by extending the Lehigh University Benchmark (LUBM) to cover the most common capabilities of various graph analysis systems. We provide the design process of the benchmark, which generalizes the workflow for data scientists to conduct the desired graph analysis on different graph analysis systems. Equipped with this extended benchmark suite, we present a performance comparison for nine subgraph pattern retrieval operations over six graph analysis systems, namely NetworkX, Neo4j, Jena, Titan, GraphX, and uRiKA. Through the proposed benchmark suite, this study reveals both quantitative and qualitative findings in (1) implications in loading data into each system; (2) challenges in describing graph patterns for each query interface; and (3) different sensitivity of each system to query selectivity. We envision that this study will pave the road for: (i) data scientists to select the suitable graph analysis systems, and (ii) data management system designers to advance graph analysis systems.
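
    A hedged sketch in the spirit of the subgraph pattern retrieval workloads, using NetworkX (one of the six systems compared). The tiny graph and the triangle pattern are invented for illustration and are not the extended-LUBM workload itself.

      import networkx as nx
      from networkx.algorithms import isomorphism

      G = nx.Graph()
      G.add_edges_from([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 3)])

      pattern = nx.Graph()
      pattern.add_edges_from([("a", "b"), ("b", "c"), ("c", "a")])   # triangle pattern

      matcher = isomorphism.GraphMatcher(G, pattern)
      matches = list(matcher.subgraph_isomorphisms_iter())
      print(f"found {len(matches)} pattern embeddings, e.g. {matches[0]}")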

  13. Using benchmarks for radiation testing of microprocessors and FPGAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather; Robinson, William H.; Rech, Paolo

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. As a result, we describe the development process and report neutron test data for the hardware and software benchmarks.

  14. Using benchmarks for radiation testing of microprocessors and FPGAs

    DOE PAGES

    Quinn, Heather; Robinson, William H.; Rech, Paolo; ...

    2015-12-17

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. As a result, we describe the development process and report neutron test data for the hardware and software benchmarks.

  15. NAS Grid Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.
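
    A schematic sketch of the structure described above: a small data flow graph whose nodes stand in for NPB-style tasks and whose edges forward data used to initialize downstream tasks. The three-node chain and the placeholder "work" are invented; a real NGB implementation launches the actual NPB solvers in a grid environment and reports the turnaround time.

      from collections import defaultdict

      graph = {"BT": ["SP"], "SP": ["LU"], "LU": []}     # edges: producer -> consumer

      def run_task(name, inputs):
          """Stand-in for an NPB task: combine the inputs and produce output data."""
          seed = sum(inputs) if inputs else 1.0
          return seed * (1.0 + len(name))                # arbitrary placeholder computation

      pending_inputs = defaultdict(list)
      results = {}
      for node in ["BT", "SP", "LU"]:                    # a valid topological order
          results[node] = run_task(node, pending_inputs[node])
          for consumer in graph[node]:
              pending_inputs[consumer].append(results[node])

      print("node outputs:", results)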

  16. Fourier-Accelerated Nodal Solvers (FANS) for homogenization problems

    NASA Astrophysics Data System (ADS)

    Leuschner, Matthias; Fritzen, Felix

    2017-11-01

    Fourier-based homogenization schemes are useful to analyze heterogeneous microstructures represented by 2D or 3D image data. These iterative schemes involve discrete periodic convolutions with global ansatz functions (mostly fundamental solutions). The convolutions are efficiently computed using the fast Fourier transform. FANS operates on nodal variables on regular grids and converges to finite element solutions. Compared to established Fourier-based methods, the number of convolutions is reduced by FANS. Additionally, fast iterations are possible by assembling the stiffness matrix. Due to the related memory requirement, the method is best suited for medium-sized problems. A comparative study involving established Fourier-based homogenization schemes is conducted for a thermal benchmark problem with a closed-form solution. Detailed technical and algorithmic descriptions are given for all methods considered in the comparison. Furthermore, many numerical examples focusing on convergence properties for both thermal and mechanical problems, including also plasticity, are presented.
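
    A minimal sketch of the core kernel mentioned above: a discrete periodic convolution on a regular grid evaluated via the fast Fourier transform. The field and averaging kernel are arbitrary; this is not the FANS solver itself.

      import numpy as np

      n = 64
      field = np.random.default_rng(1).standard_normal((n, n))
      kernel = np.zeros((n, n))
      kernel[:3, :3] = 1.0 / 9.0                  # simple periodic averaging kernel

      # periodic convolution: multiply in Fourier space, then transform back
      conv = np.real(np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(kernel)))
      print("convolved field mean:", conv.mean())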

  17. Toward Scalable Benchmarks for Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1996-01-01

    This paper presents guidelines for the design of a mass storage system benchmark suite, along with preliminary suggestions for programs to be included. The benchmarks will measure both peak and sustained performance of the system as well as predicting both short- and long-term behavior. These benchmarks should be both portable and scalable so they may be used on storage systems from tens of gigabytes to petabytes or more. By developing a standard set of benchmarks that reflect real user workload, we hope to encourage system designers and users to publish performance figures that can be compared with those of other systems. This will allow users to choose the system that best meets their needs and give designers a tool with which they can measure the performance effects of improvements to their systems.
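
    A rough sketch of the peak-versus-sustained distinction discussed above: time a short burst of writes and a longer run of writes to the same file and compare throughput. The block size, block counts, and temporary location are arbitrary choices, and a real storage benchmark would also control caching and access patterns.

      import os
      import tempfile
      import time

      block = os.urandom(4 * 1024 * 1024)              # 4 MiB block

      def write_throughput(path, n_blocks):
          start = time.perf_counter()
          with open(path, "wb") as f:
              for _ in range(n_blocks):
                  f.write(block)
              f.flush()
              os.fsync(f.fileno())
          elapsed = time.perf_counter() - start
          return n_blocks * len(block) / elapsed / 1.0e6   # MB/s

      with tempfile.TemporaryDirectory() as d:
          path = os.path.join(d, "bench.dat")
          print(f"burst    : {write_throughput(path, 16):8.1f} MB/s")
          print(f"sustained: {write_throughput(path, 256):8.1f} MB/s")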

  18. Clomp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gyllenhaal, J.; Bronevetsky, G.

    2007-05-25

    CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading (like NUMA memory layouts, memory contention, cache effects, etc.) in order to influence future system design. Current best-in-class implementations of OpenMP have overheads at least ten times larger than is required by many of our applications for effective use of OpenMP. This benchmark shows the significant negative performance impact of these relatively large overheads and of other thread effects. The CLOMP benchmark is highly configurable to allow a variety of problem sizes and threading effects to be studied, and it carefully checks its results to catch many common threading errors. This benchmark is expected to be included as part of the Sequoia Benchmark suite for the Sequoia procurement.

  19. A suite of benchmark and challenge problems for enhanced geothermal systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark; Fu, Pengcheng; McClure, Mark

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research: stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners. We present the suite of benchmark and challenge problems developed for the GTO-CCS, providing problem descriptions and sample solutions.

  20. Evaluation of CHO Benchmarks on the Arria 10 FPGA using Intel FPGA SDK for OpenCL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal

    The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable codes on different platforms such as CPUs, graphics processing units (GPUs), digital signal processors (DSPs) and field programmable gate arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow for a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes the FPGA-based development more accessible to software users as the needs for hybrid computing using CPUs and FPGAs are increasing. It can also significantly reduce the hardware development time as users can evaluate different ideas with high-level language without deep FPGA domain knowledge. Benchmarking of an OpenCL-based framework is an effective way to analyze the performance of a system by studying the execution of the benchmark applications. CHO is a suite of benchmark applications that provides support for OpenCL [1]. The authors presented CHO as an OpenCL port of the CHStone benchmark. Using the Altera OpenCL (AOCL) compiler to synthesize the benchmark applications, they listed the resource usage and performance of each kernel that can be successfully synthesized by the compiler. In this report, we evaluate the resource usage and performance of the CHO benchmark applications using the Intel FPGA SDK for OpenCL and the Nallatech 385A FPGA board that features an Arria 10 FPGA device. The focus of the report is to have a better understanding of the resource usage and performance of the kernel implementations using Arria-10 FPGA devices compared to Stratix-5 FPGA devices. In addition, we also gain knowledge about the limitations of the current compiler when it fails to synthesize a benchmark application.

  1. Present Status and Extensions of the Monte Carlo Performance Benchmark

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

    The NEA Monte Carlo Performance benchmark started in 2011 aiming to monitor over the years the abilities to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the contributed results thus far. It shows that reaching a statistical accuracy of 1 % for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common type computer nodes. However, using true supercomputers the speedup of parallel calculations is increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict if the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations and a need is felt for testing other issues than its computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities and to study the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.
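
    A back-of-envelope sketch of the 1/sqrt(N) reasoning behind the history counts quoted above; the per-zone scoring fraction used here is an assumed placeholder, not a number from the benchmark specification.

      target_rel_err = 0.01          # 1% statistical accuracy in a small fuel zone
      zone_score_fraction = 1.0e-7   # assumed fraction of all scores landing in that zone

      # the relative error of a zone tally scales roughly as 1 / sqrt(f * N)
      histories_needed = 1.0 / (zone_score_fraction * target_rel_err**2)
      print(f"histories needed: {histories_needed:.1e}")   # about 1e11, i.e. ~100 billion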

  2. Selecting a Benchmark Suite to Profile High-Performance Computing (HPC) Machines

    DTIC Science & Technology

    2014-11-01

    architectures. Machines now contain central processing units (CPUs), graphics processing units (GPUs), and many integrated core (MIC) architectures all...evaluate the feasibility and applicability of a new architecture just released to the market. Researchers are often unsure how available resources will...architectures. Having a suite of programs running on different architectures, such as GPUs, MICs, and CPUs, adds complexity and technical challenges

  3. How to Advance TPC Benchmarks with Dependability Aspects

    NASA Astrophysics Data System (ADS)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performances. While TPC-E measures the recovery time of some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  4. Dose specification for hippocampal sparing whole brain radiotherapy (HS WBRT): considerations from the UK HIPPO trial QA programme.

    PubMed

    Megias, Daniel; Phillips, Mark; Clifton-Hadley, Laura; Harron, Elizabeth; Eaton, David J; Sanghera, Paul; Whitfield, Gillian

    2017-03-01

    The HIPPO trial is a UK randomized Phase II trial of hippocampal sparing (HS) vs conventional whole-brain radiotherapy after surgical resection or radiosurgery in patients with favourable prognosis with 1-4 brain metastases. Each participating centre completed a planning benchmark case as part of the dedicated radiotherapy trials quality assurance programme (RTQA), promoting the safe and effective delivery of HS intensity-modulated radiotherapy (IMRT) in a multicentre trial setting. Submitted planning benchmark cases were reviewed using visualization for radiotherapy software (VODCA) evaluating plan quality and compliance in relation to the HIPPO radiotherapy planning and delivery guidelines. Comparison of the planning benchmark data highlighted a plan specified using dose to medium as an outlier by comparison with those specified using dose to water. Further evaluation identified that the reported plan statistics for dose to medium were lower as a result of the dose calculated at regions of PTV inclusive of bony cranium being lower relative to brain. Specification of dose to water or medium remains a source of potential ambiguity and it is essential that as part of a multicentre trial, consideration is given to reported differences, particularly in the presence of bone. Evaluation of planning benchmark data as part of an RTQA programme has highlighted an important feature of HS IMRT dosimetry dependent on dose being specified to water or medium, informing the development and undertaking of HS IMRT as part of the HIPPO trial. Advances in knowledge: The potential clinical impact of differences between dose to medium and dose to water are demonstrated for the first time, in the setting of HS whole-brain radiotherapy.

  5. Benchmarking high performance computing architectures with CMS’ skeleton framework

    NASA Astrophysics Data System (ADS)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Thread Building Block library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many core architectures; machines such as Cori Phase 1&2, Theta, Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  6. Effectiveness of Unmanned Surface Vehicles in Anti-submarine Warfare with the Goal of Protecting a High Value Unit

    DTIC Science & Technology

    2015-06-01

    ...are positioned on the outer ASW screen to protect an HVU from submarine attacks. This baseline scenario provides a standardized benchmark on current...

  7. Comparison of Origin 2000 and Origin 3000 Using NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Turney, Raymond D.

    2001-01-01

    This report describes results of benchmark tests on the Origin 3000 system currently being installed at the NASA Ames National Advanced Supercomputing facility. This machine will ultimately contain 1024 R14K processors. The first part of the system, installed in November, 2000 and named mendel, is an Origin 3000 with 128 R12K processors. For comparison purposes, the tests were also run on lomax, an Origin 2000 with R12K processors. The BT, LU, and SP application benchmarks in the NAS Parallel Benchmark Suite and the kernel benchmark FT were chosen to determine system performance and measure the impact of changes on the machine as it evolves. Having been written to measure performance on Computational Fluid Dynamics applications, these benchmarks are assumed appropriate to represent the NAS workload. Since the NAS runs both message passing (MPI) and shared-memory, compiler directive type codes, both MPI and OpenMP versions of the benchmarks were used. The MPI versions used were the latest official release of the NAS Parallel Benchmarks, version 2.3. The OpenMP versions used were PBN3b2, a beta version that is in the process of being released. NPB 2.3 and PBN3b2 are technically different benchmarks, and NPB results are not directly comparable to PBN results.

  8. Benchmarking high performance computing architectures with CMS’ skeleton framework

    DOE PAGES

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-11-23

    Here, in 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Thread Building Block library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many core architectures; machines such as Cori Phase 1&2, Theta, Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  9. Benchmarking high performance computing architectures with CMS’ skeleton framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    Here, in 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  10. Subgroup Benchmark Calculations for the Intra-Pellet Nonuniform Temperature Cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Jung, Yeon Sang; Liu, Yuxuan

    A benchmark suite has been developed by Seoul National University (SNU) for intrapellet nonuniform temperature distribution cases based on the practical temperature profiles according to the thermal power levels. Though a new subgroup capability for nonuniform temperature distribution was implemented in MPACT, no validation calculation has been performed for the new capability. This study focuses on benchmarking the new capability through a code-to-code comparison. Two continuous-energy Monte Carlo codes, McCARD and CE-KENO, are engaged in obtaining reference solutions, and the MPACT results are compared to those of the SNU nTRACER code, which uses a similar cross section library and subgroup method to obtain self-shielded cross sections.

  11. Exploration of freely available web-interfaces for comparative homology modelling of microbial proteins.

    PubMed

    Nema, Vijay; Pal, Sudhir Kumar

    2013-01-01

    This study was conducted to find the best suited freely available software for modelling of proteins by taking a few sample proteins. The proteins used were small to big in size with available crystal structures for the purpose of benchmarking. Key players like Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-V2, and Modweb were used for the comparison and model generation. The benchmarking process was done for four proteins, Icl, InhA, and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus, to get the most suited software. Parameters compared during analysis gave relatively better values for Phyre2 and Swiss-Model. This comparative study gave the information that Phyre2 and Swiss-Model make good models of small and large proteins as compared to other screened software. The other software was also good but was often not as efficient in providing full-length and properly folded structures.

  12. 75 FR 16712 - Waybill Data Released in Three-Benchmark Rail Rate Proceedings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-02

    ... carrier for the 4 years that correspond with the most recently published Revenue Shortfall Allocation... approach for medium-size rail rate disputes and revising its Three-Benchmark approach for smaller rail rate.... Id. at 246-47. \\1\\ Canadian Pacific Railway Co., Soo Line Railroad Company, Delaware & Hudson Railway...

  13. Test suite for image-based motion estimation of the brain and tongue

    NASA Astrophysics Data System (ADS)

    Ramsey, Jordan; Prince, Jerry L.; Gomez, Arnold D.

    2017-03-01

    Noninvasive analysis of motion has important uses as qualitative markers for organ function and to validate biomechanical computer simulations relative to experimental observations. Tagged MRI is considered the gold standard for noninvasive tissue motion estimation in the heart, and this has inspired multiple studies focusing on other organs, including the brain under mild acceleration and the tongue during speech. As with other motion estimation approaches, using tagged MRI to measure 3D motion includes several preprocessing steps that affect the quality and accuracy of estimation. Benchmarks, or test suites, are datasets of known geometries and displacements that act as tools to tune tracking parameters or to compare different motion estimation approaches. Because motion estimation was originally developed to study the heart, existing test suites focus on cardiac motion. However, many fundamental differences exist between the heart and other organs, such that parameter tuning (or other optimization) with respect to a cardiac database may not be appropriate. Therefore, the objective of this research was to design and construct motion benchmarks by adopting an "image synthesis" test suite to study brain deformation due to mild rotational accelerations, and a benchmark to model motion of the tongue during speech. To obtain a realistic representation of mechanical behavior, kinematics were obtained from finite-element (FE) models. These results were combined with an approximation of the acquisition process of tagged MRI (including tag generation, slice thickness, and inconsistent motion repetition). To demonstrate an application of the presented methodology, the effect of motion inconsistency on synthetic measurements of head-brain rotation and deformation was evaluated. The results indicated that acquisition inconsistency is roughly proportional to head rotation estimation error. Furthermore, when evaluating non-rigid deformation, the results suggest that inconsistent motion can yield "ghost" shear strains, which are a function of slice acquisition viability as opposed to a true physical deformation.
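
    As a rough illustration of the "image synthesis" idea described above, the Python sketch below warps a sinusoidal (SPAMM-like) tag pattern with a known displacement field, so the synthetic tagged image comes with exact ground-truth motion. The function name, image size, and simple shear motion are illustrative assumptions, not part of the published test suite.

```python
import numpy as np

def synthetic_tagged_image(nx=128, ny=128, tag_period=8.0, shear=0.05):
    """Generate a 2D synthetic tagged image with a known displacement field.

    The tag pattern is defined in the reference (undeformed) configuration and
    warped by a simple horizontal shear, so the ground-truth motion is known exactly.
    """
    y, x = np.mgrid[0:ny, 0:nx].astype(float)

    # Known forward displacement: horizontal shear proportional to y.
    ux = shear * y
    uy = np.zeros_like(y)

    # Reference coordinates obtained by (approximately) inverting the motion.
    x_ref = x - ux
    y_ref = y - uy

    # Sinusoidal tag pattern evaluated at reference coordinates.
    k = 2.0 * np.pi / tag_period
    image = 0.25 * (1.0 + np.cos(k * x_ref)) * (1.0 + np.cos(k * y_ref))
    return image, ux, uy

img, ux, uy = synthetic_tagged_image()
print(img.shape, float(ux.max()))
```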

  14. Test Suite for Image-Based Motion Estimation of the Brain and Tongue

    PubMed Central

    Ramsey, Jordan; Prince, Jerry L.; Gomez, Arnold D.

    2017-01-01

    Noninvasive analysis of motion has important uses as qualitative markers for organ function and to validate biomechanical computer simulations relative to experimental observations. Tagged MRI is considered the gold standard for noninvasive tissue motion estimation in the heart, and this has inspired multiple studies focusing on other organs, including the brain under mild acceleration and the tongue during speech. As with other motion estimation approaches, using tagged MRI to measure 3D motion includes several preprocessing steps that affect the quality and accuracy of estimation. Benchmarks, or test suites, are datasets of known geometries and displacements that act as tools to tune tracking parameters or to compare different motion estimation approaches. Because motion estimation was originally developed to study the heart, existing test suites focus on cardiac motion. However, many fundamental differences exist between the heart and other organs, such that parameter tuning (or other optimization) with respect to a cardiac database may not be appropriate. Therefore, the objective of this research was to design and construct motion benchmarks by adopting an “image synthesis” test suite to study brain deformation due to mild rotational accelerations, and a benchmark to model motion of the tongue during speech. To obtain a realistic representation of mechanical behavior, kinematics were obtained from finite-element (FE) models. These results were combined with an approximation of the acquisition process of tagged MRI (including tag generation, slice thickness, and inconsistent motion repetition). To demonstrate an application of the presented methodology, the effect of motion inconsistency on synthetic measurements of head-brain rotation and deformation was evaluated. The results indicated that acquisition inconsistency is roughly proportional to head rotation estimation error. Furthermore, when evaluating non-rigid deformation, the results suggest that inconsistent motion can yield “ghost” shear strains, which are a function of slice acquisition viability as opposed to a true physical deformation. PMID:28781414

  15. Electric load shape benchmarking for small- and medium-sized commercial buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Xuan; Hong, Tianzhen; Chen, Yixing

    Small- and medium-sized commercial building owners and utility managers often look for opportunities for energy cost savings through energy efficiency and energy waste minimization. However, they currently lack easy access to low-cost tools that help interpret the massive amount of data needed to improve understanding of their energy use behaviors. Benchmarking is one of the techniques used in energy audits to identify which buildings are priorities for an energy analysis. Traditional energy performance indicators, such as the energy use intensity (annual energy per unit of floor area), consider only the total annual energy consumption, lacking consideration of the fluctuation of energy use behavior over time, which reveals time-of-use information and represents distinct energy use behaviors during different time spans. To fill the gap, this study developed a general statistical method using 24-hour electric load shape benchmarking to compare a building or business/tenant space against peers. Specifically, the study developed new forms of benchmarking metrics and data analysis methods to infer the energy performance of a building based on its load shape. We first performed a data experiment with collected smart meter data from over 2,000 small- and medium-sized businesses in California. We then conducted a cluster analysis of the source data, and determined and interpreted the load shape features and parameters with peer group analysis. Finally, we implemented the load shape benchmarking feature in an open-access web-based toolkit (the Commercial Building Energy Saver) to provide straightforward and practical recommendations to users. The analysis techniques were generic and flexible for future datasets of other building types and in other utility territories.
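
    A minimal sketch of the load-shape benchmarking idea in this record: normalize each day's 24-hour profile to unit total so that only the shape matters, then cluster the shapes to form peer groups. The synthetic data, cluster count, and the use of scikit-learn's KMeans are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def normalize_load_shapes(daily_kwh):
    """daily_kwh: (n_days, 24) array of hourly energy for one building.

    Each day is scaled to unit total so that clustering compares the *shape*
    of consumption over the day rather than its absolute magnitude.
    """
    totals = daily_kwh.sum(axis=1, keepdims=True)
    totals[totals == 0] = 1.0          # guard against all-zero days
    return daily_kwh / totals

# Illustrative data: 300 synthetic days with a larger afternoon/evening load.
rng = np.random.default_rng(0)
base = np.where(np.arange(24) < 12, 1.0, 2.0)
days = rng.gamma(shape=2.0, scale=base, size=(300, 24))

shapes = normalize_load_shapes(days)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(shapes)

# Cluster centroids act as representative 24-hour load shapes for peer comparison.
print(km.cluster_centers_.round(3))
```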

  16. Electric load shape benchmarking for small- and medium-sized commercial buildings

    DOE PAGES

    Luo, Xuan; Hong, Tianzhen; Chen, Yixing; ...

    2017-07-28

    Small- and medium-sized commercial building owners and utility managers often look for opportunities for energy cost savings through energy efficiency and energy waste minimization. However, they currently lack easy access to low-cost tools that help interpret the massive amount of data needed to improve understanding of their energy use behaviors. Benchmarking is one of the techniques used in energy audits to identify which buildings are priorities for an energy analysis. Traditional energy performance indicators, such as the energy use intensity (annual energy per unit of floor area), consider only the total annual energy consumption, lacking consideration of the fluctuation of energy use behavior over time, which reveals time-of-use information and represents distinct energy use behaviors during different time spans. To fill the gap, this study developed a general statistical method using 24-hour electric load shape benchmarking to compare a building or business/tenant space against peers. Specifically, the study developed new forms of benchmarking metrics and data analysis methods to infer the energy performance of a building based on its load shape. We first performed a data experiment with collected smart meter data from over 2,000 small- and medium-sized businesses in California. We then conducted a cluster analysis of the source data, and determined and interpreted the load shape features and parameters with peer group analysis. Finally, we implemented the load shape benchmarking feature in an open-access web-based toolkit (the Commercial Building Energy Saver) to provide straightforward and practical recommendations to users. The analysis techniques were generic and flexible for future datasets of other building types and in other utility territories.

  17. Benchmarking short sequence mapping tools

    PubMed Central

    2013-01-01

    Background The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, the current proposed tools make different compromises between the accuracy and the speed of mapping. Moreover, many important aspects are overlooked while comparing the performance of a newly developed tool to the state of the art. Therefore, there is a need for an objective evaluation method that covers all the aspects. In this work, we introduce a benchmarking suite to extensively analyze sequencing tools with respect to various aspects and provide an objective comparison. Results We applied our benchmarking tests on 9 well known mapping tools, namely, Bowtie, Bowtie2, BWA, SOAP2, MAQ, RMAP, GSNAP, Novoalign, and mrsFAST (mrFAST) using synthetic data and real RNA-Seq data. MAQ and RMAP are based on building hash tables for the reads, whereas the remaining tools are based on indexing the reference genome. The benchmarking tests reveal the strengths and weaknesses of each tool. The results show that no single tool outperforms all others in all metrics. However, Bowtie maintained the best throughput for most of the tests while BWA performed better for longer read lengths. The benchmarking tests are not restricted to the mentioned tools and can be further applied to others. Conclusion The mapping process is still a hard problem that is affected by many factors. In this work, we provided a benchmarking suite that reveals and evaluates the different factors affecting the mapping process. Still, there is no tool that outperforms all of the others in all the tests. Therefore, the end user should clearly specify his needs in order to choose the tool that provides the best results. PMID:23758764
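
    The kind of accuracy measurement such a benchmarking suite performs on synthetic reads can be sketched in Python as follows: compare each tool's reported position against the simulated truth within a small tolerance. The data structures and tolerance are illustrative, not the paper's actual evaluation code.

```python
def mapping_accuracy(truth, reported, tolerance=5):
    """Fraction of simulated reads mapped to (approximately) the right place.

    truth:    dict read_id -> (chrom, true_position)
    reported: dict read_id -> (chrom, mapped_position) for reads the tool aligned
    """
    correct = 0
    for read_id, (chrom, pos) in truth.items():
        hit = reported.get(read_id)
        if hit and hit[0] == chrom and abs(hit[1] - pos) <= tolerance:
            correct += 1
    recall = correct / len(truth)
    precision = correct / len(reported) if reported else 0.0
    return precision, recall

# Toy example: three simulated reads, two of which the tool reported.
truth = {"r1": ("chr1", 1000), "r2": ("chr1", 5000), "r3": ("chr2", 250)}
reported = {"r1": ("chr1", 1002), "r2": ("chr2", 5000)}
print(mapping_accuracy(truth, reported))  # (0.5, 0.333...)
```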

  18. Protein-free culture of the human pancreatic cancer cell line, SUIT-2.

    PubMed

    Taniguchi, S; Iwamura, T; Kitamura, N; Yamanari, H; Kojima, A; Hidaka, K; Seguchi, K; Setoguchi, T

    1994-12-01

    A human pancreatic cancer cell line (SUIT-2), usually cultured in serum-supplemented medium (DMEM/FBS), was adapted to protein-free conditions using a 1:1 mixture of DMEM and Ham's F12 medium (DMEM/F12). The cells have been maintained in DMEM/F12 for more than 2 years, with over 50 passages. The SUIT-2 cells grew in DMEM/F12 with a doubling time of 35.7 h, which was similar to that in DMEM/FBS (35.0 h). The cellular morphology was similar in both media. Type IV collagenolytic activity was detected in the conditioned media from cells grown in DMEM/F12. The secretion of CEA and CA19-9 initially decreased in DMEM/F12. CEA was not detected after passage 5 (p5) but the concentration of CA19-9 did not decrease further after the first few serial passages in protein-free medium. Xenografts of SUIT-2 cells cultured in DMEM/F12 remained tumorigenic and could form metastatic tumors in nude mice. In conclusion, SUIT-2 cells grown in protein-free media continued to produce CA19-9 and type IV collagenase in vitro and formed metastatic tumors in vivo.

  19. A 1D radiative transfer benchmark with polarization via doubling and adding

    NASA Astrophysics Data System (ADS)

    Ganapol, B. D.

    2017-11-01

    Highly precise numerical solutions to the radiative transfer equation with polarization present a special challenge. Here, we establish a precise numerical solution to the radiative transfer equation with combined Rayleigh and isotropic scattering in a 1D-slab medium with simple polarization. The 2-Stokes vector solution for the fully discretized radiative transfer equation in space and direction derives from the method of doubling and adding enhanced through convergence acceleration. Updates to benchmark solutions found in the literature to seven places for reflectance and transmittance as well as for angular flux follow. Finally, we conclude with the numerical solution in a partially randomly absorbing heterogeneous medium.

  20. A call for benchmarking transposable element annotation methods.

    PubMed

    Hoen, Douglas R; Hickey, Glenn; Bourque, Guillaume; Casacuberta, Josep; Cordaux, Richard; Feschotte, Cédric; Fiston-Lavier, Anna-Sophie; Hua-Van, Aurélie; Hubley, Robert; Kapusta, Aurélie; Lerat, Emmanuelle; Maumus, Florian; Pollock, David D; Quesneville, Hadi; Smit, Arian; Wheeler, Travis J; Bureau, Thomas E; Blanchette, Mathieu

    2015-01-01

    DNA derived from transposable elements (TEs) constitutes large parts of the genomes of complex eukaryotes, with major impacts not only on genomic research but also on how organisms evolve and function. Although a variety of methods and tools have been developed to detect and annotate TEs, there are as yet no standard benchmarks-that is, no standard way to measure or compare their accuracy. This lack of accuracy assessment calls into question conclusions from a wide range of research that depends explicitly or implicitly on TE annotation. In the absence of standard benchmarks, toolmakers are impeded in improving their tools, annotators cannot properly assess which tools might best suit their needs, and downstream researchers cannot judge how accuracy limitations might impact their studies. We therefore propose that the TE research community create and adopt standard TE annotation benchmarks, and we call for other researchers to join the authors in making this long-overdue effort a success.

  1. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    DOE PAGES

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; ...

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  2. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Javier Ortensi; Sonat Sen

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging simulation set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results of all other international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.

  3. Toward Policy-Relevant Benchmarks for Interpreting Effect Sizes: Combining Effects with Costs

    ERIC Educational Resources Information Center

    Harris, Douglas N.

    2009-01-01

    The common reporting of effect sizes has been an important advance in education research in recent years. However, the benchmarks used to interpret the size of these effects--as small, medium, and large--do little to inform educational administration and policy making because they do not account for program costs. The author proposes an approach…

  4. Heterogeneous Distributed Computing for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy S.

    1998-01-01

    The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM[1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.

  5. DE-NE0008277_PROTEUS final technical report 2018

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enqvist, Andreas

    This project details re-evaluations of experiments of gas-cooled fast reactor (GCFR) core designs performed in the 1970s at the PROTEUS reactor and creates a series of International Reactor Physics Experiment Evaluation Project (IRPhEP) benchmarks. Currently there are no gas-cooled fast reactor (GCFR) experiments available in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). These experiments are excellent candidates for reanalysis and development of multiple benchmarks because these experiments provide high-quality integral nuclear data relevant to the validation and refinement of thorium, neptunium, uranium, plutonium, iron, and graphite cross sections. It would be cost prohibitive to reproduce such a comprehensive suite of experimental data to support any future GCFR endeavors.

  6. The NAS Parallel Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David H.

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage over vector supercomputers, and, if so, which of the parallel offerings would be most useful in real-world scientific computation. In part to draw attention to some of the performance reporting abuses prevalent at the time, the present author wrote a humorous essay 'Twelve Ways to Fool the Masses,' which described in a light-hearted way a number of the questionable ways in which both vendor marketing people and scientists were inflating and distorting their performance results. All of this underscored the need for an objective and scientifically defensible measure to compare performance on these systems.

  7. 77 FR 33167 - Citric Acid and Certain Citrate Salts From the People's Republic of China: Preliminary Results of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-05

    ... to the short- and medium-term rates to convert them to long- term rates using Bloomberg U.S... derivation of the benchmark and discount rates used to value these subsidies is discussed below. Short-Term... inflation-adjusted short-term benchmark rate, we have also excluded any countries with aberrational or...

  8. Methodology and issues of integral experiments selection for nuclear data validation

    NASA Astrophysics Data System (ADS)

    Ivanova, Tatiana; Ivanov, Evgeny; Hill, Ian

    2017-09-01

    Nuclear data validation involves a large suite of Integral Experiments (IEs) for criticality, reactor physics and dosimetry applications. [1] Often benchmarks are taken from international Handbooks. [2, 3] Depending on the application, IEs have different degrees of usefulness in validation, and usually the use of a single benchmark is not advised; indeed, it may lead to erroneous interpretation and results. [1] This work aims at quantifying the importance of benchmarks used in application dependent cross section validation. The approach is based on well-known General Linear Least Squared Method (GLLSM) extended to establish biases and uncertainties for given cross sections (within a given energy interval). The statistical treatment results in a vector of weighting factors for the integral benchmarks. These factors characterize the value added by a benchmark for nuclear data validation for the given application. The methodology is illustrated by one example, selecting benchmarks for 239Pu cross section validation. The studies were performed in the framework of Subgroup 39 (Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files) established at the Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD).

  9. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
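
    A weak-scaling analysis like the one reported above reduces to comparing wall-clock times as the core count and problem size grow together; ideal behavior is constant time. The Python sketch below shows the calculation with placeholder timings (the numbers are not MFiX results).

```python
def weak_scaling_efficiency(wall_times):
    """Weak-scaling efficiency relative to the smallest core count.

    For weak scaling the problem size grows with the core count, so ideal
    behavior is constant wall time: efficiency = t_ref / t_n.
    """
    t_ref = wall_times[0]
    return [t_ref / t for t in wall_times]

# Placeholder timings (seconds) for a problem scaled with core count.
cores = [16, 64, 256, 1024]
times = [120.0, 126.0, 141.0, 188.0]
for n, eff in zip(cores, weak_scaling_efficiency(times)):
    print(f"{n:5d} cores: efficiency = {eff:.2f}")
```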

  10. A large-scale benchmark of gene prioritization methods.

    PubMed

    Guala, Dimitri; Sonnhammer, Erik L L

    2017-04-21

    In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in construction of robust benchmarks, objective to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion, NetRank and two implementations of Random Walk with Restart, and MaxLink that utilizes network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
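
    The benchmarking strategy can be illustrated with a small Python sketch: hold out each member of a GO-derived gene set, score all genes by their summed network link weight to the remaining seeds (in the spirit of MaxLink-style neighborhood scoring), and record the rank of the held-out gene. The toy network and scoring function are assumptions for illustration, not the published benchmark suite.

```python
import numpy as np

def loo_rank_benchmark(adjacency, genes, term_members):
    """Leave-one-out benchmark for a neighborhood-based prioritization score.

    adjacency:    (n, n) symmetric weight matrix of a functional network
    genes:        list of gene names, index-aligned with adjacency
    term_members: set of genes annotated to one GO term
    Returns the rank (1 = best) of each held-out gene among all non-seed genes.
    """
    idx = {g: i for i, g in enumerate(genes)}
    ranks = []
    for held_out in term_members:
        seeds = [idx[g] for g in term_members if g != held_out]
        score = adjacency[:, seeds].sum(axis=1)      # link strength to the seed set
        candidates = [i for i in range(len(genes)) if i not in seeds]
        order = sorted(candidates, key=lambda i: -score[i])
        ranks.append(order.index(idx[held_out]) + 1)
    return ranks

# Toy 5-gene network; genes A, B, C form one annotation term.
genes = ["A", "B", "C", "D", "E"]
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
print(loo_rank_benchmark(adj, genes, {"A", "B", "C"}))
```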

  11. EVA Suit R and D for Performance Optimization

    NASA Technical Reports Server (NTRS)

    Cowley, Matthew S.; Harvill, Lauren; Benson, Elizabeth; Rajulu, Sudhakar

    2014-01-01

    Designing a planetary suit is very complex and often requires difficult trade-offs between performance, cost, mass, and system complexity. To verify that new suit designs meet requirements, full prototypes must be built and tested with human subjects. However, numerous design iterations will occur before the hardware meets those requirements. Traditional draw-prototype-test paradigms for R&D are prohibitively expensive with today's shrinking Government budgets. Personnel at NASA are developing modern simulation techniques which focus on human-centric designs by creating virtual prototype simulations and fully adjustable physical prototypes of suit hardware. During the R&D design phase, these easily modifiable representations of an EVA suit's hard components will allow designers to think creatively and exhaust design possibilities before they build and test working prototypes with human subjects. It allows scientists to comprehensively benchmark current suit capabilities and limitations for existing suit sizes and sizes that do not exist. This is extremely advantageous and enables comprehensive design down-selections to be made early in the design process, enables the use of human performance as design criteria, and enables designs to target specific populations.

  12. Can Humans Fly Action Understanding with Multiple Classes of Actors

    DTIC Science & Technology

    2015-06-08

    ...recognition using structure from motion point clouds. In European Conference on Computer Vision, 2008. [5] R. Caruana. Multitask learning. Machine Learning... Are we ready for autonomous driving? The KITTI vision benchmark suite. In IEEE Conference on Computer Vision and Pattern Recognition, 2012. [12] L. Gorelick, M. Blank

  13. Exploration of freely available web-interfaces for comparative homology modelling of microbial proteins

    PubMed Central

    Nema, Vijay; Pal, Sudhir Kumar

    2013-01-01

    Aim: This study was conducted to find the best suited freely available software for modelling of proteins by taking a few sample proteins. The proteins used were small to big in size with available crystal structures for the purpose of benchmarking. Key players like Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-V2, and Modweb were used for the comparison and model generation. Results: The benchmarking process was done for four proteins, Icl, InhA, and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus, to get the most suited software. Parameters compared during analysis gave relatively better values for Phyre2 and Swiss-Model. Conclusion: This comparative study gave the information that Phyre2 and Swiss-Model make good models of small and large proteins as compared to other screened software. The other software was also good but was often not as efficient in providing full-length and properly folded structures. PMID:24023424

  14. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    PubMed

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  15. Spherical Harmonic Solutions to the 3D Kobayashi Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, P.N.; Chang, B.; Hanebutte, U.R.

    1999-12-29

    Spherical harmonic solutions of order 5, 9 and 21 on spatial grids containing up to 3.3 million cells are presented for the Kobayashi benchmark suite. This suite of three problems with simple geometry of pure absorber with large void region was proposed by Professor Kobayashi at an OECD/NEA meeting in 1996. Each of the three problems contains a source, a void and a shield region. Problem 1 can best be described as a box-in-a-box problem, where a source region is surrounded by a square void region which itself is embedded in a square shield region. Problems 2 and 3 represent a shield with a void duct: Problem 2 has a straight duct and Problem 3 a dog-leg-shaped duct. A pure absorber and a 50% scattering case are considered for each of the three problems. The solutions have been obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The Ardra code takes advantage of a two-level parallelization strategy, which combines message passing between processing nodes and thread-based parallelism amongst processors on each node. All calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL.

  16. Verification of MCNP6.2 for Nuclear Criticality Safety Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2017-05-10

    Several suites of verification/validation benchmark problems were run in early 2017 to verify that the new production release of MCNP6.2 performs correctly for nuclear criticality safety (NCS) applications. MCNP6.2 results for several NCS validation suites were compared to the results from MCNP6.1 [1] and MCNP6.1.1 [2]. MCNP6.1 is the production version of MCNP® released in 2013, and MCNP6.1.1 is the update released in 2014. MCNP6.2 includes all of the standard features for NCS calculations that have been available for the past 15 years, along with new features for sensitivity-uncertainty based methods for NCS validation [3]. Results from the benchmark suites were compared with results from previous verification testing [4-8]. Criticality safety analysts should consider testing MCNP6.2 on their particular problems and validation suites. No further development of MCNP5 is planned. MCNP6.1 is now 4 years old, and MCNP6.1.1 is now 3 years old. In general, released versions of MCNP are supported only for about 5 years, due to resource limitations. All future MCNP improvements, bug fixes, user support, and new capabilities are targeted only to MCNP6.2 and beyond.

  17. Simulation of Benchmark Cases with the Terminal Area Simulation System (TASS)

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.; Proctor, Fred H.

    2011-01-01

    The hydrodynamic core of the Terminal Area Simulation System (TASS) is evaluated against different benchmark cases. In the absence of closed form solutions for the equations governing atmospheric flows, the models are usually evaluated against idealized test cases. Over the years, various authors have suggested a suite of these idealized cases which have become standards for testing and evaluating the dynamics and thermodynamics of atmospheric flow models. In this paper, simulations of three such cases are described. In addition, the TASS model is evaluated against a test case that uses an exact solution of the Navier-Stokes equations. The TASS results are compared against previously reported simulations of these benchmark cases in the literature. It is demonstrated that the TASS model is highly accurate, stable and robust.
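
    Evaluation against an exact solution typically reduces to a discrete error norm between the simulated and analytical fields. The Python sketch below computes a relative L2 error for a synthetic 1D example; it is a generic illustration, not the error metric used in the TASS evaluation.

```python
import numpy as np

def l2_error(numerical, exact):
    """Relative discrete L2 error between a simulated field and an exact solution."""
    diff = numerical - exact
    return np.sqrt(np.sum(diff**2) / np.sum(exact**2))

# Illustrative 1D advection check: the exact solution is a translated sine wave,
# and the "numerical" field is the exact one perturbed by small noise.
x = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
c, t = 1.0, 0.5
exact = np.sin(x - c * t)
numerical = exact + 1e-3 * np.random.default_rng(1).standard_normal(x.size)
print(f"relative L2 error = {l2_error(numerical, exact):.2e}")
```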

  18. NAVO MSRC Navigator. Spring 2006

    DTIC Science & Technology

    2006-01-01

    all of these upgrades are complete, the effective computing power of the NAVO MSRC will be essentially tripled, as measured by sustainable ... performance on the HPCMP benchmark suite. All four of these systems will be configured with two gigabytes of memory per processor, IBM’s “Federation” inter

  19. 77 FR 38288 - Ocean Transportation Intermediary License; Applicants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-27

    ...). Application Type: Name Change. Global Atlantic Logistics LLC (OFF), 1901 SW 31st Avenue, Pembroke Park, FL....gov . Bellcom, Inc. (NVO), 503 Commerce Park Drive, Suite E, Marietta, GA 30060. Officers: Cornelius U... License. Benchmark Worldwide Logistics, Inc. dba Star Ocean Lines (NVO & OFF), 24900 South Route 53...

  20. A new numerical benchmark for variably saturated variable-density flow and transport in porous media

    NASA Astrophysics Data System (ADS)

    Guevara, Carlos; Graf, Thomas

    2016-04-01

    In subsurface hydrological systems, spatial and temporal variations in solute concentration and/or temperature may affect fluid density and viscosity. These variations could lead to potentially unstable situations, in which a dense fluid overlies a less dense fluid. These situations could produce instabilities that appear as dense plume fingers migrating downwards, counteracted by vertical upwards flow of freshwater (Simmons et al., Transp. Porous Medium, 2002). As a result of unstable variable-density flow, solute transport rates are increased over large distances and times as compared to constant-density flow. The numerical simulation of variable-density flow in saturated and unsaturated media requires corresponding benchmark problems against which a computer model is validated (Diersch and Kolditz, Adv. Water Resour, 2002). Recorded data from a laboratory-scale experiment of variable-density flow and solute transport in saturated and unsaturated porous media (Simmons et al., Transp. Porous Medium, 2002) is used to define a new numerical benchmark. The HydroGeoSphere code (Therrien et al., 2004) coupled with PEST (www.pesthomepage.org) is used to obtain an optimized parameter set capable of adequately representing the data set of Simmons et al. (2002). Fingering in the numerical model is triggered using random hydraulic conductivity fields. Due to the inherent randomness, a large number of simulations were conducted in this study. The optimized benchmark model adequately predicts the plume behavior and the fate of solutes. This benchmark is useful for model verification of variable-density flow problems in saturated and/or unsaturated media.
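
    Fingering in such models is commonly triggered with a spatially correlated random hydraulic conductivity field. The Python sketch below builds a log-normal field by smoothing white noise with a Gaussian kernel; the parameter values and the smoothing approach are illustrative assumptions, not the method used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def random_logK_field(nz, nx, mean_log10K=-11.0, sigma_log10K=0.5, corr_len=4.0, seed=0):
    """Spatially correlated log-normal hydraulic conductivity field K.

    White noise is smoothed with a Gaussian kernel to impose a correlation
    length (in grid cells), then rescaled to the requested variance of log10(K).
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((nz, nx))
    field = gaussian_filter(noise, sigma=corr_len)
    field = (field - field.mean()) / field.std()     # zero mean, unit variance
    log10K = mean_log10K + sigma_log10K * field
    return 10.0 ** log10K

K = random_logK_field(64, 128)
print(K.min(), K.max())
```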

  1. A suite of exercises for verifying dynamic earthquake rupture codes

    USGS Publications Warehouse

    Harris, Ruth A.; Barall, Michael; Aagaard, Brad T.; Ma, Shuo; Roten, Daniel; Olsen, Kim B.; Duan, Benchun; Liu, Dunyu; Luo, Bin; Bai, Kangchen; Ampuero, Jean-Paul; Kaneko, Yoshihiro; Gabriel, Alice-Agnes; Duru, Kenneth; Ulrich, Thomas; Wollherr, Stephanie; Shi, Zheqiang; Dunham, Eric; Bydlon, Sam; Zhang, Zhenguo; Chen, Xiaofei; Somala, Surendra N.; Pelties, Christian; Tago, Josue; Cruz-Atienza, Victor Manuel; Kozdon, Jeremy; Daub, Eric; Aslam, Khurram; Kase, Yuko; Withers, Kyle; Dalguer, Luis

    2018-01-01

    We describe a set of benchmark exercises that are designed to test if computer codes that simulate dynamic earthquake rupture are working as intended. These types of computer codes are often used to understand how earthquakes operate, and they produce simulation results that include earthquake size, amounts of fault slip, and the patterns of ground shaking and crustal deformation. The benchmark exercises examine a range of features that scientists incorporate in their dynamic earthquake rupture simulations. These include implementations of simple or complex fault geometry, off‐fault rock response to an earthquake, stress conditions, and a variety of formulations for fault friction. Many of the benchmarks were designed to investigate scientific problems at the forefronts of earthquake physics and strong ground motions research. The exercises are freely available on our website for use by the scientific community.

  2. Assessing and benchmarking multiphoton microscopes for biologists

    PubMed Central

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F.

    2017-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways already described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can either be used within a multiphoton facility or by a prospective purchaser to benchmark performance. This can both assist in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs. PMID:24974026

  3. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    NASA Astrophysics Data System (ADS)

    Kahler, A. C.; MacFarlane, R. E.; Mosteller, R. D.; Kiedrowski, B. C.; Frankle, S. C.; Chadwick, M. B.; McKnight, R. D.; Lell, R. M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S. F.; Sublet, J. C.; Trkov, A.; Trumbull, T. H.; Dunn, M.

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., "ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data," Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as 236U, 238,242Pu and 241,243Am capture in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical eigenvalues and a decreasing trend in calculated eigenvalue for 233U fueled systems as a function of Above-Thermal Fission Fraction remain. The comprehensive nature of this critical benchmark suite and the generally accurate calculated eigenvalues obtained with ENDF/B-VII.1 neutron cross sections support the conclusion that this is the most accurate general purpose ENDF/B cross section library yet released to the technical community.

  4. Making Waves.

    ERIC Educational Resources Information Center

    DeClark, Tom

    2000-01-01

    Presents an activity on waves that addresses the state standards and benchmarks of Michigan. Demonstrates waves and studies wave's medium, motion, and frequency. The activity is designed to address different learning styles. (YDS)

  5. Benchmark Dose Software (BMDS) Development and ...

    EPA Pesticide Factsheets

    This report is intended to provide an overview of beta version 1.0 of the implementation of a model of repeated measures data referred to as the Toxicodiffusion model. The implementation described here represents the first steps towards integration of the Toxicodiffusion model into the EPA benchmark dose software (BMDS). This version runs from within BMDS 2.0 using an option screen for making model selection, as is done for other models in the BMDS 2.0 suite.
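
    For readers unfamiliar with the benchmark dose concept that BMDS implements, the Python sketch below fits a simple exponential dose-response model and solves for the dose producing a 10% change in mean response. It is a generic illustration only; it is not the Toxicodiffusion model and not BMDS code, and the data are made up.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

def exp_model(dose, a, b):
    """Simple exponential dose-response model: mean response = a * exp(b * dose)."""
    return a * np.exp(b * dose)

# Illustrative continuous-response data (hypothetical, not from BMDS).
dose = np.array([0.0, 10.0, 30.0, 100.0, 300.0])
resp = np.array([10.1, 10.9, 12.3, 16.0, 30.5])

params, _ = curve_fit(exp_model, dose, resp, p0=(10.0, 0.003))
a, b = params

# Benchmark dose for a 10% relative change in mean response (BMR = 0.10):
# find the dose where the fitted curve reaches 1.1 times the control mean.
bmr = 0.10
bmd = brentq(lambda d: exp_model(d, a, b) - (1.0 + bmr) * a, 1e-6, dose.max())
print(f"fitted a={a:.2f}, b={b:.4f}, BMD(10%) ~= {bmd:.1f}")
```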

  6. Analysing the performance of personal computers based on Intel microprocessors for sequence aligning bioinformatics applications.

    PubMed

    Nair, Pradeep S; John, Eugene B

    2007-01-01

    Aligning specific sequences against a very large number of other sequences is a central aspect of bioinformatics. With the widespread availability of personal computers in biology laboratories, sequence alignment is now often performed locally. This makes it necessary to analyse the performance of personal computers for sequence aligning bioinformatics benchmarks. In this paper, we analyse the performance of a personal computer for the popular BLAST and FASTA sequence alignment suites. Results indicate that these benchmarks have a large number of recurring operations and use memory operations extensively. It seems that the performance can be improved with a bigger L1-cache.

  7. Selecting a Relational Database Management System for Library Automation Systems.

    ERIC Educational Resources Information Center

    Shekhel, Alex; O'Brien, Mike

    1989-01-01

    Describes the evaluation of four relational database management systems (RDBMSs) (Informix Turbo, Oracle 6.0 TPS, Unify 2000 and Relational Technology's Ingres 5.0) to determine which is best suited for library automation. The evaluation criteria used to develop a benchmark specifically designed to test RDBMSs for libraries are discussed. (CLB)

  8. An Open-Source Standard T-Wave Alternans Detector for Benchmarking.

    PubMed

    Khaustov, A; Nemati, S; Clifford, Gd

    2008-09-14

    We describe an open source algorithm suite for T-Wave Alternans (TWA) detection and quantification. The software consists of Matlab implementations of the widely used Spectral Method and Modified Moving Average, with libraries to read both WFDB and ASCII data under Windows and Linux. The software suite can run in both batch mode and with a provided graphical user interface to aid waveform exploration. Our software suite was calibrated using an open source TWA model, described in a partner paper [1] by Clifford and Sameni. For the PhysioNet/CinC Challenge 2008 we obtained a score of 0.881 for the Spectral Method and 0.400 for the MMA method. However, our objective was not to provide the best TWA detector, but rather a basis for detailed discussion of algorithms.
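
    The core of the Spectral Method can be sketched in a few lines of Python: take the power spectrum of the beat-to-beat series of T-wave amplitudes and compare the power at 0.5 cycles/beat against a nearby noise band (the usual alternans k-score). This is a simplified scalar illustration, not the Matlab toolbox described in the record, and the noise band and test signal are assumptions.

```python
import numpy as np

def twa_spectral_score(t_wave_amplitudes, noise_band=(0.43, 0.46)):
    """Simplified Spectral Method on a beat-to-beat series of T-wave amplitudes.

    The power at 0.5 cycles/beat is compared with the mean and standard
    deviation of a nearby 'noise' band, giving an alternans k-score.
    """
    x = np.asarray(t_wave_amplitudes, dtype=float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0)          # cycles per beat

    alternans_power = spectrum[np.argmin(np.abs(freqs - 0.5))]
    band = (freqs >= noise_band[0]) & (freqs <= noise_band[1])
    noise_mean, noise_std = spectrum[band].mean(), spectrum[band].std()
    k_score = (alternans_power - noise_mean) / noise_std if noise_std > 0 else np.inf
    return alternans_power, k_score

# 128 beats with a small alternating (every-other-beat) component plus noise.
rng = np.random.default_rng(2)
beats = 0.5 + 0.01 * (-1.0) ** np.arange(128) + 0.005 * rng.standard_normal(128)
print(twa_spectral_score(beats))
```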

  9. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
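
    The two statistics advocated above are straightforward to compute from a sample of benchmark errors, as in this Python sketch (the synthetic error sample, threshold, and confidence level are illustrative):

```python
import numpy as np

def ecdf_statistics(errors, threshold=1.0, confidence=0.95):
    """Statistics based on the empirical CDF of absolute errors.

    Returns:
      p_below -- empirical probability that |error| falls below `threshold`
      q_high  -- error amplitude not exceeded with probability `confidence`
    """
    abs_err = np.abs(np.asarray(errors, dtype=float))
    p_below = np.mean(abs_err < threshold)
    q_high = np.quantile(abs_err, confidence)
    return p_below, q_high

# Illustrative, skewed error sample -- deliberately neither normal nor zero-centered.
rng = np.random.default_rng(3)
errors = 0.4 + 0.8 * rng.standard_normal(500) ** 2
print(ecdf_statistics(errors, threshold=1.0, confidence=0.95))
```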

  10. Verification and benchmark testing of the NUFT computer code

    NASA Astrophysics Data System (ADS)

    Lee, K. H.; Nitao, J. J.; Kulshrestha, A.

    1993-10-01

    This interim report presents results of work completed in the ongoing verification and benchmark testing of the NUFT (Nonisothermal Unsaturated-saturated Flow and Transport) computer code. NUFT is a suite of multiphase, multicomponent models for numerical solution of thermal and isothermal flow and transport in porous media, with application to subsurface contaminant transport problems. The code simulates the coupled transport of heat, fluids, and chemical components, including volatile organic compounds. Grid systems may be Cartesian or cylindrical, with one-, two-, or fully three-dimensional configurations possible. In this initial phase of testing, the NUFT code was used to solve seven one-dimensional unsaturated flow and heat transfer problems. Three verification and four benchmarking problems were solved. In the verification testing, excellent agreement was observed between NUFT results and the analytical or quasianalytical solutions. In the benchmark testing, results of code intercomparison were very satisfactory. From these testing results, it is concluded that the NUFT code is ready for application to field and laboratory problems similar to those addressed here. Multidimensional problems, including those dealing with chemical transport, will be addressed in a subsequent report.

  11. Ground truth and benchmarks for performance evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.

    2003-09-01

    Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The range of fundamental problems includes a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Positioning System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.

  12. Endostatin expression in a pancreatic cell line is modulated by a TNFα-dependent elastase

    PubMed Central

    Brammer, R D; Bramhall, S R; Eggo, M C

    2005-01-01

    Endostatin, an inhibitor of angiogenesis, is a 20 kDa fragment of the basement membrane protein, collagen XVIII. The formation of endostatin relies upon the action of proteases on collagen XVIII. TNFα, produced by activated macrophages, is a multifunctional proinflammatory cytokine with known effects on endothelial function. We postulated that TNFα may modulate the activities of proteases and thus regulate endostatin formation in pancreatic cells. Collagen XVIII/endostatin mRNA was expressed in one pancreatic cell line, SUIT-2, but not in BxPc-3. The 20 kDa endostatin was found in the cell-conditioned medium of SUIT-2 cells. Precursor forms only were found in the cells. Exogenous endostatin was degraded by cellular lysates of SUIT-2 cells. Elastase activity was found in cell extracts but not the cell-conditioned media of SUIT-2 cells. Incubation of SUIT-2 cells with TNFα increased intracellular elastase activity and also increased secretion of endostatin into the medium. We conclude that endostatin is released by SUIT-2 cells and that increases in intracellular elastase, induced by TNFα, are correlated with increased secretion. Endostatin is however susceptible to degradation by intracellular proteases and if tissue injury accompanies inflammation, endostatin may be degraded, allowing angiogenesis to occur. PMID:16234817

  13. NAS Parallel Benchmark Results 11-96. 1.0

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Bailey, David; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The NAS Parallel Benchmarks have been developed at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion. In other words, the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. These results represent the best results that have been reported to us by the vendors for the specific systems listed. In this report, we present new NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, SGI Origin200, and SGI Origin2000. We also report High Performance Fortran (HPF) based NPB results for IBM SP2 Wide Nodes, HP/Convex Exemplar SPP2000, and SGI/CRAY T3D. These results have been submitted by Applied Parallel Research (APR) and Portland Group Inc. (PGI). We also present sustained performance per dollar for Class B LU, SP and BT benchmarks.

  14. Performance effects of irregular communications patterns on massively parallel multiprocessors

    NASA Technical Reports Server (NTRS)

    Saltz, Joel; Petiton, Serge; Berryman, Harry; Rifkin, Adam

    1991-01-01

    A detailed study of the performance effects of irregular communications patterns on the CM-2 was conducted. The communications capabilities of the CM-2 were characterized under a variety of controlled conditions. In the process of carrying out the performance evaluation, extensive use was made of a parameterized synthetic mesh. In addition, timings with unstructured meshes generated for aerodynamic codes and a set of sparse matrices with banded patterns of non-zeroes were performed. This benchmarking suite stresses the communications capabilities of the CM-2 in a range of different ways. Benchmark results demonstrate that it is possible to make effective use of much of the massive concurrency available in the communications network.

  15. Benchmarking and performance analysis of the CM-2. [SIMD computer

    NASA Technical Reports Server (NTRS)

    Myers, David W.; Adams, George B., II

    1988-01-01

    A suite of benchmarking routines testing communication, basic arithmetic operations, and selected kernel algorithms written in LISP and PARIS was developed for the CM-2. Experiment runs are automated via a software framework that sequences individual tests, allowing for unattended overnight operation. Multiple measurements are made and treated statistically to generate well-characterized results from the noisy values given by cm:time. The results obtained provide a comparison with similar, but less extensive, testing done on a CM-1. Tests were chosen to aid the algorithmist in constructing fast, efficient, and correct code on the CM-2, as well as gain insight into what performance criteria are needed when evaluating parallel processing machines.

  16. Performance Against WELCOA's Worksite Health Promotion Benchmarks Across Years Among Selected US Organizations.

    PubMed

    Weaver, GracieLee M; Mendenhall, Brandon N; Hunnicutt, David; Picarella, Ryan; Leffelman, Brittanie; Perko, Michael; Bibeau, Daniel L

    2018-05-01

    The purpose of this study was to quantify the performance of organizations' worksite health promotion (WHP) activities against the benchmarking criteria included in the Well Workplace Checklist (WWC). The Wellness Council of America (WELCOA) developed a tool to assess WHP with its 100-item WWC, which represents WELCOA's 7 performance benchmarks. Workplaces. This study includes a convenience sample of organizations who completed the checklist from 2008 to 2015. The sample size was 4643 entries from US organizations. The WWC includes demographic questions, general questions about WHP programs, and scales to measure the performance against the WELCOA 7 benchmarks. Descriptive analyses of WWC items were completed separately for each year of the study period. The majority of the organizations represented each year were multisite, multishift, medium- to large-sized companies mostly in the services industry. Despite yearly changes in participating organizations, results across the WELCOA 7 benchmark scores were consistent year to year. Across all years, benchmarks that organizations performed the lowest were senior-level support, data collection, and programming; wellness teams and supportive environments were the highest scoring benchmarks. In an era marked with economic swings and health-care reform, it appears that organizations are staying consistent in their performance across these benchmarks. The WWC could be useful for organizations, practitioners, and researchers in assessing the quality of WHP programs.

  17. ED School Climate Surveys (EDSCLS) National Benchmark Study 2016. Appendix D. EDSCLS Pilot Test 2015 Report

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2015

    2015-01-01

    The ED School Climate Surveys (EDSCLS) are a suite of survey instruments being developed for schools, school districts, and states by the U.S. Department of Education's National Center for Education Statistics (NCES). Through the EDSCLS, schools nationwide will have access to survey instruments and a survey platform that will allow for the…

  18. A Diagnostic Assessment of Evolutionary Multiobjective Optimization for Water Resources Systems

    NASA Astrophysics Data System (ADS)

    Reed, P.; Hadka, D.; Herman, J.; Kasprzyk, J.; Kollat, J.

    2012-04-01

    This study contributes a rigorous diagnostic assessment of state-of-the-art multiobjective evolutionary algorithms (MOEAs) and highlights key advances that the water resources field can exploit to better discover the critical tradeoffs constraining our systems. This study provides the most comprehensive diagnostic assessment of MOEAs for water resources to date, exploiting more than 100,000 MOEA runs and trillions of design evaluations. The diagnostic assessment measures the effectiveness, efficiency, reliability, and controllability of ten benchmark MOEAs for a representative suite of water resources applications addressing rainfall-runoff calibration, long-term groundwater monitoring (LTM), and risk-based water supply portfolio planning. The suite of problems encompasses a range of challenging problem properties including (1) many-objective formulations with 4 or more objectives, (2) multi-modality (or false optima), (3) nonlinearity, (4) discreteness, (5) severe constraints, (6) stochastic objectives, and (7) non-separability (also called epistasis). The applications are representative of the dominant problem classes that have shaped the history of MOEAs in water resources and that will be dominant foci in the future. Recommendations are provided for which modern MOEAs should serve as tools and benchmarks in the future water resources literature.

  19. An impatient evolutionary algorithm with probabilistic tabu search for unified solution of some NP-hard problems in graph and set theory via clique finding.

    PubMed

    Guturu, Parthasarathy; Dantu, Ram

    2008-06-01

    Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by the researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) to the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of the maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.
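    The unified treatment described above relies on classical reductions to the maximum clique problem: a maximum independent set of a graph G is a maximum clique of the complement of G, and a minimum vertex cover is the complement of a maximum independent set. The sketch below illustrates these textbook reductions with a brute-force clique finder on a toy graph; it is not the IEA-PTS algorithm, and the graph and helper names are invented for the example.

    ```python
    from itertools import combinations

    def is_clique(adj, nodes):
        # every pair of the chosen nodes must be connected
        return all(v in adj[u] for u, v in combinations(nodes, 2))

    def max_clique(adj):
        """Brute-force maximum clique (suitable only for tiny graphs)."""
        nodes = list(adj)
        for k in range(len(nodes), 0, -1):
            for subset in combinations(nodes, k):
                if is_clique(adj, subset):
                    return set(subset)
        return set()

    def complement(adj):
        nodes = set(adj)
        return {u: (nodes - {u}) - adj[u] for u in adj}

    # Toy graph as an adjacency-set dictionary
    G = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}

    mis = max_clique(complement(G))   # maximum independent set of G
    vertex_cover = set(G) - mis       # minimum vertex cover of G
    print("max independent set:", mis)
    print("min vertex cover  :", vertex_cover)
    ```

    Metaheuristics such as the evolutionary algorithm in the abstract replace the brute-force search with a guided exploration, but the reductions that unify the problem family are the same.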

  20. Solidification of a binary alloy: Finite-element, single-domain simulation and new benchmark solutions

    NASA Astrophysics Data System (ADS)

    Le Bars, Michael; Worster, M. Grae

    2006-07-01

    A finite-element simulation of binary alloy solidification based on a single-domain formulation is presented and tested. Resolution of phase change is first checked by comparison with the analytical results of Worster [M.G. Worster, Solidification of an alloy from a cooled boundary, J. Fluid Mech. 167 (1986) 481-501] for purely diffusive solidification. Fluid dynamical processes without phase change are then tested by comparison with previous numerical studies of thermal convection in a pure fluid [G. de Vahl Davis, Natural convection of air in a square cavity: a bench mark numerical solution, Int. J. Numer. Meth. Fluids 3 (1983) 249-264; D.A. Mayne, A.S. Usmani, M. Crapper, h-adaptive finite element solution of high Rayleigh number thermally driven cavity problem, Int. J. Numer. Meth. Heat Fluid Flow 10 (2000) 598-615; D.C. Wan, B.S.V. Patnaik, G.W. Wei, A new benchmark quality solution for the buoyancy driven cavity by discrete singular convolution, Numer. Heat Transf. 40 (2001) 199-228], in a porous medium with a constant porosity [G. Lauriat, V. Prasad, Non-darcian effects on natural convection in a vertical porous enclosure, Int. J. Heat Mass Transf. 32 (1989) 2135-2148; P. Nithiarasu, K.N. Seetharamu, T. Sundararajan, Natural convective heat transfer in an enclosure filled with fluid saturated variable porosity medium, Int. J. Heat Mass Transf. 40 (1997) 3955-3967] and in a mixed liquid-porous medium with a spatially variable porosity [P. Nithiarasu, K.N. Seetharamu, T. Sundararajan, Natural convective heat transfer in an enclosure filled with fluid saturated variable porosity medium, Int. J. Heat Mass Transf. 40 (1997) 3955-3967; N. Zabaras, D. Samanta, A stabilized volume-averaging finite element method for flow in porous media and binary alloy solidification processes, Int. J. Numer. Meth. Eng. 60 (2004) 1103-1138]. Finally, new benchmark solutions for simultaneous flow through both fluid and porous domains and for convective solidification processes are presented, based on the similarity solutions in corner-flow geometries recently obtained by Le Bars and Worster [M. Le Bars, M.G. Worster, Interfacial conditions between a pure fluid and a porous medium: implications for binary alloy solidification, J. Fluid Mech. (in press)]. Good agreement is found for all tests, hence validating our physical and numerical methods. More generally, the computations presented here could now be considered as standard and reliable analytical benchmarks for numerical simulations, specifically and independently testing the different processes underlying binary alloy solidification.

  1. Towards a suite of test cases and a pycomodo library to assess and improve numerical methods in ocean models

    NASA Astrophysics Data System (ADS)

    Garnier, Valérie; Honnorat, Marc; Benshila, Rachid; Boutet, Martial; Cambon, Gildas; Chanut, Jérome; Couvelard, Xavier; Debreu, Laurent; Ducousso, Nicolas; Duhaut, Thomas; Dumas, Franck; Flavoni, Simona; Gouillon, Flavien; Lathuilière, Cyril; Le Boyer, Arnaud; Le Sommer, Julien; Lyard, Florent; Marsaleix, Patrick; Marchesiello, Patrick; Soufflet, Yves

    2016-04-01

    The COMODO group (http://www.comodo-ocean.fr) gathers developers of global and limited-area ocean models (NEMO, ROMS_AGRIF, S, MARS, HYCOM, S-TUGO) with the aim of addressing well-identified numerical issues. In order to evaluate existing models, to improve numerical approaches, methods, and concepts (such as effective resolution), to assess the behavior of numerical models in complex hydrodynamical regimes, and to propose guidelines for the development of future ocean models, a benchmark suite is proposed that covers both idealized test cases dedicated to targeted properties of numerical schemes and more complex test cases allowing evaluation of the coherence of the model kernel. The benchmark suite is built to study separately, then together, the main components of an ocean model: the continuity and momentum equations, the advection-diffusion of tracers, the vertical coordinate design, and the time-stepping algorithms. The test cases are chosen for their simplicity of implementation (analytic initial conditions), for their capacity to focus on a few schemes or parts of the kernel, for the availability of analytical solutions or accurate diagnostics, and lastly for their ability to simulate a key oceanic process in a controlled environment. Idealized test cases (advection-diffusion of tracers, upwelling, lock exchange, baroclinic vortex, adiabatic motion along bathymetry) allow properties of numerical schemes to be verified, while others (trajectory of a barotropic vortex, current-topography interaction) bring to light numerical issues that remain undetected in realistic configurations. When the complexity of the simulated dynamics grows (internal waves, unstable baroclinic jet), sharing the same experimental designs across different existing models is useful to obtain a measure of model sensitivity to numerical choices (Soufflet et al., 2016). Lastly, test cases help in understanding the submesoscale influence on the dynamics (Couvelard et al., 2015). Such a benchmark suite is an interesting testbed for continuing research on numerical approaches, as well as an efficient tool to maintain any ocean code and to assure users of a certified model over a given range of hydrodynamical regimes. Thanks to a common netCDF format, this suite is complemented by a Python library that encompasses all the tools and metrics used to assess the efficiency of the numerical methods. References - Couvelard X., F. Dumas, V. Garnier, A.L. Ponte, C. Talandier, A.M. Treguier (2015). Mixed layer formation and restratification in presence of mesoscale and submesoscale turbulence. Ocean Modelling, Vol 96-2, p 243-253. doi:10.1016/j.ocemod.2015.10.004. - Soufflet Y., P. Marchesiello, F. Lemarié, J. Jouanno, X. Capet, L. Debreu, R. Benshila (2016). On effective resolution in ocean models. Ocean Modelling, in press. doi:10.1016/j.ocemod.2015.12.004

  2. Suited versus unsuited analog astronaut performance using the Aouda.X space suit simulator: the DELTA experiment of MARS2013.

    PubMed

    Soucek, Alexander; Ostkamp, Lutz; Paternesi, Roberta

    2015-04-01

    Space suit simulators are used for extravehicular activities (EVAs) during Mars analog missions. Flight planning and EVA productivity require accurate time estimates of activities to be performed with such simulators, such as experiment execution or traverse walking. We present a benchmarking methodology for the Aouda.X space suit simulator of the Austrian Space Forum. By measuring and comparing the times needed to perform a set of 10 test activities with and without Aouda.X, an average time delay was derived in the form of a multiplicative factor. This statistical value (a second-over-second time ratio) is 1.30 and shows that operations in Aouda.X take on average a third longer than the same operations without the suit. We also show that activities predominantly requiring fine motor skills are associated with larger time delays (between 1.17 and 1.59) than those requiring short-distance locomotion or short-term muscle strain (between 1.10 and 1.16). The results of the DELTA experiment performed during the MARS2013 field mission increase analog mission planning reliability and thus EVA efficiency and productivity when using Aouda.X.

  3. Towards unbiased benchmarking of evolutionary and hybrid algorithms for real-valued optimisation

    NASA Astrophysics Data System (ADS)

    MacNish, Cara

    2007-12-01

    Randomised population-based algorithms, such as evolutionary, genetic and swarm-based algorithms, and their hybrids with traditional search techniques, have proven successful and robust on many difficult real-valued optimisation problems. This success, along with the readily applicable nature of these techniques, has led to an explosion in the number of algorithms and variants proposed. In order for the field to advance it is necessary to carry out effective comparative evaluations of these algorithms, and thereby better identify and understand those properties that lead to better performance. This paper discusses the difficulties of providing benchmarking of evolutionary and allied algorithms that is both meaningful and logistically viable. To be meaningful the benchmarking test must give a fair comparison that is free, as far as possible, from biases that favour one style of algorithm over another. To be logistically viable it must overcome the need for pairwise comparison between all the proposed algorithms. To address the first problem, we begin by attempting to identify the biases that are inherent in commonly used benchmarking functions. We then describe a suite of test problems, generated recursively as self-similar or fractal landscapes, designed to overcome these biases. For the second, we describe a server that uses web services to allow researchers to 'plug in' their algorithms, running on their local machines, to a central benchmarking repository.
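    One family of recursively generated, self-similar landscapes (in the spirit of the test suite described above, though not MacNish's actual construction) can be produced by midpoint displacement, where the displacement amplitude shrinks by a fixed factor at each level of recursion. The one-dimensional sketch below is only meant to convey the idea; the function name and parameters are illustrative.

    ```python
    import numpy as np

    def fractal_landscape(levels=8, roughness=0.6, seed=0):
        """1-D self-similar test landscape via recursive midpoint displacement.

        Returns 2**levels + 1 objective values; roughness < 1 keeps the recursion
        self-similar (displacements shrink by the same factor at every level).
        """
        rng = np.random.default_rng(seed)
        y = np.array([rng.uniform(), rng.uniform()])
        scale = 1.0
        for _ in range(levels):
            midpoints = 0.5 * (y[:-1] + y[1:]) + rng.normal(0.0, scale, size=len(y) - 1)
            new = np.empty(2 * len(y) - 1)
            new[0::2] = y            # keep existing points
            new[1::2] = midpoints    # interleave displaced midpoints
            y = new
            scale *= roughness
        return y

    landscape = fractal_landscape()
    print(len(landscape), landscape.min(), landscape.max())
    ```

    Because the landscape is regenerated from a seed rather than hand-crafted, it is harder for an algorithm to exploit structural biases of a fixed benchmark function, which is the motivation given in the abstract.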

  4. Benchmarking and Evaluating Unified Memory for OpenMP GPU Offloading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Alok; Li, Lingda; Kong, Martin

    Here, the latest OpenMP standard offers automatic device offloading capabilities which facilitate GPU programming. Despite this, there remain many challenges. One of these is the unified memory feature introduced in recent GPUs. GPUs in current and future HPC systems have enhanced support for unified memory space. In such systems, CPU and GPU can access each other's memory transparently; that is, the data movement is managed automatically by the underlying system software and hardware. Memory oversubscription is also possible in these systems. However, there is a significant lack of knowledge about how this mechanism will perform, and how programmers should use it. We have modified several benchmark codes in the Rodinia benchmark suite to study the behavior of OpenMP accelerator extensions and have used them to explore the impact of unified memory in an OpenMP context. We moreover modified the open source LLVM compiler to allow OpenMP programs to exploit unified memory. The results of our evaluation reveal that, while the performance of unified memory is comparable with that of normal GPU offloading for benchmarks with little data reuse, it suffers from significant overhead when GPU memory is oversubscribed for benchmarks with a large amount of data reuse. Based on these results, we provide several guidelines for programmers to achieve better performance with unified memory.

  5. How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction

    NASA Astrophysics Data System (ADS)

    Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.

    2015-03-01

    The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment, but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can best be used to evaluate their probabilistic forecasts. This study shows that the calculated forecast skill can vary depending on the benchmark selected and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system, and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy, the benchmark that has the most utility for EFAS, and that avoids the most naïve skill across different hydrological situations, is found to be meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long-term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system, and their use produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. Recommendations for EFAS are to move to routine use of meteorological persistency, an advanced meteorological benchmark, and a simple meteorological benchmark in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used in the evaluation of skill in probabilistic hydrological forecasts and which benchmarks are most useful for skill discrimination and avoidance of naïve skill in a large-scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to evaluate which benchmarks to employ, so that forecasters can have trust in their skill evaluation and confidence that their forecasts are indeed better.
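    Skill against a benchmark is usually reported as a skill score built from a proper score such as the CRPS, e.g. CRPSS = 1 - CRPS_forecast / CRPS_benchmark. The sketch below uses the plain empirical estimator of the ensemble CRPS, CRPS = E|X - y| - (1/2) E|X - X'|, on synthetic data; it illustrates the principle only and is not the EFAS evaluation code, and all names and distributions are invented for the example.

    ```python
    import numpy as np

    def crps_ensemble(ens, obs):
        """Empirical CRPS of a 1-D ensemble `ens` for a scalar observation `obs`."""
        ens = np.asarray(ens, dtype=float)
        term1 = np.mean(np.abs(ens - obs))
        term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
        return term1 - term2

    def crpss(forecast_ens, benchmark_ens, observations):
        """CRPS skill score vs a benchmark: 1 is perfect, <= 0 means no skill."""
        crps_f = np.mean([crps_ensemble(f, y) for f, y in zip(forecast_ens, observations)])
        crps_b = np.mean([crps_ensemble(b, y) for b, y in zip(benchmark_ens, observations)])
        return 1.0 - crps_f / crps_b

    # Toy example: 30 forecast dates, 11-member ensembles
    rng = np.random.default_rng(1)
    obs = rng.gamma(2.0, 5.0, size=30)                          # "observed" discharge
    forecasts = [y + rng.normal(0, 2.0, size=11) for y in obs]  # sharp, nearly unbiased system
    climatology = [rng.gamma(2.0, 5.0, size=11) for _ in obs]   # naive climatological benchmark
    print(f"CRPSS vs climatology: {crpss(forecasts, climatology, obs):.2f}")
    ```

    A tougher benchmark (for example, the meteorological persistency benchmark favoured in the abstract) lowers CRPS_benchmark and therefore lowers the apparent skill score, which is exactly the point of choosing a hard-to-beat reference.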

  6. Suite of Benchmark Tests to Conduct Mesh-Convergence Analysis of Nonlinear and Non-constant Coefficient Transport Codes

    NASA Astrophysics Data System (ADS)

    Zamani, K.; Bombardelli, F. A.

    2014-12-01

    Verification of geophysics codes is imperative to avoid serious academic as well as practical consequences. In cases where access to a given source code is not possible, the Method of Manufactured Solutions (MMS) cannot be employed in code verification. In contrast, employing the Method of Exact Solutions (MES) has several practical advantages. In this research, we first provide four new one-dimensional analytical solutions designed for code verification; these solutions are able to uncover particular deficiencies of solvers for the advection-diffusion-reaction (ADR) equation, such as the treatment of nonlinear advection, diffusion, or source terms, as well as of non-constant coefficients. After that, we provide a solution of Burgers' equation in a novel setup. The proposed solutions satisfy continuity of mass for the ambient flow, which is a crucial factor for coupled hydrodynamics-transport solvers. Then, we use the derived analytical solutions for code verification. To clarify gray-literature issues in the verification of transport codes, we designed a comprehensive test suite to uncover any imperfection in transport solvers via a hierarchical increase in the level of test complexity. The test suite includes hundreds of unit tests and system tests that check individual portions of the code. The checks start with a simple case of unidirectional advection, proceed to bidirectional advection and tidal flow, and build up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity, and reactions. The concealing effect of scales (Péclet and Damköhler numbers) on the mesh-convergence study and appropriate remedies are also discussed. For the cases in which appropriate benchmarks for a mesh-convergence study are not available, we utilize symmetry. Auxiliary subroutines for automation of the test suite and report generation are designed. All in all, the test package is not only a robust tool for code verification but also provides comprehensive insight into the capabilities of ADR solvers. Such information is essential for any rigorous computational modeling of the ADR equation for surface/subsurface pollution transport. We also convey our experience in finding several errors that were not detectable with routine verification techniques.
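    A typical MES-based mesh-convergence check computes an error norm against the analytical solution on successively refined grids and reports the observed order of accuracy, p = ln(e_coarse/e_fine) / ln(r) for a refinement ratio r. The sketch below shows that bookkeeping for a generic one-dimensional solver; run_solver is a hypothetical placeholder for the transport code under test, and the mock solver merely fabricates a second-order error to exercise the calculation.

    ```python
    import numpy as np

    def observed_order(errors, ratios):
        """Observed order of accuracy from error norms on coarse-to-fine grids."""
        errors = np.asarray(errors, dtype=float)
        ratios = np.asarray(ratios, dtype=float)
        return np.log(errors[:-1] / errors[1:]) / np.log(ratios)

    def convergence_study(run_solver, exact, grid_sizes):
        """L2 mesh-convergence study against an exact (MES) solution.

        run_solver(n) -> (x, u_numerical) is a placeholder for the code under test.
        """
        errors = []
        for n in grid_sizes:
            x, u_num = run_solver(n)
            e = np.sqrt(np.mean((u_num - exact(x)) ** 2))  # discrete L2 error norm
            errors.append(e)
        ratios = [grid_sizes[i + 1] / grid_sizes[i] for i in range(len(grid_sizes) - 1)]
        return errors, observed_order(errors, ratios)

    # Mock "solver" with a known second-order error, just to exercise the bookkeeping
    exact = lambda x: np.sin(np.pi * x)
    def run_solver(n):
        x = np.linspace(0.0, 1.0, n)
        return x, exact(x) + (1.0 / n) ** 2 * np.cos(3 * np.pi * x)

    errs, p = convergence_study(run_solver, exact, [32, 64, 128, 256])
    print("observed order of accuracy:", np.round(p, 2))  # approaches 2
    ```

    With a genuine solver, an observed order that falls below the formal order of the scheme (or that is masked by large Péclet or Damköhler numbers, as the abstract cautions) is exactly the kind of imperfection such a suite is designed to expose.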

  7. Biclustering as a method for RNA local multiple sequence alignment.

    PubMed

    Wang, Shu; Gutell, Robin R; Miranker, Daniel P

    2007-12-15

    Biclustering is a clustering method that simultaneously clusters both the domain and range of a relation. A challenge in multiple sequence alignment (MSA) is that the alignment of sequences is often intended to reveal groups of conserved functional subsequences. Simultaneously, the grouping of the sequences can impact the alignment; this is precisely the kind of dual situation biclustering is intended to address. We define a representation of the MSA problem enabling the application of biclustering algorithms. We develop a computer program for local MSA, BlockMSA, that combines biclustering with divide-and-conquer. BlockMSA simultaneously finds groups of similar sequences and locally aligns subsequences within them. Further alignment is accomplished by dividing both the set of sequences and their contents. The net result is both a multiple sequence alignment and a hierarchical clustering of the sequences. BlockMSA was tested on the subsets of the BRAliBase 2.1 benchmark suite that display high variability and on an extension to that suite to larger problem sizes. Alignments of two large datasets of current biological interest, T box sequences and Group IC1 introns, were also evaluated. The results were compared with alignments computed by the ClustalW, MAFFT, MUSCLE, and PROBCONS alignment programs using the Sum of Pairs Score (SPS) and Consensus Count. Results for the benchmark suite are sensitive to problem size. On problems of 15 or more sequences, BlockMSA is consistently the best. On none of the problems in the test suite are there appreciable differences in scores among BlockMSA, MAFFT, and PROBCONS. On the T box sequences, BlockMSA does the most faithful job of reproducing known annotations; MAFFT and PROBCONS do not. On the intron sequences, BlockMSA, MAFFT, and MUSCLE are comparable at identifying conserved regions. BlockMSA is implemented in Java. Source code and supplementary datasets are available at http://aug.csres.utexas.edu/msa/

  8. Integrating the Nqueens Algorithm into a Parameterized Benchmark Suite

    DTIC Science & Technology

    2016-02-01

    FOB is a 64-node heterogeneous cluster consisting of 16 IBM dx360M4 nodes, each with one NVIDIA Kepler K20M GPU, and 48 IBM dx360M4 nodes, and each...nodes have 256 GB of memory and an NVIDIA Tesla K40 GPU. More details on Excalibur can be found on the US Army DSRC website.19 Figures 3 and 4 show the

  9. Commercial Building Energy Saver, API

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Piette, Mary; Lee, Sang Hoon

    2015-08-27

    The CBES API provides an application programming interface to a suite of functions to improve the energy efficiency of buildings, including building energy benchmarking, preliminary retrofit analysis using the pre-simulation database DEEP, and detailed retrofit analysis using energy modeling with the EnergyPlus simulation engine. The CBES API is used to power the LBNL CBES Web App. It can be adopted by third-party developers and vendors into their software tools and platforms.

  10. Simulation of Benchmark Cases with the Terminal Area Simulation System (TASS)

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at; Proctor, Fred

    2011-01-01

    The hydrodynamic core of the Terminal Area Simulation System (TASS) is evaluated against different benchmark cases. In the absence of closed-form solutions for the equations governing atmospheric flows, the models are usually evaluated against idealized test cases. Over the years, various authors have suggested a suite of these idealized cases which have become standards for testing and evaluating the dynamics and thermodynamics of atmospheric flow models. In this paper, simulations of three such cases are described. In addition, the TASS model is evaluated against a test case that uses an exact solution of the Navier-Stokes equations. The TASS results are compared against previously reported simulations of these benchmark cases in the literature. It is demonstrated that the TASS model is highly accurate, stable and robust.

  11. The ab-initio density matrix renormalization group in practice.

    PubMed

    Olivares-Amaya, Roberto; Hu, Weifeng; Nakatani, Naoki; Sharma, Sandeep; Yang, Jun; Chan, Garnet Kin-Lic

    2015-01-21

    The ab-initio density matrix renormalization group (DMRG) is a tool that can be applied to a wide variety of interesting problems in quantum chemistry. Here, we examine the density matrix renormalization group from the vantage point of the quantum chemistry user. What kinds of problems is the DMRG well-suited to? What are the largest systems that can be treated at practical cost? What sort of accuracies can be obtained, and how do we reason about the computational difficulty in different molecules? By examining a diverse benchmark set of molecules: π-electron systems, benchmark main-group and transition metal dimers, and the Mn-oxo-salen and Fe-porphine organometallic compounds, we provide some answers to these questions, and show how the density matrix renormalization group is used in practice.

  12. Ontology for Semantic Data Integration in the Domain of IT Benchmarking.

    PubMed

    Pfaff, Matthias; Neubig, Stefan; Krcmar, Helmut

    2018-01-01

    A domain-specific ontology for IT benchmarking has been developed to bridge the gap between a systematic characterization of IT services and their data-based valuation. Since information is generally collected during a benchmark exercise using questionnaires on a broad range of topics, such as employee costs, software licensing costs, and quantities of hardware, it is commonly stored as natural language text; thus, this information is stored in an intrinsically unstructured form. Although these data form the basis for identifying potentials for IT cost reductions, neither a uniform description of any measured parameters nor the relationship between such parameters exists. Hence, this work proposes an ontology for the domain of IT benchmarking, available at https://w3id.org/bmontology. The design of this ontology is based on requirements mainly elicited from a domain analysis, which considers analyzing documents and interviews with representatives from Small- and Medium-Sized Enterprises and Information and Communications Technology companies over the last eight years. The development of the ontology and its main concepts is described in detail (i.e., the conceptualization of benchmarking events, questionnaires, IT services, indicators and their values) together with its alignment with the DOLCE-UltraLite foundational ontology.

  13. Commercial Building Energy Saver, Web App

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Piette, Mary; Lee, Sang Hoon

    The CBES App is a web-based toolkit for use by small businesses and by owners and operators of small and medium-sized commercial buildings to perform energy benchmarking and retrofit analysis for buildings. The CBES App analyzes the energy performance of the user's building pre- and post-retrofit, in conjunction with the user's input data, to identify recommended retrofit measures, energy savings, and economic analysis for the selected measures. The CBES App provides energy benchmarking, including obtaining an EnergyStar score using the EnergyStar API and benchmarking against California peer buildings using the EnergyIQ API. The retrofit analysis includes a preliminary analysis that looks up retrofit measures from the pre-simulated database DEEP, and a detailed analysis that creates and runs EnergyPlus models to calculate energy savings of retrofit measures. The CBES App builds upon the LBNL CBES API.

  14. Successful E-Learning in Small and Medium-Sized Enterprises

    ERIC Educational Resources Information Center

    Paulsen, Morten Flate

    2009-01-01

    So far, e-learning has primarily been used when there are many learners involved. The up-front investments related to e-learning are relatively high, and may be perceived as prohibitive for small and medium-sized enterprises (SMEs). Some e-learning is, however, getting less expensive, and some e-learning models are more suited for small-scale…

  15. Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Jin, Haoqiang; anMey, Dieter; Hatay, Ferhat F.

    2003-01-01

    With the advent of parallel hardware and software technologies users are faced with the challenge to choose a programming paradigm best suited for the underlying computer architecture. With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors (SMP), parallel programming techniques have evolved to support parallelism beyond a single level. Which programming paradigm is the best will depend on the nature of the given problem, the hardware architecture, and the available software. In this study we will compare different programming paradigms for the parallelization of a selected benchmark application on a cluster of SMP nodes. We compare the timings of different implementations of the same CFD benchmark application employing the same numerical algorithm on a cluster of Sun Fire SMP nodes. The rest of the paper is structured as follows: In section 2 we briefly discuss the programming models under consideration. We describe our compute platform in section 3. The different implementations of our benchmark code are described in section 4 and the performance results are presented in section 5. We conclude our study in section 6.

  16. Construct validity and expert benchmarking of the haptic virtual reality dental simulator.

    PubMed

    Suebnukarn, Siriwan; Chaisombat, Monthalee; Kongpunwijit, Thanapohn; Rhienmora, Phattanapon

    2014-10-01

    The aim of this study was to demonstrate construct validation of the haptic virtual reality (VR) dental simulator and to define expert benchmarking criteria for skills assessment. Thirty-four self-selected participants (fourteen novices, fourteen intermediates, and six experts in endodontics) at one dental school performed ten repetitions of three mode tasks of endodontic cavity preparation: easy (mandibular premolar with one canal), medium (maxillary premolar with two canals), and hard (mandibular molar with three canals). The virtual instrument's path length was registered by the simulator. The outcomes were assessed by an expert. The error scores in easy and medium modes accurately distinguished the experts from novices and intermediates at the onset of training, when there was a significant difference between groups (ANOVA, p<0.05). The trend was consistent until trial 5. From trial 6 on, the three groups achieved similar scores. No significant difference was found between groups at the end of training. Error score analysis was not able to distinguish any group at the hard level of training. Instrument path length showed a difference in performance according to groups at the onset of training (ANOVA, p<0.05). This study established construct validity for the haptic VR dental simulator by demonstrating its discriminant capabilities between that of experts and non-experts. The experts' error scores and path length were used to define benchmarking criteria for optimal performance.

  17. Reconstruction, Enhancement, Visualization, and Ergonomic Assessment for Laparoscopic Surgery

    DTIC Science & Technology

    2007-02-01

    support and upgrade of the REVEAL display system and tool suite in the University of Maryland Medical Center's Simulation Center, (2) stereo video display...technology deployment, (3) stereo probe calibration benchmarks and support tools, (4) the production of research media, (5) baseline results from...endoscope can be used to generate a stereoscopic view for a surgeon, as with the DaVinci robot in use today. In order to use such an endoscope for

  18. Design and development of a prototypical software for semi-automatic generation of test methodologies and security checklists for IT vulnerability assessment in small- and medium-sized enterprises (SME)

    NASA Astrophysics Data System (ADS)

    Möller, Thomas; Bellin, Knut; Creutzburg, Reiner

    2015-03-01

    The aim of this paper is to show the recent progress in the design and prototypical development of a software suite Copra Breeder* for semi-automatic generation of test methodologies and security checklists for IT vulnerability assessment in small and medium-sized enterprises.

  19. Analysis of dosimetry from the H.B. Robinson unit 2 pressure vessel benchmark using RAPTOR-M3G and ALPAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, G.A.

    2011-07-01

    Document available in abstract form only, full text of document follows: The dosimetry from the H. B. Robinson Unit 2 Pressure Vessel Benchmark is analyzed with a suite of Westinghouse-developed codes and data libraries. The radiation transport from the reactor core to the surveillance capsule and ex-vessel locations is performed by RAPTOR-M3G, a parallel deterministic radiation transport code that calculates high-resolution neutron flux information in three dimensions. The cross-section library used in this analysis is the ALPAN library, an Evaluated Nuclear Data File (ENDF)/B-VII.0-based library designed for reactor dosimetry and fluence analysis applications. Dosimetry is evaluated with the industry-standard SNLRML reactor dosimetry cross-section data library. (authors)

  20. An efficient parallel algorithm for matrix-vector multiplication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrickson, B.; Leland, R.; Plimpton, S.

    The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high-performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/√p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.
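    The O(n/√p) communication term quoted above is characteristic of a two-dimensional block decomposition, in which each of the p processors owns an (n/√p) x (n/√p) block of the matrix and exchanges only vector segments of length n/√p. The serial NumPy sketch below emulates that data layout to make the communication volume explicit; it is a generic illustration, not the hypercube-specific algorithm of the paper, and the names and sizes are invented for the example.

    ```python
    import numpy as np

    def blocked_matvec(A, x, q):
        """Emulate y = A @ x on a q x q grid of 'processors' (p = q*q).

        Processor (i, j) owns block A[i*b:(i+1)*b, j*b:(j+1)*b] and receives only the
        length-b segment of x for its column, i.e. n/sqrt(p) values per processor.
        Partial products are then accumulated along each grid row.
        """
        n = A.shape[0]
        assert n % q == 0, "toy sketch assumes the block size divides n"
        b = n // q
        y = np.zeros(n)
        for i in range(q):          # grid row of the owning processor
            for j in range(q):      # grid column
                A_block = A[i*b:(i+1)*b, j*b:(j+1)*b]
                x_seg = x[j*b:(j+1)*b]               # only piece of x this processor needs
                y[i*b:(i+1)*b] += A_block @ x_seg    # row-wise accumulation of partials
        return y

    rng = np.random.default_rng(2)
    n, q = 12, 3                     # 12 x 12 matrix on a 3 x 3 processor grid
    A = rng.standard_normal((n, n))
    x = rng.standard_normal(n)
    assert np.allclose(blocked_matvec(A, x, q), A @ x)
    ```

    In an actual distributed-memory code the block updates would run concurrently and the row-wise accumulation would be a reduction, but the per-processor data movement is the same as in this serial emulation.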

  1. A Comparison of Methods for Assessing Space Suit Joint Ranges of Motion

    NASA Technical Reports Server (NTRS)

    Aitchison, Lindsay T.

    2012-01-01

    Through the Advanced Exploration Systems (AES) Program, NASA is attempting to use the vast collection of space suit mobility data from 50 years worth of space suit testing to build predictive analysis tools to aid in early architecture decisions for future missions and exploration programs. However, the design engineers must first understand if and how data generated by different methodologies can be compared directly and used in an essentially interchangeable manner. To address this question, the isolated joint range of motion data from two different test series were compared. Both data sets were generated from participants wearing the Mark III Space Suit Technology Demonstrator (MK-III), Waist Entry I-suit (WEI), and minimal clothing. Additionally the two tests shared a common test subject that allowed for within subject comparisons of the methods that greatly reduced the number of variables in play. The tests varied in their methodologies: the Space Suit Comparative Technologies Evaluation used 2-D photogrammetry to analyze isolated ranges of motion while the Constellation space suit benchmarking and requirements development used 3-D motion capture to evaluate both isolated and functional joint ranges of motion. The isolated data from both test series were compared graphically, as percent differences, and by simple statistical analysis. The results indicated that while the methods generate results that are statistically the same (significance level p= 0.01), the differences are significant enough in the practical sense to make direct comparisons ill advised. The concluding recommendations propose direction for how to bridge the data gaps and address future mobility data collection to allow for backward compatibility.

  2. Telescience Resource Kit (TReK)

    NASA Technical Reports Server (NTRS)

    Lippincott, Jeff

    2015-01-01

    Telescience Resource Kit (TReK) is one of the Huntsville Operations Support Center (HOSC) remote operations solutions. It can be used to monitor and control International Space Station (ISS) payloads from anywhere in the world. It is comprised of a suite of software applications and libraries that provide generic data system capabilities and access to HOSC services. The TReK Software has been operational since 2000. A new cross-platform version of TReK is under development. The new software is being released in phases during the 2014-2016 timeframe. The TReK Release 3.x series of software is the original TReK software that has been operational since 2000. This software runs on Windows. It contains capabilities to support traditional telemetry and commanding using CCSDS (Consultative Committee for Space Data Systems) packets. The TReK Release 4.x series of software is the new cross platform software. It runs on Windows and Linux. The new TReK software will support communication using standard IP protocols and traditional telemetry and commanding. All the software listed above is compatible and can be installed and run together on Windows. The new TReK software contains a suite of software that can be used by payload developers on the ground and onboard (TReK Toolkit). TReK Toolkit is a suite of lightweight libraries and utility applications for use onboard and on the ground. TReK Desktop is the full suite of TReK software -most useful on the ground. When TReK Desktop is released, the TReK installation program will provide the option to choose just the TReK Toolkit portion of the software or the full TReK Desktop suite. The ISS program is providing the TReK Toolkit software as a generic flight software capability offered as a standard service to payloads. TReK Software Verification was conducted during the April/May 2015 timeframe. Payload teams using the TReK software onboard can reference the TReK software verification. TReK will be demonstrated on-orbit running on an ISS provided T61p laptop. Target Timeframe: September 2015 -2016. The on-orbit demonstration will collect benchmark metrics, and will be used in the future to provide live demonstrations during ISS Payload Conferences. Benchmark metrics and demonstrations will address the protocols described in SSP 52050-0047 Ku Forward section 3.3.7. (Associated term: CCSDS File Delivery Protocol (CFDP)).

  3. Comparing a Coevolutionary Genetic Algorithm for Multiobjective Optimization

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Kraus, William F.; Haith, Gary L.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We present results from a study comparing a recently developed coevolutionary genetic algorithm (CGA) against a set of evolutionary algorithms using a suite of multiobjective optimization benchmarks. The CGA embodies competitive coevolution and employs a simple, straightforward target population representation and fitness calculation based on developmental theory of learning. Because of these properties, setting up the additional population is trivial making implementation no more difficult than using a standard GA. Empirical results using a suite of two-objective test functions indicate that this CGA performs well at finding solutions on convex, nonconvex, discrete, and deceptive Pareto-optimal fronts, while giving respectable results on a nonuniform optimization. On a multimodal Pareto front, the CGA finds a solution that dominates solutions produced by eight other algorithms, yet the CGA has poor coverage across the Pareto front.

  4. ZPR-6 assembly 7 high ²⁴⁰Pu core: a cylindrical assembly with mixed (Pu,U)-oxide fuel and a central high ²⁴⁰Pu zone.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lell, R. M.; Schaefer, R. W.; McKnight, R. D.

    Over a period of 30 years more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited to form the basis for criticality safety benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was ²³⁵U or ²³⁹Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. The term 'benchmark' in a ZPR program connotes a particularly simple loading aimed at gaining basic reactor physics insight, as opposed to studying a reactor design. In fact, the ZPR-6/7 Benchmark Assembly (Reference 1) had a very simple core unit cell assembled from plates of depleted uranium, sodium, iron oxide, U3O8, and plutonium. The ZPR-6/7 core cell-average composition is typical of the interior region of liquid-metal fast breeder reactors (LMFBRs) of the era. It was one part of the Demonstration Reactor Benchmark Program, which provided integral experiments characterizing the important features of demonstration-size LMFBRs. As a benchmark, ZPR-6/7 was devoid of many 'real' reactor features, such as simulated control rods and multiple enrichment zones, in its reference form. Those kinds of features were investigated experimentally in variants of the reference ZPR-6/7 or in other critical assemblies in the Demonstration Reactor Benchmark Program.

  5. The Role of Breast Size and Areolar Pigmentation in Perceptions of Women's Sexual Attractiveness, Reproductive Health, Sexual Maturity, Maternal Nurturing Abilities, and Age.

    PubMed

    Dixson, Barnaby J; Duncan, Melanie; Dixson, Alan F

    2015-08-01

    Women's breast morphology is thought to have evolved via sexual selection as a signal of maturity, health, and fecundity. While research demonstrates that breast morphology is important in men's judgments of women's attractiveness, it remains to be determined how perceptions might differ when considering a larger suite of mate relevant attributes. Here, we tested how variation in breast size and areolar pigmentation affected perceptions of women's sexual attractiveness, reproductive health, sexual maturity, maternal nurturing abilities, and age. Participants (100 men; 100 women) rated images of female torsos modeled to vary in breast size (very small, small, medium, and large) and areolar pigmentation (light, medium, and dark) for each of the five attributes listed above. Sexual attractiveness ratings increased linearly with breast size, but large breasts were not judged to be significantly more attractive than medium-sized breasts. Small and medium-sized breasts were rated as most attractive if they included light or medium colored areolae, whereas large breasts were more attractive if they had medium or dark areolae. Ratings for perceived age, sexual maturity, and nurturing ability also increased with breast size. Darkening the areolae reduced ratings of the reproductive health of medium and small breasts, whereas it increased ratings for large breasts. There were no significant sex differences in ratings of any of the perceptual measures. These results demonstrate that breast size and areolar pigmentation interact to determine ratings for a suite of sociosexual attributes, each of which may be relevant to mate choice in men and intra-sexual competition in women.

  6. A Flow Solver for Three-Dimensional DRAGON Grids

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Zheng, Yao

    2002-01-01

    DRAGONFLOW code has been developed to solve three-dimensional Navier-Stokes equations over a complex geometry whose flow domain is discretized with the DRAGON grid, a combination of a Chimera grid and a collection of unstructured grids. In the DRAGONFLOW suite, both OVERFLOW and USM3D are presented in the form of module libraries, and a master module controls the invoking of these individual modules. This report includes essential aspects, programming structures, benchmark tests and numerical simulations.

  7. BIGHORN Computational Fluid Dynamics Theory, Methodology, and Code Verification & Validation Benchmark Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong; Andrs, David; Martineau, Richard Charles

    This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow, with an outline of the requisite constitutive relations. A second-order finite volume method used for solving compressible fluid flow problems is presented next. A Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration is also presented. The multi-fluid formulation is still being developed; although it is not yet complete, BIGHORN has been designed to handle multi-fluid problems. Due to the flexibility in the underlying MOOSE framework, BIGHORN is quite extensible, and can accommodate both multi-species and multi-phase formulations. This document also presents a suite of verification & validation benchmark test problems for BIGHORN. The intent of this suite of problems is to provide baseline comparison data that demonstrate the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods, and to suggest best practices when using BIGHORN.

  8. DYNA3D/ParaDyn Regression Test Suite Inventory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Jerry I.

    2016-09-01

    The following table constitutes an initial assessment of feature coverage across the regression test suite used for DYNA3D and ParaDyn. It documents the regression test suite at the time of preliminary release 16.1 in September 2016. The columns of the table represent groupings of functionalities, e.g., material models. Each problem in the test suite is represented by a row in the table. All features exercised by the problem are denoted by a check mark (√) in the corresponding column. The definition of “feature” has not been subdivided to its smallest unit of user input, e.g., algorithmic parameters specific to a particular type of contact surface. This represents a judgment to provide code developers and users a reasonable impression of feature coverage without expanding the width of the table by several multiples. All regression testing is run in parallel, typically with eight processors, except problems involving features only available in serial mode. Many are strictly regression tests acting as a check that the codes continue to produce adequately repeatable results as development unfolds; compilers change and platforms are replaced. A subset of the tests represents true verification problems that have been checked against analytical or other benchmark solutions. Users are welcomed to submit documented problems for inclusion in the test suite, especially if they are heavily exercising, and dependent upon, features that are currently underrepresented.

  9. Enhanced Verification Test Suite for Physics Simulation Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, J R; Brock, J S; Brandon, S T

    2008-10-10

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) Hydrodynamics; (b) Transport processes; and (c) Dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code is evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of greater sophistication or other physics regimes (e.g., energetic material response, magneto-hydrodynamics), would represent a scientifically desirable complement to the fundamental test cases discussed in this report. The authors believe that this document can be used to enhance the verification analyses undertaken at the DOE WP Laboratories and, thus, to improve the quality, credibility, and usefulness of the simulation codes that are analyzed with these problems.

  10. High-Strength Composite Fabric Tested at Structural Benchmark Test Facility

    NASA Technical Reports Server (NTRS)

    Krause, David L.

    2002-01-01

    Large sheets of ultrahigh strength fabric were put to the test at NASA Glenn Research Center's Structural Benchmark Test Facility. The material was stretched like a snare drum head until the last ounce of strength was reached, when it burst with a cacophonous release of tension. Along the way, the 3-ft square samples were also pulled, warped, tweaked, pinched, and yanked to predict the material's physical reactions to the many loads that it will experience during its proposed use. The material tested was a unique multi-ply composite fabric, reinforced with fibers that had a tensile strength eight times that of common carbon steel. The fiber plies were oriented at 0° and 90° to provide great membrane stiffness, as well as at 45° to provide an unusually high resistance to shear distortion. The fabric's heritage is in astronaut space suits and other NASA programs.

  11. Quantitative phenotyping via deep barcode sequencing.

    PubMed

    Smith, Andrew M; Heisler, Lawrence E; Mellor, Joseph; Kaper, Fiona; Thompson, Michael J; Chee, Mark; Roth, Frederick P; Giaever, Guri; Nislow, Corey

    2009-10-01

    Next-generation DNA sequencing technologies have revolutionized diverse genomics applications, including de novo genome sequencing, SNP detection, chromatin immunoprecipitation, and transcriptome analysis. Here we apply deep sequencing to genome-scale fitness profiling to evaluate yeast strain collections in parallel. This method, Barcode analysis by Sequencing, or "Bar-seq," outperforms the current benchmark barcode microarray assay in terms of both dynamic range and throughput. When applied to a complex chemogenomic assay, Bar-seq quantitatively identifies drug targets, with performance superior to the benchmark microarray assay. We also show that Bar-seq is well-suited for a multiplex format. We completely re-sequenced and re-annotated the yeast deletion collection using deep sequencing, found that approximately 20% of the barcodes and common priming sequences varied from expectation, and used this revised list of barcode sequences to improve data quality. Together, this new assay and analysis routine provide a deep-sequencing-based toolkit for identifying gene-environment interactions on a genome-wide scale.

  12. Benchmarking Various Green Fluorescent Protein Variants in Bacillus subtilis, Streptococcus pneumoniae, and Lactococcus lactis for Live Cell Imaging

    PubMed Central

    Overkamp, Wout; Beilharz, Katrin; Detert Oude Weme, Ruud; Solopova, Ana; Karsens, Harma; Kovács, Ákos T.; Kok, Jan

    2013-01-01

    Green fluorescent protein (GFP) offers efficient ways of visualizing promoter activity and protein localization in vivo, and many different variants are currently available to study bacterial cell biology. Which of these variants is best suited for a certain bacterial strain, goal, or experimental condition is not clear. Here, we have designed and constructed two “superfolder” GFPs with codon adaptation specifically for Bacillus subtilis and Streptococcus pneumoniae and have benchmarked them against five other previously available variants of GFP in B. subtilis, S. pneumoniae, and Lactococcus lactis, using promoter-gfp fusions. Surprisingly, the best-performing GFP under our experimental conditions in B. subtilis was the one codon optimized for S. pneumoniae and vice versa. The data and tools described in this study will be useful for cell biology studies in low-GC-rich Gram-positive bacteria. PMID:23956387

  13. Comparison of the PHISICS/RELAP5-3D Ring and Block Model Results for Phase I of the OECD MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom

    2014-04-01

    The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR-350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark, and presents selected results of the three steady state exercises 1-3 defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D “ring” model approach vs. a much more detailed model that includes kinetics feedback at the individual block level and thermal feedback on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.

  14. [Benchmarking in patient identification: An opportunity to learn].

    PubMed

    Salazar-de-la-Guerra, R M; Santotomás-Pajarrón, A; González-Prieto, V; Menéndez-Fraga, M D; Rocha Hurtado, C

    To perform benchmarking of the safe identification of hospital patients in the hospitals involved in the "Club de las tres C" (Calidez, Calidad y Cuidados), in order to prepare a common procedure for this process. A descriptive study was conducted on the patient identification process in palliative care and stroke units in 5 medium-stay hospitals. The following steps were carried out: data collection from each hospital; organisation and data analysis; and preparation of a common procedure for this process. The data obtained for the safe identification of all stroke patients were: hospital 1 (93%), hospital 2 (93.1%), hospital 3 (100%), and hospital 5 (93.4%); and for the palliative care process: hospital 1 (93%), hospital 2 (92.3%), hospital 3 (92%), hospital 4 (98.3%), and hospital 5 (85.2%). The aim of the study was accomplished successfully. Benchmarking activities were developed and knowledge on the patient identification process was shared. All hospitals had good results, with hospital 3 performing best in the stroke identification process. Benchmarking identification is difficult, but a useful common procedure that collects the best practices has been identified among the 5 hospitals. Copyright © 2017 SECA. Published by Elsevier España, S.L.U. All rights reserved.

  15. The mass storage testing laboratory at GSFC

    NASA Technical Reports Server (NTRS)

    Venkataraman, Ravi; Williams, Joel; Michaud, David; Gu, Heng; Kalluri, Atri; Hariharan, P. C.; Kobler, Ben; Behnke, Jeanne; Peavey, Bernard

    1998-01-01

    Industry-wide benchmarks exist for measuring the performance of processors (SPECmarks) and of database systems (Transaction Processing Council). Despite storage having become the dominant item in computing and IT (Information Technology) budgets, no such common benchmark is available in the mass storage field. Vendors and consultants provide services and tools for capacity planning and sizing, but these do not account for the complete set of metrics needed in today's archives. The availability of automated tape libraries, high-capacity RAID systems, and high-bandwidth interconnectivity between processor and peripherals has led to demands for services which traditional file systems cannot provide. File Storage and Management Systems (FSMS), which began to be marketed in the late 1980s, have helped to some extent with large tape libraries, but their use has introduced additional parameters affecting performance. The aim of the Mass Storage Test Laboratory (MSTL) at Goddard Space Flight Center is to develop a test suite that includes not only a comprehensive checklist to document a mass storage environment but also benchmark code. Benchmark code is being tested which will provide measurements both for baseline systems, i.e., applications interacting with peripherals through operating system services, and for combinations involving an FSMS. The benchmarks are written in C and are easily portable. They are initially aimed at the UNIX open-systems world. Measurements are being made using a Sun Ultra 170 Sparc with 256 MB of memory running Solaris 2.5.1 with the following configuration: a 4 mm tape stacker on SCSI 2 Fast/Wide; a 4 GB disk device on SCSI 2 Fast/Wide; and a Sony Petaserve on Fast/Wide differential SCSI 2.
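
    For a sense of what the baseline measurements described above involve, the sketch below times bulk sequential writes issued through ordinary operating-system services and reports sustained throughput. It is a minimal Python illustration of the idea, not part of the actual MSTL C suite; the file path, transfer size, and block size are arbitrary placeholders.

```python
import os
import time

def baseline_write_throughput(path, total_mb=256, block_kb=256):
    """Write total_mb of data in block_kb chunks through normal OS
    services and return the sustained throughput in MB/s."""
    block = b"\0" * (block_kb * 1024)
    n_blocks = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())          # force data out to the device
    elapsed = time.perf_counter() - start
    return total_mb / elapsed

if __name__ == "__main__":
    # Illustrative target path; point it at the device under test.
    print(f"{baseline_write_throughput('/tmp/mstl_baseline.dat'):.1f} MB/s")
```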

  16. Benchmarking worker nodes using LHCb productions and comparing with HEPSpec06

    NASA Astrophysics Data System (ADS)

    Charpentier, P.

    2017-10-01

    In order to estimate the capabilities of a computing slot with limited processing time, it is necessary to know its “power” with rather good precision. This allows, for example, pilot jobs to match a task for which the required CPU work is known, or to define the number of events to be processed knowing the CPU work per event. Otherwise one always runs the risk that the task is aborted because it exceeds the CPU capabilities of the resource. It also allows a better accounting of the consumed resources. The traditional way CPU power has been estimated in WLCG since 2007 is the HEP-Spec06 benchmark (HS06) suite, which was verified at the time to scale properly with a set of typical HEP applications. However, the hardware architecture of processors has evolved, all WLCG experiments have moved to 64-bit applications, and they use different compilation flags from those advertised for running HS06. It is therefore interesting to check the scaling of HS06 with the HEP applications. For this purpose, we have been using CPU-intensive massive simulation productions from the LHCb experiment and compared their event throughput to the HS06 rating of the worker nodes. We also compared it with a much faster benchmark script that is used by the DIRAC framework used by LHCb for evaluating at run time the performance of the worker nodes. This contribution reports on the findings of these comparisons: the main observation is that the scaling with HS06 is no longer fulfilled, while the fast benchmarks scale better but are less precise. One can also clearly see that some hardware or software features, when enabled on the worker nodes, may enhance their performance beyond expectation from either benchmark, depending on external factors.

  17. Comparative Ergonomic Evaluation of Spacesuit and Space Vehicle Design

    NASA Technical Reports Server (NTRS)

    England, Scott; Cowley, Matthew; Benson, Elizabeth; Harvill, Lauren; Blackledge, Christopher; Perez, Esau; Rajulu, Sudhakar

    2012-01-01

    With the advent of the latest human spaceflight objectives, a series of prototype architectures was developed for a new launch and reentry spacesuit suited to the new mission goals. Four prototype suits were evaluated to compare their performance and enable the selection of the preferred suit components and designs. A consolidated approach to testing was taken: concurrently collecting suit mobility data, seat-suit-vehicle interface clearances, and qualitative assessments of suit performance within the volume of a Multi-Purpose Crew Vehicle mockup. It was necessary to maintain high fidelity in a mockup and use advanced motion-capture technologies in order to achieve the objectives of the study. These seemingly mutually exclusive goals were accommodated with the construction of an optically transparent and fully adjustable frame mockup. The construction of the mockup was such that it could be dimensionally validated rapidly with the motion-capture system. This paper describes the method used to create a space vehicle mockup compatible with use of an optical motion-capture system, the consolidated approach for evaluating spacesuits in action, and a way to use the complex data set resulting from a limited number of test subjects to generate hardware requirements for an entire population. Kinematics, hardware clearance, anthropometry (suited and unsuited), and subjective feedback data were recorded on 15 unsuited and 5 suited subjects. Unsuited subjects were selected chiefly based on their anthropometry in an attempt to find subjects who fell within predefined criteria for medium male, large male, and small female subjects. The suited subjects were selected as a subset of the unsuited medium male subjects and were tested in both unpressurized and pressurized conditions. The prototype spacesuits were each fabricated in a single size to accommodate an approximately average-sized male, so select findings from the suit testing were systematically extrapolated to the extremes of the population to anticipate likely problem areas. This extrapolation was achieved by first comparing suited subjects' performance with their unsuited performance, and then applying the results to the entire range of the population. The use of a transparent space vehicle mockup enabled the collection of large amounts of data during human-in-the-loop testing. Mobility data revealed that most of the tested spacesuits had sufficient ranges of motion for the selected tasks to be performed successfully. A suited subject's inability to perform a task most often stemmed from poor field of view in a seated position, poor dexterity of the pressurized gloves, or suit/vehicle interface issues. Seat ingress and egress testing showed that problems with anthropometric accommodation did not occur exclusively with the largest or smallest subjects, but also with specific combinations of measurements that led to narrower seat ingress/egress clearance.

  18. Toward an Understanding of People Management Issues in SMEs: a South-Eastern European Perspective

    ERIC Educational Resources Information Center

    Szamosi, Leslie T.; Duxbury, Linda; Higgins, Chris

    2004-01-01

    The focus of this paper is on developing an understanding, and benchmarking, of human resource management (HRM) issues in small and medium enterprises (SMEs) in South-Eastern Europe. The importance of SMEs in helping transition-based economies develop is critical, but at the same time the research indicates that the movement toward westernized business…

  19. SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, T; Finlay, J; Mesina, C

    Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6-15 MV include the percentage depth dose (PDD) measured at SSD = 90 cm and the output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. The off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data include the dose per MU determined for 17 points for SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with large errors (up to 13%) observed in the buildup regions of the PDD and penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. Relative comparisons in the heterogeneous phantom generally agree to better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of the deterministic algorithm is better than that of a convolution algorithm in heterogeneous media.

  20. The Design and Construction of a Long-Distance Atmospheric Propagation Test Chamber

    DTIC Science & Technology

    2015-06-01

    supply, a gain medium where the light is generated and amplified, and an optical cavity consisting of one partially reflecting mirror and one fully... reflecting mirror [6]. There is also a pump stored in the optical cavity that excites electrons in the gain medium, leading to spontaneous and...

  1. A performance study of the time-varying cache behavior: a study on APEX, Mantevo, NAS, and PARSEC

    DOE PAGES

    Siddique, Nafiul A.; Grubel, Patricia A.; Badawy, Abdel-Hameed A.; ...

    2017-09-20

    Cache has long been used to minimize the latency of main memory accesses by storing frequently used data near the processor. Processor performance depends on the underlying cache performance. Therefore, significant research has been done to identify the most crucial metrics of cache performance. Although the majority of research focuses on measuring cache hit rates and data movement as the primary cache performance metrics, cache utilization is also of significant importance. We investigate an application's locality using cache utilization metrics. In addition, we present cache utilization and traditional cache performance metrics as the program progresses, providing detailed insights into the dynamic behavior of parallel applications from four benchmark suites running on multiple cores. We explore cache utilization for APEX, Mantevo, NAS, and PARSEC, mostly scientific benchmark suites. Our results indicate that 40% of the data bytes in a cache line are accessed at least once before line eviction. Also, on average a byte is accessed twice before the cache line is evicted for these applications. Moreover, we present runtime cache utilization, as well as conventional performance metrics, to illustrate a holistic understanding of cache behavior. To facilitate this research, we build a memory simulator incorporated into the Structural Simulation Toolkit (Rodrigues et al. in SIGMETRICS Perform Eval Rev 38(4):37-42, 2011). Finally, our results suggest that a variable cache line size can result in better performance and can also conserve power.
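
    As an illustration of how line utilization can be measured, the sketch below replays a synthetic address trace through a tiny direct-mapped cache model and records, at each eviction, the fraction of bytes in the line that were touched. It is a simplified stand-in for the SST-based memory simulator used in the paper; the line size, cache size, and access width are assumed values.

```python
LINE = 64          # cache line size in bytes (assumed)
NSETS = 1024       # direct-mapped cache with 1024 lines, i.e. 64 KiB (assumed)

def line_utilization(trace, access_size=4):
    """Replay byte addresses through a tiny direct-mapped cache and return
    the average fraction of bytes touched per line before eviction."""
    resident = {}                     # set index -> (tag, set of touched offsets)
    utilizations = []
    for addr in trace:
        line_no = addr // LINE
        tag, idx = line_no // NSETS, line_no % NSETS
        offset = addr % LINE
        touched = range(offset, min(offset + access_size, LINE))
        if idx in resident and resident[idx][0] == tag:
            resident[idx][1].update(touched)      # hit: mark more bytes used
        else:
            if idx in resident:                   # miss with eviction: record old line
                utilizations.append(len(resident[idx][1]) / LINE)
            resident[idx] = (tag, set(touched))   # install the new line
    # Count lines still resident at the end of the trace as well.
    utilizations.extend(len(t) / LINE for _, t in resident.values())
    return sum(utilizations) / len(utilizations)

# Example: a strided walk touches only part of each line before moving on.
print(line_utilization(range(0, 1 << 20, 16)))    # ~0.25 for 4-byte accesses
```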

  2. A performance study of the time-varying cache behavior: a study on APEX, Mantevo, NAS, and PARSEC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siddique, Nafiul A.; Grubel, Patricia A.; Badawy, Abdel-Hameed A.

    Cache has long been used to minimize the latency of main memory accesses by storing frequently used data near the processor. Processor performance depends on the underlying cache performance. Therefore, significant research has been done to identify the most crucial metrics of cache performance. Although the majority of research focuses on measuring cache hit rates and data movement as the primary cache performance metrics, cache utilization is also of significant importance. We investigate an application's locality using cache utilization metrics. In addition, we present cache utilization and traditional cache performance metrics as the program progresses, providing detailed insights into the dynamic behavior of parallel applications from four benchmark suites running on multiple cores. We explore cache utilization for APEX, Mantevo, NAS, and PARSEC, mostly scientific benchmark suites. Our results indicate that 40% of the data bytes in a cache line are accessed at least once before line eviction. Also, on average a byte is accessed twice before the cache line is evicted for these applications. Moreover, we present runtime cache utilization, as well as conventional performance metrics, to illustrate a holistic understanding of cache behavior. To facilitate this research, we build a memory simulator incorporated into the Structural Simulation Toolkit (Rodrigues et al. in SIGMETRICS Perform Eval Rev 38(4):37-42, 2011). Finally, our results suggest that a variable cache line size can result in better performance and can also conserve power.

  3. Chebyshev collocation spectral method for one-dimensional radiative heat transfer in linearly anisotropic-scattering cylindrical medium

    NASA Astrophysics Data System (ADS)

    Zhou, Rui-Rui; Li, Ben-Wen

    2017-03-01

    In this study, the Chebyshev collocation spectral method (CCSM) is developed to solve the radiative integro-differential transfer equation (RIDTE) for a one-dimensional absorbing, emitting, and linearly anisotropic-scattering cylindrical medium. The general form of the quadrature formulas for Chebyshev collocation points is deduced. These formulas are proved to have the same accuracy as the Gauss-Legendre quadrature formula (GLQF) for the F-function (geometric function) in the RIDTE. The explicit expressions of the Lagrange basis polynomials and the differentiation matrices for Chebyshev collocation points are also given. These expressions are necessary for solving an integro-differential equation by the CCSM. Since the integrand in the RIDTE is continuous but non-smooth, it is treated by the segments integration method (SIM). The derivative terms in the RIDTE are carried out to improve the accuracy near the origin. In this way, fourth-order accuracy is achieved by the CCSM for the RIDTE, whereas only second-order accuracy is achieved by the finite difference method (FDM). Several benchmark problems (BPs) with various combinations of optical thickness, medium temperature distribution, degree of anisotropy, and scattering albedo are solved. The results show that the present CCSM efficiently obtains highly accurate results, especially for an optically thin medium. The solutions, rounded to seven significant digits, are given in tabular form and show excellent agreement with the published data. Finally, the solutions of the RIDTE are used as benchmarks for the solution of the radiative integral transfer equations (RITEs) presented by Sutton and Chen (JQSRT 84 (2004) 65-103). A non-uniform grid refined near the wall is advised to improve the accuracy of the RITE solutions.
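
    For reference, Chebyshev collocation machinery of this kind is conventionally built on Chebyshev-Gauss-Lobatto points and the associated first-order differentiation matrix; the standard formulas are sketched below (the paper's exact normalization and sign conventions may differ).

```latex
% Chebyshev--Gauss--Lobatto collocation points on [-1, 1]
\[
  x_j = \cos\!\left(\frac{j\pi}{N}\right), \qquad j = 0, 1, \dots, N,
  \qquad
  c_j = \begin{cases} 2, & j = 0 \text{ or } j = N, \\ 1, & \text{otherwise.} \end{cases}
\]
% Entries of the first-order Chebyshev differentiation matrix D
\[
  D_{ij} = \frac{c_i}{c_j}\,\frac{(-1)^{i+j}}{x_i - x_j} \quad (i \neq j), \qquad
  D_{jj} = -\frac{x_j}{2\left(1 - x_j^{2}\right)} \quad (1 \le j \le N-1), \qquad
  D_{00} = -D_{NN} = \frac{2N^{2} + 1}{6}.
\]
```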

  4. Nonlocal elasticity and shear deformation effects on thermal buckling of a CNT embedded in a viscoelastic medium

    NASA Astrophysics Data System (ADS)

    Zenkour, A. M.

    2018-05-01

    The thermal buckling analysis of carbon nanotubes embedded in a visco-Pasternak medium is investigated. Eringen's nonlocal elasticity theory, in conjunction with first-order Donnell shell theory, is used for this purpose. The surrounding medium is considered as a three-parameter viscoelastic foundation, combining a Winkler-Pasternak model with a viscous damping coefficient. The governing equilibrium equations are obtained and solved for carbon nanotubes subjected to different thermal and mechanical loads. The effects of the nonlocal parameter, the radius and length of the nanotube, and the three foundation parameters on the thermal buckling of the nanotube are studied. Sample critical buckling loads are reported and graphically illustrated to check the validity of the present results and to provide benchmarks for future comparisons.
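
    For context, the nonlocal constitutive law and the three-parameter foundation reaction underlying this kind of analysis are commonly written as below; the symbols (nonlocal parameter e0a, Winkler stiffness Kw, Pasternak shear modulus Kp, damping coefficient Cd, deflection w) are generic notation rather than the paper's own.

```latex
% Eringen's differential form of the nonlocal constitutive relation
\[
  \bigl[\,1 - (e_0 a)^2 \nabla^2\,\bigr]\,\sigma_{ij} = C_{ijkl}\,\varepsilon_{kl},
\]
% Three-parameter visco-Pasternak foundation reaction acting on the deflection w
\[
  q(x, t) = K_w\, w \;-\; K_p\, \nabla^2 w \;+\; C_d\, \frac{\partial w}{\partial t}.
\]
```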

  5. Development and Testing of Neutron Cross Section Covariance Data for SCALE 6.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Williams, Mark L; Wiarda, Dorothea

    2015-01-01

    Neutron cross-section covariance data are essential for many sensitivity/uncertainty and uncertainty quantification assessments performed both within the TSUNAMI suite and more broadly throughout the SCALE code system. The release of ENDF/B-VII.1 included a more complete set of neutron cross-section covariance data: these data form the basis for a new cross-section covariance library to be released in SCALE 6.2. A range of testing is conducted to investigate the properties of these covariance data and ensure that the data are reasonable. These tests include examination of the uncertainty in critical experiment benchmark model keff values due to nuclear data uncertainties, as well as similarity assessments of irradiated pressurized water reactor (PWR) and boiling water reactor (BWR) fuel with suites of critical experiments. The contents of the new covariance library, the testing performed, and the behavior of the new covariance data are described in this paper. The neutron cross-section covariances can be combined with a sensitivity data file generated using the TSUNAMI suite of codes within SCALE to determine the uncertainty in system keff caused by nuclear data uncertainties. The Verified, Archived Library of Inputs and Data (VALID) maintained at Oak Ridge National Laboratory (ORNL) contains over 400 critical experiment benchmark models, and sensitivity data are generated for each of these models. The nuclear data uncertainty in keff is generated for each experiment, and the resulting uncertainties are tabulated and compared to the differences in measured and calculated results. The magnitude of the uncertainty for categories of nuclides (such as actinides, fission products, and structural materials) is calculated for irradiated PWR and BWR fuel to quantify the effect of covariance library changes between the SCALE 6.1 and 6.2 libraries. One of the primary applications of sensitivity/uncertainty methods within SCALE is the assessment of similarities between benchmark experiments and safety applications. This is described by a ck value for each experiment with each application. Several studies have analyzed typical ck values for a range of critical experiments compared with hypothetical irradiated fuel applications. The ck value is sensitive to the cross-section covariance data because the contribution of each nuclide is influenced by its uncertainty; large uncertainties indicate more likely bias sources and are thus given more weight. Changes in ck values resulting from different covariance data can be used to examine and assess underlying data changes. These comparisons are performed for PWR and BWR fuel in storage and transportation systems.
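
    The quantities discussed above follow from the standard first-order ("sandwich rule") relations used in TSUNAMI-type assessments, sketched below. Here S denotes a vector of keff sensitivities to the nuclear data and C the corresponding covariance matrix; the notation is generic rather than taken from the report.

```latex
% First-order ("sandwich rule") propagation of the covariance matrix C to keff
\[
  \sigma_k^2 \;=\; \mathbf{S}_k^{\mathsf{T}}\, \mathbf{C}\, \mathbf{S}_k,
\]
% Correlation of an application (a) with an experiment (e) through shared nuclear data
\[
  c_k \;=\; \frac{\mathbf{S}_a^{\mathsf{T}}\, \mathbf{C}\, \mathbf{S}_e}{\sigma_a\, \sigma_e}.
\]
```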

  6. Interoperability and complementarity of simulation tools for beamline design in the OASYS environment

    NASA Astrophysics Data System (ADS)

    Rebuffi, Luca; Sanchez del Rio, Manuel

    2017-08-01

    In the next years most of the major synchrotron radiation facilities around the world will upgrade to 4th-generation Diffraction Limited Storage Rings using multi-bend-achromat technology. Moreover, several Free Electron Lasers are ready to go or nearing completion. These events represent a huge challenge for the optics physicists responsible for designing and calculating optical systems capable of exploiting the revolutionary characteristics of the new photon beams. Reliable and robust beamline design is nowadays based on sophisticated computer simulations only possible by lumping together different simulation tools. The OASYS (OrAnge SYnchrotron Suite) suite drives several simulation tools, providing new mechanisms of interoperability and communication within the same software environment. OASYS has been successfully used during the conceptual design of many beamline and optical designs for the ESRF and Elettra-Sincrotrone Trieste upgrades. Some examples are presented showing comparisons and benchmarking of simulations against calculated and experimental data.

  7. SolCalc: A Suite for the Calculation and the Display of Magnetic Fields Generated by Solenoid Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopes, M. L.

    2014-07-01

    SolCalc is a software suite that computes and displays magnetic fields generated by a three-dimensional (3D) solenoid system. Examples of such systems are the Mu2e magnet system and Helical Solenoids for muon cooling systems. SolCalc was originally coded in Matlab, and later upgraded to a compiled version (called MEX) to improve solving speed. Matlab was chosen because its graphical capabilities represent an attractive feature over other computer languages. Solenoid geometries can be created using any text editor or spreadsheet and can be displayed dynamically in 3D. Fields are computed from any given list of coordinates. The field distribution on the surfaces of the coils can be displayed as well. SolCalc was benchmarked against well-known commercial software for speed and accuracy, and the results compared favorably.
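
    As a rough illustration of the kind of field calculation such a suite performs, the sketch below evaluates the closed-form on-axis field of a single thin solenoid. It is a textbook formula coded in Python for orientation only, not SolCalc's actual Matlab implementation, and the coil parameters are arbitrary.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [T*m/A]

def solenoid_bz_on_axis(z, radius, length, turns, current):
    """On-axis axial field [T] of a thin finite solenoid centred at z = 0."""
    n = turns / length                      # turns per metre
    a = z + length / 2.0
    b = z - length / 2.0
    return 0.5 * MU0 * n * current * (
        a / math.hypot(a, radius) - b / math.hypot(b, radius)
    )

# Example: a 0.5 m long, 0.1 m radius, 1000-turn coil carrying 100 A.
print(f"{solenoid_bz_on_axis(0.0, 0.1, 0.5, 1000, 100):.4f} T at the centre")
```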

  8. Analysis of operational requirements for medium density air transportation. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The medium density air travel market was studied to determine the aircraft design and operational requirements. The impact of operational characteristics on the air travel system and the economic viability of the study aircraft were also evaluated. Medium density is defined in terms of the number of people transported (20 to 500 passengers per day on round trip routes) and the frequency of service (a minimum of two and a maximum of eight round trips per day) for 10 regional carriers. The operational characteristics of aircraft best suited to serve the medium density air transportation market are determined, and a basepoint aircraft is designed from which tradeoff studies and parametric variations could be conducted. The impact of selected aircraft on the medium density market, economics, and operations is ascertained. Research and technology objectives for future programs in medium density air transportation are identified and ranked.

  9. CASE_ATTI: An Algorithm-Level Testbed for Multi-Sensor Data Fusion

    DTIC Science & Technology

    1995-05-01

    Illumination Radar (STIR) control console, the SPS-49 long-range radar, the Sea Giraffe medium-range radar and their associated CCS software modules. The... The current AWW sensor suite of the CPF comprises the SPS-49 long range 2-D radar, the Sea Giraffe medium range 2-D radar, the CANEWS ESM and the... and Sea Giraffe. This represents an original novelty of our simulation environment. Conventional radar simulations such as CARPET are not fully...

  10. A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator

    DOE PAGES

    Engelmann, Christian; Naughton, III, Thomas J.

    2016-03-22

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim also offers a highly accurate simulation mode for better tracking of injected MPI process failures; with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.

  11. RELAP5-3D Results for Phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom

    2012-06-01

    The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the RELAP5-3D model developed for Exercise 2.

  12. RELAP5-3D results for phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydom, G.; Epiney, A. S.

    2012-07-01

    The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the RELAP5-3D model developed for Exercise 2. (authors)

  13. Operating Room Efficiency before and after Entrance in a Benchmarking Program for Surgical Process Data.

    PubMed

    Pedron, Sara; Winter, Vera; Oppel, Eva-Maria; Bialas, Enno

    2017-08-23

    Operating room (OR) efficiency continues to be a high priority for hospitals. In this context the concept of benchmarking has gained increasing importance as a means to improve OR performance. The aim of this study was to investigate whether and how participation in a benchmarking and reporting program for surgical process data was associated with a change in OR efficiency, measured through raw utilization, turnover times, and first-case tardiness. The main analysis is based on panel data from 202 surgical departments in German hospitals, which were derived from the largest database for surgical process data in Germany. Panel regression modelling was applied. Results revealed no clear, unequivocal trend associated with participation in the benchmarking and reporting program. The largest effect was observed for first-case tardiness. In contrast to expectations, turnover times showed a generally increasing trend during participation. For raw utilization, no clear and statistically significant trend was evident. Subgroup analyses revealed differences in effects across hospital types and department specialties. Participation in a benchmarking and reporting program, and thus the availability of reliable, timely, and detailed analysis tools to support OR management, was correlated most strongly with an increase in the timeliness of staff members regarding first-case starts. The increasing trend in turnover time reveals the absence of effective strategies to improve this aspect of OR efficiency in German hospitals and could have meaningful consequences for medium- and long-run OR capacity planning.
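
    A minimal sketch of the kind of panel regression described above is given below, using two-way fixed effects with department-clustered standard errors. The data file and column names (department, year, participation as a 0/1 indicator, first_case_tardiness) are hypothetical placeholders, not the study's actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per surgical department and year, with an OR
# efficiency outcome and a 0/1 indicator for benchmarking participation.
panel = pd.read_csv("or_efficiency_panel.csv")   # illustrative file name

# Two-way fixed effects: department and year dummies absorb time-invariant
# department traits and common yearly shocks; errors clustered by department.
model = smf.ols(
    "first_case_tardiness ~ participation + C(department) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["department"]})

print(model.params["participation"], model.pvalues["participation"])
```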

  14. VMEbus based computer and real-time UNIX as infrastructure of DAQ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yasu, Y.; Fujii, H.; Nomachi, M.

    1994-12-31

    This paper describes what the authors have constructed as the infrastructure of a data acquisition system (DAQ). It reports recent developments concerning the HP VME board computer running LynxOS (HP742rt/HP-RT) and an Alpha/OSF1 system with a VMEbus adapter. The paper also reports the current status of development of a Benchmark Suite for Data Acquisition (DAQBENCH), which measures not only the performance of VME/CAMAC access but also that of context switching, inter-process communication, and so on, for various computers including workstation-based systems and VME board computers.
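
    As a rough analogue of the context-switching and inter-process-communication measurements that DAQBENCH targets, the sketch below times a one-byte ping-pong between two processes over a pipe. It is an illustrative Python stand-in, not part of DAQBENCH itself, and the message count is arbitrary.

```python
import time
from multiprocessing import Process, Pipe

def echo(conn, n):
    """Child process: bounce every message straight back."""
    for _ in range(n):
        conn.send_bytes(conn.recv_bytes())

def pingpong_latency(n=20000):
    """Average one-way latency of a 1-byte message between two processes,
    which bundles IPC cost with at least one context switch per hop."""
    parent, child = Pipe()
    p = Process(target=echo, args=(child, n))
    p.start()
    start = time.perf_counter()
    for _ in range(n):
        parent.send_bytes(b"x")
        parent.recv_bytes()
    elapsed = time.perf_counter() - start
    p.join()
    return elapsed / (2 * n)          # two hops per round trip

if __name__ == "__main__":
    print(f"{pingpong_latency() * 1e6:.1f} microseconds per one-way hop")
```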

  15. plasmaFoam: An OpenFOAM framework for computational plasma physics and chemistry

    NASA Astrophysics Data System (ADS)

    Venkattraman, Ayyaswamy; Verma, Abhishek Kumar

    2016-09-01

    As emphasized in the 2012 Roadmap for low temperature plasmas (LTP), scientific computing has emerged as an essential tool for the investigation and prediction of the fundamental physical and chemical processes associated with these systems. While several in-house and commercial codes exist, each with its own advantages and disadvantages, a common framework that can be developed by researchers from all over the world will likely accelerate the impact of computational studies on advances in low-temperature plasma physics and chemistry. In this regard, we present a finite volume computational toolbox to perform high-fidelity simulations of LTP systems. This framework, primarily based on the OpenFOAM solver suite, allows us to enhance our understanding of multiscale plasma phenomena by performing massively parallel, three-dimensional simulations on unstructured meshes using well-established high performance computing tools that are widely used in the computational fluid dynamics community. In this talk, we will present preliminary results obtained using the OpenFOAM-based solver suite with benchmark three-dimensional simulations of microplasma devices including both dielectric and plasma regions. We will also discuss the future outlook for the solver suite.

  16. Improved connection details for adjacent prestressed bridge beams.

    DOT National Transportation Integrated Search

    2015-03-01

    Bridges with adjacent box beams and voided slabs are simply and rapidly constructed, and are well suited to short to medium spans. The traditional connection between the adjacent members is a shear key filled with a conventional non-shrink grout...

  17. Words from "Sesame Street": Learning Vocabulary While Viewing.

    ERIC Educational Resources Information Center

    Rice, Mabel L.; And Others

    1990-01-01

    Suggests that the content and presentation formats of "Sesame Street" are well suited to preschoolers' vocabulary development, independently of parent education, family size, child gender, and parental attitudes. Findings also suggest the feasibility of tutorial uses of the video medium. (RH)

  18. A concept for universal pliers

    NASA Technical Reports Server (NTRS)

    Neal, E. T.

    1972-01-01

    By modification of an existing design, pliers can be made with one pair of handles that will accept a number of different jaws. The concept is useful for light- to medium-duty service. A complete set of jaws may be made to suit specific hobbies or applications.

  19. Quantitative phenotyping via deep barcode sequencing

    PubMed Central

    Smith, Andrew M.; Heisler, Lawrence E.; Mellor, Joseph; Kaper, Fiona; Thompson, Michael J.; Chee, Mark; Roth, Frederick P.; Giaever, Guri; Nislow, Corey

    2009-01-01

    Next-generation DNA sequencing technologies have revolutionized diverse genomics applications, including de novo genome sequencing, SNP detection, chromatin immunoprecipitation, and transcriptome analysis. Here we apply deep sequencing to genome-scale fitness profiling to evaluate yeast strain collections in parallel. This method, Barcode analysis by Sequencing, or “Bar-seq,” outperforms the current benchmark barcode microarray assay in terms of both dynamic range and throughput. When applied to a complex chemogenomic assay, Bar-seq quantitatively identifies drug targets, with performance superior to the benchmark microarray assay. We also show that Bar-seq is well-suited for a multiplex format. We completely re-sequenced and re-annotated the yeast deletion collection using deep sequencing, found that ∼20% of the barcodes and common priming sequences varied from expectation, and used this revised list of barcode sequences to improve data quality. Together, this new assay and analysis routine provide a deep-sequencing-based toolkit for identifying gene–environment interactions on a genome-wide scale. PMID:19622793

  20. Solar energy storage via liquid filled cans - Test data and analysis

    NASA Technical Reports Server (NTRS)

    Saha, H.

    1978-01-01

    This paper describes the design of a solar thermal storage test facility with water-filled metal cans as the heat storage medium and also presents some preliminary test results and analysis. This combination of solid and liquid media shows unique heat transfer and heat content characteristics and is well suited for use with solar air systems for space and hot water heating. The trends of the test results acquired thus far are representative of the test bed characteristics while operating in the various modes.

  1. Regression Tree-Based Methodology for Customizing Building Energy Benchmarks to Individual Commercial Buildings

    NASA Astrophysics Data System (ADS)

    Kaskhedikar, Apoorva Prakash

    According to the U.S. Energy Information Administration, commercial buildings represent about 40% of the United States' energy consumption, of which office buildings consume a major portion. Gauging the extent to which an individual building consumes energy in excess of its peers is the first step in initiating energy efficiency improvement. Energy benchmarking offers an initial assessment of building energy performance without rigorous evaluation. Energy benchmarking tools based on the Commercial Buildings Energy Consumption Survey (CBECS) database are investigated in this thesis. This study proposes a new benchmarking methodology based on decision trees, where a relationship between energy use intensities (EUI) and building parameters (continuous and categorical) is developed for different building types. This methodology was applied to the medium office and school building types contained in the CBECS database. The Random Forest technique was used to find the most influential parameters that impact building energy use intensities. Significant correlations between EUIs and CBECS variables were then identified. Other than floor area, some of the important variables were number of workers, location, number of PCs, and main cooling equipment. The coefficient of variation was used to evaluate the effectiveness of the new model. The customization technique proposed in this thesis was compared with another benchmarking model that is widely used by building owners and designers, namely ENERGY STAR's Portfolio Manager. That tool relies on standard linear regression methods, which can only handle continuous variables. The proposed model uses data mining techniques and was found to perform slightly better than Portfolio Manager. The broader impact of the new benchmarking methodology is that it allows important categorical variables to be identified and then incorporated in a local, as against a global, model framework for EUI pertinent to the building type. The ability to identify and rank the important variables is of great importance in the practical implementation of benchmarking tools, which rely on query-based building and HVAC variable filters specified by the user.
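
    A minimal sketch of the proposed two-step workflow is shown below: a random forest ranks candidate drivers of EUI, and a shallow regression tree then partitions the building stock into peer groups whose leaf means serve as customized benchmarks, evaluated with a coefficient of variation. The data file, column names, and hyperparameters are hypothetical stand-ins for the CBECS variables and model settings, not those used in the thesis.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

# Hypothetical extract of CBECS-like records for one building type; the
# column names are illustrative, not the survey's actual variable codes.
data = pd.read_csv("cbecs_medium_office.csv")
X = pd.get_dummies(data[["sqft", "n_workers", "n_pcs", "climate_zone",
                         "main_cooling_equipment"]])
eui = data["eui_kbtu_per_sqft"]

# Step 1: rank candidate drivers of EUI with a random forest.
forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, eui)
ranking = pd.Series(forest.feature_importances_, index=X.columns)
print(ranking.sort_values(ascending=False).head(10))

# Step 2: a shallow regression tree splits the stock into peer groups; the
# mean EUI of the leaf a building lands in acts as its customized benchmark.
tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=30,
                             random_state=0).fit(X, eui)
benchmark_eui = tree.predict(X)

# Coefficient of variation of the residuals gauges how tight the model is.
cv = ((eui - benchmark_eui) ** 2).mean() ** 0.5 / eui.mean()
print(f"CV(RMSE) = {cv:.2%}")
```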

  2. Comparison of the PHISICS/RELAP5-3D ring and block model results for phase I of the OECD/NEA MHTGR-350 benchmark

    DOE PAGES

    Strydom, G.; Epiney, A. S.; Alfonsi, Andrea; ...

    2015-12-02

    The PHISICS code system has been under development at INL since 2010. It consists of several modules providing improved coupled core simulation capability: INSTANT (3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and modules performing criticality searches, fuel shuffling and generalized perturbation. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D was finalized in 2013, and as part of the verification and validation effort the first phase of the OECD/NEA MHTGR-350 Benchmark has now been completed. The theoretical basis and latest development status of the coupled PHISICS/RELAP5-3D tool are described in more detail in a concurrent paper. This paper provides an overview of the OECD/NEA MHTGR-350 Benchmark and presents the results of Exercises 2 and 3 defined for Phase I. Exercise 2 required the modelling of a stand-alone thermal fluids solution at End of Equilibrium Cycle for the Modular High Temperature Reactor (MHTGR). The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 required a coupled neutronics and thermal fluids solution, and the PHISICS/RELAP5-3D code suite was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of results obtained with the traditional RELAP5-3D “ring” model approach against a much more detailed model that includes kinetics feedback at the individual block level and thermal feedback on a triangular sub-mesh. The higher fidelity that can be obtained by this “block” model is illustrated with comparison results on the temperature, power density and flux distributions. Furthermore, it is shown that the ring model leads to significantly lower fuel temperatures (up to 10%) when compared with the higher fidelity block model, and that the additional model development and run-time efforts are worth the gains obtained in the improved spatial temperature and flux distributions.

  3. Benchmarking Evaluation Results for Prototype Extravehicular Activity Gloves

    NASA Technical Reports Server (NTRS)

    Aitchison, Lindsay; McFarland, Shane

    2012-01-01

    The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, accepting the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit, leading to the establishment of standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of four prototype gloves developed by Flagsuit LLC, Final Frontier Designs, ILC Dover, and David Clark Company as compared to the Phase VI. All of the companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and to focus their efforts on improving mobility in the metacarpophalangeal and carpometacarpal joints. Four test subjects representing the design-to hand anthropometry completed range of motion, grip/pinch strength, dexterity, and fit evaluations for each glove design in both the unpressurized and pressurized conditions. This paper provides a comparison of the test results along with a detailed description of the hardware and test methodologies used.

  4. Solid-solid phase change thermal storage application to space-suit battery pack

    NASA Astrophysics Data System (ADS)

    Son, Chang H.; Morehouse, Jeffrey H.

    1989-01-01

    High cell temperatures are seen as the primary safety problem in the Li-BCX space battery. The exothermic heat from the chemical reactions could raise the temperature of the lithium electrode above the melting temperature. High temperature also causes the cell efficiency to decrease. Solid-solid phase-change materials were used as a thermal storage medium to lower the battery cell temperature by utilizing their phase-change (latent heat storage) characteristics. The solid-solid phase-change materials focused on in this study are neopentyl glycol and pentaglycerine. Because of their favorable phase-change characteristics, these materials appear appropriate for space-suit battery pack use. The results of testing various materials are reported as thermophysical property values, and the space-suit battery operating temperature is discussed in terms of these property results.

  5. Very High Specific Energy, Medium Power Li/CFx Primary Battery for Launchers and Space Probes

    NASA Astrophysics Data System (ADS)

    Brochard, Paul; Godillot, Gerome; Peres, Jean Paul; Corbin, Julien; Espinosa, Amaya

    2014-08-01

    Benchmarking against existing technologies shows the advantages of the lithium-fluorinated carbon (Li/CFx) technology for use aboard future launchers in terms of a low Total Cost of Ownership (TCO), especially for high-energy-demand missions such as re-ignitable upper stages for long GTO+ missions and probes for deep space exploration. This paper presents the new results obtained on this chemistry in terms of electrical and climatic performance, abuse tests, and life tests. Studies co-financed by CNES and Saft examined a pure CFx version with a specific energy of up to 500 Wh/kg along with a medium power capability of 80 to 100 W/kg.

  6. New features and improved uncertainty analysis in the NEA nuclear data sensitivity tool (NDaST)

    NASA Astrophysics Data System (ADS)

    Dyrda, J.; Soppera, N.; Hill, I.; Bossant, M.; Gulliford, J.

    2017-09-01

    Following the release and initial testing period of the NEA's Nuclear Data Sensitivity Tool [1], new features have been designed and implemented in order to expand its uncertainty analysis capabilities. The aim is to provide a free online tool for integral benchmark testing that is both efficient and comprehensive, meeting the needs of the nuclear data and benchmark testing communities. New features include access to P1 sensitivities for the neutron scattering angular distribution [2] and constrained Chi sensitivities for the prompt fission neutron energy sampling. Both of these are compatible with covariance data accessed via the JANIS nuclear data software, enabling propagation of the resultant uncertainties in keff to a large series of integral experiment benchmarks. These capabilities are available using a number of different covariance libraries, e.g., ENDF/B, JEFF, JENDL and TENDL, allowing comparison of the broad range of results it is possible to obtain. The IRPhE database of reactor physics measurements is now also accessible within the tool, in addition to the criticality benchmarks from ICSBEP. Other improvements include the ability to determine and visualise the energy dependence of a given calculated result in order to better identify specific regions of importance or high uncertainty contribution. Sorting and statistical analysis of the selected benchmark suite is now also provided. Examples of the plots generated by the software are included to illustrate such capabilities. Finally, a number of analytical expressions, for example Maxwellian and Watt fission spectra, will be included. This will allow the analyst to determine the impact of varying such distributions within the data evaluation, either through adjustment of parameters within the expressions, or by comparison to a more general probability distribution fitted to measured data. The impact of such changes is verified through calculations which are compared to a 'direct' measurement found by adjustment of the original ENDF format file.
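
    The analytical spectra mentioned above are conventionally written as below, with nuclear temperature T for the Maxwellian form and parameters a and b for the Watt form (C is a normalization constant). These are the textbook expressions, not necessarily the exact parameterization the tool will adopt.

```latex
% Maxwellian fission spectrum with nuclear temperature T
\[
  \chi_{\mathrm{M}}(E) \;=\; \frac{2}{\sqrt{\pi}\, T^{3/2}}\, \sqrt{E}\; e^{-E/T},
\]
% Watt fission spectrum with parameters a and b (C is a normalization constant)
\[
  \chi_{\mathrm{W}}(E) \;=\; C\, e^{-E/a}\, \sinh\!\left(\sqrt{b E}\right).
\]
```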

  7. TRUST. I. A 3D externally illuminated slab benchmark for dust radiative transfer

    NASA Astrophysics Data System (ADS)

    Gordon, K. D.; Baes, M.; Bianchi, S.; Camps, P.; Juvela, M.; Kuiper, R.; Lunttila, T.; Misselt, K. A.; Natale, G.; Robitaille, T.; Steinacker, J.

    2017-07-01

    Context. The radiative transport of photons through arbitrary three-dimensional (3D) structures of dust is a challenging problem due to the anisotropic scattering of dust grains and strong coupling between different spatial regions. The radiative transfer problem in 3D is solved using Monte Carlo or Ray Tracing techniques as no full analytic solution exists for the true 3D structures. Aims: We provide the first 3D dust radiative transfer benchmark composed of a slab of dust with uniform density externally illuminated by a star. This simple 3D benchmark is explicitly formulated to provide tests of the different components of the radiative transfer problem including dust absorption, scattering, and emission. Methods: The details of the external star, the slab itself, and the dust properties are provided. This benchmark includes models with a range of dust optical depths fully probing cases that are optically thin at all wavelengths to optically thick at most wavelengths. The dust properties adopted are characteristic of the diffuse Milky Way interstellar medium. This benchmark includes solutions for the full dust emission including single photon (stochastic) heating as well as two simplifying approximations: One where all grains are considered in equilibrium with the radiation field and one where the emission is from a single effective grain with size-distribution-averaged properties. A total of six Monte Carlo codes and one Ray Tracing code provide solutions to this benchmark. Results: The solution to this benchmark is given as global spectral energy distributions (SEDs) and images at select diagnostic wavelengths from the ultraviolet through the infrared. Comparison of the results revealed that the global SEDs are consistent on average to a few percent for all but the scattered stellar flux at very high optical depths. The image results are consistent within 10%, again except for the stellar scattered flux at very high optical depths. The lack of agreement between different codes of the scattered flux at high optical depths is quantified for the first time. Convergence tests using one of the Monte Carlo codes illustrate the sensitivity of the solutions to various model parameters. Conclusions: We provide the first 3D dust radiative transfer benchmark and validate the accuracy of this benchmark through comparisons between multiple independent codes and detailed convergence tests.

  8. Populations of striatal medium spiny neurons encode vibrotactile frequency in rats: modulation by slow wave oscillations

    PubMed Central

    Hawking, Thomas G.

    2013-01-01

    Dorsolateral striatum (DLS) is implicated in tactile perception and receives strong projections from somatosensory cortex. However, the sensory representations encoded by striatal projection neurons are not well understood. Here we characterized the contribution of DLS to the encoding of vibrotactile information in rats by assessing striatal responses to precise frequency stimuli delivered to a single vibrissa. We applied stimuli in a frequency range (45–90 Hz) that evokes discriminable percepts and carries most of the power of vibrissa vibration elicited by a range of complex fine textures. Both medium spiny neurons and evoked potentials showed tactile responses that were modulated by slow wave oscillations. Furthermore, medium spiny neuron population responses represented stimulus frequency on par with previously reported behavioral benchmarks. Our results suggest that striatum encodes frequency information of vibrotactile stimuli which is dynamically modulated by ongoing brain state. PMID:23114217

  9. Personal Cooling for Extra-Vehicular Activities on Mars

    NASA Technical Reports Server (NTRS)

    Pu, Zhengxiang; Kapat, Jay; Chow, Louis; Recio, Jose; Rini, Dan; Trevino, Luis

    2004-01-01

    Extra-vehicular activities (EVA) on Mars will require suits with sophisticated thermal control systems so that astronauts can work comfortably for extended periods of time. Any use of consumables such as water that cannot be easily replaced should be of particular concern. In this aspect the EVA suits for Mars environment need to be different from the current Space Shuttle Extra Vehicular Mobility Units (EMU) that depend on water sublimation into space for removing heat from suits. Moreover, Mars environment is quite different from what a typical EMU may be exposed to. These variations call for careful analysis and innovative engineering for design and fabrication of an appropriate thermal control system. This paper presents a thermal analysis of astronaut suits for EVA with medium metabolic intensity under a typical hot and a nominal cold environment on Mars. The paper also describes possible options that would allow conservation of water with low usage of electrical power. The paper then presents the conceptual design of a portable cooling unit for one such solution.

  10. Pesticide Occurrence and Distribution in the Lower Clackamas River Basin, Oregon, 2000-2005

    USGS Publications Warehouse

    Carpenter, Kurt D.; Sobieszczyk, Steven; Arnsberg, Andrew J.; Rinella, Frank A.

    2008-01-01

    Pesticide occurrence and distribution in the lower Clackamas River basin was evaluated in 2000–2005, when 119 water samples were analyzed for a suite of 86–198 dissolved pesticides. Sampling included the lower-basin tributaries and the Clackamas River mainstem, along with paired samples of pre- and post-treatment drinking water (source and finished water) from one of four drinking water-treatment plants that draw water from the lower river. Most of the sampling in the tributaries occurred during storms, whereas most of the source and finished water samples from the study drinking-water treatment plant were obtained at regular intervals, and targeted one storm event in 2005. In all, 63 pesticide compounds were detected, including 33 herbicides, 15 insecticides, 6 fungicides, and 9 pesticide degradation products. Atrazine and simazine were detected in about half of samples, and atrazine and one of its degradates (deethylatrazine) were detected together in 30 percent of samples. Other high-use herbicides such as glyphosate, triclopyr, 2,4-D, and metolachlor also were frequently detected, particularly in the lower-basin tributaries. Pesticides were detected in all eight of the lower-basin tributaries sampled, and were also frequently detected in the lower Clackamas River. Although pesticides were detected in all of the lower basin tributaries, the highest pesticide loads (amounts) were found in Deep and Rock Creeks. These medium-sized streams drain a mix of agricultural land (row crops and nurseries), pastureland, and rural residential areas. The highest pesticide loads were found in Rock Creek at 172nd Avenue and in two Deep Creek tributaries, North Fork Deep and Noyer Creeks, where 15–18 pesticides were detected. Pesticide yields (loads per unit area) were highest in Cow and Carli Creeks, two small streams that drain the highly urban and industrial northwestern part of the lower basin. Other sites having relatively high pesticide yields included middle Rock Creek and upper Noyer Creek, which drain basins having nurseries, pasture, and rural residential land. Some concentrations of insecticides (diazinon, chlorpyrifos, azinphos-methyl, and p,p′-DDE) exceeded U.S. Environmental Protection Agency (USEPA) aquatic-life benchmarks in Carli, Sieben, Rock, Noyer, Doane, and North Fork Deep Creeks. One azinphos-methyl concentration in Doane Creek (0.21 micrograms per liter [µg/L]) exceeded Federal and State of Oregon benchmarks for the protection of fish and benthic invertebrates. Concentrations of several other pesticide compounds exceeded non-USEPA benchmarks. Twenty-six pesticides or degradates were detected in the Clackamas River mainstem, typically at much lower concentrations than those detected in the lower-basin tributaries. At least 1 pesticide was detected in 65 percent of 34 samples collected from the Clackamas River, with an average of 2–3 pesticides per sample. Pesticides were detected in 9 (or 60 percent) of the 15 finished water samples collected from the study water-treatment plant during 2003–2005. These included 10 herbicides, 1 insecticide, 1 fungicide, 1 insect repellent, and 2 pesticide degradates. The herbicides diuron and simazine were the most frequently detected (four times each during the study), at concentrations far below human-health benchmarks (USEPA Maximum Contaminant Levels or U.S. Geological Survey Health-Based Screening Levels [HBSLs]). The highest pesticide concentration in finished drinking water was 0.18 µg/L of diuron, which was 11 times lower than its low HBSL benchmark. Although 0–2 pesticides were detected in most finished water samples, 9 and 6 pesticides were detected in 2 storm-associated samples from May and September 2005, respectively. Three of the unregulated compounds detected in finished drinking water (diazinon-oxon, deethylatrazine [CIAT], and N,N-diethyl-m-toluamide [DEET]) do not have human-health benchmarks available for comparison. Although most of the 51 curren

  11. AMS Prototyping Activities

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott

    2008-01-01

    This slide presentation reviews activity around the Asynchronous Message Service (AMS) prototype. An AMS reference implementation has been available since late 2005. It is aimed at supporting message exchange both in on-board environments and over space links. The implementation incorporates all mandatory elements of the draft recommendation from July 2007: (1) the MAMS, AMS, and RAMS protocols; (2) failover, heartbeats, and resync; and (3) "hooks" for security, but no cipher suites included in the distribution. The performance is reviewed, and a benchmark latency test over VxWorks message queues is shown as histograms of count versus microseconds per 1000-byte message.

  12. VHSIC Hardware Description Language (VHDL) Benchmark Suite

    DTIC Science & Technology

    1990-10-01

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold H. Kritz

    PTRANSP, which is the predictive version of the TRANSP code, was developed in a collaborative effort involving the Princeton Plasma Physics Laboratory, General Atomics Corporation, Lawrence Livermore National Laboratory, and Lehigh University. The PTRANSP/TRANSP suite of codes is the premier integrated tokamak modeling software in the United States. A production service for PTRANSP/TRANSP simulations is maintained at the Princeton Plasma Physics Laboratory; the server has a simple command line client interface and is subscribed to by about 100 researchers from tokamak projects in the US, Europe, and Asia. This service produced nearly 13000 PTRANSP/TRANSP simulations in the four-year period FY 2005 through FY 2008. Major archives of TRANSP results are maintained at PPPL, MIT, General Atomics, and JET. Recent utilization, counting experimental analysis simulations as well as predictive simulations, more than doubled from slightly over 2000 simulations per year in FY 2005 and FY 2006 to over 4300 simulations per year in FY 2007 and FY 2008. PTRANSP predictive simulations applied to ITER increased eight fold from 30 simulations per year in FY 2005 and FY 2006 to 240 simulations per year in FY 2007 and FY 2008, accounting for more than half of combined PTRANSP/TRANSP service CPU resource utilization in FY 2008. PTRANSP studies focused on ITER played a key role in journal articles. Examples of validation studies carried out for momentum transport in PTRANSP simulations were presented at the 2008 IAEA conference. The increase in number of PTRANSP simulations has continued (more than 7000 TRANSP/PTRANSP simulations in 2010) and results of PTRANSP simulations appear in conference proceedings, for example the 2010 IAEA conference, and in peer reviewed papers. PTRANSP provides a bridge to the Fusion Simulation Program (FSP) and to the future of integrated modeling. Through years of widespread usage, each of the many parts of the PTRANSP suite of codes has been thoroughly validated against experimental data and benchmarked against other codes. At the same time, architectural modernizations are improving the modularity of the PTRANSP code base. The NUBEAM neutral beam and fusion products fast ion model, the Plasma State data repository (developed originally in the SWIM SciDAC project and adapted for use in PTRANSP), and other components are already shared with the SWIM, FACETS, and CPES SciDAC FSP prototype projects. Thus, the PTRANSP code is already serving as a bridge between our present integrated modeling capability and future capability. As the Fusion Simulation Program builds toward the facility currently available in the PTRANSP suite of codes, early versions of the FSP core plasma model will need to be benchmarked against the PTRANSP simulations. This will be necessary to build user confidence in FSP, but this benchmarking can only be done if PTRANSP itself is maintained and developed.

  14. Publishing an "imej" Journal for Computer-Enhanced Learning.

    ERIC Educational Resources Information Center

    Burg, Jennifer; Wong, Yue-Ling; Pfeifer, Dan; Boyle, Anne; Yip, Ching-Wan

    Interactive multimedia electronic journals, or IMEJ journals, are a publication medium particularly suited for research in computer-enhanced learning. This paper describes the challenges and potential rewards in publishing such a journal; presents ideas for design and layout; and discusses issues of collaboration, copyrighting, and archiving that…

  15. Numerical Analysis of Stochastic Dynamical Systems in the Medium-Frequency Range

    DTIC Science & Technology

    2003-02-01

    … frequency vibration analysis such as the statistical energy analysis (SEA), the traditional modal analysis (well-suited for high and low frequency) … the first few structural normal modes primarily constitute the total response. In the higher frequency range, the statistical energy analysis (SEA) …

  16. Development and benchmarking of TASSER(iter) for the iterative improvement of protein structure predictions.

    PubMed

    Lee, Seung Yup; Skolnick, Jeffrey

    2007-07-01

    To improve the accuracy of TASSER models, especially in the limit where threading-provided template alignments are of poor quality, we have developed the TASSER(iter) algorithm which uses the templates and contact restraints from TASSER-generated models for iterative structure refinement. We apply TASSER(iter) to a large benchmark set of 2,773 nonhomologous single domain proteins that are ≤200 residues in length and that cover the PDB at the level of 35% pairwise sequence identity. Overall, TASSER(iter) models have a smaller global average RMSD of 5.48 Å compared to 5.81 Å RMSD of the original TASSER models. Classifying the targets by the level of prediction difficulty (where Easy targets have a good template with a corresponding good threading alignment, Medium targets have a good template but a poor alignment, and Hard targets have an incorrectly identified template), TASSER(iter) (TASSER) models have an average RMSD of 4.15 Å (4.35 Å) for the Easy set and 9.05 Å (9.52 Å) for the Hard set. The largest reduction of average RMSD is for the Medium set, where the TASSER(iter) models have an average global RMSD of 5.67 Å compared to 6.72 Å of the TASSER models. Seventy percent of the Medium set TASSER(iter) models have a smaller RMSD than the TASSER models, while 63% of the Easy and 60% of the Hard TASSER models are improved by TASSER(iter). For the foldable cases, where the targets have an RMSD to the native <6.5 Å, TASSER(iter) shows obvious improvement over TASSER models: For the Medium set, it improves the success rate from 57.0 to 67.2%, followed by the Hard targets where the success rate improves from 32.0 to 34.8%, with the smallest improvement in the Easy targets from 82.6 to 84.0%. These results suggest that TASSER(iter) can provide more reliable predictions for targets of Medium difficulty, a range that had resisted improvement in the quality of protein structure predictions. 2007 Wiley-Liss, Inc.

  17. pyRMSD: a Python package for efficient pairwise RMSD matrix calculation and handling.

    PubMed

    Gil, Víctor A; Guallar, Víctor

    2013-09-15

    We introduce pyRMSD, an open source standalone Python package that aims at offering an integrative and efficient way of performing Root Mean Square Deviation (RMSD)-related calculations of large sets of structures. It is specially tuned to do fast collective RMSD calculations, such as pairwise RMSD matrices, implementing up to three well-known superposition algorithms. pyRMSD provides its own symmetric distance matrix class that, besides being usable as a regular matrix, helps to save memory and increases memory access speed. This last feature can dramatically improve the overall performance of any Python algorithm using it. In addition, its extensibility, testing suites and documentation make it a good choice for those in need of a workbench for developing or testing new algorithms. The source code (under MIT license), installer, test suites and benchmarks can be found at https://pele.bsc.es/ under the tools section. Contact: victor.guallar@bsc.es. Supplementary data are available at Bioinformatics online.
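
    As a plain-NumPy orientation to what a superposed pairwise RMSD matrix involves (this is not pyRMSD's API; the coordinate array is a made-up placeholder), one well-known superposition approach, Kabsch/SVD, can be written as:

      # Pairwise RMSD matrix with Kabsch (SVD) superposition -- illustrative only.
      import numpy as np

      def kabsch_rmsd(P, Q):
          """RMSD between two (n_atoms, 3) coordinate sets after optimal superposition."""
          P = P - P.mean(axis=0)
          Q = Q - Q.mean(axis=0)
          V, S, Wt = np.linalg.svd(P.T @ Q)
          S[-1] *= np.sign(np.linalg.det(V @ Wt))        # correct for a possible reflection
          e0 = (P**2).sum() + (Q**2).sum()
          return np.sqrt(max(e0 - 2.0 * S.sum(), 0.0) / len(P))

      def pairwise_rmsd_matrix(coords):
          n = len(coords)
          m = np.zeros((n, n))
          for i in range(n):
              for j in range(i + 1, n):
                  m[i, j] = m[j, i] = kabsch_rmsd(coords[i], coords[j])
          return m

      coords = np.random.default_rng(0).normal(size=(5, 20, 3))   # 5 hypothetical structures, 20 atoms each
      print(pairwise_rmsd_matrix(coords).round(2))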

  18. Analyzing large-scale spiking neural data with HRLAnalysis™

    PubMed Central

    Thibeault, Corey M.; O'Brien, Michael J.; Srinivasa, Narayan

    2014-01-01

    The additional capabilities provided by high-performance neural simulation environments and modern computing hardware has allowed for the modeling of increasingly larger spiking neural networks. This is important for exploring more anatomically detailed networks but the corresponding accumulation in data can make analyzing the results of these simulations difficult. This is further compounded by the fact that many existing analysis packages were not developed with large spiking data sets in mind. Presented here is a software suite developed to not only process the increased amount of spike-train data in a reasonable amount of time, but also provide a user friendly Python interface. We describe the design considerations, implementation and features of the HRLAnalysis™ suite. In addition, performance benchmarks demonstrating the speedup of this design compared to a published Python implementation are also presented. The result is a high-performance analysis toolkit that is not only usable and readily extensible, but also straightforward to interface with existing Python modules. PMID:24634655
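
    As a toy illustration of the kind of spike-train reduction such a toolkit performs (generic NumPy, not the HRLAnalysis API, and the spike data are synthetic), binning spike times from many neurons into a population firing-rate histogram looks like:

      # Population firing-rate histogram from per-neuron spike-time arrays (synthetic data).
      import numpy as np

      rng = np.random.default_rng(0)
      spike_trains = [np.sort(rng.uniform(0.0, 10.0, rng.integers(50, 200))) for _ in range(100)]

      bin_width = 0.05                                           # 50 ms bins
      edges = np.arange(0.0, 10.0 + bin_width, bin_width)
      counts = sum(np.histogram(st, bins=edges)[0] for st in spike_trains)
      rate_hz = counts / (len(spike_trains) * bin_width)         # mean rate per neuron, in Hz
      print(rate_hz[:10])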

  19. Performance Metrics for Monitoring Parallel Program Executions

    NASA Technical Reports Server (NTRS)

    Sarukkai, Sekkar R.; Gotwais, Jacob K.; Yan, Jerry; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    Existing tools for debugging the performance of parallel programs either provide graphical representations of program execution or profiles of program executions. However, for performance debugging tools to be useful, such information has to be augmented with information that highlights the cause of poor program performance. Identifying the cause of poor performance requires not only determining the significance of various performance problems for the execution time of the program, but also considering the effect of interprocessor communication involving individual source-level data structures. In this paper, we present a suite of normalized indices which provide a convenient mechanism for focusing on a region of code with poor performance and which highlight the cause of the problem in terms of processors, procedures and data structure interactions. All the indices are generated from trace files augmented with data structure information. Further, we show with the help of examples from the NAS benchmark suite that the indices help in detecting potential causes of poor performance, based on augmented execution traces obtained by monitoring the program.

  20. Reproductive and Endocrine Biomarkers in Largemouth Bass (Micropterus salmoides) and Common Carp (Cyprinus carpio) from United States Waters

    USGS Publications Warehouse

    Goodbred, Steven L.; Smith, Stephen B.; Greene, Patricia S.; Rauschenberger, Richard H.; Bartish, Timothy M.

    2007-01-01

    A nationwide reconnaissance investigation was initiated in 1994 to develop and evaluate a suite of reproductive and endocrine biomarkers for their potential to assess reproductive health and status in teleost (bony) fish. Fish collections were made at 119 sites, representing many regions of the country and land- and water-use settings. Collectively, this report will provide a national and regional benchmark and a basis for evaluating biomarkers of endocrine and reproductive function. Approximately 2,200 common carp (Cyprinus carpio) and 650 largemouth bass (Micropterus salmoides) were collected from 1994 through 1997. The suite of biomarkers used for these studies included: the plasma sex-steroid hormones, 17β-estradiol (E2) and 11-ketotestosterone (11KT); the ratio of E2 to 11KT (E2:11KT); plasma vitellogenin (VTG); and stage of gonadal development. This data report provides fish size, stage and reproductive biomarker data for individual fish and for site and regional summaries of these variables.

  1. Development of a versatile high-temperature short-time (HTST) pasteurization device for small-scale processing of cell culture medium formulations.

    PubMed

    Floris, Patrick; Curtin, Sean; Kaisermayer, Christian; Lindeberg, Anna; Bones, Jonathan

    2018-07-01

    The compatibility of CHO cell culture medium formulations with all stages of the bioprocess must be evaluated through small-scale studies prior to scale-up for commercial manufacturing operations. Here, we describe the development of a bespoke small-scale device for assessing the compatibility of culture media with a widely implemented upstream viral clearance strategy, high-temperature short-time (HTST) treatment. The thermal stability of undefined medium formulations supplemented with soy hydrolysates was evaluated upon variations in critical HTST processing parameters, namely, holding times and temperatures. Prolonged holding times of 43 s at temperatures of 110 °C did not adversely impact medium quality while significant degradation was observed upon treatment at elevated temperatures (200 °C) for shorter time periods (11 s). The performance of the device was benchmarked against a commercially available mini-pilot HTST system upon treatment of identical formulations on both platforms. Processed medium samples were analyzed by untargeted LC-MS/MS for compositional profiling followed by chemometric evaluation, which confirmed the observed degradation effects caused by elevated holding temperatures but revealed comparable performance of our developed device with the commercial mini-pilot setup. The developed device can assist medium optimization activities by reducing volume requirements relative to commercially available mini-pilot instrumentation and by facilitating fast throughput evaluation of heat-induced effects on multiple medium lots.

  2. Controlled-Turbulence Bioreactors

    NASA Technical Reports Server (NTRS)

    Wolf, David A.; Schwartz, Ray; Trinh, Tinh

    1989-01-01

    Two versions of bioreactor vessel provide steady supplies of oxygen and nutrients with little turbulence. Suspends cells in environment needed for sustenance and growth, while inflicting less damage from agitation and bubbling than do propeller-stirred reactors. Gentle environments in new reactors well suited to delicate mammalian cells. One reactor kept human kidney cells alive for as long as 11 days. Cells grow on carrier beads suspended in liquid culture medium that fills cylindrical housing. Rotating vanes - inside vessel but outside filter - gently circulates nutrient medium. Vessel stationary; magnetic clutch drives filter cylinder and vanes. Another reactor creates even less turbulence. Oxygen-permeable tubing wrapped around rod extending along central axis. Small external pump feeds oxygen to tubing through rotary coupling, and oxygen diffuses into liquid medium.

  3. Characterizing a New Candidate Benchmark Brown Dwarf Companion in the β Pic Moving Group

    NASA Astrophysics Data System (ADS)

    Phillips, Caprice; Bowler, Brendan; Liu, Michael C.; Mace, Gregory N.; Sokal, Kimberly R.

    2018-01-01

    Benchmark brown dwarfs are objects that have at least two measured fundamental quantities such as luminosity and age, and therefore can be used to test substellar atmospheric and evolutionary models. Nearby, young, loose associations such as the β Pic moving group represent some of the best regions in which to identify intermediate-age benchmark brown dwarfs due to their well-constrained ages and metallicities. We present a spectroscopic study of a new companion at the hydrogen-burning limit orbiting a low-mass star at a separation of 9″ (650 AU) in the 23 Myr old β Pic moving group. The medium-resolution near-infrared spectrum of this companion from IRTF/SpeX shows clear signs of low surface gravity and yields an index-based spectral type of M6±1 with a VL-G gravity on the Allers & Liu classification system. Currently, there are four known brown dwarf and giant planet companions in the β Pic moving group: HR 7329 B, PZ Tel B, β Pic b, and 51 Eri b. Depending on its exact age and accretion history, this new object may represent the third brown dwarf companion and fifth substellar companion in this association.

  4. Short-Term Solar Forecasting Performance of Popular Machine Learning Algorithms: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Florita, Anthony R; Elgindy, Tarek; Hodge, Brian S

    A framework for assessing the performance of short-term solar forecasting is presented in conjunction with a range of numerical results using global horizontal irradiation (GHI) from the open-source Surface Radiation Budget (SURFRAD) data network. A suite of popular machine learning algorithms is compared according to a set of statistically distinct metrics and benchmarked against the persistence-of-cloudiness forecast and a cloud motion forecast. Results show significant improvement compared to the benchmarks with trade-offs among the machine learning algorithms depending on the desired error metric. Training inputs include time series observations of GHI for a history of years, historical weather and atmospheric measurements, and corresponding date and time stamps such that training sensitivities might be inferred. Prediction outputs are GHI forecasts for 1, 2, 3, and 4 hours ahead of the issue time, and they are made for every month of the year for 7 locations. Photovoltaic power and energy outputs can then be made using the solar forecasts to better understand power system impacts.
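
    The abstract does not spell out the persistence-of-cloudiness benchmark; a common formulation holds the clear-sky index k_t = GHI / GHI_clear constant over the forecast horizon. The sketch below uses that assumption with made-up hourly data; the study's exact definition, data handling and metrics may differ.

      # Persistence-of-cloudiness benchmark (assumed clear-sky-index formulation) and RMSE.
      import numpy as np

      def persistence_forecast(ghi, ghi_clear, horizon_steps):
          kt = np.divide(ghi, ghi_clear, out=np.zeros_like(ghi), where=ghi_clear > 0)
          ghi_hat = np.full_like(ghi, np.nan)
          ghi_hat[horizon_steps:] = kt[:-horizon_steps] * ghi_clear[horizon_steps:]
          return ghi_hat

      def rmse(obs, pred):
          mask = ~np.isnan(pred)
          return float(np.sqrt(np.mean((obs[mask] - pred[mask]) ** 2)))

      t = np.arange(24)                                                  # hypothetical hourly day
      ghi_clear = np.clip(900.0 * np.sin(np.pi * (t - 6) / 12), 0, None)
      ghi = ghi_clear * np.random.default_rng(2).uniform(0.3, 1.0, 24)   # synthetic cloudy observations
      pred = persistence_forecast(ghi, ghi_clear, horizon_steps=1)
      print("1-hour persistence RMSE [W/m^2]:", round(rmse(ghi, pred), 1))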

  5. Investigation of propulsion system for large LNG ships

    NASA Astrophysics Data System (ADS)

    Sinha, R. P.; Nik, Wan Mohd Norsani Wan

    2012-09-01

    Requirements to move away from coal for power generation have made LNG the most sought-after fuel source, raising steep demands on its supply and production. Added to this scenario is the gradual depletion of offshore oil and gas fields, which is pushing future exploration and production activities far away into the hostile environment of the deep sea. Production of gas in such an environment has great technical and commercial impacts on the gas business. For instance, laying gas pipes from the deep sea to distant receiving terminals will be technically and economically challenging. The alternative to laying gas pipes will require installing re-liquefaction units on board FPSOs to convert gas into liquid for transportation by sea. But then, because of the increased distance between gas sources and receiving terminals, the current medium-size LNG ships will no longer remain economical to operate. Recognizing this business scenario, shipowners are making huge investments in the acquisition of large LNG ships. As the power needs of large LNG ships are very different from those of the current small ones, a variety of propulsion derivatives such as UST, DFDE, 2-Stroke DRL and combined-cycle GT have been proposed by leading engine manufacturers. Since the propulsion system constitutes a major element of the ship's capital and life-cycle cost, which of these options is most suited for large LNG ships is currently a major concern of the shipping industry and must be thoroughly assessed. In this paper the authors investigate the relative merits of these propulsion options against the benchmark performance criteria of BOG disposal, fuel consumption, gas emissions, plant availability and overall life-cycle cost.

  6. Numerical Analysis of Base Flowfield for a Four-Engine Clustered Nozzle Configuration

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    1995-01-01

    Excessive base heating has been a problem for many launch vehicles. For certain designs such as the direct dump of turbine exhaust inside and at the lip of the nozzle, the potential burning of the turbine exhaust in the base region can be of great concern. Accurate prediction of the base environment at altitudes is therefore very important during the vehicle design phase. Otherwise, undesirable consequences may occur. In this study, the turbulent base flowfield of a cold flow experimental investigation for a four-engine clustered nozzle was numerically benchmarked using a pressure-based computational fluid dynamics (CFD) method. This is a necessary step before the benchmarking of hot flow and combustion flow tests can be considered. Since the medium was unheated air, reasonable prediction of the base pressure distribution at high altitude was the main goal. Several physical phenomena pertaining to the multiengine clustered nozzle base flow physics were deduced from the analysis.

  7. NASA Indexing Benchmarks: Evaluating Text Search Engines

    NASA Technical Reports Server (NTRS)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing the index and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.

  8. The Power of Perspective: Teaching Social Policy with Documentary Film

    ERIC Educational Resources Information Center

    Shdaimah, Corey

    2009-01-01

    Documentaries can be used as a pedagogical tool to better understand social policy. This medium is especially suited to provide perspectives that often go unheard in policy debates, academic discussions, and textbooks. While many social work educators use documentary films, there is little discussion about this media in the social work literature.…

  9. Tropical Cyclone Wind Probability Forecasting (WINDP).

    DTIC Science & Technology

    1981-04-01

    … small probabilities (below 10%) is limited by the number of significant digits given; therefore it should be regarded as being …

  10. Racism; A Film Course Study Guide.

    ERIC Educational Resources Information Center

    Kernan, Margot

    The medium of film is uniquely suited to the representation of social problems such as racism. By stressing major issues of racism--slavery, the black cultural heritage, black power, and the black civil rights movement--and coupling these issues with films which give a realistic view of the substance and problems of racism, both concepts…

  11. Evaluation of the cytotoxic and genotoxic effects of benchmark multi-walled carbon nanotubes in relation to their physicochemical properties.

    PubMed

    Louro, Henriqueta; Pinhão, Mariana; Santos, Joana; Tavares, Ana; Vital, Nádia; Silva, Maria João

    2016-11-16

    To contribute scientific evidence to the grouping strategy for the safety assessment of multi-walled carbon nanotubes (MWCNTs), this work describes the investigation of the cytotoxic and genotoxic effects of four benchmark MWCNTs in relation to their physicochemical characteristics, using two types of human respiratory cells. The cytotoxic effects were analysed using the clonogenic assay and replication index determination. A 48-h exposure of cells revealed that NM-401 was the only cytotoxic MWCNT in both cell lines, but after an 8-day exposure, the clonogenic assay in A549 cells showed cytotoxic effects for all the tested MWCNTs. Correlation analysis suggested an association between the MWCNTs' size in cell culture medium and cytotoxicity. No induction of DNA damage was observed for any of the MWCNTs in either cell line by the comet assay, while the micronucleus assay revealed that both NM-401 and NM-402 were genotoxic in A549 cells. NM-401 and NM-402 are the two longest MWCNTs analyzed in this work, suggesting that length may be a determinant of genotoxicity. No induction of micronuclei was observed in the BEAS-2B cell line, and the different effect in the two cell lines is explained in view of the size distribution of MWCNTs in the cell culture medium, rather than by cell-specific characteristics. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. A study on operation efficiency evaluation based on firm's financial index and benchmark selection: take China Unicom as an example

    NASA Astrophysics Data System (ADS)

    Wu, Zu-guang; Tian, Zhan-jun; Liu, Hui; Huang, Rui; Zhu, Guo-hua

    2009-07-01

    As the only listed telecom operator on the A-share market, China Unicom has attracted many institutional investors in recent years under the concept of 3G, which itself carries a strong expectation of technical progress. Do institutional investors, or the expectation of technical progress, have a significant effect on the improvement of the firm's operating efficiency? Reviewing the literature on operating efficiency, we find that scholars have studied this problem using regression analysis based on traditional production functions, data envelopment analysis (DEA), financial index analysis, marginal functions, capital-labour ratio coefficients, and so on, with all of these methods relying mainly on macro data. In this paper we use company micro data to evaluate operating efficiency. Using factor analysis based on financial indices and comparing factor scores for the three years from 2005 to 2007, we find that China Unicom's operating efficiency is below the average level of the benchmark corporations and did not improve under the 3G concept from 2005 to 2007. In other words, institutional investors and the expectation of technical progress had little effect on changes in China Unicom's operating efficiency. Selecting benchmark corporations as reference points for evaluating operating efficiency is a characteristic of this method, which is basically simple and direct. The method is also suited to the operating efficiency evaluation of listed agricultural companies, because they likewise face technical progress and marketing concepts such as tax exemptions.

  13. The EB Factory: Fundamental Stellar Astrophysics with Eclipsing Binary Stars Discovered by Kepler

    NASA Astrophysics Data System (ADS)

    Stassun, Keivan

    Eclipsing binaries (EBs) are key laboratories for determining the fundamental properties of stars. EBs are therefore foundational objects for constraining stellar evolution models, which in turn are central to determinations of stellar mass functions, of exoplanet properties, and many other areas. The primary goal of this proposal is to mine the Kepler mission light curves for: (1) EBs that include a subgiant star, from which precise ages can be derived and which can thus serve as critically needed age benchmarks; and within these, (2) long-period EBs that include low-mass M stars or brown dwarfs, which are increasingly becoming the focus of exoplanet searches, but for which there are the fewest available fundamental mass-radius-age benchmarks. A secondary goal of this proposal is to develop an end-to-end computational pipeline -- the Kepler EB Factory -- that allows automatic processing of Kepler light curves for EBs, from period finding, to object classification, to determination of EB physical properties for the most scientifically interesting EBs, and finally to accurate modeling of these EBs for detailed tests and benchmarking of theoretical stellar evolution models. We will integrate the most successful algorithms into a single, cohesive workflow environment, and apply this 'Kepler EB Factory' to the full public Kepler dataset to find and characterize new "benchmark grade" EBs, and will disseminate both the enhanced data products from this pipeline and the pipeline itself to the broader NASA science community. The proposed work responds directly to two of the defined Research Areas of the NASA Astrophysics Data Analysis Program (ADAP), specifically Research Area #2 (Stellar Astrophysics) and Research Area #9 (Astrophysical Databases). To be clear, our primary goal is the fundamental stellar astrophysics that will be enabled by the discovery and analysis of relatively rare, benchmark-grade EBs in the Kepler dataset. At the same time, to enable this goal will require bringing a suite of extant and new custom algorithms to bear on the Kepler data, and thus our development of the Kepler EB Factory represents a value-added product that will allow the widest scientific impact of the information locked within the vast reservoir of the Kepler light curves.

  14. PHISICS/RELAP5-3D RESULTS FOR EXERCISES II-1 AND II-2 OF THE OECD/NEA MHTGR-350 BENCHMARK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydom, Gerhard

    2016-03-01

    The Idaho National Laboratory (INL) Advanced Reactor Technologies (ART) High-Temperature Gas-Cooled Reactor (HTGR) Methods group currently leads the Modular High-Temperature Gas-Cooled Reactor (MHTGR) 350 benchmark. The benchmark consists of a set of lattice-depletion, steady-state, and transient problems that can be used by HTGR simulation groups to assess the performance of their code suites. The paper summarizes the results obtained for the first two transient exercises defined for Phase II of the benchmark. The Parallel and Highly Innovative Simulation for INL Code System (PHISICS), coupled with the INL system code RELAP5-3D, was used to generate the results for the Depressurized Conduction Cooldown (DCC) (exercise II-1a) and Pressurized Conduction Cooldown (PCC) (exercise II-2) transients. These exercises require the time-dependent simulation of coupled neutronics and thermal-hydraulics phenomena, and utilize the steady-state solution previously obtained for exercise I-3 of Phase I. This paper also includes a comparison of the benchmark results obtained with a traditional system code “ring” model against a more detailed “block” model that includes kinetics feedback on an individual block level and thermal feedbacks on a triangular sub-mesh. The higher spatial fidelity that can be obtained by the block model is illustrated with comparisons of the maximum fuel temperatures, especially in the case of natural convection conditions that dominate the DCC and PCC events. Differences up to 125 K (or 10%) were observed between the ring and block model predictions of the DCC transient, mostly due to the block model’s capability of tracking individual block decay powers and more detailed helium flow distributions. In general, the block model only required DCC and PCC calculation times twice as long as the ring models, and it therefore seems that the additional development and calculation time required for the block model could be worth the gain that can be obtained in the spatial resolution.

  15. Investigating the Added Value of Interactivity and Serious Gaming for Educational TV

    ERIC Educational Resources Information Center

    Bellotti, F.; Berta, R.; De Gloria, A.; Ozolina, A.

    2011-01-01

    TV is a medium with high penetration rates and has long been well suited to delivering informal education in several respects. Thus, interactive TV may play a significant role in current Life-Long Learning challenges, provided that meaningful applications are implemented. In this research work, we have explored the added value of interactivity…

  16. A Fast Visible-Infrared Imaging Radiometer Suite Simulator for Cloudy Atmospheres

    NASA Technical Reports Server (NTRS)

    Liu, Chao; Yang, Ping; Nasiri, Shaima L.; Platnick, Steven; Meyer, Kerry G.; Wang, Chen Xi; Ding, Shouguo

    2015-01-01

    A fast instrument simulator is developed to simulate the observations made in cloudy atmospheres by the Visible Infrared Imaging Radiometer Suite (VIIRS). The correlated k-distribution (CKD) technique is used to compute the transmissivity of absorbing atmospheric gases. The bulk scattering properties of ice clouds used in this study are based on the ice model used for the MODIS Collection 6 ice cloud products. Two fast radiative transfer models based on pre-computed ice cloud look-up tables are used for the VIIRS solar and infrared channels. The accuracy and efficiency of the fast simulator are quantified by comparison with a combination of the rigorous line-by-line (LBLRTM) and discrete ordinate radiative transfer (DISORT) models. Relative errors are less than 2% for the simulated TOA reflectances in the solar channels, and the brightness temperature differences for the infrared channels are less than 0.2 K. The simulator is over three orders of magnitude faster than the benchmark LBLRTM+DISORT model. Furthermore, the cloudy atmosphere reflectances and brightness temperatures from the fast VIIRS simulator compare favorably with those from VIIRS observations.

  17. Evaluation of Cache-based Superscalar and Cacheless Vector Architectures for Scientific Computations

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; VanderWijngaart, Rob

    2003-01-01

    The growing gap between sustained and peak performance for scientific applications has become a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to bridge this gap for a significant number of computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines a full spectrum of low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks using some simple optimizations. Finally, we evaluate the performance of several numerical codes from key scientific computing domains. Overall results demonstrate that the SX6 achieves high performance on a large fraction of our application suite and in many cases significantly outperforms the RISC-based architectures. However, certain classes of applications are not easily amenable to vectorization and would likely require extensive reengineering of both algorithm and implementation to utilize the SX6 effectively.

  18. Validation of a Low-Thrust Mission Design Tool Using Operational Navigation Software

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Knittel, Jeremy M.; Williams, Ken; Stanbridge, Dale; Ellison, Donald H.

    2017-01-01

    Design of flight trajectories for missions employing solar electric propulsion requires a suitably high-fidelity design tool. In this work, the Evolutionary Mission Trajectory Generator (EMTG) is presented as a medium-high fidelity design tool that is suitable for mission proposals. EMTG is validated against the high-heritage deep-space navigation tool MIRAGE, demonstrating both the accuracy of EMTG's model and an operational mission design and navigation procedure using both tools. The validation is performed using a benchmark mission to the Jupiter Trojans.

  19. Finite difference time domain (FDTD) modeling of implanted deep brain stimulation electrodes and brain tissue.

    PubMed

    Gabran, S R I; Saad, J H; Salama, M M A; Mansour, R R

    2009-01-01

    This paper demonstrates the electromagnetic modeling and simulation of an implanted Medtronic deep brain stimulation (DBS) electrode using the finite difference time domain (FDTD) method. The model is developed using Empire XCcel and represents the electrode surrounded by brain tissue, assuming a homogeneous and isotropic medium. The model is created to study the parameters influencing the electric field distribution within the tissue in order to provide reference and benchmarking data for DBS and intra-cortical electrode development.
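
    For readers unfamiliar with the method itself, the FDTD scheme the paper applies in 3D with Empire XCcel reduces, in one dimension, to the leapfrog (Yee) updates sketched below; the grid, material values and source here are generic placeholders and do not represent the electrode or tissue model.

      # Generic 1D FDTD (Yee) update loop -- illustrative, not the paper's 3D model.
      import numpy as np

      c0 = 299_792_458.0
      eps0, mu0 = 8.854e-12, 4e-7 * np.pi
      nx, nt = 400, 1000
      dx = 1e-3                                  # 1 mm cells
      dt = 0.5 * dx / c0                         # Courant factor 0.5 for stability

      eps_r = np.ones(nx)
      eps_r[200:] = 50.0                         # hypothetical high-permittivity (tissue-like) half-space

      Ez = np.zeros(nx)
      Hy = np.zeros(nx - 1)
      for n in range(nt):
          Hy += dt / (mu0 * dx) * (Ez[1:] - Ez[:-1])                         # update H from curl E
          Ez[1:-1] += dt / (eps0 * eps_r[1:-1] * dx) * (Hy[1:] - Hy[:-1])    # update E from curl H
          Ez[50] += np.exp(-((n - 60) / 20.0) ** 2)                          # soft Gaussian source
      print("peak |Ez| in the dielectric region:", float(np.abs(Ez[200:]).max()))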

  20. Benchmarking variable-density flow in saturated and unsaturated porous media

    NASA Astrophysics Data System (ADS)

    Guevara Morel, Carlos Roberto; Cremer, Clemens; Graf, Thomas

    2015-04-01

    In natural environments, fluid density and viscosity can be affected by spatial and temporal variations of solute concentration and/or temperature. These variations can occur, for example, due to salt water intrusion in coastal aquifers, leachate infiltration from waste disposal sites and upconing of saline water from deep aquifers. As a consequence, potentially unstable situations may exist in which a dense fluid overlies a less dense fluid. This situation can produce instabilities that manifest as dense plume fingers that move vertically downwards, counterbalanced by vertical upward flow of the less dense fluid. The resulting free convection increases solute transport rates over large distances and times relative to constant-density flow. Therefore, understanding free convection is relevant for the protection of freshwater aquifer systems. The results from a laboratory experiment of saturated and unsaturated variable-density flow and solute transport (Simmons et al., Transp. Porous Media, 2002) are used as the physical basis to define a mathematical benchmark. The HydroGeoSphere code coupled with PEST is used to estimate the optimal parameter set capable of reproducing the physical model. A grid convergence analysis (in space and time) is also undertaken in order to obtain adequate spatial and temporal discretizations. The new mathematical benchmark is useful for model comparison and testing of variable-density variably saturated flow in porous media.
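
    A commonly used indicator of whether such a dense-over-light configuration convects freely in a porous medium is a solutal Rayleigh number; one frequently quoted form (definitions vary between studies, and the cited experiment may use a different one) is

      \mathrm{Ra} \;=\; \frac{k\, g\, \Delta\rho\, H}{\phi\, \mu\, D},

    where k is the intrinsic permeability, g the gravitational acceleration, \Delta\rho the density contrast across a layer of thickness H, \phi the porosity, \mu the dynamic viscosity and D the effective solute diffusivity; large Ra favours the dense-finger free convection described above.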

  1. Vascular surgical education in a medium-income country.

    PubMed

    Abdool-Carrim, A T O; Veller, M G

    2010-03-01

    Medium income country such as South Africa face a dilemma on the need to offer high quality vascular surgical care in a resource constrained environment, where the vast majority of population has inadequate access to even the most basic health care provision. At the same time with rapid development in technology there is also the need to provide high technological treatment to a small population that can afford high cost therapy. This apparent dichotomy in health care provides a challenge and the solution is for all role players in the health care provision to find a solution which will suite the population at large.

  2. A new method of preparing embeddment-free sections for transmission electron microscopy: applications to the cytoskeletal framework and other three-dimensional networks.

    PubMed

    Capco, D G; Krochmalnic, G; Penman, S

    1984-05-01

    Diethylene glycol distearate is used as a removable embedding medium to produce embeddment -free sections for transmission electron microscopy. The easily cut sections of this material float and form ribbons in a water-filled knife trough and exhibit interference colors that aid in the selection of sections of equal thickness. The images obtained with embeddment -free sections are compared with those from the more conventional epoxy-embedded sections, and illustrate that embedding medium can obscure important biological structures, especially protein filament networks. The embeddment -free section methodology is well suited for morphological studies of cytoskeletal preparations obtained by extraction of cells with nonionic detergent in cytoskeletal stabilizing medium. The embeddment -free section also serves to bridge the very different images afforded by embedded sections and unembedded whole mounts.

  3. Looking Past Primary Productivity: Benchmarking System Processes that Drive Ecosystem Level Responses in Models

    NASA Astrophysics Data System (ADS)

    Cowdery, E.; Dietze, M.

    2017-12-01

    As atmospheric levels of carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentration are highly variable and contain a considerable amount of uncertainty. Benchmarking model predictions against data are necessary to assess their ability to replicate observed patterns, but also to identify and evaluate the assumptions causing inter-model differences. We have implemented a novel benchmarking workflow as part of the Predictive Ecosystem Analyzer (PEcAn) that is automated, repeatable, and generalized to incorporate different sites and ecological models. Building on the recent Free-Air CO2 Enrichment Model Data Synthesis (FACE-MDS) project, we used observational data from the FACE experiments to test this flexible, extensible benchmarking approach aimed at providing repeatable tests of model process representation that can be performed quickly and frequently. Model performance assessments are often limited to traditional residual error analysis; however, this can result in a loss of critical information. Models that fail tests of relative measures of fit may still perform well under measures of absolute fit and mathematical similarity. This implies that models that are discounted as poor predictors of ecological productivity may still be capturing important patterns. Conversely, models that have been found to be good predictors of productivity may be hiding error in their sub-process that result in the right answers for the wrong reasons. Our suite of tests have not only highlighted process based sources of uncertainty in model productivity calculations, they have also quantified the patterns and scale of this error. Combining these findings with PEcAn's model sensitivity analysis and variance decomposition strengthen our ability to identify which processes need further study and additional data constraints. This can be used to inform future experimental design and in turn can provide an informative starting point for data assimilation.

  4. Application configuration selection for energy-efficient execution on multicore systems

    DOE PAGES

    Wang, Shinan; Luo, Bing; Shi, Weisong; ...

    2015-09-21

    Balanced performance and energy consumption are incorporated in the design of modern computer systems. Several run-time factors, such as concurrency levels, thread mapping strategies, and dynamic voltage and frequency scaling (DVFS), should be considered in order to achieve optimal energy efficiency for a workload. Selecting appropriate run-time factors, however, is one of the most challenging tasks because the run-time factors are architecture-specific and workload-specific. While most existing works concentrate on either static analysis of the workload or run-time prediction, we present a hybrid two-step method that utilizes concurrency levels and DVFS settings to achieve an energy-efficient configuration for a workload. The experimental results based on a Xeon E5620 server with the NPB and PARSEC benchmark suites show that the model is able to predict the energy-efficient configuration accurately. On average, an additional 10% EDP (Energy Delay Product) saving is obtained by using run-time DVFS for the entire system. An off-line optimal solution is used for comparison with the proposed scheme. The experimental results show that the average extra EDP saved by the optimal solution is within 5% on selected parallel benchmarks.
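
    The selection step that such a method ultimately feeds is simple to state: given measured (energy, time) pairs per candidate configuration (concurrency level, DVFS setting), choose the configuration minimizing EDP = energy x delay. The sketch below illustrates only that step; the configurations and measurements are made-up placeholders, not the paper's model or data.

      # Pick the lowest-EDP configuration from per-configuration measurements (placeholders).
      candidates = {
          # (threads, frequency_GHz): (energy_J, time_s)
          (4, 1.6): (180.0, 9.5),
          (4, 2.4): (210.0, 7.0),
          (8, 1.6): (200.0, 6.2),
          (8, 2.4): (240.0, 4.9),
      }

      def edp(energy_j, time_s):
          return energy_j * time_s                     # Energy Delay Product

      best = min(candidates, key=lambda cfg: edp(*candidates[cfg]))
      print("lowest-EDP configuration (threads, GHz):", best)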

  5. Brownian motion properties of optoelectronic random bit generators based on laser chaos.

    PubMed

    Li, Pu; Yi, Xiaogang; Liu, Xianglian; Wang, Yuncai; Wang, Yongge

    2016-07-11

    The nondeterministic property of the optoelectronic random bit generator (RBG) based on laser chaos is experimentally analyzed from two aspects: the central limit theorem and the law of the iterated logarithm. The random bits are extracted from an optical-feedback chaotic laser diode using a multi-bit extraction technique in the electrical domain. Our experimental results demonstrate that the generated random bits show no statistical distance from Brownian motion, and that they also pass the state-of-the-art industry-benchmark statistical test suite (NIST SP800-22). Together, these results provide mathematically provable evidence that the ultrafast random bit generator based on laser chaos can be used as a nondeterministic random bit source.
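
    The two checks named in the abstract have simple empirical counterparts: for an unbiased, independent +/-1 walk built from the bits, the CLT-scaled sum S_n / sqrt(n) should look standard normal, and the LIL-scaled trajectory S_n / sqrt(2 n ln ln n) should stay near the interval [-1, 1]. The sketch below applies them to a pseudorandom stream for illustration only, not to the laser-chaos generator's output.

      # CLT and law-of-the-iterated-logarithm scalings of a random walk built from bits.
      import numpy as np

      rng = np.random.default_rng(3)
      bits = rng.integers(0, 2, size=1_000_000)        # stand-in for the generator's bit stream
      steps = 2 * bits - 1                             # map {0, 1} -> {-1, +1}
      S = np.cumsum(steps)
      n = np.arange(1, len(S) + 1)

      clt_stat = S[-1] / np.sqrt(n[-1])                # ~ N(0, 1) for unbiased, independent bits
      valid = n >= 3                                   # ln ln n is positive from n >= 3
      lil_scaled = S[valid] / np.sqrt(2 * n[valid] * np.log(np.log(n[valid])))
      print(f"CLT statistic: {clt_stat:+.3f}")
      print(f"max |LIL-scaled walk|: {np.abs(lil_scaled).max():.3f}")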

  6. Wilson Dslash Kernel From Lattice QCD Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we will detail our work in optimizing the Wilson-Dslash kernels for Intel Xeon Phi; however, as we will show, the technique gives excellent performance on regular Xeon architecture as well.

  7. Characterizing Task-Based OpenMP Programs

    PubMed Central

    Muddukrishna, Ananya; Jonsson, Peter A.; Brorsson, Mats

    2015-01-01

    Programmers struggle to understand performance of task-based OpenMP programs since profiling tools only report thread-based performance. Performance tuning also requires task-based performance in order to balance per-task memory hierarchy utilization against exposed task parallelism. We provide a cost-effective method to extract detailed task-based performance information from OpenMP programs. We demonstrate the utility of our method by quickly diagnosing performance problems and characterizing exposed task parallelism and per-task instruction profiles of benchmarks in the widely-used Barcelona OpenMP Tasks Suite. Programmers can tune performance faster and understand performance tradeoffs more effectively than existing tools by using our method to characterize task-based performance. PMID:25860023

  8. Dynamic Curvature Steering Control for Autonomous Vehicle: Performance Analysis

    NASA Astrophysics Data System (ADS)

    Aizzat Zakaria, Muhammad; Zamzuri, Hairi; Amri Mazlan, Saiful

    2016-02-01

    This paper discusses the design of dynamic curvature steering control for an autonomous vehicle. Both lateral and longitudinal control are considered. The controller is designed around a dynamic curvature calculation that estimates the path condition and modifies the vehicle speed and steering wheel angle accordingly. Simulation results are presented to show the capability of the controller to track the reference path; the controller is able to predict the path and modify the vehicle speed to suit the path condition. The effectiveness of the controller is demonstrated by achieving performance identical to the benchmark, but with additional curvature adaptation capabilities.
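
    The abstract does not give the curvature computation itself; for a planar reference path (x(t), y(t)) the standard curvature, and the kinematic single-track steering relation often paired with it (the wheelbase L and that pairing are assumptions here, not taken from the paper), are

      \kappa(t) \;=\; \frac{x'(t)\,y''(t) - y'(t)\,x''(t)}{\bigl(x'(t)^2 + y'(t)^2\bigr)^{3/2}},
      \qquad
      \delta \;=\; \arctan\!\bigl(L\,\kappa\bigr).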

  9. Improving competitiveness of small medium Batik printing industry on quality & productivity using value chain benchmarking (Case study SME X, SME Y & SME Z)

    NASA Astrophysics Data System (ADS)

    Fauzi, Rizky Hanif; Liquiddanu, Eko; Suletra, I. Wayan

    2018-02-01

    Batik printing is made by way of night printing, as with conventional batik, and goes through a dyeing process like batik making in general. One of the areas that support the batik industry in Karisidenan Surakarta is Kliwonan Village, Masaran District, Sragen. Masaran District is known as one of the batik centres; it originated with batik workers in the Laweyan Solo area who came from Masaran and considered it more economical to produce batik in their home village in Masaran, Sragen, because carrying out production from upstream to downstream in Solo was not feasible. SME X is one of the batik SMEs in Kliwonan Village, Masaran, Sragen that has been able to produce batik printing with national sales coverage. One key to SME X's sales is its participation in various national and international exhibitions, which have raised its profile. SME Y and SME Z are also SMEs in Kliwonan Village, Masaran, Sragen producing batik printing. The observations made identified several problems that must be fixed in SME Y and SME Z: a production process that falls behind schedule, maintenance of worn equipment, procedures for batik workmanship, supervision of operators, and the fact that SME Y and SME Z products are not widely known. The purpose of this research is to improve the primary activities in the SME Y and SME Z value chains for batik printing products by benchmarking against the small and medium enterprise (SME) X, which has better competence.

  10. Exact vibration analysis of a double-nanobeam-systems embedded in an elastic medium by a Hamiltonian-based method

    NASA Astrophysics Data System (ADS)

    Zhou, Zhenhuan; Li, Yuejie; Fan, Junhai; Rong, Dalun; Sui, Guohao; Xu, Chenghui

    2018-05-01

    A new Hamiltonian-based approach is presented for finding exact solutions for transverse vibrations of double-nanobeam-systems embedded in an elastic medium. The continuum model is established within the frameworks of the symplectic methodology and the nonlocal Euler-Bernoulli and Timoshenko beam theories. The symplectic eigenfunctions are obtained after expressing the governing equations in a Hamiltonian form. Exact frequency equations, vibration modes and displacement amplitudes are obtained by using symplectic eigenfunctions and end conditions. Comparisons with previously published work are presented to illustrate the accuracy and reliability of the proposed method. The comprehensive results for arbitrary boundary conditions could serve as benchmark results for verifying numerically obtained solutions. A study on the difference between the nonlocal beam and the nonlocal plate is also included.

  11. Astronomical chemistry.

    PubMed

    Klemperer, William

    2011-01-01

    The discovery of polar polyatomic molecules in higher-density regions of the interstellar medium by means of their rotational emission detected by radioastronomy has changed our conception of the universe from essentially atomic to highly molecular. We discuss models for molecule formation, emphasizing the general lack of thermodynamic equilibrium. Detailed chemical kinetics is needed to understand molecule formation as well as destruction. Ion-molecule reactions appear to be an important class at the generally low temperatures of the interstellar medium. The need for the intrinsically high-quality factor of rotational transitions to definitively pin down molecular emitters has been well established by radioastronomy. The observation of abundant molecular ions, both positive and, as recently observed, negative, provides benchmarks for chemical kinetic schemes. Of considerable importance in guiding our understanding of astronomical chemistry is the fact that the larger molecules (with more than five atoms) are all organic.

  12. Field Test of a Hybrid Finite-Difference and Analytic Element Regional Model.

    PubMed

    Abrams, D B; Haitjema, H M; Feinstein, D T; Hunt, R J

    2016-01-01

    Regional finite-difference models often have cell sizes that are too large to sufficiently model well-stream interactions. Here, a steady-state hybrid model is applied whereby the upper layer or layers of a coarse MODFLOW model are replaced by the analytic element model GFLOW, which represents surface waters and wells as line and point sinks. The two models are coupled by transferring cell-by-cell leakage obtained from the original MODFLOW model to the bottom of the GFLOW model. A real-world test of the hybrid model approach is applied on a subdomain of an existing model of the Lake Michigan Basin. The original (coarse) MODFLOW model consists of six layers, the top four of which are aggregated into GFLOW as a single layer, while the bottom two layers remain part of MODFLOW in the hybrid model. The hybrid model and a refined "benchmark" MODFLOW model simulate similar baseflows. The hybrid and benchmark models also simulate similar baseflow reductions due to nearby pumping when the well is located within the layers represented by GFLOW. However, the benchmark model requires refinement of the model grid in the local area of interest, while the hybrid approach uses a gridless top layer and is thus unaffected by grid discretization errors. The hybrid approach is well suited to facilitate cost-effective retrofitting of existing coarse grid MODFLOW models commonly used for regional studies because it leverages the strengths of both finite-difference and analytic element methods for predictions in mildly heterogeneous systems that can be simulated with steady-state conditions. © 2015, National Ground Water Association.

  13. Establishment and characterization of a human pancreatic cancer cell line (SUIT-2) producing carcinoembryonic antigen and carbohydrate antigen 19-9.

    PubMed

    Iwamura, T; Katsuki, T; Ide, K

    1987-01-01

    A new tumor cell line (SUIT-2) derived from a metastatic liver tumor of human pancreatic carcinoma has been established in tissue culture and in nude mice, and maintained for over five years. In tissue culture, the cells grew in a monolayered sheet with a population doubling time of about 38.2 hr, and floated or piled up to form small buds above the monolayered surface in relatively confluent cultures. Chromosome counts ranged from 34 to 176 with a modal number of 45. Subcutaneous injection of cultured cells into nude mice resulted in tumor formation, histopathologically closely resembling the original neoplasm which had been classified as moderately differentiated tubular adenocarcinoma. Electron microscopic observation of the neoplastic cells revealed a characteristic pancreatic ductal epithelium. SUIT-2 cell line produces and releases at least two tumor markers, carcinoembryonic antigen and carbohydrate antigen 19-9, propagates even in serum-free medium, and metastasizes to the regional lymph nodes in nude mice xenografts.

  14. Two-fluid dusty shocks: simple benchmarking problems and applications to protoplanetary discs

    NASA Astrophysics Data System (ADS)

    Lehmann, Andrew; Wardle, Mark

    2018-05-01

    The key role that dust plays in the interstellar medium has motivated the development of numerical codes designed to study the coupled evolution of dust and gas in systems such as turbulent molecular clouds and protoplanetary discs. Drift between dust and gas has proven to be important as well as numerically challenging. We provide simple benchmarking problems for dusty gas codes by numerically solving the two-fluid dust-gas equations for steady, plane-parallel shock waves. The two distinct shock solutions to these equations allow a numerical code to test different forms of drag between the two fluids, the strength of that drag and the dust to gas ratio. We also provide an astrophysical application of J-type dust-gas shocks to studying the structure of accretion shocks on to protoplanetary discs. We find that two-fluid effects are most important for grains larger than 1 μm, and that the peak dust temperature within an accretion shock provides a signature of the dust-to-gas ratio of the infalling material.
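
    For readers who want a feel for the simplest kind of two-fluid dust-gas test, the sketch below (a minimal relaxation-toward-velocity-equilibrium problem with a constant linear drag coefficient, not the paper's shock solutions) integrates the coupled gas and dust momentum equations and compares the decay of the drift velocity against the known exponential solution. All parameter values and the drag law are illustrative assumptions.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Illustrative code units; K is a constant linear drag coefficient (force per
      # unit volume per unit velocity difference), not a physical protoplanetary value.
      rho_g, rho_d, K = 1.0, 0.1, 1.0
      v_g0, v_d0 = 0.0, 1.0          # gas at rest, dust drifting through it

      def rhs(t, y):
          v_g, v_d = y
          f = K * (v_d - v_g)        # drag force per unit volume exerted on the gas
          return [f / rho_g, -f / rho_d]

      sol = solve_ivp(rhs, (0.0, 5.0), [v_g0, v_d0], dense_output=True, rtol=1e-10, atol=1e-12)

      # Analytic solution: the drift decays as dv(t) = dv(0) * exp(-K * (1/rho_g + 1/rho_d) * t).
      t = np.linspace(0.0, 5.0, 6)
      dv_num = sol.sol(t)[1] - sol.sol(t)[0]
      dv_exact = (v_d0 - v_g0) * np.exp(-K * (1.0 / rho_g + 1.0 / rho_d) * t)
      print(np.max(np.abs(dv_num - dv_exact)))   # should be very small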

  15. NASA's Black Marble Nighttime Lights Product Suite

    NASA Technical Reports Server (NTRS)

    Wang, Zhuosen; Sun, Qingsong; Seto, Karen C.; Oda, Tomohiro; Wolfe, Robert E.; Sarkar, Sudipta; Stevens, Joshua; Ramos Gonzalez, Olga M.; Detres, Yasmin; Esch, Thomas

    2018-01-01

    NASA's Black Marble nighttime lights product suite (VNP46) is available at 500 meters resolution since January 2012 with data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) onboard the Suomi National Polar-orbiting Platform (SNPP). The retrieval algorithm, developed and implemented for routine global processing at NASA's Land Science Investigator-led Processing System (SIPS), utilizes all high-quality, cloud-free, atmospheric-, terrain-, vegetation-, snow-, lunar-, and stray light-corrected radiances to estimate daily nighttime lights (NTL) and other intrinsic surface optical properties. Key algorithm enhancements include: (1) lunar irradiance modeling to resolve non-linear changes in phase and libration; (2) vector radiative transfer and lunar bidirectional surface anisotropic reflectance modeling to correct for atmospheric and BRDF (Bidirectional Reflectance Distribution Function) effects; (3) geometric-optical and canopy radiative transfer modeling to account for seasonal variations in NTL; and (4) temporal gap-filling to reduce persistent data gaps. Extensive benchmark tests at representative spatial and temporal scales were conducted on the VNP46 time series record to characterize the uncertainties stemming from upstream data sources. Initial validation results are presented together with example case studies illustrating the scientific utility of the products. This includes an evaluation of temporal patterns of NTL dynamics associated with urbanization, socioeconomic variability, cultural characteristics, and displaced populations affected by conflict. Current and planned activities under the Group on Earth Observations (GEO) Human Planet Initiative are aimed at evaluating the products at different geographic locations and time periods representing the full range of retrieval conditions.

  16. A standard test case suite for two-dimensional linear transport on the sphere: results from a collection of state-of-the-art schemes

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Ullrich, P. A.; Jablonowski, C.; Bosler, P. A.; Calhoun, D.; Conley, A. J.; Enomoto, T.; Dong, L.; Dubey, S.; Guba, O.; Hansen, A. B.; Kaas, E.; Kent, J.; Lamarque, J.-F.; Prather, M. J.; Reinert, D.; Shashkin, V. V.; Skamarock, W. C.; Sørensen, B.; Taylor, M. A.; Tolstykh, M. A.

    2013-09-01

    Recently, a standard test case suite for 2-D linear transport on the sphere was proposed to assess important aspects of accuracy in geophysical fluid dynamics with a "minimal" set of idealized model configurations/runs/diagnostics. Here we present results from 19 state-of-the-art transport scheme formulations based on finite-difference/finite-volume methods as well as emerging (in the context of atmospheric/oceanographic sciences) Galerkin methods. Discretization grids range from traditional regular latitude-longitude grids to more isotropic domain discretizations such as icosahedral and cubed-sphere tessellations of the sphere. The schemes are evaluated using a wide range of diagnostics in idealized flow environments. Accuracy is assessed in single- and two-tracer configurations using conventional error norms as well as novel diagnostics designed for climate and climate-chemistry applications. In addition, algorithmic considerations that may be important for computational efficiency are reported on. The latter is inevitably computing platform dependent. The ensemble of results from a wide variety of schemes presented here helps shed light on the ability of the test case suite diagnostics and flow settings to discriminate between algorithms and provide insights into accuracy in the context of global atmospheric/ocean modeling. A library of benchmark results is provided to facilitate scheme intercomparison and model development. Simple software and data sets are made available to facilitate the process of model evaluation and scheme intercomparison.

  17. A standard test case suite for two-dimensional linear transport on the sphere: results from a collection of state-of-the-art schemes

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Ullrich, P. A.; Jablonowski, C.; Bosler, P. A.; Calhoun, D.; Conley, A. J.; Enomoto, T.; Dong, L.; Dubey, S.; Guba, O.; Hansen, A. B.; Kaas, E.; Kent, J.; Lamarque, J.-F.; Prather, M. J.; Reinert, D.; Shashkin, V. V.; Skamarock, W. C.; Sørensen, B.; Taylor, M. A.; Tolstykh, M. A.

    2014-01-01

    Recently, a standard test case suite for 2-D linear transport on the sphere was proposed to assess important aspects of accuracy in geophysical fluid dynamics with a "minimal" set of idealized model configurations/runs/diagnostics. Here we present results from 19 state-of-the-art transport scheme formulations based on finite-difference/finite-volume methods as well as emerging (in the context of atmospheric/oceanographic sciences) Galerkin methods. Discretization grids range from traditional regular latitude-longitude grids to more isotropic domain discretizations such as icosahedral and cubed-sphere tessellations of the sphere. The schemes are evaluated using a wide range of diagnostics in idealized flow environments. Accuracy is assessed in single- and two-tracer configurations using conventional error norms as well as novel diagnostics designed for climate and climate-chemistry applications. In addition, algorithmic considerations that may be important for computational efficiency are reported on. The latter is inevitably computing platform dependent. The ensemble of results from a wide variety of schemes presented here helps shed light on the ability of the test case suite diagnostics and flow settings to discriminate between algorithms and provide insights into accuracy in the context of global atmospheric/ocean modeling. A library of benchmark results is provided to facilitate scheme intercomparison and model development. Simple software and data sets are made available to facilitate the process of model evaluation and scheme intercomparison.

  18. A new method of preparing embeddment-free sections for transmission electron microscopy: applications to the cytoskeletal framework and other three-dimensional networks

    PubMed Central

    1984-01-01

    Diethylene glycol distearate is used as a removable embedding medium to produce embeddment-free sections for transmission electron microscopy. The easily cut sections of this material float and form ribbons in a water-filled knife trough and exhibit interference colors that aid in the selection of sections of equal thickness. The images obtained with embeddment-free sections are compared with those from the more conventional epoxy-embedded sections, and illustrate that embedding medium can obscure important biological structures, especially protein filament networks. The embeddment-free section methodology is well suited for morphological studies of cytoskeletal preparations obtained by extraction of cells with nonionic detergent in cytoskeletal stabilizing medium. The embeddment-free section also serves to bridge the very different images afforded by embedded sections and unembedded whole mounts. PMID:6539336

  19. Quadratic integrand double-hybrid made spin-component-scaled

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brémond, Éric, E-mail: eric.bremond@iit.it; Savarese, Marika; Sancho-García, Juan C.

    2016-03-28

    We propose two analytical expressions aiming to rationalize the spin-component-scaled (SCS) and spin-opposite-scaled (SOS) schemes for double-hybrid exchange-correlation density-functionals. Their performances are extensively tested within the framework of the nonempirical quadratic integrand double-hybrid (QIDH) model on energetic properties included into the very large GMTKN30 benchmark database, and on structural properties of semirigid medium-sized organic compounds. The SOS variant is revealed as a less computationally demanding alternative to reach the accuracy of the original QIDH model without losing any theoretical background.

  20. Thermally conductive polymers

    NASA Technical Reports Server (NTRS)

    Byrd, N. R.; Jenkins, R. K.; Lister, J. L. (Inventor)

    1971-01-01

    A thermally conductive polymer is provided having physical and chemical properties suited to use as a medium for potting electrical components. The polymer is prepared from hydroquinone, phenol, and formaldehyde, by conventional procedures employed for the preparation of phenol-formaldehyde resins. While the proportions of the monomers can be varied, a preferred polymer is formed from the monomers in a 1:1:2.4 molar ratio of hydroquinone:phenol:formaldehyde.

  1. Mixing Single Scattering Properties in Vector Radiative Transfer for Deterministic and Stochastic Solutions

    NASA Astrophysics Data System (ADS)

    Mukherjee, L.; Zhai, P.; Hu, Y.; Winker, D. M.

    2016-12-01

    Among the primary factors which determine the polarized radiation field of a turbid medium are the single scattering properties of the medium. When multiple types of scatterers are present, their single scattering properties need to be properly mixed in order to find solutions to the vector radiative transfer (VRT) theory. VRT solvers can be divided into two types: deterministic and stochastic. A deterministic solver can only accept one set of single scattering properties in its smallest discretized spatial volume; when the medium contains more than one kind of scatterer, their single scattering properties are averaged and then used as input for the deterministic solver. A stochastic solver, by contrast, can work with different kinds of scatterers explicitly. In this work, two different mixing schemes are studied using the Successive Order of Scattering (SOS) method and the Monte Carlo (MC) method. One scheme is used for the deterministic solver and the other for the stochastic Monte Carlo method. It is found that the solutions from the two VRT solvers using the two different mixing schemes agree with each other extremely well. This confirms the equivalence of the two mixing schemes and also provides a benchmark for the VRT solution for the medium studied.
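
    A minimal sketch of the two mixing strategies described above, assuming two illustrative scatterer types (a Henyey-Greenstein "aerosol-like" component and an isotropic one): a deterministic solver would use the extinction-weighted bulk single-scattering albedo and the scattering-coefficient-weighted phase function, while a Monte Carlo code can instead pick which scatterer acts at each collision with probability proportional to its scattering coefficient. The numbers and phase functions are assumptions, not values from the paper.

      import numpy as np

      # Two illustrative scatterer types: extinction coefficient, single-scattering
      # albedo, and a phase function normalized so that its integral over mu is 1.
      mu = np.linspace(-1.0, 1.0, 181)                            # cosine of scattering angle
      p_iso = np.full_like(mu, 0.5)                               # isotropic
      g = 0.7
      p_hg = 0.5 * (1 - g**2) / (1 + g**2 - 2 * g * mu) ** 1.5    # Henyey-Greenstein

      types = [
          {"ext": 0.8, "ssa": 0.95, "phase": p_hg},    # forward-scattering component
          {"ext": 0.2, "ssa": 1.00, "phase": p_iso},   # isotropic component
      ]

      # Deterministic-style mixing: one bulk set of properties per spatial cell.
      ext_mix = sum(t["ext"] for t in types)
      sca = np.array([t["ext"] * t["ssa"] for t in types])        # scattering coefficients
      ssa_mix = sca.sum() / ext_mix
      phase_mix = sum(w * t["phase"] for w, t in zip(sca / sca.sum(), types))

      # Monte Carlo-style mixing: choose the scatterer type explicitly at each collision.
      rng = np.random.default_rng(0)
      picks = rng.choice(len(types), size=100000, p=sca / sca.sum())

      norm = np.sum((phase_mix[:-1] + phase_mix[1:]) * np.diff(mu)) / 2.0
      print(ssa_mix, norm)                             # bulk albedo; mixed phase function stays normalized
      print(np.mean(picks == 0), sca[0] / sca.sum())   # MC sampling reproduces the same weights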

  2. MT71x: Multi-Temperature Library Based on ENDF/B-VII.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conlin, Jeremy Lloyd; Parsons, Donald Kent; Gray, Mark Girard

    The Nuclear Data Team has released a multi-temperature transport library, MT71x, based upon ENDF/B-VII.1 with a few modifications as well as additional evaluations, for a total of 427 isotope tables. The library was processed using NJOY2012.39 into 23 temperatures. MT71x consists of two sub-libraries: MT71xMG for multigroup energy representation data and MT71xCE for continuous energy representation data. These sub-libraries are suitable for deterministic transport and Monte Carlo transport applications, respectively. The SZAs used are the same for the two sub-libraries; that is, the same SZA can be used for both libraries. This makes comparisons between the two libraries, and between deterministic and Monte Carlo codes, straightforward. Both the multigroup energy and continuous energy libraries were verified and validated with our checking codes checkmg and checkace (multigroup and continuous energy, respectively). An expanded suite of tests was then used for additional verification, and finally the library was validated against an extensive suite of critical benchmark models. We feel that this library is suitable for all calculations and is particularly useful for calculations sensitive to temperature effects.

  3. Biomarker Benchmarks: Reproductive and Endocrine Biomarkers in Largemouth Bass and Common Carp from United States Waters

    USGS Publications Warehouse

    Goodbred, Steven L.; Smith, Stephen B.; Greene, Patricia S.; Rauschenberger, Richard H.; Bartish, Timothy M.

    2007-01-01

    The U.S. Geological Survey (USGS) has developed a national database and report on endocrine and reproductive condition in two species of fish collected in U.S. streams and rivers. This information provides scientists with a national basis for comparing results of endocrine measurements in fish from individual sites throughout the country, so that scientists can better ascertain normal levels of biomarkers. The database includes information on several measures of reproductive and endocrine condition for common carp and largemouth bass. Data summaries are provided by reproductive season and geographic region. A national-scale reconnaissance investigation was initiated in 1994 by the USGS that utilized a suite of biological assays (biomarkers) as indicators of reproductive health, and potentially, endocrine disruption in two widely distributed species of teleost (bony) fish, largemouth bass (Micropterus salmoides) and common carp (Cyprinus carpio). The suite of assays included plasma sex-steroid hormones, stage of gonadal development, and plasma vitellogenin, an egg protein that indicates exposure to estrogenic compounds when found in male fish. More than 2,200 common carp and 650 largemouth bass were collected at 119 rivers and streams (fig. 1).

  4. Noninterceptive transverse emittance measurements using BPM for Chinese ADS R&D project

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-Jun; Feng, Chi; He, Yuan; Dou, Weiping; Tao, Yue; Chen, Wei-long; Jia, Huan; Liu, Shu-hui; Wang, Wang-sheng; Zhang, Yong; Wu, Jian-qiang; Zhang, Sheng-hu; Zhang, X. L.

    2016-04-01

    Noninterceptive four-dimensional transverse emittance measurements are essential for commissioning high power continuous-wave (CW) proton linacs as well as for their operation. Conventional emittance measuring devices such as slits and wire scanners are not well suited under these conditions because the beam would damage them. Therefore, a method using a noninterceptive Beam Position Monitor (BPM) is developed and demonstrated on Injector Scheme II of the Chinese Accelerator Driven Sub-critical System (China-ADS) proofing facility at the Institute of Modern Physics (IMP) [1]. The measurement results are in good agreement with wire scanners and slits for low duty-factor pulsed (LDFP) beam. In this paper, the detailed experiment design, data analysis and result benchmarking are presented.

  5. Issues in ATM Support of High-Performance, Geographically Distributed Computing

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D.G

    1995-01-01

    This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.

  6. Delta-ray Production in MCNP 6.2.0

    NASA Astrophysics Data System (ADS)

    Anderson, C.; McKinney, G.; Tutt, J.; James, M.

    Secondary electrons in the form of delta-rays, also referred to as knock-on electrons, have been a feature of MCNP for electron and positron transport for over 20 years. While MCNP6 now includes transport for a suite of heavy-ions and charged particles from its integration with MCNPX, the production of delta-rays was still limited to electron and positron transport. In the newest release of MCNP6, version 6.2.0, delta-ray production has now been extended for all energetic charged particles. The basis of this production is the analytical formulation from Rossi and ICRU Report 37. This paper discusses the MCNP6 heavy charged-particle implementation and provides production results for several benchmark/test problems.

  7. Industrial research for transmutation scenarios

    NASA Astrophysics Data System (ADS)

    Camarcat, Noel; Garzenne, Claude; Le Mer, Joël; Leroyer, Hadrien; Desroches, Estelle; Delbecq, Jean-Michel

    2011-04-01

    This article presents the results of research scenarios for americium transmutation in a 22nd century French nuclear fleet, using sodium fast breeder reactors. We benchmark the americium transmutation benefits and drawbacks against a reference case consisting of a hypothetical 60 GWe fleet of pure plutonium breeders. The fluxes in the various parts of the cycle (reactors, fabrication plants, reprocessing plants and underground disposal) are calculated using EDF's suite of codes, comparable in capabilities to those of other research facilities. We study the reduction of underground thermal heat load due to americium partitioning and the minimization of repository area. We endeavor to estimate the increased technical complexity of the surface facilities needed to handle the americium fluxes: special fuel fabrication plants, americium fast burners, special reprocessing shops, handling equipment and transport casks between those facilities.

  8. A performance comparison of the Cray-2 and the Cray X-MP

    NASA Technical Reports Server (NTRS)

    Schmickley, Ronald; Bailey, David H.

    1986-01-01

    A suite of thirteen large Fortran benchmark codes was run on Cray-2 and Cray X-MP supercomputers. These codes were a mix of compute-intensive scientific application programs (mostly computational fluid dynamics) and some special vectorized computation exercise programs. For the general class of programs tested on the Cray-2, most of which were not specially tuned for speed, the floating point operation rates varied under a variety of system load configurations from 40 percent up to 125 percent of X-MP performance rates. It is concluded that the Cray-2, in the original system configuration studied (without memory pseudo-banking), will run untuned Fortran code, on average, at about 70 percent of X-MP speeds.

  9. Representative Benchmark Suites for Barrier Heights of Diverse Reaction Types and Assessment of Electronic Structure Methods for Thermochemical Kinetics

    DTIC Science & Technology

    2006-12-19

    [Record text is an extraction fragment: it lists density functionals assessed for thermochemical kinetics (including PBE1KCIS, PBE1PBE, PW6B95, PWB6K, TPSS1KCIS, TPSSh, X3LYP, and τHCTHh), single-level WFT methods, and partial rows of an error-statistics table for methods such as PBE1KCIS/MG3, X3LYP/MG3S, and B3LYP/MG3S.]

  10. Complete low-cost implementation of a teleoperated control system for a humanoid robot.

    PubMed

    Cela, Andrés; Yebes, J Javier; Arroyo, Roberto; Bergasa, Luis M; Barea, Rafael; López, Elena

    2013-01-24

    Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step for further development and study of human motion and walking. A human suit is built, consisting of 8 sensors, 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements in a small humanoid robot. The data from the sensors is wirelessly transmitted via two ZigBee RF configurable modules installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent falling down while executing different actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot balance. The humanoid robot is controlled by a medium capacity processor and a low computational cost is achieved for executing the different algorithms. Both hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system.
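
    The abstract states that the back-mounted accelerometer signal is Kalman-filtered before being used for balance control. As a hedged illustration of that step only (a scalar random-walk Kalman filter with assumed noise variances, not the authors' implementation), a minimal Python sketch:

      import numpy as np

      def kalman_1d(measurements, q=1e-4, r=0.05**2, x0=0.0, p0=1.0):
          """Scalar Kalman filter with a random-walk state model.
          q: assumed process noise variance, r: assumed measurement noise variance."""
          x, p = x0, p0
          estimates = []
          for z in measurements:
              p = p + q                 # predict: state unchanged, uncertainty grows
              k = p / (p + r)           # Kalman gain
              x = x + k * (z - x)       # update with the accelerometer-derived tilt z
              p = (1.0 - k) * p
              estimates.append(x)
          return np.array(estimates)

      # Synthetic test: a slow torso lean corrupted by accelerometer noise.
      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 5.0, 250)
      true_tilt = 0.1 * np.sin(0.5 * t)
      noisy = true_tilt + rng.normal(scale=0.05, size=t.size)
      filtered = kalman_1d(noisy)
      print(np.std(noisy - true_tilt), np.std(filtered - true_tilt))   # error shrinks after filtering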

  11. Complete Low-Cost Implementation of a Teleoperated Control System for a Humanoid Robot

    PubMed Central

    Cela, Andrés; Yebes, J. Javier; Arroyo, Roberto; Bergasa, Luis M.; Barea, Rafael; López, Elena

    2013-01-01

    Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step for further development and study of human motion and walking. A human suit is built, consisting of 8 sensors, 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements in a small humanoid robot. The data from the sensors is wirelessly transmitted via two ZigBee RF configurable modules installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent falling down while executing different actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot balance. The humanoid robot is controlled by a medium capacity processor and a low computational cost is achieved for executing the different algorithms. Both hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system. PMID:23348029

  12. Superplasticity in a lean Fe-Mn-Al steel.

    PubMed

    Han, Jeongho; Kang, Seok-Hyeon; Lee, Seung-Joon; Kawasaki, Megumi; Lee, Han-Joo; Ponge, Dirk; Raabe, Dierk; Lee, Young-Kook

    2017-09-29

    Superplastic alloys exhibit extremely high ductility (>300%) without cracking when tensile-strained at temperatures above half of their melting point. Superplasticity, which resembles the flow behavior of honey, is caused by grain boundary sliding in metals. Although several non-ferrous and ferrous superplastic alloys have been reported, their practical applications are limited due to high material cost, low strength after forming, high deformation temperature, and complicated fabrication processes. Here we introduce a new compositionally lean (Fe-6.6Mn-2.3Al, wt.%) superplastic medium Mn steel that resolves these limitations. The medium Mn steel is characterized by ultrafine grains, low material costs, simple fabrication, i.e., conventional hot and cold rolling, low deformation temperature (ca. 650 °C) and superior ductility above 1300% at 850 °C. We suggest that this ultrafine-grained medium Mn steel may accelerate the commercialization of superplastic ferrous alloys. Research in new alloy compositions and treatments may allow increased strength in mass-produced, intricately shaped parts. Here the authors introduce a superplastic medium manganese steel which has an inexpensive lean chemical composition and which is suited for conventional manufacturing processes.

  13. The Effect of Aggressive Corrosion Mediums on the Microstructure and Properties of Mild Steel

    NASA Astrophysics Data System (ADS)

    Araoyinbo, A. O.; Salleh, M. A. A. Mohd; Rahmat, A.; Azmi, A. I.; Rahim, W. M. F. Wan Abd; Achitei, D. C.; Jin, T. S.

    2018-06-01

    Mild steel is known to be one of the major construction materials and has been extensively used in most chemical and material industries because its properties can be easily altered to suit various application areas. In this research, mild steel was exposed to different aggressive mediums in order to observe the effect of these interactions on its surface morphology and properties. The mild steel used was cut into samples of 7 cm length and 3 cm width. The aggressive mediums used were 100 mL aqueous solutions of hydrochloric acid, sodium hydroxide (40 g/L), and sodium chloride (35 g/L) at room temperature. The characterizations performed were the hardness test with a Rockwell hardness tester, surface morphology by optical microscope, surface roughness, and weight loss from the immersion test. It was observed that the hardness value and the weight loss for the different mild steel samples immersed in the different aggressive mediums decrease with prolonged exposure, and a severe pitting form of corrosion was present on the surface.

  14. Method and Apparatus for a Miniature Bioreactor System for Long-Term Cell Culture

    NASA Technical Reports Server (NTRS)

    Kleis, Stanley J. (Inventor); Geffert, Sandra K. (Inventor); Gonda, Steve R. (Inventor)

    2015-01-01

    A bioreactor and method that permits continuous and simultaneous short, moderate, or long term cell culturing of one or more cell types or tissue in a laminar flow configuration is disclosed, where the bioreactor supports at least two laminar flow zones, which are isolated by laminar flow without the need for physical barriers between the zones. The bioreactors of this invention are ideally suited for studying short, moderate and long term studies of cell cultures and the response of cell cultures to one or more stressors such as pharmaceuticals, hypoxia, pathogens, or any other stressor. The bioreactors of this invention are also ideally suited for short, moderate or long term cell culturing with periodic cell harvesting and/or medium processing for secreted cellular components.

  15. Influence of sediment chemistry and sediment toxicity on macroinvertebrate communities across 99 wadable streams of the Midwestern USA

    USGS Publications Warehouse

    Moran, Patrick W.; Nowell, Lisa H.; Kemble, Nile E.; Mahler, Barbara J.; Waite, Ian R.; Van Metre, Peter C.

    2017-01-01

    Simultaneous assessment of sediment chemistry, sediment toxicity, and macroinvertebrate communities can provide multiple lines of evidence when investigating relations between sediment contaminants and ecological degradation. These three measures were evaluated at 99 wadable stream sites across 11 states in the Midwestern United States during the summer of 2013 to assess sediment pollution across a large agricultural landscape. This evaluation considers an extensive suite of sediment chemistry totaling 274 analytes (polycyclic aromatic hydrocarbons, organochlorine compounds, polychlorinated biphenyls, polybrominated diphenyl ethers, trace elements, and current-use pesticides) and a mixture assessment based on the ratios of detected compounds to available effects-based benchmarks. The sediments were tested for toxicity with the amphipod Hyalella azteca (28-d exposure), the midge Chironomus dilutus (10-d), and, at a few sites, with the freshwater mussel Lampsilis siliquoidea (28-d). Sediment concentrations, normalized to organic carbon content, infrequently exceeded benchmarks for aquatic health, which was generally consistent with low rates of observed toxicity. However, the benchmark-based mixture score and the pyrethroid insecticide bifenthrin were significantly related to observed sediment toxicity. The sediment mixture score and bifenthrin were also significant predictors of the upper limits of several univariate measures of the macroinvertebrate community (EPT percent, MMI (Macroinvertebrate Multimetric Index) Score, Ephemeroptera and Trichoptera richness) using quantile regression. Multivariate pattern matching (Mantel-like tests) of macroinvertebrate species per site to identified contaminant metrics and sediment toxicity also indicates that the sediment mixture score and bifenthrin have weak, albeit significant, influence on the observed invertebrate community composition. Together, these three lines of evidence (toxicity tests, univariate metrics, and multivariate community analysis) suggest that elevated contaminant concentrations in sediments, in particular bifenthrin, are limiting macroinvertebrate communities in several of these Midwest streams.
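
    The quantile-regression step described above (relating the upper limit of an invertebrate metric to a sediment-mixture metric) can be illustrated with synthetic data; the sketch below uses quantile regression from statsmodels, and the wedge-shaped data, variable names, and the 90th-percentile choice are assumptions for illustration only, not the study's data or exact model.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Synthetic example: an invertebrate metric whose *upper limit* declines as a
      # sediment contaminant mixture score increases (a wedge-shaped relationship).
      rng = np.random.default_rng(7)
      n = 99
      mixture_score = rng.uniform(0.0, 2.0, n)
      ceiling = 60.0 - 20.0 * mixture_score
      ept_percent = ceiling * rng.uniform(0.1, 1.0, n)      # sites scatter below the ceiling

      df = pd.DataFrame({"mixture": mixture_score, "ept": ept_percent})

      # Fit the 90th-percentile (upper-limit) regression line.
      fit = smf.quantreg("ept ~ mixture", df).fit(q=0.9)
      print(fit.params)          # slope should come out clearly negative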

  16. The infinite medium Green's function for neutron transport in plane geometry 40 years later

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.D.

    1993-01-01

    In 1953, the first of what was supposed to be two volumes on neutron transport theory was published. The monograph, entitled "Introduction to the Theory of Neutron Diffusion" by Case et al., appeared as a Los Alamos National Laboratory report and was to be followed by a second volume, which never appeared as intended because of the death of Placzek. Instead, Case and Zweifel collaborated on the now classic work entitled Linear Transport Theory, in which the underlying mathematical theory of linear transport was presented. The initial monograph, however, represented the coming of age of neutron transport theory, which had its roots in radiative transfer and kinetic theory. In addition, it provided the first benchmark results along with the mathematical development for several fundamental neutron transport problems. In particular, one-dimensional infinite medium Green's functions for the monoenergetic transport equation in plane and spherical geometries were considered, complete with numerical results to be used as standards to guide code development for applications. Unfortunately, because of the limited computational resources of the day, some numerical results were incorrect. Also, only conventional mathematics and numerical methods were used because the transport theorists of the day were just becoming acquainted with more modern mathematical approaches. In this paper, the Green's function solution is revisited in light of modern numerical benchmarking methods with an emphasis on evaluation rather than theoretical results. The primary motivation for considering the Green's function at this time is its emerging use in solving finite and heterogeneous media transport problems.

  17. Narrow linewidth diode laser modules for quantum optical sensor applications in the field and in space

    NASA Astrophysics Data System (ADS)

    Wicht, A.; Bawamia, A.; Krüger, M.; Kürbis, Ch.; Schiemangk, M.; Smol, R.; Peters, A.; Tränkle, G.

    2017-02-01

    We present the status of our efforts to develop very compact and robust diode laser modules specifically suited for quantum optics experiments in the field and in space. The paper describes why hybrid micro-integration and GaAs-diode laser technology is best suited to meet the needs of such applications. The electro-optical performance achieved with hybrid micro-integrated, medium linewidth, high power distributed-feedback master-oscillator-power-amplifier modules and with medium power, narrow linewidth extended cavity diode lasers emitting at 767 nm and 780 nm is briefly described, and the status of space relevant stress tests and space heritage is summarized. We also describe the performance of an ECDL operating at 1070 nm. Further, a novel and versatile technology platform is introduced that allows for integration of any type of laser system or electro-optical module that can be constructed from two GaAs chips. This facilitates, for the first time, hybrid micro-integration, e.g. of extended cavity diode laser master-oscillator-power-amplifier modules, of dual-stage optical amplifiers, or of lasers with integrated, chip-based phase modulators. As an example we describe the implementation of an ECDL-MOPA designed for experiments on ultra-cold rubidium and potassium atoms on board a sounding rocket and give basic performance parameters.

  18. TH-D-204-00: The Pursuit of Radiation Oncology Performance Excellence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The Malcolm Baldrige National Quality Improvement Act was signed into law in 1987 to advance U.S. business competitiveness and economic growth. Administered by the National Institute of Standards and Technology (NIST), the Act created the Baldrige National Quality Program, now renamed the Baldrige Performance Excellence Program. The comprehensive analytical approaches referred to as the Baldrige Healthcare Criteria are very well suited for the evaluation and sustainable improvement of radiation oncology management and operations. A multidisciplinary self-assessment approach is used for radiotherapy program evaluation and development in order to generate a fact-based, knowledge-driven system for improving quality of care, increasing patient satisfaction, building employee engagement, and boosting organizational innovation. The methodology also provides a valuable framework for benchmarking an individual radiation oncology practice against guidelines defined by accreditation and professional organizations and regulatory agencies. Learning Objectives: To gain knowledge of the Baldrige Performance Excellence Program as it relates to Radiation Oncology. To appreciate the value of a multidisciplinary self-assessment approach in the pursuit of Radiation Oncology quality care, patient satisfaction, and workforce commitment. To acquire a set of useful measurement tools with which an individual Radiation Oncology practice can benchmark its performance against guidelines defined by accreditation and professional organizations and regulatory agencies.

  19. TH-D-204-01: The Pursuit of Radiation Oncology Performance Excellence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sternick, E.

    The Malcolm Baldrige National Quality Improvement Act was signed into law in 1987 to advance U.S. business competitiveness and economic growth. Administered by the National Institute of Standards and Technology (NIST), the Act created the Baldrige National Quality Program, now renamed the Baldrige Performance Excellence Program. The comprehensive analytical approaches referred to as the Baldrige Healthcare Criteria are very well suited for the evaluation and sustainable improvement of radiation oncology management and operations. A multidisciplinary self-assessment approach is used for radiotherapy program evaluation and development in order to generate a fact-based, knowledge-driven system for improving quality of care, increasing patient satisfaction, building employee engagement, and boosting organizational innovation. The methodology also provides a valuable framework for benchmarking an individual radiation oncology practice against guidelines defined by accreditation and professional organizations and regulatory agencies. Learning Objectives: To gain knowledge of the Baldrige Performance Excellence Program as it relates to Radiation Oncology. To appreciate the value of a multidisciplinary self-assessment approach in the pursuit of Radiation Oncology quality care, patient satisfaction, and workforce commitment. To acquire a set of useful measurement tools with which an individual Radiation Oncology practice can benchmark its performance against guidelines defined by accreditation and professional organizations and regulatory agencies.

  20. TomoPhantom, a software package to generate 2D-4D analytical phantoms for CT image reconstruction algorithm benchmarks

    NASA Astrophysics Data System (ADS)

    Kazantsev, Daniil; Pickalov, Valery; Nagella, Srikanth; Pasca, Edoardo; Withers, Philip J.

    2018-01-01

    In the field of computerized tomographic imaging, many novel reconstruction techniques are routinely tested using simplistic numerical phantoms, e.g. the well-known Shepp-Logan phantom. These phantoms cannot sufficiently cover the broad spectrum of applications in CT imaging where, for instance, smooth or piecewise-smooth 3D objects are common. TomoPhantom provides quick access to an external library of modular analytical 2D/3D phantoms with temporal extensions. In TomoPhantom, quite complex phantoms can be built using additive combinations of geometrical objects, such as Gaussians, parabolas, cones, ellipses, rectangles and volumetric extensions of them. The newly designed phantoms are better suited for benchmarking and testing of different image processing techniques. Specifically, tomographic reconstruction algorithms which employ 2D and 3D scanning geometries can be rigorously analyzed using the software. TomoPhantom also provides the capability of obtaining analytical tomographic projections, which further extends the applicability of the software towards more realistic testing that is free from the "inverse crime". All core modules of the package are written in the C-OpenMP language and wrappers for Python and MATLAB are provided to enable easy access. Due to the C-based multi-threaded implementation, volumetric phantoms of high spatial resolution can be obtained with computational efficiency.
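
    The additive-combination idea behind such phantoms is easy to mimic directly in NumPy; the sketch below builds a small 2D phantom from a few ellipses and a Gaussian on a unit grid. It does not use TomoPhantom's actual API — the object parameters and helper names are illustrative.

      import numpy as np

      N = 256
      y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]   # unit-square image grid

      def ellipse(cx, cy, a, b, angle, value):
          """Uniform ellipse with half-axes a, b, rotation angle (radians) and intensity value."""
          xr = (x - cx) * np.cos(angle) + (y - cy) * np.sin(angle)
          yr = -(x - cx) * np.sin(angle) + (y - cy) * np.cos(angle)
          return value * ((xr / a) ** 2 + (yr / b) ** 2 <= 1.0)

      def gaussian(cx, cy, sx, sy, value):
          return value * np.exp(-0.5 * (((x - cx) / sx) ** 2 + ((y - cy) / sy) ** 2))

      # Additive combination of simple objects, in the spirit of modular phantoms.
      phantom = (ellipse(0.0, 0.0, 0.7, 0.9, 0.0, 1.0)
                 - ellipse(0.0, 0.05, 0.6, 0.8, 0.0, 0.4)
                 + gaussian(-0.25, 0.2, 0.1, 0.15, 0.3)
                 + ellipse(0.3, -0.3, 0.12, 0.2, np.pi / 6, 0.5))

      print(phantom.shape, float(phantom.min()), float(phantom.max()))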

  1. A homology-based pipeline for global prediction of post-translational modification sites

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Shi, Shao-Ping; Xu, Hao-Dong; Suo, Sheng-Bao; Qiu, Jian-Ding

    2016-05-01

    The pathways of protein post-translational modifications (PTMs) have been shown to play particularly important roles in almost any biological process. Identification of PTM substrates, along with information on the exact sites, is fundamental for fully understanding or controlling biological processes. Alternative computational strategies would help to annotate PTMs in a high-throughput manner. Traditional algorithms are suited to the common organisms and tissues that have a complete PTM atlas or extensive experimental data, while annotation of rare PTMs in most organisms remains a clear challenge. To this end, we have developed a novel homology-based pipeline named PTMProber that allows identification of potential modification sites for most of the proteomes lacking PTM data. A cross-promotion E-value (CPE) is used in our pipeline as a stringent benchmark to evaluate homology to known modification sites. Independent validation tests show that PTMProber achieves over 58.8% recall with high precision by the CPE benchmark. Comparisons with other machine-learning tools show that the PTMProber pipeline performs better on general predictions. We also developed a web-based tool integrating this pipeline at http://bioinfo.ncu.edu.cn/PTMProber/index.aspx. In addition to pre-constructed PTM prediction models, the website provides extended functionality to allow users to customize models.
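
    The core idea of a homology-based PTM pipeline, transferring an experimentally known modification site onto a homologous query sequence through a pairwise alignment, can be sketched in a few lines; the toy sequences below and the helper name are illustrative, and PTMProber's actual pipeline (including its cross-promotion E-value scoring) is considerably more involved.

      def map_site_through_alignment(aln_known, aln_query, site_known):
          """Map a 1-based modified position on the known protein onto the query,
          given two gapped, equal-length aligned sequences ('-' = gap)."""
          pos_known = pos_query = 0
          for a, b in zip(aln_known, aln_query):
              if a != "-":
                  pos_known += 1
              if b != "-":
                  pos_query += 1
              if a != "-" and pos_known == site_known:
                  return pos_query if b != "-" else None   # None: the site falls in a gap

      # Toy example: a known phosphosite at residue 5 (the second S) of the annotated protein.
      known = "MKT-SSPLRK"
      query = "MKTASS-LRK"
      print(map_site_through_alignment(known, query, 5))   # -> 6 (the matching S in the query)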

  2. Review of pathogen treatment reductions for onsite non ...

    EPA Pesticide Factsheets

    Communities face a challenge when implementing onsite reuse of collected waters for non-potable purposes given the lack of national microbial standards. Quantitative Microbial Risk Assessment (QMRA) can be used to predict the pathogen risks associated with the non-potable reuse of onsite-collected waters; the present work reviewed the relevant QMRA literature to prioritize knowledge gaps and identify health-protective pathogen treatment reduction targets. The review indicated that ingestion of untreated, onsite-collected graywater, rainwater, seepage water and stormwater from a variety of exposure routes resulted in gastrointestinal infection risks greater than the traditional acceptable level of risk. We found no QMRAs that estimated the pathogen risks associated with onsite, non-potable reuse of blackwater. Pathogen treatment reduction targets for non-potable, onsite reuse that included a suite of reference pathogens (i.e., including relevant bacterial, protozoan, and viral hazards) were limited to graywater (for a limited set of domestic uses) and stormwater (for domestic and municipal uses). These treatment reductions corresponded with the health benchmark of a probability of infection or illness of 10^-3 per person per year or less. The pathogen treatment reduction targets varied depending on the target health benchmark, reference pathogen, source water, and water reuse application. Overall, there remains a need for pathogen reduction targets that are heal

  3. DensToolKit: A comprehensive open-source package for analyzing the electron density and its derivative scalar and vector fields

    NASA Astrophysics Data System (ADS)

    Solano-Altamirano, J. M.; Hernández-Pérez, Julio M.

    2015-11-01

    DensToolKit is a suite of cross-platform, optionally parallelized, programs for analyzing the molecular electron density (ρ) and several fields derived from it. Scalar and vector fields, such as the gradient of the electron density (∇ρ), electron localization function (ELF) and its gradient, localized orbital locator (LOL), region of slow electrons (RoSE), reduced density gradient, localized electrons detector (LED), information entropy, molecular electrostatic potential, kinetic energy densities K and G, among others, can be evaluated on zero, one, two, and three dimensional grids. The suite includes a program for searching critical points and bond paths of the electron density, under the framework of Quantum Theory of Atoms in Molecules. DensToolKit also evaluates the momentum space electron density on spatial grids, and the reduced density matrix of order one along lines joining two arbitrary atoms of a molecule. The source code is distributed under the GNU-GPLv3 license, and we release the code with the intent of establishing an open-source collaborative project. The style of DensToolKit's code follows some of the guidelines of an object-oriented program. This allows us to supply the user with a simple means of easily implementing new scalar or vector fields, provided they are derived from any of the fields already implemented in the code. In this paper, we present some of the most salient features of the programs contained in the suite, some examples of how to run them, and the mathematical definitions of the implemented fields along with hints of how we optimized their evaluation. We benchmarked our suite against both a freely-available program and a commercial package. Speed-ups of ~2×, and up to 12×, were obtained using a non-parallel compilation of DensToolKit for the evaluation of fields. DensToolKit takes similar times for finding critical points, compared to a commercial package. Finally, we present some perspectives for the future development and growth of the suite.
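
    As an example of one of the scalar fields listed above, the reduced density gradient s = |∇ρ| / (2 (3π²)^(1/3) ρ^(4/3)) can be evaluated on a grid for a model density; the sketch below uses a single normalized Gaussian as a stand-in density (real use would read ρ from a wavefunction file, which this sketch does not do), so the grid, exponent and output are purely illustrative.

      import numpy as np

      # Model density: one normalized 3-D Gaussian "atom" on a coarse grid.
      n, half = 48, 4.0
      ax = np.linspace(-half, half, n)
      X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
      alpha = 1.0
      rho = (alpha / np.pi) ** 1.5 * np.exp(-alpha * (X**2 + Y**2 + Z**2))

      h = ax[1] - ax[0]
      gx, gy, gz = np.gradient(rho, h)
      grad_rho = np.sqrt(gx**2 + gy**2 + gz**2)

      # Reduced density gradient: small where the density is large and smooth,
      # large in the exponential tail of the density.
      s = grad_rho / (2.0 * (3.0 * np.pi**2) ** (1.0 / 3.0) * rho ** (4.0 / 3.0))
      print(float(s.min()), float(s.max()))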

  4. Enhanced NIF neutron activation diagnostics.

    PubMed

    Yeamans, C B; Bleuel, D L; Bernstein, L A

    2012-10-01

    The NIF neutron activation diagnostic suite relies on removable activation samples, leading to operational inefficiencies and a fundamental lower limit on the half-life of the activated product that can be observed. A neutron diagnostic system measuring activation of permanently installed samples could remove these limitations and significantly enhance overall neutron diagnostic capabilities. The physics and engineering aspects of two proposed systems are considered: one measuring the (89)Zr/(89 m)Zr isomer ratio in the existing Zr activation medium and the other using potassium zirconate as the activation medium. Both proposed systems could improve the signal-to-noise ratio of the current system by at least a factor of 5 and would allow independent measurement of fusion core velocity and fuel areal density.

  5. BODYFIT-1FE: a computer code for three-dimensional steady-state/transient single-phase rod-bundle thermal-hydraulic analysis. Draft report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, B.C.J.; Sha, W.T.; Doria, M.L.

    1980-11-01

    The governing equations, i.e., conservation equations for mass, momentum, and energy, are solved as a boundary-value problem in space and an initial-value problem in time. The BODYFIT-1FE code uses the technique of boundary-fitted coordinate systems, where all the physical boundaries are transformed to be coincident with constant coordinate lines in the transformed space. By using this technique, one can prescribe boundary conditions accurately without interpolation. The transformed governing equations in terms of the boundary-fitted coordinates are then solved using an implicit cell-by-cell procedure with a choice of either central or upwind convective derivatives. It is a true benchmark rod-bundle code without invoking any assumptions in the case of laminar flow. However, for turbulent flow, some empiricism must be employed due to the closure problem of turbulence modeling. The detailed velocity and temperature distributions calculated from the code can be used to benchmark and calibrate empirical coefficients employed in subchannel codes and porous-medium analyses.

  6. Assessment of the Accuracy of the Bethe-Salpeter (BSE/GW) Oscillator Strengths.

    PubMed

    Jacquemin, Denis; Duchemin, Ivan; Blondel, Aymeric; Blase, Xavier

    2016-08-09

    Aiming to assess the accuracy of the oscillator strengths determined at the BSE/GW level, we performed benchmark calculations using three complementary sets of molecules. In the first, we considered ∼80 states in Thiel's set of compounds and compared the BSE/GW oscillator strengths to recently determined ADC(3/2) and CC3 reference values. The second set includes the oscillator strengths of the low-lying states of 80 medium to large dyes for which we have determined CC2/aug-cc-pVTZ values. The third set contains 30 anthraquinones for which experimental oscillator strengths are available. We find that BSE/GW accurately reproduces the trends for all series with excellent correlation coefficients to the benchmark data and generally very small errors. Indeed, for Thiel's sets, the BSE/GW values are more accurate (using CC3 references) than both CC2 and ADC(3/2) values on both absolute and relative scales. For all three sets, BSE/GW errors also tend to be nicely spread with almost equal numbers of positive and negative deviations as compared to reference values.

  7. Groundwater flow with energy transport and water-ice phase change: Numerical simulations, benchmarks, and application to freezing in peat bogs

    USGS Publications Warehouse

    McKenzie, J.M.; Voss, C.I.; Siegel, D.I.

    2007-01-01

    In northern peatlands, subsurface ice formation is an important process that can control heat transport, groundwater flow, and biological activity. Temperature was measured over one and a half years in a vertical profile in the Red Lake Bog, Minnesota. To successfully simulate the transport of heat within the peat profile, the U.S. Geological Survey's SUTRA computer code was modified. The modified code simulates fully saturated, coupled porewater-energy transport, with freezing and melting porewater, and includes proportional heat capacity and thermal conductivity of water and ice, decreasing matrix permeability due to ice formation, and latent heat. The model is verified by correctly simulating the Lunardini analytical solution for ice formation in a porous medium with a mixed ice-water zone. The modified SUTRA model correctly simulates the temperature and ice distributions in the peat bog. Two possible benchmark problems for groundwater and energy transport with ice formation and melting are proposed that may be used by other researchers for code comparison. © 2006 Elsevier Ltd. All rights reserved.
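
    A minimal sketch of the latent-heat treatment described above (an "apparent heat capacity" spread over a small freezing interval in an explicit 1D conduction solver) is given below. It is not the modified SUTRA formulation — there is no groundwater flow, the properties are constant, and all parameter values are illustrative assumptions.

      import numpy as np

      # 1-D conduction with freezing; latent heat is handled via an "apparent" heat
      # capacity spread over a small freezing interval [-dTf, 0] degC (illustrative).
      L = 3.34e8        # latent heat of fusion per m^3 of water, J/m^3 (approx.)
      poro = 0.8        # porosity (peat-like, assumed)
      k_th = 0.5        # thermal conductivity, W/(m K) (assumed constant)
      C = 3.0e6         # volumetric heat capacity, J/(m^3 K) (assumed constant)
      dTf = 0.5         # width of the freezing interval, K

      nz, dz, dt = 100, 0.02, 600.0
      T = np.full(nz, 4.0)                         # initial temperature, degC
      for step in range(int(90 * 86400 / dt)):     # ~90 days of surface cooling
          T[0] = -10.0                             # cold surface boundary
          T[-1] = 4.0                              # warm bottom boundary
          # Apparent heat capacity: add the latent term where T lies in the freezing interval.
          Ca = C + np.where((T > -dTf) & (T < 0.0), poro * L / dTf, 0.0)
          lap = np.zeros_like(T)
          lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz**2
          T = T + dt * k_th * lap / Ca

      print("approximate frozen-zone thickness:", dz * np.argmax(T > 0.0), "m")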

  8. Comparison of discrete ordinate and Monte Carlo simulations of polarized radiative transfer in two coupled slabs with different refractive indices.

    PubMed

    Cohen, D; Stamnes, S; Tanikawa, T; Sommersten, E R; Stamnes, J J; Lotsberg, J K; Stamnes, K

    2013-04-22

    A comparison is presented of two different methods for polarized radiative transfer in coupled media consisting of two adjacent slabs with different refractive indices, each slab being a stratified medium with no change in optical properties except in the direction of stratification. One of the methods is based on solving the integro-differential radiative transfer equation for the two coupled slabs using the discrete ordinate approximation. The other method is based on probabilistic and statistical concepts and simulates the propagation of polarized light using the Monte Carlo approach. The emphasis is on non-Rayleigh scattering for particles in the Mie regime. Comparisons with benchmark results available for a slab with constant refractive index show that both methods reproduce these benchmark results when the refractive index is set to be the same in the two slabs. Computed results for test cases with coupling (different refractive indices in the two slabs) show that the two methods produce essentially identical results for identical input in terms of absorption and scattering coefficients and scattering phase matrices.
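
    As a much-simplified scalar analogue of the stochastic approach compared above (a single homogeneous slab, unpolarized light, isotropic scattering, no refractive-index coupling between layers), the Monte Carlo sketch below estimates slab reflectance and transmittance; the optical thickness and single-scattering albedo are arbitrary illustrative values.

      import numpy as np

      def mc_slab(tau_total=1.0, ssa=0.9, n_photons=50000, seed=0):
          """Monte Carlo estimate of the reflectance and transmittance of a homogeneous
          slab with isotropic scattering (scalar radiative transfer, normal incidence)."""
          rng = np.random.default_rng(seed)
          refl = trans = 0.0
          for _ in range(n_photons):
              tau, mu, w = 0.0, 1.0, 1.0             # optical depth, direction cosine, weight
              while True:
                  tau += mu * -np.log(rng.random())  # free path in optical-depth units
                  if tau <= 0.0:
                      refl += w                      # exits the top
                      break
                  if tau >= tau_total:
                      trans += w                     # exits the bottom
                      break
                  w *= ssa                           # implicit absorption at each collision
                  mu = 2.0 * rng.random() - 1.0      # isotropic re-emission direction
                  if mu == 0.0:
                      mu = 1e-12
          return refl / n_photons, trans / n_photons

      print(mc_slab())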

  9. Assessment of different radiative transfer equation solvers for combined natural convection and radiation heat transfer problems

    NASA Astrophysics Data System (ADS)

    Sun, Yujia; Zhang, Xiaobing; Howell, John R.

    2017-06-01

    This work investigates the performance of the DOM, FVM, P1, SP3 and P3 methods for 2D combined natural convection and radiation heat transfer for an absorbing, emitting medium. The Monte Carlo method is used to solve the RTE coupled with the energy equation, and its results are used as benchmark solutions. Effects of the Rayleigh number, Planck number and optical thickness are considered, all covering several orders of magnitude. Temperature distributions, heat transfer rate and computational performance in terms of accuracy and computing time are presented and analyzed.

  10. Neutron radiative capture cross section of 63,65Cu between 0.4 and 7.5 MeV

    NASA Astrophysics Data System (ADS)

    Newsome, I.; Bhike, M.; Krishichayan; Tornow, W.

    2018-04-01

    Natural copper is commonly used as cooling and shielding medium in detector arrangements designed to search for neutrinoless double-β decay. Neutron-induced background reactions on copper could potentially produce signals that are indistinguishable from the signals of interest. The present work focuses on radiative neutron capture experiments on 63,65Cu in the 0.4 to 7.5 MeV neutron energy range. The new data provide evaluations and model calculations with benchmark data needed to extend their applicability in predicting background rates in neutrinoless double-β decay experiments.

  11. Center for the Integration of Optical Computing

    DTIC Science & Technology

    1993-10-15

    medium-high-speed two-beam coupling that could be used in systems as an all-optical interconnect. The basis of our studies was the fact that operating at...to investigate near-band edge photorefractivity for optical interconnects, at least when used at small beam ratio or in phase conjugate resonators. I...field pattern a mess. Their poor beam quality makes laser diode arrays ill suited for many applications, such as launching intense light into single

  12. Nuclear physics with a medium-energy Electron-Ion Collider

    NASA Astrophysics Data System (ADS)

    Accardi, A.; Guzey, V.; Prokudin, A.; Weiss, C.

    2012-06-01

    A polarized ep/eA collider (Electron-Ion Collider, or EIC) with variable center-of-mass energy √s ≈ 20-70 GeV and luminosity ~10³⁴ cm⁻² s⁻¹ would be uniquely suited to address several outstanding questions of Quantum Chromodynamics (QCD) and the microscopic structure of hadrons and nuclei: i) the three-dimensional structure of the nucleon in QCD (sea quark and gluon spatial distributions, orbital motion, polarization, correlations); ii) the fundamental color fields in nuclei (nuclear parton densities, shadowing, coherence effects, color transparency); iii) the conversion of color charge to hadrons (fragmentation, parton propagation through matter, in-medium jets). We briefly review the conceptual aspects of these questions and the measurements that would address them, emphasizing the qualitatively new information that could be obtained with the collider. Such a medium-energy EIC could be realized at Jefferson Lab after the 12 GeV Upgrade (MEIC), or at Brookhaven National Lab as the low-energy stage of eRHIC.

  13. Optical reaction cell and light source for [18F] fluoride radiotracer synthesis

    DOEpatents

    Ferrieri, Richard A.; Schlyer, David; Becker, Richard J.

    1998-09-15

    Apparatus for performing organic synthetic reactions, particularly no-carrier-added nucleophilic radiofluorination reactions for PET radiotracer production. The apparatus includes an optical reaction cell and a source of broadband infrared radiant energy, which permits direct coupling of the emitted radiant energy with the reaction medium to heat the reaction medium. Preferably, the apparatus includes means for focusing the emitted radiant energy into the reaction cell, and the reaction cell itself is preferably configured to reflect transmitted radiant energy back into the reaction medium to further improve the efficiency of the apparatus. The apparatus is well suited to the production of high-yield syntheses of 2-[¹⁸F]fluoro-2-deoxy-D-glucose. Also provided is a method for performing organic synthetic reactions, including the manufacture of [¹⁸F]-labeled compounds useful as PET radiotracers, and particularly for the preparation of 2-[¹⁸F]fluoro-2-deoxy-D-glucose in higher yields than previously possible.

  14. Optical reaction cell and light source for [18F] fluoride radiotracer synthesis

    DOEpatents

    Ferrieri, R.A.; Schlyer, D.; Becker, R.J.

    1998-09-15

    An apparatus is disclosed for performing organic synthetic reactions, particularly no-carrier-added nucleophilic radiofluorination reactions for PET radiotracer production. The apparatus includes an optical reaction cell and a source of broadband infrared radiant energy, which permits direct coupling of the emitted radiant energy with the reaction medium to heat the reaction medium. Preferably, the apparatus includes means for focusing the emitted radiant energy into the reaction cell, and the reaction cell itself is preferably configured to reflect transmitted radiant energy back into the reaction medium to further improve the efficiency of the apparatus. The apparatus is well suited to the production of high-yield syntheses of 2-[¹⁸F]fluoro-2-deoxy-D-glucose. Also provided is a method for performing organic synthetic reactions, including the manufacture of [¹⁸F]-labeled compounds useful as PET radiotracers, and particularly for the preparation of 2-[¹⁸F]fluoro-2-deoxy-D-glucose in higher yields than previously possible. 4 figs.

  15. Defining Constellation Suit Helmet Field of View Requirements Employing a Mission Segment Based Reduction Process

    NASA Technical Reports Server (NTRS)

    McFarland, Shane M.

    2008-01-01

    Field of view has always been a design feature paramount to helmet design, and in particular space suit design, where the helmet must provide an adequate field of view for a large range of activities, environments, and body positions. For Project Constellation, a slightly different approach to helmet requirement maturation was utilized; one that was less a direct function of body position and suit pressure and more a function of the mission segment in which the field of view is required. Through taxonomization of the various parameters that affect suited FOV, as well as consideration of possible nominal and contingency operations during each mission segment, a reduction process was able to condense the large number of possible outcomes to only six unique field of view angle requirements that still captured all necessary variables without sacrificing fidelity. The specific field of view angles were defined by considering mission segment activities, historical performance of other suits, comparison between similar requirements (pressure visor up versus down, etc.), estimated requirements from other teams for field of view (Orion, Altair, EVA), previous field of view tests, medical data for shirtsleeve field of view performance, and mapping of visual field data to generate 45-degree off-axis field of view requirements. Full resolution of several specific field of view angle requirements warranted further work, which consisted of low- and medium-fidelity field of view testing in the rear-entry I-Suit and DO27 helmet prototype. This paper serves to document this reduction process and the follow-up testing employed to write the Constellation requirements for helmet field of view.

  16. Delta-ray Production in MCNP 6.2.0

    DOE PAGES

    Anderson, Casey Alan; McKinney, Gregg Walter; Tutt, James Robert; ...

    2017-10-26

    Secondary electrons in the form of delta-rays, also referred to as knock-on electrons, have been a feature of MCNP for electron and positron transport for over 20 years. While MCNP6 now includes transport for a suite of heavy ions and charged particles from its integration with MCNPX, the production of delta-rays was still limited to electron and positron transport. In the newest release of MCNP6, version 6.2.0, delta-ray production has now been extended to all energetic charged particles. The basis of this production is the analytical formulation from Rossi and ICRU Report 37. This paper discusses the MCNP6 heavy charged-particle implementation and provides production results for several benchmark/test problems.
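
    As assumed background (quoted from standard references such as the Particle Data Group review of the passage of particles through matter, not from the MCNP6 implementation itself), the Rossi-type expression for the number of delta-rays of kinetic energy T produced per unit path length by a heavy charged particle of charge z and speed βc in a medium of atomic number Z and mass number A has the form

      \[
      \frac{d^2 N}{dT\,dx}
          = \frac{1}{2}\, K z^2 \,\frac{Z}{A}\, \frac{1}{\beta^2}\, \frac{F(T)}{T^2},
      \qquad I \ll T \le T_{\max},
      \]

    where K ≈ 0.307 MeV cm² mol⁻¹, T_max is the maximum energy transferable to a free electron in a single collision, and F(T) → 1 − β²T/T_max for spin-0 projectiles.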

  17. An Absorbing Boundary Condition for the Lattice Boltzmann Method Based on the Perfectly Matched Layer

    PubMed Central

    Najafi-Yazdi, A.; Mongeau, L.

    2012-01-01

    The Lattice Boltzmann Method (LBM) is a well established computational tool for fluid flow simulations. This method has been recently utilized for low Mach number computational aeroacoustics. Robust and nonreflective boundary conditions, similar to those used in Navier-Stokes solvers, are needed for LBM-based aeroacoustics simulations. The goal of the present study was to develop an absorbing boundary condition based on the perfectly matched layer (PML) concept for LBM. The derivation of formulations for both two and three dimensional problems are presented. The macroscopic behavior of the new formulation is discussed. The new formulation was tested using benchmark acoustic problems. The perfectly matched layer concept appears to be very well suited for LBM, and yielded very low acoustic reflection factor. PMID:23526050

  18. Using Coronal Hole Maps to Constrain MHD Models

    NASA Astrophysics Data System (ADS)

    Caplan, Ronald M.; Downs, Cooper; Linker, Jon A.; Mikic, Zoran

    2017-08-01

    In this presentation, we explore the use of coronal hole maps (CHMs) as a constraint for thermodynamic MHD models of the solar corona. Using our EUV2CHM software suite (predsci.com/chd), we construct CHMs from SDO/AIA 193Å and STEREO-A/EUVI 195Å images for multiple Carrington rotations leading up to the August 21st, 2017 total solar eclipse. We then construct synoptic CHMs from synthetic EUV images generated from global thermodynamic MHD simulations of the corona for each rotation. Comparisons of apparent coronal hole boundaries and estimates of the net open flux are used to benchmark and constrain our MHD model leading up to the eclipse. Specifically, the comparisons are used to find optimal parameterizations of our wave turbulence dissipation (WTD) coronal heating model.

  19. Counterpropagating Radiative Shock Experiments on the Orion Laser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki-Vidal, F.; Clayson, T.; Stehlé, C.

    We present new experiments to study the formation of radiative shocks and the interaction between two counterpropagating radiative shocks. The experiments are performed at the Orion laser facility, which is used to drive shocks in xenon inside large aspect ratio gas cells. The collision between the two shocks and their respective radiative precursors, combined with the formation of inherently three-dimensional shocks, provides a novel platform particularly suited for the benchmarking of numerical codes. The dynamics of the shocks before and after the collision are investigated using point-projection x-ray backlighting while, simultaneously, the electron density in the radiative precursor was measured via optical laser interferometry. Modeling of the experiments using the 2D radiation hydrodynamic codes nym and petra shows very good agreement with the experimental results.

  20. Land Ice Verification and Validation Kit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-07-15

    To address a pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice-sheet models is underway. The associated verification and validation process of these models is being coordinated through a new, robust, python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVV). This release provides robust and automated verification and a performance evaluation on LCF platforms. The performance V&V involves a comprehensive comparison of model performance relative to expected behavior on a given computing platform. LIVV operates on a set of benchmark and test data, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-4-bit evaluation, and plots of tests where differences occur.

  1. A theoretical and experimental benchmark study of core-excited states in nitrogen

    NASA Astrophysics Data System (ADS)

    Myhre, Rolf H.; Wolf, Thomas J. A.; Cheng, Lan; Nandi, Saikat; Coriani, Sonia; Gühr, Markus; Koch, Henrik

    2018-02-01

    The high resolution near edge X-ray absorption fine structure spectrum of nitrogen displays the vibrational structure of the core-excited states. This makes nitrogen well suited for assessing the accuracy of different electronic structure methods for core excitations. We report high resolution experimental measurements performed at the SOLEIL synchrotron facility. These are compared with theoretical spectra calculated using coupled cluster theory and algebraic diagrammatic construction theory. The coupled cluster singles and doubles with perturbative triples model known as CC3 is shown to accurately reproduce the experimental excitation energies as well as the spacing of the vibrational transitions. The computational results are also shown to be systematically improved within the coupled cluster hierarchy, with the coupled cluster singles, doubles, triples, and quadruples method faithfully reproducing the experimental vibrational structure.

  2. Counterpropagating Radiative Shock Experiments on the Orion Laser.

    PubMed

    Suzuki-Vidal, F; Clayson, T; Stehlé, C; Swadling, G F; Foster, J M; Skidmore, J; Graham, P; Burdiak, G C; Lebedev, S V; Chaulagain, U; Singh, R L; Gumbrell, E T; Patankar, S; Spindloe, C; Larour, J; Kozlova, M; Rodriguez, R; Gil, J M; Espinosa, G; Velarde, P; Danson, C

    2017-08-04

    We present new experiments to study the formation of radiative shocks and the interaction between two counterpropagating radiative shocks. The experiments are performed at the Orion laser facility, which is used to drive shocks in xenon inside large aspect ratio gas cells. The collision between the two shocks and their respective radiative precursors, combined with the formation of inherently three-dimensional shocks, provides a novel platform particularly suited for the benchmarking of numerical codes. The dynamics of the shocks before and after the collision are investigated using point-projection x-ray backlighting while, simultaneously, the electron density in the radiative precursor was measured via optical laser interferometry. Modeling of the experiments using the 2D radiation hydrodynamic codes nym and petra shows very good agreement with the experimental results.

  3. Counterpropagating Radiative Shock Experiments on the Orion Laser

    DOE PAGES

    Suzuki-Vidal, F.; Clayson, T.; Stehlé, C.; ...

    2017-08-02

    We present new experiments to study the formation of radiative shocks and the interaction between two counterpropagating radiative shocks. The experiments are performed at the Orion laser facility, which is used to drive shocks in xenon inside large aspect ratio gas cells. The collision between the two shocks and their respective radiative precursors, combined with the formation of inherently three-dimensional shocks, provides a novel platform particularly suited for the benchmarking of numerical codes. The dynamics of the shocks before and after the collision are investigated using point-projection x-ray backlighting while, simultaneously, the electron density in the radiative precursor was measured via optical laser interferometry. Modeling of the experiments using the 2D radiation hydrodynamic codes nym and petra shows very good agreement with the experimental results.

  4. Software Support for Transiently Powered Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Der Woude, Joel Matthew

    With the continued reduction in size and cost of computing, power becomes an increasingly heavy burden on system designers for embedded applications. While energy harvesting techniques are an increasingly desirable solution for many deeply embedded applications where size and lifetime are a priority, previous work has shown that energy harvesting provides insufficient power for long running computation. We present Ratchet, which to the authors' knowledge is the first automatic, software-only checkpointing system for energy harvesting platforms. We show that Ratchet provides a means to extend computation across power cycles, consistent with those experienced by energy harvesting devices. We demonstrate the correctness of our system under frequent failures and show that it has an average overhead of 58.9% across a suite of benchmarks representative of embedded applications.

  5. Two-Dimensional Homogeneous Fermi Gases

    NASA Astrophysics Data System (ADS)

    Hueck, Klaus; Luick, Niclas; Sobirey, Lennart; Siegl, Jonas; Lompe, Thomas; Moritz, Henning

    2018-02-01

    We report on the experimental realization of homogeneous two-dimensional (2D) Fermi gases trapped in a box potential. In contrast to harmonically trapped gases, these homogeneous 2D systems are ideally suited to probe local as well as nonlocal properties of strongly interacting many-body systems. As a first benchmark experiment, we use a local probe to measure the density of a noninteracting 2D Fermi gas as a function of the chemical potential and find excellent agreement with the corresponding equation of state. We then perform matter wave focusing to extract the momentum distribution of the system and directly observe Pauli blocking in a near unity occupation of momentum states. Finally, we measure the momentum distribution of an interacting homogeneous 2D gas in the crossover between attractively interacting fermions and bosonic dimers.
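
    For context, the equation of state against which such a density measurement can be compared is, for a noninteracting 2D Fermi gas (per spin state), the textbook result quoted here as assumed background rather than from the paper:

      \[
      n(\mu, T) = \frac{m k_B T}{2\pi \hbar^2} \,\ln\!\left( 1 + e^{\mu / k_B T} \right)
      \;\xrightarrow{\;T \to 0\;}\; \frac{m\,\mu}{2\pi \hbar^2} \quad (\mu > 0).
      \]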

  6. Neoarchean crustal growth and Paleoproterozoic reworking in the Borborema Province, NE Brazil: Insights from geochemical and isotopic data of TTG and metagranitic rocks of the Alto Moxotó Terrane

    NASA Astrophysics Data System (ADS)

    Montefalco de Lira Santos, Lauro Cézar; Dantas, Elton Luiz; Cawood, Peter A.; José dos Santos, Edilton; Fuck, Reinhardt A.

    2017-11-01

    Pre-Brasiliano rocks in the Borborema Province (NE Brazil) are concentrated in basement blocks, such as the Alto Moxotó Terrane. Petrographic, geochemical, and U-Pb and Sm-Nd isotopic data from two basement metagranitic suites within the terrane provide evidence for Neoarchean (2.6 Ga) and Paleoproterozoic (2.1 Ga) subduction-related events. The Riacho das Lajes Suite is made of medium- to coarse-grained hornblende- and biotite-bearing metatonalites and metamonzogranites. Whole-rock geochemical data indicate that these rocks represent calcic, magnesian and meta- to peraluminous magmas, and have unequivocal affinities with high-Al low-REE tonalite-trondhjemite-granodiorites (TTG). Zircon U-Pb data from two samples of this suite indicate that they were emplaced at 2.6 Ga, representing the first Archean crust discovered in the central portion of the province. The suite has Neoarchean depleted mantle model ages (TDM) and slightly negative to positive εNd(t), indicating slight crustal contamination. The overall geochemical and isotopic data indicate a Neoarchean intraoceanic setting for genesis of the Riacho das Lajes magma via melting of basaltic oceanic crust subjected to high-pressure eclogite facies conditions. On the other hand, the Floresta Suite comprises metaigneous rocks, which are mostly tonalitic and granodioritic in composition. Geochemical data indicate that this suite shares similarities with calcic to calc-alkalic magmas with magnesian and metaluminous to slightly peraluminous characteristics. Other geochemical features include anomalous Ni, V and Cr contents, as well as high large-ion lithophile element (LILE) values. The suite yields U-Pb zircon ages of approximately 2.1 Ga, Archean to Paleoproterozoic TDM ages, and negative to positive εNd(t) values, suggesting both new crust formation and reworking of Archean crust, in addition to mantle metasomatism, reflecting mixed sources. The most likely tectonic setting for the Floresta Suite magmas involved crustal thickening by terrane accretion, coeval with slab break-off. Our results provide new insights on proto-Western Gondwana crustal evolution.

  7. Grassland futures in Great Britain - Productivity assessment and scenarios for land use change opportunities.

    PubMed

    Qi, Aiming; Holland, Robert A; Taylor, Gail; Richter, Goetz M

    2018-09-01

    To optimise trade-offs provided by future changes in grassland use intensity, spatially and temporally explicit estimates of the respective grassland productivities are required at the systems level. Here, we benchmark the potential national availability of grassland biomass, identify optimal strategies for its management, and investigate the relative importance of intensification over reversion (prioritising productivity versus environmental ecosystem services). Process-conservative meta-models for different grasslands were used to calculate the baseline dry matter yields (DMY; 1961-1990) at 1 km² resolution for the whole UK. The effects of climate change, rising atmospheric [CO₂] and technological progress on baseline DMYs were used to estimate future grassland productivities (up to 2050) for the low and medium CO₂ emission scenarios of UKCP09. UK benchmark productivities of 12.5, 8.7 and 2.8 t/ha on temporary, permanent and rough-grazing grassland, respectively, accounted for productivity gains by 2010. By 2050, productivities under the medium emission scenario are predicted to increase to 15.5 and 9.8 t/ha on temporary and permanent grassland, respectively, but not on rough grassland. Based on surveyed grassland distributions for Great Britain in 2010, the annual availability of grassland biomass is likely to rise from 64 to 72 million tonnes by 2050. Assuming optimal N application could close existing productivity gaps of ca. 40%, a range of management options could deliver an additional 21×10⁶ tonnes of biomass available for bioenergy. Scenarios of changes in grassland use intensity demonstrated considerable scope for maintaining or further increasing grassland production and sparing some grassland for the provision of environmental ecosystem services. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
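
    A minimal sketch of the area-weighted aggregation behind national totals of this kind is given below. Only the per-hectare baseline productivities are taken from the abstract; the grassland areas and the uniform 40% gap closure are hypothetical placeholders.

      # Hypothetical sketch of an area-weighted national biomass total.
      # Productivities (t/ha) are the baseline values quoted in the abstract;
      # the grassland areas (Mha) below are placeholders, not surveyed figures.

      productivity_t_per_ha = {"temporary": 12.5, "permanent": 8.7, "rough": 2.8}
      area_mha = {"temporary": 1.2, "permanent": 5.0, "rough": 4.0}   # assumed areas

      total_mt = sum(productivity_t_per_ha[g] * area_mha[g] for g in productivity_t_per_ha)
      print(f"national grassland biomass ~ {total_mt:.1f} million tonnes/year")

      # Closing an assumed 40% productivity gap on managed grassland only:
      gap_fraction = 0.40
      extra_mt = sum(productivity_t_per_ha[g] * area_mha[g] * gap_fraction
                     for g in ("temporary", "permanent"))
      print(f"additional biomass if the gap were closed ~ {extra_mt:.1f} million tonnes/year")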

  8. Review of NASA short-haul studies

    NASA Technical Reports Server (NTRS)

    Kenyon, G. C.

    1975-01-01

    The paper summarizes the results of NASA-conducted technological and economic studies of low, medium, and high density short-haul transportation systems. Aircraft concepts considered included CTOL, RTOL, STOL, and general aviation aircraft. For low density systems, it was found that viable air service becomes possible if city pairs are at least 100 km apart and a two-way total travel demand of at least 200 daily passengers exists. Currently available aircraft were found suitable. The medium-density study showed that a 60-passenger twin engine turbofan was the best suited aircraft. For high density systems, STOL appears to be an economically viable means of reducing noise and congestion at major hub airports. Adequate runways 914 m in length or greater either already exist or could be added to most existing major hub airports.

  9. RF Wave Simulation Using the MFEM Open Source FEM Package

    NASA Astrophysics Data System (ADS)

    Stillerman, J.; Shiraiwa, S.; Bonoli, P. T.; Wright, J. C.; Green, D. L.; Kolev, T.

    2016-10-01

    A new plasma wave simulation environment based on the finite element method is presented. MFEM, a scalable open-source FEM library, is used as the basis for this capability. MFEM allows for assembling an FEM matrix of arbitrarily high order in a parallel computing environment. A 3D frequency domain RF physics layer was implemented using a python wrapper for MFEM and a cold collisional plasma model was ported. This physics layer allows for defining the plasma RF wave simulation model without user knowledge of the FEM weak-form formulation. A graphical user interface is built on πScope, a python-based scientific workbench, such that a user can build a model definition file interactively. Benchmark cases have been ported to this new environment, with results being consistent with those obtained using COMSOL multiphysics, GENRAY, and TORIC/TORLH spectral solvers. This work is a first step in bringing to bear the sophisticated computational tool suite that MFEM provides (e.g., adaptive mesh refinement, solver suite, element types) to the linear plasma-wave interaction problem, and within more complicated integrated workflows, such as coupling with core spectral solver, or incorporating additional physics such as an RF sheath potential model or kinetic effects. USDoE Awards DE-FC02-99ER54512, DE-FC02-01ER54648.

  10. Seismic anisotropy of the crust: electron-backscatter diffraction measurements from the Basin and Range

    NASA Astrophysics Data System (ADS)

    Erdman, Monica E.; Hacker, Bradley R.; Zandt, George; Seward, Gareth

    2013-11-01

    Crystal preferred orientations were measured in a suite of rocks from three locations in the Basin and Range using electron-backscatter diffraction. Anisotropic velocities were calculated for all rocks using single-crystal stiffnesses, the Christoffel equation and Voigt-Reuss-Hill averaging. Anisotropic velocities were calculated for all three crustal sections using these values combined with rock proportions as exposed in the field. One suite of rocks previously measured in the laboratory was used as a benchmark to evaluate the accuracy of the calculated velocities. Differences in the seismic anisotropy of the Funeral Mountains, Ruby Mountains and East Humboldt Range sections arise because of differences in mineralogy and strain, with the calc-silicate dominated Ruby Mountains section having higher P-wave speeds and VP/VS ratios because of the reduced quartz content. In all cases, the velocities show either transverse isotropy or nearly so, with a unique slow axis normal to the foliation. Velocity anisotropy can thus be used to infer the flow plane, but not the flow direction in typical crustal rocks. Areas with a subhorizontal foliation have minimal shear wave splitting for vertically propagating waves and are thus good places to measure mantle anisotropy using SKS-splitting.
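
    A minimal sketch of the velocity calculation described above: build the Christoffel matrix Γ_ik = C_ijkl n_j n_l for a propagation direction n and take its eigenvalues ρv². The isotropic stiffness tensor below is only a placeholder to exercise the code; the study itself uses single-crystal stiffnesses averaged over the measured crystal preferred orientations.

      import numpy as np

      def christoffel_velocities(C, n, rho):
          """Phase velocities along unit direction n from a 3x3x3x3 stiffness
          tensor C (GPa) and density rho (g/cm^3) via the Christoffel equation."""
          n = np.asarray(n, float) / np.linalg.norm(n)
          gamma = np.einsum("ijkl,j,l->ik", C, n, n)   # Christoffel matrix
          eigvals = np.linalg.eigvalsh(gamma)          # rho * v^2, sorted ascending
          return np.sqrt(eigvals / rho)                # [vS2, vS1, vP] in km/s for these units

      # Placeholder: isotropic tensor from Lame constants (GPa), just to test the routine.
      lam, mu, rho = 60.0, 30.0, 2.7
      delta = np.eye(3)
      C = (lam * np.einsum("ij,kl->ijkl", delta, delta)
           + mu * (np.einsum("ik,jl->ijkl", delta, delta)
                   + np.einsum("il,jk->ijkl", delta, delta)))

      print(christoffel_velocities(C, [0.0, 0.0, 1.0], rho))   # ~[3.3, 3.3, 6.7] km/s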

  11. Beaming Electricity via Relay Satellites in Support of Deployed Combat Forces

    DTIC Science & Technology

    2012-09-01

    Power kHz Kilohertz km Kilometer kW Kilowatt kW/h Kilowatt/hour LEO Low Earth Orbit MEO Medium Earth Orbit MW Megawatt RF Radio Frequency STK ...using the Satellite Tool Kit (STK) software suite. D. CHAPTER SUMMARY 1. Chapter II - Background This chapter contains background information to...are modeled using STK. The results of those models are presented. A description of how each model was developed is provided, followed

  12. Far-Field Lorenz-Mie Scattering in an Absorbing Host Medium: Theoretical Formalism and FORTRAN Program

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Yang, Ping

    2018-01-01

    In this paper we make practical use of the recently developed first-principles approach to electromagnetic scattering by particles immersed in an unbounded absorbing host medium. Specifically, we introduce an actual computational tool for the calculation of pertinent far-field optical observables in the context of the classical Lorenz-Mie theory. The paper summarizes the relevant theoretical formalism, explains various aspects of the corresponding numerical algorithm, specifies the input and output parameters of a FORTRAN program available at https://www.giss.nasa.gov/staff/mmishchenko/Lorenz-Mie.html, and tabulates benchmark results useful for testing purposes. This public-domain FORTRAN program enables one to solve the following two important problems: (i) simulate theoretically the reading of a remote well-collimated radiometer measuring electromagnetic scattering by an individual spherical particle or a small random group of spherical particles; and (ii) compute the single-scattering parameters that enter the vector radiative transfer equation derived directly from the Maxwell equations.

  13. Discontinuous finite element method for vector radiative transfer

    NASA Astrophysics Data System (ADS)

    Wang, Cun-Hai; Yi, Hong-Liang; Tan, He-Ping

    2017-03-01

    The discontinuous finite element method (DFEM) is applied to solve the vector radiative transfer in participating media. The derivation in a discrete form of the vector radiation governing equations is presented, in which the angular space is discretized by the discrete-ordinates approach with a local refined modification, and the spatial domain is discretized into finite non-overlapped discontinuous elements. The elements in the whole solution domain are connected by modelling the boundary numerical flux between adjacent elements, which makes the DFEM numerically stable for solving radiative transfer equations. Several various problems of vector radiative transfer are tested to verify the performance of the developed DFEM, including vector radiative transfer in a one-dimensional parallel slab containing a Mie/Rayleigh/strong forward scattering medium and a two-dimensional square medium. The fact that DFEM results agree very well with the benchmark solutions in published references shows that the developed DFEM in this paper is accurate and effective for solving vector radiative transfer problems.

  14. Carbon footprint analysis as a tool for energy and environmental management in small and medium-sized enterprises

    NASA Astrophysics Data System (ADS)

    Giama, E.; Papadopoulos, A. M.

    2018-01-01

    The reduction of carbon emissions has become a top priority in the decision-making process for governments and companies, the strict European legislation framework being a major driving force behind this effort. On the other hand, many companies face difficulties in estimating their footprint and in linking the results derived from environmental evaluation processes with an integrated energy management strategy, which will eventually lead to energy-efficient and cost-effective solutions. The paper highlights the need for companies to establish integrated environmental management practices, with tools such as carbon footprint analysis to monitor the energy performance of production processes. Concepts and methods are analysed, and selected indicators are presented by means of benchmarking, monitoring and reporting the results so that they can be used effectively by the companies. The study is based on data from more than 90 Greek small and medium enterprises, followed by a comprehensive discussion of cost-effective and realistic energy-saving measures.

  15. Far-field Lorenz-Mie scattering in an absorbing host medium: Theoretical formalism and FORTRAN program

    NASA Astrophysics Data System (ADS)

    Mishchenko, Michael I.; Yang, Ping

    2018-01-01

    In this paper we make practical use of the recently developed first-principles approach to electromagnetic scattering by particles immersed in an unbounded absorbing host medium. Specifically, we introduce an actual computational tool for the calculation of pertinent far-field optical observables in the context of the classical Lorenz-Mie theory. The paper summarizes the relevant theoretical formalism, explains various aspects of the corresponding numerical algorithm, specifies the input and output parameters of a FORTRAN program available at https://www.giss.nasa.gov/staff/mmishchenko/Lorenz-Mie.html, and tabulates benchmark results useful for testing purposes. This public-domain FORTRAN program enables one to solve the following two important problems: (i) simulate theoretically the reading of a remote well-collimated radiometer measuring electromagnetic scattering by an individual spherical particle or a small random group of spherical particles; and (ii) compute the single-scattering parameters that enter the vector radiative transfer equation derived directly from the Maxwell equations.

  16. Indentation theory on a half-space of transversely isotropic multi-ferroic composite medium: sliding friction effect

    NASA Astrophysics Data System (ADS)

    Wu, F.; Wu, T.-H.; Li, X.-Y.

    2018-03-01

    This article aims to present a systematic indentation theory on a half-space of multi-ferroic composite medium with transverse isotropy. The effect of sliding friction between the indenter and substrate is taken into account. The cylindrical flat-ended indenter is assumed to be electrically/magnetically conducting or insulating, which leads to four sets of mixed boundary-value problems. The indentation forces in the normal and tangential directions are related to the Coulomb friction law. For each case, the integral equations governing the contact behavior are developed by means of the generalized method of potential theory, and the corresponding coupling field is obtained in terms of elementary functions. The effect of sliding on the contact behavior is investigated. Finite element method (FEM) in the context of magneto-electro-elasticity is developed to discuss the validity of the analytical solutions. The obtained analytical solutions may serve as benchmarks to various simplified analyses and numerical codes and as a guide for future experimental studies.

  17. An Approach for Performance Assessments of Extravehicular Activity Gloves

    NASA Technical Reports Server (NTRS)

    Aitchison, Lindsay; Benson, Elizabeth

    2014-01-01

    The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, and accepted the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit, and to establish standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of two sets of prototype EVA gloves developed by ILC Dover and David Clark Company as compared to the Phase VI. Both companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and to focus their efforts on improving mobility in the metacarpal phalangeal and carpometacarpal joints. Four test subjects representing the design-to hand anthropometry completed range of motion, grip/pinch strength, dexterity, and fit evaluations for each glove design in pressurized conditions, with and without thermal micrometeoroid garments (TMG) installed. This paper provides a detailed description of the hardware and test methodologies used and lessons learned.

  18. Organic waste compounds in streams: Occurrence and aquatic toxicity in different stream compartments, flow regimes, and land uses in southeast Wisconsin, 2006–9

    USGS Publications Warehouse

    Baldwin, Austin K.; Corsi, Steven R.; Richards, Kevin D.; Geis, Steven W.; Magruder, Christopher

    2013-01-01

    An assessment of organic chemicals and aquatic toxicity in streams located near Milwaukee, Wisconsin, indicated high potential for adverse impacts on aquatic organisms that could be related to organic waste compounds (OWCs). OWCs used in agriculture, industry, and households make their way into surface waters through runoff, leaking septic-conveyance systems, regulated and unregulated discharges, and combined sewage overflows, among other sources. Many of these compounds are toxic at elevated concentrations and (or) known to have endocrine-disrupting potential, and often they occur as complex mixtures. There is still much to be learned about the chronic exposure effects of these compounds on aquatic populations. During 2006–9, the U.S. Geological Survey, in cooperation with the Milwaukee Metropolitan Sewerage District (MMSD), conducted a study to determine the occurrence and potential toxicity of OWCs in different stream compartments and flow regimes for streams in the Milwaukee area. Samples were collected at 17 sites and analyzed for a suite of 69 OWCs. Three types of stream compartments were represented: water column, streambed pore water, and streambed sediment. Water-column samples were subdivided by flow regime into stormflow and base-flow samples. One or more compounds were detected in all 196 samples collected, and 64 of the 69 compounds were detected at least once. Base-flow samples had the lowest detection rates, with a median of 12 compounds detected per sample. Median detection rates for stormflow, pore-water, and sediment samples were more than double that of base-flow samples. Compounds with the highest detection rates include polycyclic aromatic hydrocarbons (PAHs), insecticides, herbicides, and dyes/pigments. Elevated occurrence and concentrations of some compounds were detected in samples from urban sites, as compared with more rural sites, especially during stormflow conditions. These include the PAHs and the domestic waste-water-indicator compounds, among others. Urban runoff and storm-related leaks of sanitary sewers and (or) septic systems may be important sources of these and other compounds to the streams. The Kinnickinnic River, a highly urbanized site, had the highest detection rates and concentrations of compounds of all the sampled sites. The Milwaukee River near Cedarburg—one of the least urban sites—and the Outer Milwaukee Harbor site had the lowest detection rates and concentrations. Aquatic-toxicity benchmarks were exceeded for 12 of the 25 compounds with known benchmarks. The compounds with the greatest benchmark exceedances were the PAHs, both in terms of exceedance frequency (up to 93 percent for some compounds in sediment samples) and magnitude (concentrations up to 1,024 times greater than the benchmark value). Other compounds with toxicity-benchmark exceedances include Bis(2-ethylhexyl) phthalate (a plasticizer), 2-Methylnaphthalene (a component of fuel and oil), phenol (an antimicrobial disinfectant with diverse uses), and 4-Nonylphenol (sum of all isomers; a detergent metabolite, among other uses). Analyzed as a mixture, the suite of PAH compounds was found to be potentially toxic for most non-base-flow samples. Bioassay tests were conducted on samples from 14 streams: Ceriodaphnia dubia in base-flow samples, Ceriodaphnia dubia and Hyalella azteca in pore-water samples, and Hyalella azteca and Chironomus tentans in sediment samples. The greatest adverse effect was observed in tests with Chironomus tentans from sediment samples.
The weight of Chironomus tentans after exposure to sediments decreased with increased OWC concentrations. This was most evident in the relation between PAH results and Chironomus tentans bioassay results for the majority of samples; however, solvents and flame retardants appeared to be important for one site each. These results for PAHs were consistent with assessment of PAH potency factors for sediment, indicating that PAHs were likely to have adverse effects on aquatic organisms in many of the streams studied.

  19. Comparison of thermal insulation performance of fibrous materials for the advanced space suit.

    PubMed

    Paul, Heather L; Diller, Kenneth R

    2003-10-01

    The current multi-layer insulation used in the extravehicular mobility unit (EMU) will not be effective in the atmosphere of Mars due to the presence of interstitial gases. Alternative thermal insulation means have been subjected to preliminary evaluation by NASA to attempt to identify a material that will meet the target conductivity of 0.005 W/m-K. This study analyzes numerically the thermal conductivity performance for three of these candidate insulating fiber materials in terms of various denier (size), interstitial void fractions, interstitial void media, and orientations to the applied temperature gradient to evaluate their applicability for the new Mars suit insulation. The results demonstrate that the best conductive insulation is achieved for a high-void-fraction configuration with a grooved fiber cross section, aerogel void medium, and the fibers oriented normal to the heat flux vector. However, this configuration still exceeds the target thermal conductivity by a factor of 1.5.

  20. Continental Deformation in Madagascar from GNSS Observations

    NASA Astrophysics Data System (ADS)

    Stamps, D. S.; Rajaonarison, T.; Rambolamanana, G.; Herimitsinjo, N.; Carrillo, R.; Jesmok, G.

    2015-12-01

    Madagascar is the easternmost continental segment of the East African Rift System (EARS). Plate reconstructions assume the continental island behaves as a rigid block, but studies of geologically recent kinematics suggest Madagascar undergoes extension related to the broader EARS. In this work we test for rigidity of Madagascar in two steps. First, we quantify surface motions using a novel dataset of episodic and continuous GNSS observations that span Madagascar from north to south. We established a countrywide network of precision benchmarks fixed in bedrock and with open skyview in 2010 that we measured for 48-72 hours with dual frequency receivers. The benchmarks were remeasured in 2012 and 2014. We processed the episodic GNSS data with ABPO, the only continuous GNSS station in Madagascar with >2.5 years of data, for millimeter precision positions and velocities at 7 locations using GAMIT-GLOBK. Our velocity field shows 2 mm/yr of differential motion between southern and northern Madagascar. Second, we test a suite of kinematic predictions from previous studies and find residual velocities are greater than 95% uncertainties. We also calculate angular velocity vectors assuming Madagascar moves with the Lwandle plate or the Somalian plate. Our new velocity field in Madagascar is inconsistent with all models that assume plate rigidity at the 95% uncertainty level; this result indicates the continental island undergoes statistically significant internal deformation.
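
    A minimal sketch of the rigid-plate test described above: predict the surface velocity at a site from an angular velocity (Euler) vector via v = ω × r and subtract it from the observed GNSS velocity. The pole parameters, site coordinates, and observed velocity below are placeholders, not the values used in the study.

      import numpy as np

      R_EARTH = 6371e3  # mean Earth radius, m

      def site_xyz(lat_deg, lon_deg):
          """Unit position vector of a site on a spherical Earth."""
          lat, lon = np.radians([lat_deg, lon_deg])
          return np.array([np.cos(lat) * np.cos(lon),
                           np.cos(lat) * np.sin(lon),
                           np.sin(lat)])

      def plate_velocity_enu(lat_deg, lon_deg, pole_lat, pole_lon, rate_deg_per_myr):
          """Predicted horizontal velocity (east, north) in mm/yr from v = omega x r."""
          omega = np.radians(rate_deg_per_myr) / 1e6 * site_xyz(pole_lat, pole_lon)  # rad/yr
          r = R_EARTH * site_xyz(lat_deg, lon_deg)                                   # m
          v = np.cross(omega, r) * 1e3                                               # mm/yr, ECEF
          # Project the ECEF velocity onto the local east/north unit vectors.
          lat, lon = np.radians([lat_deg, lon_deg])
          east = np.array([-np.sin(lon), np.cos(lon), 0.0])
          north = np.array([-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)])
          return np.array([v @ east, v @ north])

      # Placeholder pole (not a published value) and a site near central Madagascar.
      v_pred = plate_velocity_enu(-19.0, 47.2, pole_lat=-27.0, pole_lon=36.0, rate_deg_per_myr=0.32)
      v_obs = np.array([18.0, 14.0])          # hypothetical observed velocity, mm/yr
      print("residual (mm/yr):", v_obs - v_pred)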

  1. An Improved Evolutionary Programming with Voting and Elitist Dispersal Scheme

    NASA Astrophysics Data System (ADS)

    Maity, Sayan; Gunjan, Kumar; Das, Swagatam

    Although initially conceived for evolving finite state machines, Evolutionary Programming (EP), in its present form, is largely used as a powerful real-parameter optimizer. For function optimization, EP mainly relies on its mutation operators. Over the past few years several mutation operators have been proposed to improve the performance of EP on a wide variety of numerical benchmarks. However, unlike real-coded GAs, there has been no fitness-induced bias in parent selection for mutation in EP. That means the i-th population member is selected deterministically for mutation and creation of the i-th offspring in each generation. In this article we present an improved EP variant called Evolutionary Programming with Voting and Elitist Dispersal (EPVE). The scheme encompasses a voting process that not only gives importance to the best solutions but also considers solutions that are converging fast. The Elitist Dispersal Scheme maintains elitism by keeping the potential solutions intact while perturbing the remaining solutions, so that they can escape local minima. By applying these two techniques we are able to explore regions that have not been explored so far and that may contain optima. Comparison with recent and best-known versions of EP over 25 benchmark functions from the CEC (Congress on Evolutionary Computation) 2005 test suite for real-parameter optimization reflects the superiority of the new scheme in terms of final accuracy, speed, and robustness.
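
    For orientation, the sketch below shows the classical self-adaptive Gaussian mutation that standard EP relies on, with each parent i deterministically producing offspring i. It does not implement the EPVE voting or elitist-dispersal operators themselves, whose details are specific to the article.

      import numpy as np

      def ep_mutate(population, sigmas, bounds, rng):
          """Classical EP: each parent produces one offspring by Gaussian mutation
          with self-adaptive step sizes (log-normal update of sigma)."""
          n_pop, dim = population.shape
          tau = 1.0 / np.sqrt(2.0 * np.sqrt(dim))     # per-dimension learning rate
          tau_prime = 1.0 / np.sqrt(2.0 * dim)        # global learning rate

          global_noise = rng.standard_normal((n_pop, 1))
          new_sigmas = sigmas * np.exp(tau_prime * global_noise
                                       + tau * rng.standard_normal((n_pop, dim)))
          offspring = population + new_sigmas * rng.standard_normal((n_pop, dim))
          return np.clip(offspring, bounds[0], bounds[1]), new_sigmas

      # Example on a 10-dimensional search space in [-5, 5].
      rng = np.random.default_rng(0)
      pop = rng.uniform(-5, 5, size=(20, 10))
      sig = np.full_like(pop, 3.0)
      children, sig = ep_mutate(pop, sig, (-5, 5), rng)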

  2. What is a food and what is a medicinal product in the European Union? Use of the benchmark dose (BMD) methodology to define a threshold for "pharmacological action".

    PubMed

    Lachenmeier, Dirk W; Steffen, Christian; el-Atma, Oliver; Maixner, Sibylle; Löbell-Behrends, Sigrid; Kohl-Himmelseher, Matthias

    2012-11-01

    The decision criterion for the demarcation between foods and medicinal products in the EU is the significant "pharmacological action". Based on six examples of substances with ambivalent status, the benchmark dose (BMD) method is evaluated to provide a threshold for pharmacological action. Using significant dose-response models from literature clinical trial data or epidemiology, the BMD values were 63 mg/day for caffeine, 5 g/day for alcohol, 6 mg/day for lovastatin, 769 mg/day for glucosamine sulfate, 151 mg/day for Ginkgo biloba extract, and 0.4 mg/day for melatonin. The examples of caffeine and alcohol validate the approach because intake above the BMD clearly exhibits pharmacological action. Nevertheless, due to uncertainties in dose-response modelling as well as the need for additional uncertainty factors to account for differences in sensitivity within the human population, a "borderline range" on the dose-response curve remains. "Pharmacological action" has proven to be not very well suited as a binary decision criterion between foods and medicinal products. The European legislator should rethink the definition of medicinal products, as the current situation based on complicated case-by-case decisions on pharmacological action leads to an unregulated market flooded with potentially illegal food supplements. Copyright © 2012 Elsevier Inc. All rights reserved.
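
    A minimal sketch of the benchmark-dose idea: fit a dose-response model to trial or epidemiological data and invert it at a predefined benchmark response. The Hill model, the 10%-of-maximum benchmark response, and the data below are illustrative assumptions, not the models or datasets used by the authors.

      import numpy as np
      from scipy.optimize import curve_fit

      def hill(dose, emax, ed50, h):
          """Hill dose-response model (response as a fraction of the maximal effect)."""
          return emax * dose**h / (ed50**h + dose**h)

      # Hypothetical dose-response data (dose in mg/day, response in arbitrary units).
      dose = np.array([0, 10, 25, 50, 100, 200, 400], float)
      resp = np.array([0.0, 0.05, 0.14, 0.31, 0.55, 0.78, 0.90])

      params, _ = curve_fit(hill, dose, resp, p0=[1.0, 80.0, 1.5])
      emax, ed50, h = params

      # Benchmark dose: the dose at which the fitted response reaches an assumed
      # benchmark response (BMR) of 10% of the maximal effect.
      bmr = 0.10 * emax
      bmd = ed50 * (bmr / (emax - bmr)) ** (1.0 / h)
      print(f"BMD (10% of Emax) ~ {bmd:.1f} mg/day")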

  3. Review and Analysis of Algorithmic Approaches Developed for Prognostics on CMAPSS Dataset

    NASA Technical Reports Server (NTRS)

    Ramasso, Emmanuel; Saxena, Abhinav

    2014-01-01

    Benchmarking of prognostic algorithms has been challenging due to limited availability of common datasets suitable for prognostics. In an attempt to alleviate this problem several benchmarking datasets have been collected by NASA's prognostic center of excellence and made available to the Prognostics and Health Management (PHM) community to allow evaluation and comparison of prognostics algorithms. Among those datasets are five C-MAPSS datasets that have been extremely popular due to their unique characteristics making them suitable for prognostics. The C-MAPSS datasets pose several challenges that have been tackled by different methods in the PHM literature. In particular, management of high variability due to sensor noise, effects of operating conditions, and presence of multiple simultaneous fault modes are some factors that have great impact on the generalization capabilities of prognostics algorithms. More than 70 publications have used the C-MAPSS datasets for developing data-driven prognostic algorithms. The C-MAPSS datasets are also shown to be well-suited for development of new machine learning and pattern recognition tools for several key preprocessing steps such as feature extraction and selection, failure mode assessment, operating conditions assessment, health status estimation, uncertainty management, and prognostics performance evaluation. This paper summarizes a comprehensive literature review of publications using C-MAPSS datasets and provides guidelines and references to further usage of these datasets in a manner that allows clear and consistent comparison between different approaches.

  4. Anthropometry and Biomechanics Facility Presentation to Open EVA Research Forum

    NASA Technical Reports Server (NTRS)

    Rajulu, Sudhakar

    2017-01-01

    NASA is required to accommodate individuals who fall within a 1st to 99th percentile range on a variety of critical dimensions. The hardware the crew interacts with must therefore be designed and verified to allow these selected individuals to complete critical mission tasks safely and at an optimal performance level. Until now, designers have been provided simpler univariate critical dimensional analyses. The multivariate characteristics of intra-individual and inter-individual size variation must be accounted for, since an individual who is 1st percentile in one body dimension will not be 1st percentile in all other dimensions. A more simplistic approach, assuming every measurement of an individual will fall within the same percentile range, can lead to a model that does not represent realistic members of the population. In other words, there is no '1st percentile female' or '99th percentile male', and designing for these unrealistic body types can lead to hardware issues down the road. Furthermore, due to budget considerations, designers are normally limited to providing only 1 size of a prototype suit, thus requiring other possible means to ensure that a given suit architecture would yield the necessary suit sizes to accommodate the entire user population. Fortunately, modeling tools can be used to more accurately model the types of human body sizes and shapes that will be encountered in a population. Anthropometry toolkits have been designed with a variety of capabilities, including grouping the population into clusters based on critical dimensions, providing percentile information given test subject measurements, and listing measurement ranges for critical dimensions in the 1st-99th percentile range. These toolkits can be combined with full body laser scans to allow designers to build human models that better represent the astronaut population. More recently, some rescaling and reposing capabilities have been developed, to allow reshaping of these static laser scans in more representative postures, such as an abducted shoulder. All of the hardware designed for use with the crew must be sized to accommodate the user population, but the interaction between subject size and hardware fit is complicated with multi-component, complex systems like a space suit. Again, prototype suits are normally only provided in a limited size range, and suited testing is an expensive endeavor; both of these factors limit the number and size of people who can be used to benchmark a spacesuit. However, modeling tools for assessing suit-human interaction can allow potential issues to be modeled and visualized. These types of modeling tools can be used for analysis of a larger combination of anthropometries and hardware types than could feasibly be done with actual human subjects and physical mockups.

  5. Trace Element Compositions of Pallasite Olivine Grains and Pallasite Origin

    NASA Technical Reports Server (NTRS)

    Mittlefehldt, David W.; Herrin, J. S.

    2010-01-01

    Pallasites are mixtures of metal with magnesian olivine. Most have similar metal compositions and olivine oxygen isotopic compositions; these are the main-group pallasites (PMG). The Eagle Station grouplet of pallasites (PES) have distinctive metal and olivine compositions and oxygen isotopic compositions. Pallasites are thought to have formed at the core-mantle boundary of their parent asteroids by mixing molten metal with solid olivine of either cumulatic or restitic origin. We have continued our investigation of pallasite olivines by doing in situ trace element analyses in order to further constrain their origin. We determined Al, P, Ca, Ga and first row transition element contents of olivine grains from suite of PMG and PES by LA-ICP-MS at JSC. Included in the PMG suite are some that have anomalous metal compositions (PMG-am) and atypically ferroan olivines (PMG-as). Our EMPA work has shown that there are unanticipated variations in olivine Fe/Mn, even within those PMG that have uni-form Fe/Mg. Manganese is homologous with Fe2+, and thus can be used the same way to investigate magmatic fractionation processes. It has an advantage for pallasite studies in that it is unaffected by redox exchange with the metal. PMG can be divided into three clusters on the basis of Mn/Mg; low, medium and high that can be thought of as less, typically and more fractionated in an igneous sense. The majority of PMG have medium Mn/Mg ratios. PMG-am occur in all three clusters; there does not seem to be any relationship between putative olivine igneous fractionation and metal composition. The PMG-as and one PMG-am make up the high Mn/Mg cluster; no PMG are in this cluster. The high Mn/Mg cluster ought to be the most fractionated (equivalent to the most Fe-rich in igneous suites), yet they have among the lowest contents of incompatible lithophile elements Al and Ti and the two PMG-as in this cluster also have low Ca and Sc contents. This is inconsistent with simple igneous fractionation on a single, initially homogeneous parent asteroid. For Al and Ti, the low and high Mn/Mg clusters have generally uniform contents, while the medium cluster has wide ranges. This is also true of analyses of duplicate grains from the medium cluster pallasites which can have very different Al and Ti contents. Those from the low and high clusters do not. These observations suggest that pallasite olivines are not cumulates, but rather are restites from high degrees of melting. The moderately siderophile elements P and Ga show wide ranges in the high Mn/Mg cluster, but very uniform compositions in the medium cluster, opposite the case for Al and Ti. There is no correlation of P or Ga and Fe/Mn as might be expected if redox processes controlled the contents of moderately siderophile elements in the olivines. The lack of correlation of P could reflect equilibration with phosphates, although there is no correlation of Ca with P as might be expected

  6. Assessment of the municipal solid waste management system in Accra, Ghana: A 'Wasteaware' benchmark indicator approach.

    PubMed

    Oduro-Appiah, Kwaku; Scheinberg, Anne; Mensah, Anthony; Afful, Abraham; Boadu, Henry Kofi; de Vries, Nanne

    2017-11-01

    This article assesses the performance of the city of Accra, Ghana, in municipal solid waste management as defined by the integrated sustainable waste management framework. The article reports on a participatory process to socialise the Wasteaware benchmark indicators and apply them to an upgraded set of data and information. The process has engaged 24 key stakeholders for 9 months, to diagram the flow of materials and benchmark three physical components and three governance aspects of the city's municipal solid waste management system. The results indicate that Accra is well below some other lower middle-income cities regarding sustainable modernisation of solid waste services. Collection coverage and capture of 75% and 53%, respectively, are a disappointing result, despite (or perhaps because of) 20 years of formal private sector involvement in service delivery. A total of 62% of municipal solid waste continues to be disposed of in controlled landfills and the reported recycling rate of 5% indicates both a lack of good measurement and a lack of interest in diverting waste from disposal. Drains, illegal dumps and beaches are choked with discarded bottles and plastic packaging. The quality of collection, disposal and recycling score between low and medium on the Wasteaware indicators, and the scores for user inclusivity, financial sustainability and local institutional coherence are low. The analysis suggests that waste and recycling would improve through greater provider inclusivity, especially the recognition and integration of the informal sector, and interventions that respond to user needs for more inclusive decision-making.

  7. Computer conferencing: Choices and strategies

    NASA Technical Reports Server (NTRS)

    Smith, Jill Y.

    1991-01-01

    Computer conferencing permits meeting through the computer while sharing a common file. The primary advantages of computer conferencing are that participants may (1) meet simultaneously or nonsimultaneously, and (2) contribute across geographic distance and time zones. Due to these features, computer conferencing offers a viable meeting option for distributed business teams. Past research and practice is summarized denoting practical uses of computer conferencing as well as types of meeting activities ill suited to the medium. Additionally, effective team strategies are outlined which maximize the benefits of computer conferencing.

  8. An Investigative Study of Air Force Acquisition Management Work with the Intent of Identifying Its Nature and Required Tools

    DTIC Science & Technology

    1988-09-01

    other languages are better suited for more precise and narrow communications ... HIGH VARIETY: Art & Music (AMBIGUOUS), Body Language ... change one's understanding). Face-to-face conversation is the "richest" medium as it provides "immediate feedback" plus "multiple cues" such as body ... language and voice tone (Daft and Lengel, 1986:560). Some of the more "hidden messages managers send" (like body language and office arrangement) can

  9. Future fuels and engines for railroad locomotives. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    Liddle, S. G.; Bonzo, B. B.; Purohit, G. P.; Stallkamp, J. A.

    1981-01-01

    The potential for reducing the dependence of railroads on petroleum fuel, particularly Diesel No. 2, was investigated. Two approaches were studied: (1) determining how the use of Diesel No. 2 can be reduced through increased efficiency and conservation, and (2) using fuels other than Diesel No. 2 in both diesel and other types of engines. Because synthetic hydrocarbon fuels are particularly suited to medium-speed diesel engines, the first commercial application of these fuels may be by the railroad industry.

  10. Dietary Interventions to Extend Life Span and Health Span Based on Calorie Restriction

    PubMed Central

    Minor, Robin K.; Allard, Joanne S.; Younts, Caitlin M.; Ward, Theresa M.

    2010-01-01

    The societal impact of obesity, diabetes, and other metabolic disorders continues to rise despite increasing evidence of their negative long-term consequences on health span, longevity, and aging. Unfortunately, dietary management and exercise frequently fail as remedies, underscoring the need for the development of alternative interventions to successfully treat metabolic disorders and enhance life span and health span. Using calorie restriction (CR)—which is well known to improve both health and longevity in controlled studies—as their benchmark, gerontologists are coming closer to identifying dietary and pharmacological therapies that may be applicable to aging humans. This review covers some of the more promising interventions targeted to affect pathways implicated in the aging process as well as variations on classical CR that may be better suited to human adaptation. PMID:20371545

  11. Investigating the impact of the Cielo Cray XE6 architecture on scientific application codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajan, Mahesh; Barrett, Richard; Pedretti, Kevin Thomas Tauke

    2010-12-01

    Cielo, a Cray XE6, is the Department of Energy NNSA Advanced Simulation and Computing (ASC) campaign's newest capability machine. Rated at 1.37 PFLOPS, it consists of 8,944 dual-socket oct-core AMD Magny-Cours compute nodes, linked using Cray's Gemini interconnect. Its primary mission objective is to enable a suite of the ASC applications implemented using MPI to scale to tens of thousands of cores. Cielo is an evolutionary improvement to a successful architecture previously available to many of our codes, thus enabling a basis for understanding the capabilities of this new architecture. Using three codes strategically important to the ASC campaign, and supplemented with some micro-benchmarks that expose the fundamental capabilities of the XE6, we report on the performance characteristics and capabilities of Cielo.

  12. Partially-Averaged Navier Stokes Model for Turbulence: Implementation and Validation

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Abdol-Hamid, Khaled S.

    2005-01-01

    Partially-averaged Navier Stokes (PANS) is a suite of turbulence closure models of various modeled-to-resolved scale ratios ranging from Reynolds-averaged Navier Stokes (RANS) to Navier-Stokes (direct numerical simulations). The objective of PANS, like hybrid models, is to resolve large scale structures at reasonable computational expense. The modeled-to-resolved scale ratio or the level of physical resolution in PANS is quantified by two parameters: the unresolved-to-total ratios of kinetic energy (f(sub k)) and dissipation (f(sub epsilon)). The unresolved-scale stress is modeled with the Boussinesq approximation and modeled transport equations are solved for the unresolved kinetic energy and dissipation. In this paper, we first present a brief discussion of the PANS philosophy followed by a description of the implementation procedure and finally perform preliminary evaluation in benchmark problems.
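
    For orientation, the resolution-control parameters named above and the Boussinesq closure for the unresolved-scale stress can be written as below. This is a hedged summary using generic k-epsilon notation; the exact coefficient conventions in the paper may differ.

    ```latex
    \[
    f_k = \frac{k_u}{k}, \qquad
    f_\varepsilon = \frac{\varepsilon_u}{\varepsilon}, \qquad
    \tau^{u}_{ij} = -2\,\nu_u S_{ij} + \tfrac{2}{3}\,k_u\,\delta_{ij}, \qquad
    \nu_u = C_\mu \frac{k_u^{2}}{\varepsilon_u},
    \]
    ```

    with modeled transport equations solved for the unresolved kinetic energy k_u and dissipation epsilon_u; f_k = 1 recovers RANS, while f_k approaching 0 approaches direct numerical simulation.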

  13. Genetic Parallel Programming: design and implementation.

    PubMed

    Cheang, Sin Man; Leung, Kwong Sak; Lee, Kin Hong

    2006-01-01

    This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than that of their sequential counterparts. It creates a new approach to evolving a feasible problem solution in parallel program form and then serializes it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.
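
    The following is a minimal, hypothetical linear genetic-programming sketch in Python of the evolve-then-evaluate loop described above. It is not the authors' GPP/MAP implementation: the toy register machine, instruction set, target function, and parameters are invented for illustration only.

    ```python
    import random

    # Hypothetical toy register machine: each instruction is (op, dst, src1, src2).
    OPS = {"add": lambda a, b: a + b,
           "sub": lambda a, b: a - b,
           "mul": lambda a, b: a * b}
    N_REG = 4          # registers r0..r3; r0 holds the input and the output
    PROG_LEN = 8       # fixed-length linear programs

    def random_instr():
        op = random.choice(list(OPS))
        return (op, random.randrange(N_REG), random.randrange(N_REG), random.randrange(N_REG))

    def run(program, x):
        regs = [x] + [1] * (N_REG - 1)
        for op, dst, s1, s2 in program:
            regs[dst] = OPS[op](regs[s1], regs[s2])
        return regs[0]

    def fitness(program, cases):
        # Sum of absolute errors against an invented target function f(x) = x*x + x.
        return sum(abs(run(program, x) - (x * x + x)) for x in cases)

    def evolve(pop_size=200, generations=100):
        cases = list(range(-5, 6))
        pop = [[random_instr() for _ in range(PROG_LEN)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda p: fitness(p, cases))
            if fitness(pop[0], cases) == 0:
                break
            survivors = pop[: pop_size // 4]
            children = []
            while len(survivors) + len(children) < pop_size:
                child = list(random.choice(survivors))
                child[random.randrange(PROG_LEN)] = random_instr()  # point mutation
                children.append(child)
            pop = survivors + children
        return pop[0]

    if __name__ == "__main__":
        print(evolve())
    ```

    A real GPP run would instead evaluate each candidate on the Multi-ALU Processor and, per the paper, could serialize an evolved parallel program into a sequential one if required.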

  14. A theoretical and experimental benchmark study of core-excited states in nitrogen

    DOE PAGES

    Myhre, Rolf H.; Wolf, Thomas J. A.; Cheng, Lan; ...

    2018-02-14

    The high resolution near edge X-ray absorption fine structure spectrum of nitrogen displays the vibrational structure of the core-excited states. This makes nitrogen well suited for assessing the accuracy of different electronic structure methods for core excitations. We report high resolution experimental measurements performed at the SOLEIL synchrotron facility. These are compared with theoretical spectra calculated using coupled cluster theory and algebraic diagrammatic construction theory. The coupled cluster singles and doubles with perturbative triples model known as CC3 is shown to accurately reproduce the experimental excitation energies as well as the spacing of the vibrational transitions. In conclusion, the computational results are also shown to be systematically improved within the coupled cluster hierarchy, with the coupled cluster singles, doubles, triples, and quadruples method faithfully reproducing the experimental vibrational structure.

  15. 2020 Leadership Agenda for Existing Commercial and Multifamily Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Andrew; Goldthwaite, Carolyn Sarno; Coffman, Eric

    Leadership by state and local governments is critical to unlock national energy efficiency opportunities and deliver the benefits of efficiency to all Americans. But related to building energy efficiency, what will it mean to be a public sector leader over the next several years? What are the energy efficiency solutions that cities, counties, and states are implementing today that will make their communities more affordable, livable, healthy, and economically competitive? The SEE Action Network 2020 Leadership Agenda for Existing Commercial and Multifamily Buildings establishes a benchmark for state and local government leadership on improving the energy efficiency of buildings and seeks two-way collaboration among state, local, and federal officials. It defines a suite of innovative, yet practical policies and programs for policymakers to consider implementing by 2020, focusing on six important areas.

  16. Measuring Skin Temperatures with the IASI Hyperspectral Mission

    NASA Astrophysics Data System (ADS)

    Safieddine, S.; George, M.; Clarisse, L.; Clerbaux, C.

    2017-12-01

    Although the role of satellites in observing the variability of the Earth system has increased in recent decades, remote-sensing observations are still underexploited to accurately assess climate change fingerprints, in particular temperature variations. The IASI - Flux and Temperature (IASI-FT) project aims at providing new benchmarks for temperature observations using the calibrated radiances measured twice a day at any location by the IASI thermal infrared instrument on the suite of MetOp satellites (2006-2025). The main challenge is to achieve the accuracy and stability needed for climate studies, particularly that required for climate trends. Time series for land and sea skin surface temperatures are derived and compared with in situ measurements and atmospheric reanalysis. The observed trends are analyzed at seasonal and regional scales in order to disentangle natural (weather/dynamical) variability and human-induced climate forcings.

  17. A theoretical and experimental benchmark study of core-excited states in nitrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myhre, Rolf H.; Wolf, Thomas J. A.; Cheng, Lan

    The high resolution near edge X-ray absorption fine structure spectrum of nitrogen displays the vibrational structure of the core-excited states. This makes nitrogen well suited for assessing the accuracy of different electronic structure methods for core excitations. We report high resolution experimental measurements performed at the SOLEIL synchrotron facility. These are compared with theoretical spectra calculated using coupled cluster theory and algebraic diagrammatic construction theory. The coupled cluster singles and doubles with perturbative triples model known as CC3 is shown to accurately reproduce the experimental excitation energies as well as the spacing of the vibrational transitions. In conclusion, the computational results are also shown to be systematically improved within the coupled cluster hierarchy, with the coupled cluster singles, doubles, triples, and quadruples method faithfully reproducing the experimental vibrational structure.

  18. ZPR-3 Assembly 11: A cylindrical assembly of highly enriched uranium and depleted uranium with an average {sup 235}U enrichment of 12 atom % and a depleted uranium reflector.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lell, R. M.; McKnight, R. D.; Tsiboulia, A.

    2010-09-30

    Over a period of 30 years, more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited for nuclear data validation and to form the basis for criticality safety benchmarks. A number of the Argonne ZPR/ZPPR critical assemblies have been evaluated as ICSBEP and IRPhEP benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was {sup 235}U or {sup 239}Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. ZPR-3 Assembly 11 (ZPR-3/11) was designed as a fast reactor physics benchmark experiment with an average core {sup 235}U enrichment of approximately 12 at.% and a depleted uranium reflector. Approximately 79.7% of the total fissions in this assembly occur above 100 keV, approximately 20.3% occur below 100 keV, and essentially none below 0.625 eV - thus the classification as a 'fast' assembly. This assembly is Fast Reactor Benchmark No. 8 in the Cross Section Evaluation Working Group (CSEWG) Benchmark Specifications and has historically been used as a data validation benchmark assembly. Loading of ZPR-3 Assembly 11 began in early January 1958, and the Assembly 11 program ended in late January 1958. The core consisted of highly enriched uranium (HEU) plates and depleted uranium plates loaded into stainless steel drawers, which were inserted into the central square stainless steel tubes of a 31 x 31 matrix on a split table machine. The core unit cell consisted of two columns of 0.125 in.-wide (3.175 mm) HEU plates, six columns of 0.125 in.-wide (3.175 mm) depleted uranium plates and one column of 1.0 in.-wide (25.4 mm) depleted uranium plates. The length of each column was 10 in. (254.0 mm) in each half of the core. The axial blanket consisted of 12 in. (304.8 mm) of depleted uranium behind the core. The thickness of the depleted uranium radial blanket was approximately 14 in. (355.6 mm), and the length of the radial blanket in each half of the matrix was 22 in. (558.8 mm). The assembly geometry approximated a right circular cylinder as closely as the square matrix tubes allowed. According to the logbook and loading records for ZPR-3/11, the reference critical configuration was loading 10, which was critical on January 21, 1958. Subsequent loadings were very similar but less clean for criticality because there were modifications made to accommodate reactor physics measurements other than criticality. Accordingly, ZPR-3/11 loading 10 was selected as the only configuration for this benchmark.
    As documented below, it was determined to be acceptable as a criticality safety benchmark experiment. A very accurate transformation to a simplified model is needed to make any ZPR assembly a practical criticality-safety benchmark. There is simply too much geometric detail in an exact (as-built) model of a ZPR assembly, even a clean core such as ZPR-3/11 loading 10. The transformation must reduce the detail to a practical level without masking any of the important features of the critical experiment. And it must do this without increasing the total uncertainty far beyond that of the original experiment. Such a transformation is described in Section 3. It was obtained using a pair of continuous-energy Monte Carlo calculations. First, the critical configuration was modeled in full detail - every plate, drawer, matrix tube, and air gap was modeled explicitly. Then the regionwise compositions and volumes from the detailed as-built model were used to construct a homogeneous, two-dimensional (RZ) model of ZPR-3/11 that conserved the mass of each nuclide and volume of each region. The simple cylindrical model is the criticality-safety benchmark model. The difference in the calculated k{sub eff} values between the as-built three-dimensional model and the homogeneous two-dimensional benchmark model was used to adjust the measured excess reactivity of ZPR-3/11 loading 10 to obtain the k{sub eff} for the benchmark model.
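
    One plausible reading of the adjustment described above, stated here only as a hedged sketch and not as the evaluators' exact formulation, is

    ```latex
    \[
    k_{\mathrm{eff}}^{\mathrm{bench}} \;\approx\; \bigl(1 + \rho_{\mathrm{excess}}\bigr)
    \;+\; \Bigl(k_{\mathrm{calc}}^{\,2\mathrm{D,\ homogeneous}} - k_{\mathrm{calc}}^{\,3\mathrm{D,\ as\ built}}\Bigr),
    \]
    ```

    i.e. the simplification bias obtained from the pair of Monte Carlo models is applied to the measured critical state (unit k_eff plus the measured excess reactivity) to obtain the benchmark-model k_eff.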

  19. About the age of the Neoproterozoic Lainici-Paius terrane (South Carpathians, Romania)

    NASA Astrophysics Data System (ADS)

    Balica, C.; Balintoni, I.; Ducea, M. N.; Berza, T.; Stremtan, C.

    2009-12-01

    The pre-Alpine basement of the Danubian domain nappes from the South Carpathians consists of high grade metamorphic groups and late Neoproterozoic plutons, underlying low grade metamorphosed Ordovician to early Carboniferous formations (e.g. Seghedi et al., 2005). Two types of pre-Ordovician metamorphic complexes with contrasting protolith petrology, metamorphism and associated igneous activity, involved in a pre-Permian nappe structure, are distinguished: the Lainici-Paius group, dominated by HT-LP metasediments, and the Dragsan group, dominated by medium-grade metabasites. Based on their distinct lithologic compositions, geologic histories and clear boundaries, we consider these two groups as parts of two different terranes (i.e. Lainici-Paius and Dragsan terranes). The southern part of the Lainici-Paius terrane is intruded by elongated plutons up to 100 km long and 15 km wide. Based on the geochemical composition, the plutons are assigned to two distinct suites: (i) a medium-K, calc-alkaline, mostly granodioritic-tonalitic suite (i.e. Susita type) and (ii) a very high-K, calc-alkaline, mostly granitic suite (i.e. Tismana type). The first suite comprises the Susita and Oltet granitoid bodies and the second suite consists of the Tismana and Novaci granitic plutons. Previous age dating was carried out only on the Tismana (567±3 Ma upper intercept, Liégeois et al., 1996) and Novaci (588±5 Ma, Grünenfelder et al., 1983, recalculated by Liégeois et al., 1996) granites. In situ zircon U/Pb LA-ICP-MS analyses performed on all four granitoid plutons yielded 596.3±5.7 Ma for the Tismana granite, 592.0±5.1 Ma for the Novaci granite, 591.0±3.5 Ma for the Susita granite and 588.7±3 Ma for the Oltet granite. The same method has been additionally applied to detrital zircons from a metasandstone sequence within the Lainici-Paius complex. Fifty-five ages out of 78 dated grains range between 690.1±5.5 Ma and 811.4±12.7 Ma. Therefore, considering the protolith ages of the four dated granites and the youngest age within the mentioned detrital age distribution, we can constrain the formation of the sedimentary protoliths of the Lainici-Paius group to the 690-600 Ma span.

  20. Lattice Boltzmann formulation for conjugate heat transfer in heterogeneous media.

    PubMed

    Karani, Hamid; Huber, Christian

    2015-02-01

    In this paper, we propose an approach for studying conjugate heat transfer using the lattice Boltzmann method (LBM). The approach is based on reformulating the lattice Boltzmann equation for solving the conservative form of the energy equation. This leads to the appearance of a source term, which introduces the jump conditions at the interface between two phases or components with different thermal properties. The proposed source term formulation conserves conductive and advective heat flux simultaneously, which makes it suitable for modeling conjugate heat transfer in general multiphase or multicomponent systems. The simple implementation of the source term approach avoids any correction of distribution functions neighboring the interface and provides an algorithm that is independent of the topology of the interface. Moreover, our approach is independent of the choice of lattice discretization and can be easily applied to different advection-diffusion LBM solvers. The model is tested against several benchmark problems, including steady-state convection-diffusion within two fluid layers with parallel and normal interfaces with respect to the flow direction, unsteady conduction in a three-layer stratified domain, and steady conduction in a two-layer annulus. The LBM results are in excellent agreement with the analytical solutions. Error analysis shows that our model is first-order accurate in space, but an extension to a second-order scheme is straightforward. We apply our LBM model to heat transfer in a two-component heterogeneous medium with a random microstructure. This example highlights that the method we propose is independent of the topology of interfaces between the different phases and, as such, is ideally suited for complex natural heterogeneous media. We further validate the present LBM formulation with a study of natural convection in a porous enclosure. The results confirm the reliability of the model in simulating complex coupled fluid and thermal dynamics in complex geometries.
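
    For reference, the conservative form of the energy equation targeted by the reformulation can be written as below; this is a generic statement, and the precise form of the interface source term G in the paper may differ.

    ```latex
    \[
    \frac{\partial (\rho c_p T)}{\partial t} + \nabla \cdot \left( \rho c_p T\, \mathbf{u} \right)
      = \nabla \cdot \left( k \,\nabla T \right) + G ,
    \]
    ```

    where the heat capacity rho c_p and the conductivity k change across the interface between phases or components, and the source term G carries the jump conditions (continuity of temperature and heat flux), so that no special treatment of interface-adjacent distribution functions is needed.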

  1. Transonic Flutter Suppression Control Law Design, Analysis and Wind-Tunnel Results

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1999-01-01

    The Benchmark Active Controls Technology and wind tunnel test program at NASA Langley Research Center was started with the objective of investigating the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using classical and minimax techniques are described. A unified general formulation and solution for the minimax approach, based on steady-state differential game theory, is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws, when properly designed on the basis of physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and in a heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.

  2. A Matter of Timing: Identifying Significant Multi-Dose Radiotherapy Improvements by Numerical Simulation and Genetic Algorithm Search

    PubMed Central

    Angus, Simon D.; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear, search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) on tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17–18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile and highly cost-effective means of significantly improving clinical efficacy. PMID:25460164

  3. A matter of timing: identifying significant multi-dose radiotherapy improvements by numerical simulation and genetic algorithm search.

    PubMed

    Angus, Simon D; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear, search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) on tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17-18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile and highly cost-effective means of significantly improving clinical efficacy.
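
    The sketch below illustrates the kind of genetic-algorithm search over dose-timing protocols described above. It is a toy: the surrogate fitness function (which merely rewards near-periodic ~17.5 h spacing) stands in for the paper's calibrated EMT6/Ro spheroid simulation, and the protocol encoding and GA parameters are invented for illustration.

    ```python
    # Toy GA over multi-dose timing protocols. The "fitness" is a hypothetical
    # surrogate, not the calibrated tumour-growth simulation used in the study.
    import random

    N_FRACTIONS = 10          # fixed number of fractions (doses)
    HORIZON_H = 240           # treatment window in hours

    def random_protocol():
        # A protocol is a sorted list of irradiation times (hours) within the window.
        return sorted(random.uniform(0, HORIZON_H) for _ in range(N_FRACTIONS))

    def fitness(protocol):
        # Surrogate: penalize deviation of inter-fraction gaps from 17.5 h
        # (lower is better, standing in for final simulated tumour cell count).
        gaps = [b - a for a, b in zip(protocol, protocol[1:])]
        return sum((g - 17.5) ** 2 for g in gaps)

    def crossover(p1, p2):
        return sorted(random.choice(pair) for pair in zip(p1, p2))

    def mutate(protocol, sigma=2.0):
        protocol = list(protocol)
        i = random.randrange(N_FRACTIONS)
        protocol[i] = min(HORIZON_H, max(0.0, protocol[i] + random.gauss(0, sigma)))
        return sorted(protocol)

    population = [random_protocol() for _ in range(100)]
    for generation in range(200):
        population.sort(key=fitness)
        elite = population[:20]
        population = elite + [mutate(crossover(*random.sample(elite, 2)))
                              for _ in range(80)]

    best = min(population, key=fitness)
    print("best protocol (h):", [round(t, 1) for t in best])
    ```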

  4. The MSRC ab initio methods benchmark suite: A measurement of hardware and software performance in the area of electronic structure methods

    NASA Astrophysics Data System (ADS)

    Feller, D. F.

    1993-07-01

    This collection of benchmark timings represents a snapshot of the hardware and software capabilities available for ab initio quantum chemical calculations at Pacific Northwest Laboratory's Molecular Science Research Center in late 1992 and early 1993. The 'snapshot' nature of these results should not be underestimated, because of the speed with which both hardware and software are changing. Even during the brief period of this study, we were presented with newer, faster versions of several of the codes. However, the deadline for completing this edition of the benchmarks precluded updating all the relevant entries in the tables. As will be discussed below, a similar situation occurred with the hardware. The timing data included in this report are subject to all the normal failures, omissions, and errors that accompany any human activity. In an attempt to mimic the manner in which calculations are typically performed, we have run the calculations with the maximum number of defaults provided by each program and a near minimum amount of memory. This approach may not produce the fastest performance that a particular code can deliver. It is not known to what extent improved timings could be obtained for each code by varying the run parameters. If sufficient interest exists, it might be possible to compile a second list of timing data corresponding to the fastest observed performance from each application, using an unrestricted set of input parameters. Improvements in I/O might have been possible by fine tuning the Unix kernel, but we resisted the temptation to make changes to the operating system. Due to the large number of possible variations in levels of operating system, compilers, speed of disks and memory, versions of applications, etc., readers of this report may not be able to exactly reproduce the times indicated. Copies of the output files from individual runs are available if questions arise about a particular set of timings.

  5. Benchmarking dairy herd health status using routinely recorded herd summary data.

    PubMed

    Parker Gaddis, K L; Cole, J B; Clay, J S; Maltecca, C

    2016-02-01

    Genetic improvement of dairy cattle health through the use of producer-recorded data has been determined to be feasible. Low estimated heritabilities indicate that genetic progress will be slow. Variation observed in lowly heritable traits can largely be attributed to nongenetic factors, such as the environment. More rapid improvement of dairy cattle health may be attainable if herd health programs incorporate environmental and managerial aspects. More than 1,100 herd characteristics are regularly recorded on farm test-days. We combined these data with producer-recorded health event data, and parametric and nonparametric models were used to benchmark herd and cow health status. Health events were grouped into 3 categories for analyses: mastitis, reproductive, and metabolic. Both herd incidence and individual incidence were used as dependent variables. Models implemented included stepwise logistic regression, support vector machines, and random forests. At both the herd and individual levels, random forest models attained the highest accuracy for predicting health status in all health event categories when evaluated with 10-fold cross-validation. Accuracy (SD) ranged from 0.61 (0.04) to 0.63 (0.04) when using random forest models at the herd level. Accuracy of prediction (SD) at the individual cow level ranged from 0.87 (0.06) to 0.93 (0.001) with random forest models. Highly significant variables and key words from logistic regression and random forest models were also investigated. All models identified several of the same key factors for each health event category, including movement out of the herd, size of the herd, and weather-related variables. We concluded that benchmarking health status using routinely collected herd data is feasible. Nonparametric models were better suited to handle this complex data with numerous variables. These data mining techniques were able to perform prediction of health status and could add evidence to personal experience in herd management. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
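
    A minimal sketch of the model class and evaluation protocol described above (a random forest scored with 10-fold cross-validation), using scikit-learn. The input file and column names are hypothetical placeholders for the actual test-day and health-event fields.

    ```python
    # Sketch only: herd_summary.csv and the column names are hypothetical;
    # substitute the real test-day predictors and 0/1 health-event labels.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    herd = pd.read_csv("herd_summary.csv")
    X = herd[["herd_size", "left_herd", "avg_temperature"]]   # hypothetical predictors
    y = herd["mastitis"]                                      # 0/1 incidence label

    model = RandomForestClassifier(n_estimators=500, random_state=0)
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"10-fold accuracy: {scores.mean():.2f} (SD {scores.std():.2f})")
    ```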

  6. Benchmark Problems of the Geothermal Technologies Office Code Comparison Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark D.; Podgorney, Robert; Kelkar, Sharad M.

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study: benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas some others were designed for different applications but can simulate processes similar to those in EGS. Solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research: stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners.

  7. Uncertainty Quantification Techniques of SCALE/TSUNAMI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Mueller, Don

    2011-01-01

    The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as k{sub eff}, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. However, with TSUNAMI, correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel. In such cases, analysts sometimes rely on 'expert judgment' to select an additional administrative margin to account for a gap in the validation data or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.
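
    The propagation of cross-section covariance data through sensitivity coefficients mentioned above is commonly expressed with the first-order "sandwich" formula, and the correlation between an application and an experiment follows the same pattern; the notation below is generic and not necessarily SCALE's.

    ```latex
    \[
    \sigma_R^{2} = \mathbf{S}_R \,\mathbf{C}_{\alpha\alpha}\, \mathbf{S}_R^{\mathsf{T}},
    \qquad
    c_k = \frac{\mathbf{S}_a \,\mathbf{C}_{\alpha\alpha}\, \mathbf{S}_e^{\mathsf{T}}}
               {\sigma_a\, \sigma_e},
    \]
    ```

    where S_R is the vector of relative sensitivities of the response R to the nuclide-reaction cross sections, C_alpha-alpha is the relative covariance matrix of those data, and S_a, S_e (with response uncertainties sigma_a, sigma_e) refer to the application and the experiment, respectively.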

  8. Benchmark Modeling of the Near-Field and Far-Field Wave Effects of Wave Energy Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhinefrank, Kenneth E; Haller, Merrick C; Ozkan-Haller, H Tuba

    2013-01-26

    This project is an industry-led partnership between Columbia Power Technologies and Oregon State University that will perform benchmark laboratory experiments and numerical modeling of the near-field and far-field impacts of wave scattering from an array of wave energy devices. These benchmark experimental observations will help to fill a gaping hole in our present knowledge of the near-field effects of multiple, floating wave energy converters and are a critical requirement for estimating the potential far-field environmental effects of wave energy arrays. The experiments will be performed at the Hinsdale Wave Research Laboratory (Oregon State University) and will utilize an array of newly developed Buoys that are realistic, lab-scale floating power converters. The array of Buoys will be subjected to realistic, directional wave forcing (1:33 scale) that will approximate the expected conditions (waves and water depths) to be found off the Central Oregon Coast. Experimental observations will include comprehensive in-situ wave and current measurements as well as a suite of novel optical measurements. These new optical capabilities will include imaging of the 3D wave scattering using a binocular stereo camera system, as well as 3D device motion tracking using a newly acquired LED system. These observing systems will capture the 3D motion history of individual Buoys as well as resolve the 3D scattered wave field, thus resolving the constructive and destructive wave interference patterns produced by the array at high resolution. These data combined with the device motion tracking will provide necessary information for array design in order to balance array performance with the mitigation of far-field impacts. As a benchmark data set, these data will be an important resource for testing of models for wave/buoy interactions, buoy performance, and far-field effects on wave and current patterns due to the presence of arrays. Under the proposed project we will initiate high-resolution (fine scale, very near-field) fluid/structure interaction simulations of buoy motions, as well as array-scale, phase-resolving wave scattering simulations. These modeling efforts will utilize state-of-the-art research quality models, which have not yet been brought to bear on this complex large-array wave/structure interaction problem.

  9. JEFF-3.1, ENDF/B-VII and JENDL-3.3 Critical Assemblies Benchmarking With the Monte Carlo Code TRIPOLI

    NASA Astrophysics Data System (ADS)

    Sublet, Jean-Christophe

    2008-02-01

    ENDF/B-VII.0, the first release of the ENDF/B-VII nuclear data library, was formally released in December 2006. Prior to this event the European JEFF-3.1 nuclear data library was distributed in April 2005, while the Japanese JENDL-3.3 library has been available since 2002. The recent releases of these neutron transport libraries and special purpose files, the updates of the processing tools, and the significant progress in computer power today allow far better and leaner Monte Carlo code and pointwise library integration, leading to enhanced benchmarking studies. A TRIPOLI-4.4 critical assembly suite has been set up as a collection of 86 benchmarks taken principally from the International Handbook of Evaluated Criticality Safety Benchmark Experiments (2006 Edition). It contains cases for a variety of U and Pu fuels and systems, ranging from fast to deep thermal solutions and assemblies. It covers cases with a variety of moderators, reflectors, absorbers, spectra and geometries. The results presented show that while the most recent library, ENDF/B-VII.0, which benefited from the timely development of JENDL-3.3 and JEFF-3.1, produces better overall results, they also clearly suggest that improvements are still needed. This is true in particular in Light Water Reactor applications for thermal and epithermal plutonium data for all libraries and for fast uranium data for JEFF-3.1 and JENDL-3.3. Other domains in which Monte Carlo codes are used, such as astrophysics, fusion, high-energy physics and medical applications, and radiation transport in general, also benefit notably from such enhanced libraries. This is particularly noticeable in terms of the number of isotopes and materials available, the overall quality of the data, and the much broader energy range for which evaluated (as opposed to modeled) data are available, spanning from meV to hundreds of MeV. Examining the impact of the different nuclear data at both the library and the isotopic level highlights the importance of, and the differences in, the compensating effects that result from using each library on its own. Library differences are still important but tend to diminish due to the ever increasing and beneficial worldwide collaboration in the field of nuclear data measurement and evaluation.
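
    Results of such suites are typically summarized as calculated-to-experimental (C/E) ratios and biases in pcm; the short sketch below shows that post-processing step only, with illustrative numbers that are not TRIPOLI-4.4 results.

    ```python
    # Hypothetical post-processing of a criticality benchmark suite: compare
    # calculated k_eff values (C) against benchmark values (E).
    benchmarks = {
        # name: (k_calc, k_benchmark) -- illustrative numbers only
        "HEU-MET-FAST-001": (1.00062, 1.00000),
        "PU-SOL-THERM-011": (0.99841, 1.00000),
    }

    for name, (calc, expected) in benchmarks.items():
        ce = calc / expected
        pcm = (calc - expected) / expected * 1.0e5   # 1 pcm = 1e-5 in reactivity
        print(f"{name}: C/E = {ce:.5f}, bias = {pcm:+.0f} pcm")
    ```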

  10. Spacesuit and Space Vehicle Comparative Ergonomic Evaluation

    NASA Technical Reports Server (NTRS)

    England, Scott; Benson, Elizabeth; Cowley, Matthew; Harvill, Lauren; Blackledge, Christopher; Perez, Esau; Rajulu, Sudhakar

    2011-01-01

    With the advent of the latest manned spaceflight objectives, a series of prototype launch and reentry spacesuit architectures were evaluated for eventual down selection by NASA based on the performance of a set of designated tasks. A consolidated approach was taken to testing, concurrently collecting suit mobility data, seat-suit-vehicle interface clearances and movement strategies within the volume of a Multi-Purpose Crew Vehicle mockup. To achieve the objectives of the test, a requirement was set forth to maintain high mockup fidelity while using advanced motion capture technologies. These seemingly mutually exclusive goals were accommodated with the construction of an optically transparent and fully adjustable frame mockup. The mockup was constructed such that it could be dimensionally validated rapidly with the motion capture system. This paper will describe the method used to create a motion capture compatible space vehicle mockup, the consolidated approach for evaluating spacesuits in action, as well as the various methods for generating hardware requirements for an entire population from the resulting complex data set using a limited number of test subjects. Kinematics, hardware clearance, suited anthropometry, and subjective feedback data were recorded on fifteen unsuited and five suited subjects. Unsuited subjects were selected chiefly by anthropometry, in an attempt to find subjects who fell within predefined criteria for medium male, large male and small female subjects. The suited subjects were selected as a subset of the unsuited subjects and tested in both unpressurized and pressurized conditions. Since the prototype spacesuits were fabricated in a single size to accommodate an approximately average sized male, the findings from the suit testing were systematically extrapolated to the extremes of the population to anticipate likely problem areas. This extrapolation was achieved by first performing population analysis through a comparison of suited subjects' performance to their unsuited performance and then applying the results to the entire range of the population. The use of a transparent space vehicle mockup enabled the collection of large amounts of data during human-in-the-loop testing. Mobility data revealed that most of the tested spacesuits had sufficient ranges of motion for tasks to be performed successfully. A failed task by a suited subject most often stemmed from a combination of poor field of view while seated and poor dexterity of the gloves when pressurized, or from suit/vehicle interface issues. Seat ingress/egress testing showed that problems with anthropometric accommodation do not occur exclusively with the largest or smallest subjects, but rather with specific combinations of measurements that lead to narrower seat ingress/egress clearance.

  11. Instrumentation effects on U and Pu CBNM standards spectra quality measured with a 500 mm3 CdZnTe and a 2×2 inch LaBr3 detector

    NASA Astrophysics Data System (ADS)

    Meleshenkovskii, I.; Borella, A.; Van der Meer, K.; Bruggeman, M.; Pauly, N.; Labeau, P. E.; Schillebeeckx, P.

    2018-01-01

    Nowadays, there is interest in developing gamma-ray measuring devices based on room-temperature operated medium-resolution detectors such as semiconductor detectors of the CdZnTe type and scintillators of the LaBr3 type. This is also true for safeguards applications, and the International Atomic Energy Agency (IAEA) has launched a project devoted to the assessment of medium resolution gamma-ray spectroscopy for the verification of the isotopic composition of U and Pu bearing samples. This project is carried out within the Non-Destructive Assay Working Group of the European Safeguards Research and Development Association (ESARDA). In this study we analyze medium resolution spectra of U and Pu standards with the aim of developing an isotopic composition determination algorithm particularly suited for these types of detectors. We show how the peak shape of a CdZnTe detector is influenced by the instrumentation parameters. The experimental setup consisted of a 500 mm3 CdZnTe detector, a 2×2 inch LaBr3 detector, two types of measurement instrumentation - an analogue one and a digital one - and a set of certified samples - a 207Bi point source and U and Pu CBNM standards. The results of our measurements indicate that the lowest contribution to the peak asymmetry, and thus the smallest impact on the resolution of the 500 mm3 CdZnTe detector, was achieved with the digital MCA. Analysis of the acquired spectra allowed us to reject poor-quality measurement runs and produce summed spectra files with the least impact of instrumentation instabilities. This work is preliminary to further studies concerning the development of an isotopic composition determination algorithm particularly suited for CZT and LaBr3 detectors for safeguards applications.

  12. Verifying Digital Components of Physical Systems: Experimental Evaluation of Test Quality

    NASA Astrophysics Data System (ADS)

    Laputenko, A. V.; López, J. E.; Yevtushenko, N. V.

    2018-03-01

    This paper continues the study of high quality test derivation for verifying digital components which are used in various physical systems, such as sensors and data transfer components. For the experimental evaluation we used logic circuits b01-b10 from the ITC'99 benchmark package (Second Release), which, as stated before, describe digital components of physical systems designed for various applications. Test sequences are derived for detecting the best-known faults of the reference logic circuit using three different approaches to test derivation. Three widely used fault types, namely stuck-at faults, bridges, and faults which slightly modify the behavior of one gate, are considered as possible faults of the reference behavior. The most interesting test sequences are short test sequences that can provide appropriate guarantees after testing, and thus we experimentally study various approaches to the derivation of so-called complete test suites, which detect all fault types. In the first series of experiments, we compare two approaches for deriving complete test suites. In the first approach, a shortest test sequence is derived for testing each fault. In the second approach, a test sequence is pseudo-randomly generated using appropriate software for logic synthesis and verification (the ABC system in our study) and thus can be longer. However, after deleting sequences detecting the same set of faults, a test suite returned by the second approach is shorter. The latter underlines the fact that in many cases it is useless to spend time and effort deriving a shortest distinguishing sequence; it is better to apply test minimization afterwards. The experiments also show that using only randomly generated test sequences is not very efficient, since such sequences do not detect all faults of any type. After fault coverage of around 70% is reached, saturation is observed and the coverage cannot be increased any further. For deriving high quality short test suites, the approach that combines randomly generated sequences with sequences aimed at detecting the faults missed by random tests allows good fault coverage to be reached with short test sequences.
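
    A minimal sketch of the test-suite reduction step described above: a sequence is kept only if it detects at least one fault not already covered by the sequences kept so far. The detected-fault sets are hypothetical placeholders for real fault-simulation output.

    ```python
    # Greedy reduction of a test suite. The detected-fault sets below are
    # illustrative; in practice they come from fault simulation of the circuit.
    def minimize_suite(detected):
        # detected: dict mapping test-sequence id -> set of detected faults
        kept, covered = [], set()
        # Consider high-coverage sequences first so fewer are needed overall.
        for test_id in sorted(detected, key=lambda t: len(detected[t]), reverse=True):
            new_faults = detected[test_id] - covered
            if new_faults:
                kept.append(test_id)
                covered |= new_faults
        return kept, covered

    suite = {
        "rand_01": {"saf_3", "bridge_7", "gate_2"},
        "rand_02": {"saf_3", "gate_2"},              # fully covered by rand_01
        "targeted_saf_9": {"saf_9"},                 # detects a fault random tests missed
    }
    kept, covered = minimize_suite(suite)
    print("kept:", kept, "| faults covered:", sorted(covered))
    ```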

  13. An open-source, FireWire camera-based, Labview-controlled image acquisition system for automated, dynamic pupillometry and blink detection.

    PubMed

    de Souza, John Kennedy Schettino; Pinto, Marcos Antonio da Silva; Vieira, Pedro Gabrielle; Baron, Jerome; Tierra-Criollo, Carlos Julio

    2013-12-01

    The dynamic, accurate measurement of pupil size is extremely valuable for studying a large number of neuronal functions and dysfunctions. Despite tremendous and well-documented progress in image processing techniques for estimating pupil parameters, comparatively little work has been reported on practical hardware issues involved in designing image acquisition systems for pupil analysis. Here, we describe and validate the basic features of such a system, which is based on a relatively compact, off-the-shelf, low-cost FireWire digital camera. We successfully implemented two configurable modes of video recording: a continuous mode and an event-triggered mode. The interoperability of the whole system is guaranteed by a set of modular software components hosted on a personal computer and written in Labview. An offline analysis suite of image processing algorithms for automatically estimating pupillary and eyelid parameters was assessed using data obtained in human subjects. Our benchmark results show that such measurements can be done in a temporally precise way at a sampling frequency of up to 120 Hz and with an estimated maximum spatial resolution of 0.03 mm. Our software is made available free of charge to the scientific community, allowing end users to either use the software as is or modify it to suit their own needs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
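
    As a hedged illustration of the offline pupil/blink analysis described above (the original suite is written in Labview), the Python/OpenCV sketch below thresholds the dark pupil, fits an ellipse, and reports a blink when no pupil blob is found. The threshold, mm-per-pixel calibration, and file name are hypothetical.

    ```python
    import cv2

    MM_PER_PIXEL = 0.03          # hypothetical spatial calibration
    PUPIL_THRESHOLD = 40         # hypothetical gray-level threshold for the dark pupil

    def pupil_diameter_mm(frame_bgr):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (7, 7), 0)
        _, mask = cv2.threshold(gray, PUPIL_THRESHOLD, 255, cv2.THRESH_BINARY_INV)
        # OpenCV 4.x: findContours returns (contours, hierarchy)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None                      # no dark blob: treat as a blink
        pupil = max(contours, key=cv2.contourArea)
        if len(pupil) < 5:                   # fitEllipse needs at least 5 points
            return None
        (_, _), (axis_a, axis_b), _ = cv2.fitEllipse(pupil)
        return 0.5 * (axis_a + axis_b) * MM_PER_PIXEL

    # Example: report diameter (or a blink) for every frame of a recorded video.
    cap = cv2.VideoCapture("pupil_recording.avi")    # hypothetical file name
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        d = pupil_diameter_mm(frame)
        print("blink" if d is None else f"pupil diameter: {d:.2f} mm")
    cap.release()
    ```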

  14. Connecting scales: achieving in-field pest control from areawide and landscape ecology studies.

    PubMed

    Schellhorn, Nancy A; Parry, Hazel R; Macfadyen, Sarina; Wang, Yongmo; Zalucki, Myron P

    2015-02-01

    Areawide management has a long history of achieving solutions that target pests; however, there has been little focus on the areawide management of arthropod natural enemies. Landscape ecology studies that show a positive relationship between natural enemy abundance and habitat diversity demonstrate landscape-dependent pest suppression, but have not yet clearly linked their findings to pest management or to the suite of pests associated with crops that require control. Instead the focus has often been on model systems of single pest species and their natural enemies. We suggest that management actions to capture pest control from natural enemies may be forthcoming if: (i) the suite of response and predictor variables focuses on pest complexes and specific management actions; (ii) the contribution of "the landscape" is identified by assessing the timing and numbers of natural enemies, as well as pests, immigrating and emigrating to and from the target crop; and (iii) pest control thresholds aligned with crop development stages are the benchmark used to measure the impact of natural enemies on pests, in turn allowing for comparisons between study regions and for generalizations. To achieve pest control we will need to incorporate what has been learned from an ecological understanding of model pest and natural enemy systems and integrate areawide landscape management with in-field pest management. © 2014 Institute of Zoology, Chinese Academy of Sciences.

  15. QPROT: Statistical method for testing differential expression using protein-level intensity data in label-free quantitative proteomics.

    PubMed

    Choi, Hyungwon; Kim, Sinae; Fermin, Damian; Tsou, Chih-Chiang; Nesvizhskii, Alexey I

    2015-11-03

    We introduce QPROT, a statistical framework and computational tool for differential protein expression analysis using protein intensity data. QPROT is an extension of the QSPEC suite, originally developed for spectral count data, adapted for the analysis of continuously measured protein-level intensity data. QPROT offers a new intensity normalization procedure and model-based differential expression analysis, both of which account for missing data. Determination of differential expression of each protein is based on a standardized Z-statistic computed from the posterior distribution of the log fold change parameter, guided by the false discovery rate estimated by a well-known Empirical Bayes method. We evaluated the classification performance of QPROT using the quantification calibration data from the clinical proteomic technology assessment for cancer (CPTAC) study and a recently published Escherichia coli benchmark dataset, with evaluation of FDR accuracy in the latter. QPROT is a statistical framework with a computational software tool for comparative quantitative proteomics analysis. It features various extensions of the QSPEC method originally built for spectral count data analysis, including probabilistic treatment of missing values in protein intensity data. With the increasing popularity of label-free quantitative proteomics data, the proposed method and accompanying software suite will be immediately useful for many proteomics laboratories. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
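
    As a rough, hedged illustration of protein-level differential expression testing (a standardized log fold-change score per protein followed by FDR control), and not QPROT's actual Bayesian model, a frequentist stand-in might look like:

    ```python
    # Standardized log2 fold-change Z-scores per protein plus Benjamini-Hochberg FDR.
    # QPROT itself uses the posterior of the log fold change and empirical-Bayes FDR;
    # the simulated intensities below are placeholders only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    intensities_a = rng.lognormal(mean=10, sigma=0.5, size=(100, 4))  # 100 proteins, 4 reps
    intensities_b = rng.lognormal(mean=10, sigma=0.5, size=(100, 4))

    log_a, log_b = np.log2(intensities_a), np.log2(intensities_b)
    diff = log_b.mean(axis=1) - log_a.mean(axis=1)
    se = np.sqrt(log_a.var(axis=1, ddof=1) / 4 + log_b.var(axis=1, ddof=1) / 4)
    z = diff / se
    pvals = 2 * stats.norm.sf(np.abs(z))

    # Benjamini-Hochberg adjustment.
    order = np.argsort(pvals)
    ranked = pvals[order] * len(pvals) / (np.arange(len(pvals)) + 1)
    qvals_sorted = np.minimum.accumulate(ranked[::-1])[::-1]
    qvals = qvals_sorted[np.argsort(order)]          # back to original protein order
    print("proteins at q < 0.05:", int((qvals < 0.05).sum()))
    ```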

  16. Evaluation of a Reformulated CHROMagar Candida

    PubMed Central

    Jabra-Rizk, Mary Ann; Brenner, Troy M.; Romagnoli, Mark; Baqui, A. A. M. A.; Merz, William G.; Falkler, William A.; Meiller, Timothy F.

    2001-01-01

    CHROMagar Candida is a differential culture medium for the isolation and presumptive identification of clinically important yeasts. Recently the medium was reformulated by Becton Dickinson. This study was designed to evaluate the performance of the new formula of CHROMagar against the original CHROMagar Candida for recovery, growth, and colony color with stock cultures and with direct plating of clinical specimens. A total of 90 stock yeast isolates representing nine yeast species, including Candida dubliniensis, as well as 522 clinical specimens were included in this study. No major differences were noted in growth rate or colony size between the two media for most of the species. However, all 10 Candida albicans isolates evaluated consistently gave a lighter shade of green on the new CHROMagar formulation. In contrast, all 26 C. dubliniensis isolates gave the same typical dark green color on both media. A total of 173 of the 522 clinical specimens were positive for yeast, with eight yeast species recovered. The recovery rates for each species were equivalent on both media, with no consistent species-associated differences in colony size or color. Although both media were comparable in performance, the lighter green colonies of C. albicans isolates on the new CHROMagar made it easier to differentiate between C. albicans and C. dubliniensis isolates. In conclusion, the newly formulated Becton Dickinson CHROMagar Candida medium is equally well suited as a differential medium for the presumptive identification of yeast species and for the detection of multiple yeast species in clinical specimens as the original CHROMagar Candida medium. PMID:11326038

  17. Extracting the sovereigns’ CDS market hierarchy: A correlation-filtering approach

    NASA Astrophysics Data System (ADS)

    León, Carlos; Leiton, Karen; Pérez, Jhonatan

    2014-12-01

    This paper employs correlation-into-distance mapping techniques and a minimal spanning tree-based correlation-filtering methodology on 36 sovereign CDS spread time-series in order to identify the sovereigns’ informational hierarchy. The resulting hierarchy (i) concurs with sovereigns’ eigenvector centrality; (ii) confirms the importance of geographical and credit rating clustering; (iii) identifies Russia, Turkey and Brazil as regional benchmarks; (iv) reveals the idiosyncratic nature of Japan and United States; (v) confirms that a small set of common factors affects the system; (vi) suggests that lower-medium grade rated sovereigns are the most influential, but also the most prone to contagion; and (vii) suggests the existence of a “Latin American common factor”.
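
    A minimal sketch of the correlation-filtering step described above: map pairwise correlations to distances via d = sqrt(2(1 - rho)) and retain only the minimal spanning tree. The spread series below are simulated placeholders for the 36 sovereign CDS series.

    ```python
    # Correlation-to-distance mapping plus MST filtering; input data are simulated.
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    rng = np.random.default_rng(1)
    spreads = rng.normal(size=(500, 36)).cumsum(axis=0)   # placeholder time series
    returns = np.diff(np.log(np.abs(spreads) + 1.0), axis=0)

    rho = np.corrcoef(returns, rowvar=False)              # 36 x 36 correlation matrix
    dist = np.sqrt(2.0 * (1.0 - rho))                     # correlation-to-distance map
    np.fill_diagonal(dist, 0.0)

    mst = minimum_spanning_tree(dist).toarray()           # 35 edges define the hierarchy
    edges = [(i, j, mst[i, j]) for i in range(36) for j in range(36) if mst[i, j] > 0]
    print(f"{len(edges)} MST edges; strongest link distance = {min(e[2] for e in edges):.3f}")
    ```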

  18. Test Activities in the Langley Transonic Dynamics Tunnel and a Summary of Recent Facility Improvements

    NASA Technical Reports Server (NTRS)

    Cole, Stanley R.; Johnson, R. Keith; Piatak, David J.; Florance, Jennifer P.; Rivera, Jose A., Jr.

    2003-01-01

    The Langley Transonic Dynamics Tunnel (TDT) has provided a unique capability for aeroelastic testing for over forty years. The facility has a rich history of significant contributions to the design of many United States commercial transports, military aircraft, launch vehicles, and spacecraft. The facility has many features that contribute to its uniqueness for aeroelasticity testing, perhaps the most important feature being the use of a heavy gas test medium to achieve higher test densities compared to testing in air. Higher test medium densities substantially ease model-building requirements and therefore simplify the fabrication process for building aeroelastically scaled wind tunnel models. This paper describes TDT capabilities that make it particularly suited for aeroelasticity testing. The paper also discusses the nature of recent test activities in the TDT, including summaries of several specific tests. Finally, the paper documents recent facility improvement projects and the continuous statistical quality assessment effort for the TDT.

  19. Cardiac tissue engineering using perfusion bioreactor systems

    PubMed Central

    Radisic, Milica; Marsano, Anna; Maidhof, Robert; Wang, Yadong; Vunjak-Novakovic, Gordana

    2009-01-01

    This protocol describes tissue engineering of synchronously contractile cardiac constructs by culturing cardiac cell populations on porous scaffolds (in some cases with an array of channels) and bioreactors with perfusion of culture medium (in some cases supplemented with an oxygen carrier). The overall approach is ‘biomimetic’ in nature as it tends to provide in vivo-like oxygen supply to cultured cells and thereby overcome inherent limitations of diffusional transport in conventional culture systems. In order to mimic the capillary network, cells are cultured on channeled elastomer scaffolds that are perfused with culture medium that can contain oxygen carriers. The overall protocol takes 2–4 weeks, including assembly of the perfusion systems, preparation of scaffolds, cell seeding and cultivation, and on-line and end-point assessment methods. This model is well suited for a wide range of cardiac tissue engineering applications, including the use of human stem cells, and high-fidelity models for biological research. PMID:18388955

  20. IMAX camera in payload bay

    NASA Image and Video Library

    1995-12-20

    STS074-361-035 (12-20 Nov 1995) --- This medium close-up view centers on the IMAX Cargo Bay Camera (ICBC) and its associated IMAX Camera Container Equipment (ICCE) at its position in the cargo bay of the Earth-orbiting Space Shuttle Atlantis. With its own "space suit" or protective covering to protect it from the rigors of space, this version of the IMAX was able to record scenes not accessible with the in-cabin cameras. For docking and undocking activities involving Russia's Mir Space Station and the Space Shuttle Atlantis, the camera joined a variety of in-cabin camera hardware in recording the historical events. IMAX's secondary objectives were to film Earth views. The IMAX project is a collaboration between NASA, the Smithsonian Institution's National Air and Space Museum (NASM), IMAX Systems Corporation, and the Lockheed Corporation to document significant space activities and promote NASA's educational goals using the IMAX film medium.

  1. Implementation of ADI Schemes on MIMD parallel computers

    NASA Technical Reports Server (NTRS)

    Vanderwijngaart, Rob F.

    1993-01-01

    In order to simulate the effects of the impingement of hot exhaust jets of High Performance Aircraft on landing surfaces a multi-disciplinary computation coupling flow dynamics to heat conduction in the runway needs to be carried out. Such simulations, which are essentially unsteady, require very large computational power in order to be completed within a reasonable time frame of the order of an hour. Such power can be furnished by the latest generation of massively parallel computers. These remove the bottleneck of ever more congested data paths to one or a few highly specialized central processing units (CPU's) by having many off-the-shelf CPU's work independently on their own data, and exchange information only when needed. During the past year the first phase of this project was completed, in which the optimal strategy for mapping an ADI-algorithm for the three dimensional unsteady heat equation to a MIMD parallel computer was identified. This was done by implementing and comparing three different domain decomposition techniques that define the tasks for the CPU's in the parallel machine. These implementations were done for a Cartesian grid and Dirichlet boundary conditions. The most promising technique was then used to implement the heat equation solver on a general curvilinear grid with a suite of nontrivial boundary conditions. Finally, this technique was also used to implement the Scalar Penta-diagonal (SP) benchmark, which was taken from the NAS Parallel Benchmarks report. All implementations were done in the programming language C on the Intel iPSC/860 computer.
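
    To make the directional-sweep structure of ADI concrete, the following is a small Peaceman-Rachford ADI step for the two-dimensional heat equation with homogeneous Dirichlet boundaries. The project's solver was three-dimensional, written in C, and domain-decomposed across iPSC/860 processors; none of that is reproduced in this sketch.

    ```python
    # Sketch of a Peaceman-Rachford ADI step for the 2D heat equation on a square
    # Cartesian grid with homogeneous Dirichlet boundaries.
    import numpy as np
    from scipy.linalg import solve_banded

    def adi_step(u, alpha, dt, h):
        """Advance u (boundary ring included, kept at zero) by one ADI time step."""
        r = alpha * dt / (2.0 * h * h)
        n = u.shape[0] - 2                      # interior points per direction (square grid)
        ab = np.zeros((3, n))                   # tridiagonal (I + r*A) in banded form
        ab[0, 1:] = -r
        ab[1, :] = 1.0 + 2.0 * r
        ab[2, :-1] = -r
        half = u.copy()
        # Sweep 1: implicit in x, explicit in y.
        for j in range(1, u.shape[1] - 1):
            rhs = u[1:-1, j] + r * (u[1:-1, j - 1] - 2 * u[1:-1, j] + u[1:-1, j + 1])
            half[1:-1, j] = solve_banded((1, 1), ab, rhs)
        new = half.copy()
        # Sweep 2: implicit in y, explicit in x.
        for i in range(1, u.shape[0] - 1):
            rhs = half[i, 1:-1] + r * (half[i - 1, 1:-1] - 2 * half[i, 1:-1] + half[i + 1, 1:-1])
            new[i, 1:-1] = solve_banded((1, 1), ab, rhs)
        return new

    u = np.zeros((34, 34))
    u[16:18, 16:18] = 1.0                       # hot spot in the slab
    for _ in range(50):
        u = adi_step(u, alpha=1.0, dt=1e-3, h=1.0 / 33)
    print(f"peak temperature after 50 steps: {u.max():.4f}")
    ```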

  2. Influence relevance voting: an accurate and interpretable virtual high throughput screening method.

    PubMed

    Swamidass, S Joshua; Azencott, Chloé-Agathe; Lin, Ting-Wan; Gramajo, Hugo; Tsai, Shiou-Chuan; Baldi, Pierre

    2009-04-01

    Given activity training data from high-throughput screening (HTS) experiments, virtual high-throughput screening (vHTS) methods aim to predict in silico the activity of untested chemicals. We present a novel method, the Influence Relevance Voter (IRV), specifically tailored for the vHTS task. The IRV is a low-parameter neural network which refines a k-nearest neighbor classifier by nonlinearly combining the influences of a chemical's neighbors in the training set. Influences are decomposed, also nonlinearly, into a relevance component and a vote component. The IRV is benchmarked using the data and rules of two large, open competitions, and its performance compared to the performance of other participating methods, as well as of an in-house support vector machine (SVM) method. On these benchmark data sets, IRV achieves state-of-the-art results, comparable to the SVM in one case, and significantly better than the SVM in the other, retrieving three times as many actives in the top 1% of its prediction-sorted list. The IRV presents several other important advantages over SVMs and other methods: (1) the output predictions have probabilistic semantics; (2) the underlying inferences are interpretable; (3) the training time is very short, on the order of minutes even for very large data sets; (4) the risk of overfitting is minimal, due to the small number of free parameters; and (5) additional information can easily be incorporated into the IRV architecture. Combined with its performance, these qualities make the IRV particularly well suited for vHTS.
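
    As a toy illustration of the influence-relevance-voting idea (not the trained IRV network itself), each of a query's k nearest neighbours contributes influence = relevance x vote. In the sketch below the relevance is simply a softmax over Tanimoto similarities, and the fingerprints and labels are random; these choices are assumptions made for illustration.

    ```python
    # Toy illustration of influence-relevance voting: each of the k nearest
    # training neighbours contributes influence = relevance(similarity) * vote(label).
    # The real IRV learns these functions as a small neural network; here the
    # relevance is a fixed softmax over similarities (an assumption).
    import numpy as np

    def tanimoto(a, b):
        """Tanimoto similarity between binary fingerprint vectors."""
        inter = np.sum(a & b)
        return inter / (np.sum(a) + np.sum(b) - inter + 1e-12)

    def irv_like_score(query, fps, labels, k=5):
        sims = np.array([tanimoto(query, fp) for fp in fps])
        nn = np.argsort(sims)[-k:]                       # k most similar training molecules
        rel = np.exp(sims[nn]) / np.exp(sims[nn]).sum()  # relevance: softmax of similarity
        vote = np.where(labels[nn] == 1, 1.0, -1.0)      # vote: +1 active, -1 inactive
        return 1.0 / (1.0 + np.exp(-np.sum(rel * vote)))  # squash the summed influence

    rng = np.random.default_rng(2)
    fingerprints = rng.integers(0, 2, size=(200, 64))    # hypothetical binary fingerprints
    activities = rng.integers(0, 2, size=200)
    score = irv_like_score(fingerprints[0], fingerprints[1:], activities[1:])
    print(f"predicted activity score: {score:.3f}")
    ```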

  3. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    NASA Technical Reports Server (NTRS)

    Bue, Grant; Makinen, Janice; Cognata, Thomas

    2012-01-01

    Human thermal modeling has considerable long term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested space environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate it against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable and software accessible format. The human thermal database project is intended to do just that: to collect relevant data from literature and experimentation and to store the data in a database structure for immediate and future use as a benchmark against which human thermal models can be judged, in order to identify model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality. The human thermal database developed at the Johnson Space Center (JSC) is intended to evaluate a set of widely used human thermal models. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments of human subjects primarily in air from a literature survey ranging between 1953 and 2004 and from a suited experiment recently performed by the authors, for a quantitative study of relative strength and predictive quality of the models.

  4. Reactor Testing and Qualification: Prioritized High-level Criticality Testing Needs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. Bragg-Sitton; J. Bess; J. Werner

    2011-09-01

    Researchers at the Idaho National Laboratory (INL) were tasked with reviewing possible criticality testing needs to support development of the fission surface power system reactor design. Reactor physics testing can provide significant information to aid in development of technologies associated with small, fast spectrum reactors that could be applied for non-terrestrial power systems, leading to eventual system qualification. Several studies have been conducted in recent years to assess the data and analyses required to design and build a space fission power system with high confidence that the system will perform as designed [Marcille, 2004a, 2004b; Weaver, 2007; Parry et al., 2008]. This report will provide a summary of previous critical tests and physics measurements that are potentially applicable to the current reactor design (both those that have been benchmarked and those not yet benchmarked), summarize recent studies of potential nuclear testing needs for space reactor development and their applicability to the current baseline fission surface power (FSP) system design, and provide an overview of a suite of tests (separate effects, sub-critical or critical) that could fill in the information database to improve the accuracy of physics modeling efforts as the FSP design is refined. Some recommendations for tasks that could be completed in the near term are also included. Specific recommendations on critical test configurations will be reserved until after the sensitivity analyses being conducted by Los Alamos National Laboratory (LANL) are completed (due August 2011).

  5. Studies of Neutron-Induced Fission of 235U, 238U, and 239Pu

    NASA Astrophysics Data System (ADS)

    Duke, Dana; TKE Team

    2014-09-01

    A Frisch-gridded ionization chamber and the double energy (2E) analysis method were used to study mass yield distributions and average total kinetic energy (TKE) release from neutron-induced fission of 235U, 238U, and 239Pu. Despite decades of fission research, little or no TKE data exist for high incident neutron energies. Additional average TKE information at incident neutron energies relevant to defense- and energy-related applications will provide a valuable observable for benchmarking simulations. The data can also be used as inputs in theoretical fission models. The Los Alamos Neutron Science Center-Weapons Neutron Research (LANSCE - WNR) provides a neutron beam from thermal to hundreds of MeV, well-suited for filling in the gaps in existing data and exploring fission behavior in the fast neutron region. The results of the studies on 238U, 235U, and 239Pu will be presented. LA-UR-14-24921.
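
    For readers unfamiliar with the 2E method, the provisional-mass step reduces to momentum conservation: p_1 = p_2 gives m_1 E_1 = m_2 E_2, so with m_1 + m_2 = A the provisional masses and TKE follow. The sketch below uses invented fragment energies and omits the neutron-emission and detector corrections applied in the real analysis.

    ```python
    # Minimal sketch of the provisional-mass step of the double-energy (2E) method:
    # momentum conservation gives m1*E1 = m2*E2, so with m1 + m2 = A the provisional
    # masses and TKE follow.  Neutron emission and detector corrections used in the
    # real analysis are ignored; the energies below are made up.
    def two_energy(e1_mev, e2_mev, a_compound):
        """Return provisional fragment masses (amu) and total kinetic energy (MeV)."""
        m1 = a_compound * e2_mev / (e1_mev + e2_mev)   # from m1*E1 = m2*E2, m1 + m2 = A
        m2 = a_compound - m1
        return m1, m2, e1_mev + e2_mev

    m1, m2, tke = two_energy(e1_mev=100.8, e2_mev=70.2, a_compound=236)  # n + 235U
    print(f"provisional masses: {m1:.1f} and {m2:.1f} amu, TKE = {tke:.1f} MeV")
    ```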

  6. What Makes Red Giants Tick? Linking Tidal Forces, Activity, and Solar-Like Oscillations via Eclipsing Binaries

    NASA Astrophysics Data System (ADS)

    Rawls, Meredith L.; Gaulme, Patrick; McKeever, Jean; Jackiewicz, Jason

    2016-01-01

    Thanks to advances in asteroseismology, red giants have become astrophysical laboratories for studying stellar evolution and probing the Milky Way. However, not all red giants show solar-like oscillations. It has been proposed that stronger tidal interactions from short-period binaries and increased magnetic activity on spotty giants are linked to absent or damped solar-like oscillations, yet each star tells a nuanced story. In this work, we characterize a subset of red giants in eclipsing binaries observed by Kepler. The binaries exhibit a range of orbital periods, solar-like oscillation behavior, and stellar activity. We use orbital solutions together with a suite of modeling tools to combine photometry and spectroscopy in a detailed analysis of tidal synchronization timescales, star spot activity, and stellar evolution histories. These red giants offer an unprecedented opportunity to test stellar physics and are important benchmarks for ensemble asteroseismology.

  7. Lattice Boltzmann and Navier-Stokes Cartesian CFD Approaches for Airframe Noise Predictions

    NASA Technical Reports Server (NTRS)

    Barad, Michael F.; Kocheemoolayil, Joseph G.; Kiris, Cetin C.

    2017-01-01

    Lattice Boltzmann (LB) and compressible Navier-Stokes (NS) equations based computational fluid dynamics (CFD) approaches are compared for simulating airframe noise. Both LB and NS CFD approaches are implemented within the Launch Ascent and Vehicle Aerodynamics (LAVA) framework. Both schemes utilize the same underlying Cartesian structured mesh paradigm with provision for local adaptive grid refinement and sub-cycling in time. We choose a prototypical massively separated, wake-dominated flow ideally suited for Cartesian-grid based approaches in this study - the partially-dressed, cavity-closed nose landing gear (PDCC-NLG) noise problem from AIAA's Benchmark problems for Airframe Noise Computations (BANC) series of workshops. The relative accuracy and computational efficiency of the two approaches are systematically compared. Detailed comments are made on the potential held by LB to significantly reduce time-to-solution for a desired level of accuracy within the context of modeling airframe noise from first principles.
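
    For orientation, the collide-and-stream kernel that a Cartesian lattice Boltzmann solver repeats looks roughly like the single-relaxation-time D2Q9 sketch below on a periodic box. LAVA's production scheme, adaptive refinement, and boundary treatment are far more involved and are not represented here.

    ```python
    # Minimal single-relaxation-time D2Q9 lattice Boltzmann update on a periodic
    # box, only to show the collide-and-stream kernel a Cartesian LB solver repeats.
    import numpy as np

    w = np.array([4/9] + [1/9]*4 + [1/36]*4)
    c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
    tau, nx, ny = 0.6, 64, 64
    f = np.ones((9, nx, ny)) * w[:, None, None]        # start at rest, rho = 1
    f[1] += 0.01 * np.random.default_rng(3).random((nx, ny))  # small perturbation
    m0 = f.sum()

    for _ in range(100):
        rho = f.sum(axis=0)
        u = np.einsum('qi,qxy->ixy', c, f) / rho        # macroscopic velocity
        feq = np.empty_like(f)
        for q in range(9):
            cu = c[q, 0]*u[0] + c[q, 1]*u[1]
            feq[q] = w[q]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*(u[0]**2 + u[1]**2))
        f += -(f - feq)/tau                             # BGK collision
        for q in range(9):                              # periodic streaming
            f[q] = np.roll(np.roll(f[q], c[q, 0], axis=0), c[q, 1], axis=1)

    print(f"mass conserved: {f.sum():.3f} vs initial {m0:.3f}")
    ```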

  8. Benchmarking polarizable molecular dynamics simulations of aqueous sodium hydroxide by diffraction measurements.

    PubMed

    Vácha, Robert; Megyes, Tunde; Bakó, Imre; Pusztai, László; Jungwirth, Pavel

    2009-04-23

    Results from molecular dynamics simulations of aqueous hydroxide of varying concentrations have been compared with experimental structural data. First, the polarizable POL3 model was verified against neutron scattering using a reverse Monte Carlo fitting procedure. It was found to be competitive with other simple water models and well suited for combining with hydroxide ions. Second, a set of four polarizable models of OH- were developed by fitting against accurate ab initio calculations for small hydroxide-water clusters. All of these models were found to provide similar results that robustly agree with structural data from X-ray scattering. The present force field thus represents a significant improvement over previously tested nonpolarizable potentials. Although it cannot in principle capture proton hopping and can only approximately describe the charge delocalization within the immediate solvent shell around OH-, it provides structural data that are almost entirely consistent with data obtained from scattering experiments.

  9. Flexible missile autopilot design studies with PC-MATLAB/386

    NASA Technical Reports Server (NTRS)

    Ruth, Michael J.

    1989-01-01

    Development of a responsive, high-bandwidth missile autopilot for airframes which have structural modes of unusually low frequency presents a challenging design task. Such systems are viable candidates for modern, state-space control design methods. The PC-MATLAB interactive software package provides an environment well-suited to the development of candidate linear control laws for flexible missile autopilots. The strengths of MATLAB include: (1) exceptionally high speed (MATLAB's version for 80386-based PC's offers benchmarks approaching minicomputer and mainframe performance); (2) ability to handle large design models of several hundred degrees of freedom, if necessary; and (3) broad extensibility through user-defined functions. To characterize MATLAB capabilities, a simplified design example is presented. This involves interactive definition of an observer-based state-space compensator for a flexible missile autopilot design task. MATLAB capabilities and limitations, in the context of this design task, are then summarized.

  10. Enhancing artificial bee colony algorithm with self-adaptive searching strategy and artificial immune network operators for global optimization.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, the ABC still has some limitations. For example, ABC can easily get trapped in the local optimum when handling functions that have a narrow curving valley, a high eccentric ellipse, or complex multimodal functions. As a result, we proposed an enhanced ABC algorithm called EABC by introducing self-adaptive searching strategy and artificial immune network operators to improve the exploitation and exploration. The simulation results tested on a suite of unimodal or multimodal benchmark functions illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments.
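
    For context, the basic employed-bee neighbour search that EABC builds on is v_ij = x_ij + phi (x_ij - x_kj) with phi drawn uniformly from [-1, 1]. The sketch below implements only this standard move on the sphere benchmark function; the paper's self-adaptive strategy and immune-network operators are not reproduced.

    ```python
    # Sketch of the standard ABC employed-bee neighbour search,
    # v_ij = x_ij + phi * (x_ij - x_kj) with phi ~ U(-1, 1), applied greedily to
    # the sphere benchmark function.
    import numpy as np

    def sphere(x):                                      # a common benchmark function
        return float(np.sum(x * x))

    def abc_neighbour_move(foods, fitness, rng):
        """One greedy employed-bee pass over the food sources (minimisation)."""
        n, dim = foods.shape
        for i in range(n):
            j = rng.integers(dim)                       # perturb one dimension
            k = rng.choice([s for s in range(n) if s != i])
            v = foods[i].copy()
            v[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
            fv = sphere(v)
            if fv < fitness[i]:                         # greedy selection
                foods[i], fitness[i] = v, fv
        return foods, fitness

    rng = np.random.default_rng(4)
    foods = rng.uniform(-5, 5, size=(20, 10))
    fitness = np.array([sphere(x) for x in foods])
    for _ in range(200):
        foods, fitness = abc_neighbour_move(foods, fitness, rng)
    print(f"best sphere value after 200 passes: {fitness.min():.4f}")
    ```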

  11. A Ka-band chirped-pulse Fourier transform microwave spectrometer

    NASA Astrophysics Data System (ADS)

    Zaleski, Daniel P.; Neill, Justin L.; Muckle, Matt T.; Seifert, Nathan A.; Brandon Carroll, P.; Widicus Weaver, Susanna L.; Pate, Brooks H.

    2012-10-01

    The design and performance of a new chirped-pulse Fourier transform microwave (CP-FTMW) spectrometer operating from 25 to 40 GHz (Ka-band) is presented. This spectrometer is well-suited for the study of complex organic molecules of astronomical interest in the size range of 6-10 atoms that have strong rotational transitions in Ka-band under pulsed jet sample conditions (Trot = 1-10 K). The spectrometer permits acquisition of the full spectral band in a single data acquisition event. Sensitivity is enhanced by using two pulsed jet sources and acquiring 10 broadband measurements for each sample injection cycle. The spectrometer performance is benchmarked by measuring the pure rotational spectrum of several isotopologues of acetaldehyde in natural abundance. The rotational spectra of the singly substituted 13C and 18O isotopologues of the two lowest energy conformers of ethyl formate have been analyzed and the resulting substitution structures for these conformers are compared to electronic structure theory calculations.

  12. A two dimensional interface element for coupling of independently modeled three dimensional finite element meshes and extensions to dynamic and non-linear regimes

    NASA Technical Reports Server (NTRS)

    Aminpour, Mohammad

    1995-01-01

    The work reported here pertains only to the first year of research for a three year proposal period. As a prelude to this two dimensional interface element, the one dimensional element was tested and errors were discovered in the code for built-up structures and curved interfaces. These errors were corrected and the benchmark Boeing composite crown panel was analyzed successfully. A study of various splines led to the conclusion that cubic B-splines best suit this interface element application. A least squares approach combined with cubic B-splines was constructed to make a smooth function from the noisy data obtained with random error in the coordinate data points of the Boeing crown panel analysis. Preliminary investigations for the formulation of discontinuous 2-D shell and 3-D solid elements were conducted.

  13. Multivariate Spatial Condition Mapping Using Subtractive Fuzzy Cluster Means

    PubMed Central

    Sabit, Hakilo; Al-Anbuky, Adnan

    2014-01-01

    Wireless sensor networks are usually deployed for monitoring given physical phenomena taking place in a specific space and over a specific duration of time. The spatio-temporal distribution of these phenomena often correlates to certain physical events. To appropriately characterise these events-phenomena relationships over a given space for a given time frame, we require continuous monitoring of the conditions. WSNs are perfectly suited for these tasks, due to their inherent robustness. This paper presents a subtractive fuzzy cluster means algorithm and its application in data stream mining for wireless sensor systems over a cloud-computing-like architecture, which we call sensor cloud data stream mining. Benchmarking on standard mining algorithms, the k-means and the FCM algorithms, we have demonstrated that the subtractive fuzzy cluster means model can perform high quality distributed data stream mining tasks comparable to centralised data stream mining. PMID:25313495
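
    As a generic illustration of the two ingredients in the title, subtractive clustering to seed the centres and fuzzy C-means to refine them, the sketch below runs on synthetic two-dimensional data with textbook parameter choices; the authors' distributed sensor-cloud implementation is not reproduced.

    ```python
    # Generic sketch of subtractive clustering (to choose initial centres from data
    # density) followed by fuzzy C-means refinement, on synthetic 2-D data.
    import numpy as np

    def subtractive_centres(x, ra=1.0, accept=0.5):
        rb = 1.5 * ra
        pot = np.array([np.exp(-4 * np.sum((x - p) ** 2, axis=1) / ra**2).sum() for p in x])
        centres, p0 = [], pot.max()
        while pot.max() > accept * p0:
            c = x[pot.argmax()]
            centres.append(c)
            pot -= pot.max() * np.exp(-4 * np.sum((x - c) ** 2, axis=1) / rb**2)
        return np.array(centres)

    def fcm(x, centres, m=2.0, iters=20):
        for _ in range(iters):
            d = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=2) + 1e-9
            u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
            centres = (u.T ** m @ x) / np.sum(u.T ** m, axis=1, keepdims=True)
        return centres, u

    rng = np.random.default_rng(5)
    x = np.vstack([rng.normal(loc, 0.3, size=(100, 2)) for loc in ((0, 0), (3, 3))])
    centres, membership = fcm(x, subtractive_centres(x))
    print("refined centres:\n", np.round(centres, 2))
    ```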

  14. Exponentially-Biased Ground-State Sampling of Quantum Annealing Machines with Transverse-Field Driving Hamiltonians

    NASA Technical Reports Server (NTRS)

    Mandra, Salvatore

    2017-01-01

    We study the performance of the D-Wave 2X quantum annealing machine on systems with well-controlled ground-state degeneracy. While obtaining the ground state of a spin-glass benchmark instance represents a difficult task, the gold standard for any optimization algorithm or machine is to sample all solutions that minimize the Hamiltonian with more or less equal probability. Our results show that while naive transverse-field quantum annealing on the D-Wave 2X device can find the ground-state energy of the problems, it is not well suited to identifying all degenerate ground-state configurations associated with a particular instance. Even worse, some states are exponentially suppressed, in agreement with previous studies on toy model problems [New J. Phys. 11, 073021 (2009)]. These results suggest that more complex driving Hamiltonians are needed in future quantum annealing machines to ensure a fair sampling of the ground-state manifold.

  15. Advanced studies at the VISA FEL in the SASE and seeded modes

    NASA Astrophysics Data System (ADS)

    Andonian, G.; Dunning, M.; Hemsing, E.; Murokh, A.; Pellegrini, C.; Reiche, S.; Rosenzweig, J.; Babzien, M.; Yakimenko, V.

    2008-08-01

    The VISA (Visible to Infrared SASE Amplifier) program has been in operation at the BNL ATF since the year 2000. The program has produced numerous results, including demonstrated saturation at 840 nm with a gain length of 18 cm, chirped-beam amplification with the observation of anomalously large bandwidth of the emitted radiation, and successful benchmarking of a start-to-end simulation suite against measured results. This paper will review the prior results of the VISA program and discuss planned novel measurements, including detuning studies of a 1 μm seeded amplifier, and measurements of the orbital angular momentum of the emitted radiation. The installation of a dedicated chicane bunch compressor followed by an x-band linac to mitigate energy spread will allow for high-current operations (reduced saturation length, and deep-saturation studies). Other measurements, such as coherent transition undulator radiation, are also proposed.

  16. Regularization destriping of remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle

    2017-07-01

    We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method by giving weights spatially to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method from a benchmark data set which represents the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.
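
    The variational idea can be illustrated with a much simpler quadratic functional: penalise second differences across the stripe direction while staying faithful to the data, and descend the resulting Euler-Lagrange gradient explicitly. The sketch below uses a synthetic striped image and is not the paper's exact target functional or spatial weighting.

    ```python
    # Sketch of variational destriping by explicit gradient descent on a quadratic
    # fidelity-plus-directional-smoothness functional; the striped image is synthetic.
    import numpy as np

    rng = np.random.default_rng(6)
    clean = np.outer(np.sin(np.linspace(0, 3, 128)), np.cos(np.linspace(0, 3, 128)))
    striped = clean + 0.2 * rng.normal(size=(1, 128))   # constant offset per column

    u, lam, step = striped.copy(), 5.0, 0.05
    for _ in range(500):
        d2 = np.zeros_like(u)                           # second difference across columns
        d2[:, 1:-1] = u[:, :-2] - 2 * u[:, 1:-1] + u[:, 2:]
        u -= step * ((u - striped) - lam * d2)          # Euler-Lagrange descent step

    rms = lambda a: float(np.sqrt(np.mean((a - clean) ** 2)))
    print(f"RMS error: {rms(striped):.4f} striped -> {rms(u):.4f} destriped")
    ```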

  17. Enhancing Artificial Bee Colony Algorithm with Self-Adaptive Searching Strategy and Artificial Immune Network Operators for Global Optimization

    PubMed Central

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, the ABC still has some limitations. For example, ABC can easily get trapped in the local optimum when handling functions that have a narrow curving valley, a high eccentric ellipse, or complex multimodal functions. As a result, we proposed an enhanced ABC algorithm called EABC by introducing self-adaptive searching strategy and artificial immune network operators to improve the exploitation and exploration. The simulation results tested on a suite of unimodal or multimodal benchmark functions illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments. PMID:24772023

  18. Unsteady Computational Tests of a Non-Equilibrium

    NASA Astrophysics Data System (ADS)

    Jirasek, Adam; Hamlington, Peter; Lofthouse, Andrew; Usafa Collaboration; Cu Boulder Collaboration

    2017-11-01

    A non-equilibrium turbulence model is assessed on simulations of three practically-relevant unsteady test cases: oscillating channel flow, transonic flow around an oscillating airfoil, and transonic flow around the Benchmark Super-Critical Wing. The first case is related to piston-driven flows while the remaining cases are relevant to unsteady aerodynamics at high angles of attack and transonic speeds. Non-equilibrium turbulence effects arise in each of these cases in the form of a lag between the mean strain rate and Reynolds stresses, resulting in reduced kinetic energy production compared to classical equilibrium turbulence models that are based on the gradient transport (or Boussinesq) hypothesis. As a result of the improved representation of unsteady flow effects, the non-equilibrium model provides substantially better agreement with available experimental data than do classical equilibrium turbulence models. This suggests that the non-equilibrium model may be ideally suited for simulations of modern high-speed, high angle of attack aerodynamics problems.

  19. Using baldrige performance excellence program approaches in the pursuit of radiation oncology quality care, patient satisfaction, and workforce commitment.

    PubMed

    Sternick, Edward S

    2011-01-01

    The Malcolm Baldrige National Quality Improvement Act was signed into law in 1987 to advance US business competitiveness and economic growth. Administered by the National Institute of Standards and Technology, the Act created the Baldrige National Quality Program, recently renamed the Baldrige Performance Excellence Program. The comprehensive analytical approaches, referred to as the Baldrige Healthcare Criteria, are very well suited for the evaluation and sustainable improvement of radiation oncology management and operations. A multidisciplinary self-assessment approach is used for radiotherapy program evaluation and development in order to generate a fact-based, knowledge-driven system for improving quality of care, increasing patient satisfaction, enhancing leadership effectiveness, building employee engagement, and boosting organizational innovation. This methodology also provides a valuable framework for benchmarking an individual radiation oncology practice's operations and results against guidelines defined by accreditation and professional organizations and regulatory agencies.

  20. Charmonium interaction in nuclear matter at FAIR

    NASA Astrophysics Data System (ADS)

    Pratim Bhaduri, Partha; Deveaux, Michael; Toia, Alberica

    2018-05-01

    We have studied the dissociation of J/ψ mesons in low energy proton-nucleus (p + A) collisions in the energy range of the future SIS100 accelerator at the Facility for Anti-proton and Ion Research (FAIR). According to the results of our calculations, various scenarios of J/ψ absorption in nuclear matter show very distinct suppression patterns in the kinematic regime to be probed at FAIR. This suggests that the SIS100 energies are particularly well suited to shed light on the interaction of the J/ψ resonance with the nuclear medium.

  1. Overview of Shipboard Data Fusion and Resource Management R&D Results and Rationale for Its Real-Time Implementation in the ASCACT Testbed

    DTIC Science & Technology

    1996-04-01

    and IRST sensor simulations. More specifically, the CPF radars currently supported by the CASE_ATTI sensor module are the SG-150 Sea Giraffe and the...specifications. The current AAW sensor suite of the CPF comprises the SPS-49 long range 2-D radar, the Sea Giraffe medium range 2-D radar, the CANEWS ESM...Sea Giraffe. This represents an original novelty of our simulation environment. The baseline

  2. Sub-Doppler Rovibrational Spectroscopy of the H_3^+ Cation and Isotopologues

    NASA Astrophysics Data System (ADS)

    Markus, Charles R.; McCollum, Jefferson E.; Dieter, Thomas S.; Kocheril, Philip A.; McCall, Benjamin J.

    2017-06-01

    Molecular ions play a central role in the chemistry of the interstellar medium (ISM) and act as benchmarks for state of the art ab initio theory. The molecular ion H_3^+ initiates a chain of ion-neutral reactions which drives chemistry in the ISM, and observing it either directly or indirectly through its isotopologues is valuable for understanding interstellar chemistry. Improving the accuracy of laboratory measurements will assist future astronomical observations. H_3^+ is also one of a few systems whose rovibrational transitions can be predicted to spectroscopic accuracy (<1 cm^{-1}), and with careful treatment of adiabatic, nonadiabatic, and quantum electrodynamic corrections to the potential energy surface, predictions of low lying rovibrational states can rival the uncertainty of experimental measurements. New experimental data will be needed to benchmark future treatment of these corrections. Previously we have reported 26 transitions within the fundamental band of H_3^+ with MHz-level uncertainties. With recent improvements to our overall sensitivity, we have expanded this survey to include additional transitions within the fundamental band and the first hot band. These new data will ultimately be used to predict ground state rovibrational energy levels through combination differences, which will act as benchmarks for ab initio theory and predict forbidden rotational transitions of H_3^+. We will also discuss progress in measuring rovibrational transitions of the isotopologues H_2D^+ and D_2H^+, which will be used to assist in future THz astronomical observations. J. N. Hodges, A. J. Perry, P. A. Jenkins II, B. M. Siller, and B. J. McCall, J. Chem. Phys. (2013), 139, 164201. A. J. Perry, J. N. Hodges, C. R. Markus, G. S. Kocheril, and B. J. McCall, J. Mol. Spectrosc. (2015), 317, 71-73. A. J. Perry, C. R. Markus, J. N. Hodges, G. S. Kocheril, and B. J. McCall, 71st International Symposium on Molecular Spectroscopy (2016), MH03. C. R. Markus, A. J. Perry, J. N. Hodges, and B. J. McCall, Opt. Express (2017), 25, 3709-3721.
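
    The combination-difference step mentioned above is simple arithmetic: two transitions that share a common upper level differ by exactly the spacing of their lower levels. A sketch with invented wavenumbers (not measured H_3^+ lines):

    ```python
    # Combination-difference sketch: two transitions sharing a common upper level
    # yield the separation of their lower (ground-state) levels,
    # E(b) - E(a) = nu_a - nu_b.  The wavenumbers below are invented.
    nu_a = 2726.219   # transition from lower level a to the common upper level, cm^-1 (hypothetical)
    nu_b = 2725.898   # transition from lower level b to the common upper level, cm^-1 (hypothetical)
    print(f"ground-state spacing E(b) - E(a) = {nu_a - nu_b:.3f} cm^-1")
    ```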

  3. Modeling Solar Wind Flow with the Multi-Scale Fluid-Kinetic Simulation Suite

    DOE PAGES

    Pogorelov, N.V.; Borovikov, S. N.; Bedford, M. C.; ...

    2013-04-01

    Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) is a package of numerical codes capable of performing adaptive mesh refinement simulations of complex plasma flows in the presence of discontinuities and charge exchange between ions and neutral atoms. The flow of the ionized component is described with the ideal MHD equations, while the transport of atoms is governed either by the Boltzmann equation or multiple Euler gas dynamics equations. We have enhanced the code with additional physical treatments for the transport of turbulence and acceleration of pickup ions in the interplanetary space and at the termination shock. In this article, we present the results of our numerical simulation of the solar wind (SW) interaction with the local interstellar medium (LISM) in different time-dependent and stationary formulations. Numerical results are compared with the Ulysses, Voyager, and OMNI observations. Finally, the SW boundary conditions are derived from in-situ spacecraft measurements and remote observations.

  4. A generalised porous medium approach to study thermo-fluid dynamics in human eyes.

    PubMed

    Mauro, Alessandro; Massarotti, Nicola; Salahudeen, Mohamed; Romano, Mario R; Romano, Vito; Nithiarasu, Perumal

    2018-03-22

    The present work describes the application of the generalised porous medium model to study heat and fluid flow in healthy and glaucomatous eyes of different subject specimens, considering the presence of ocular cavities and porous tissues. The 2D computational model, implemented into the open-source software OpenFOAM, has been verified against benchmark data for mixed convection in domains partially filled with a porous medium. The verified model has been employed to simulate the thermo-fluid dynamic phenomena occurring in the anterior section of four patient-specific human eyes, considering the presence of anterior chamber (AC), trabecular meshwork (TM), Schlemm's canal (SC), and collector channels (CC). The computational domains of the eye are extracted from tomographic images. The dependence of TM porosity and permeability on intraocular pressure (IOP) has been analysed in detail, and the differences between healthy and glaucomatous eye conditions have been highlighted, proving that the different physiological conditions of patients have a significant influence on the thermo-fluid dynamic phenomena. The influence of different eye positions (supine and standing) on thermo-fluid dynamic variables has been also investigated: results are presented in terms of velocity, pressure, temperature, friction coefficient and local Nusselt number. The results clearly indicate that porosity and permeability of TM are two important parameters that affect eye pressure distribution. Graphical abstract: Velocity contours and vectors for healthy eyes (top) and glaucomatous eyes (bottom) in the standing position.

  5. Screening of agricultural wastes as a medium production of catalase for enzymatic fuel cell by Neurospora crassa InaCC F226

    NASA Astrophysics Data System (ADS)

    Santoso, Pugoh; Yopi

    2017-12-01

    Explorations of local microorganisms from Indonesia that can produce catalase are still limited. Neurospora crassa is a fungus that produces two kinds of catalase, namely catalase-1 and catalase-3. We studied the production of catalase by Neurospora crassa (no. F226) from the Indonesia Culture Collection (InaCC) in solid state fermentation (SSF). Among the four screened agro-wastes (corn cob, rice straw, oil palm empty fruit bunches, and bagasse), rice straw and oil palm empty fruit bunches (OPEFB) emerged as the most promising substrates for good growth and adequate production of catalase. Based on these results, solid state fermentation is a suitable method for catalase production, because the medium serves to maintain microbial growth and metabolism. The filamentous fungus is well suited to growth on solid media because of its high tolerance to low water activity and its high capacity to excrete hydrolytic enzymes, both related to its morphology. The filamentous morphology allows the fungus to form colonies and penetrate the solid substrates in order to obtain nutrients. The results showed that the highest catalase activity was obtained on rice straw and oil palm empty fruit bunches media, with catalase activities of 39.1 U/mL and 37.7 U/mL, respectively, at 50% moisture content. Moisture content and medium pH were then optimized for the rice straw medium; the highest activity was obtained at 30% moisture content and pH 6, reaching 53.761 U/mL and 56.903 U/mL after 48 hours and 96 hours of incubation, respectively.

  6. Benchmarking reference services: step by step.

    PubMed

    Buchanan, H S; Marshall, J G

    1996-01-01

    This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.

  7. Prevention of occupational injuries: Evidence for effective good practices in foundries.

    PubMed

    Porru, Stefano; Calza, Stefano; Arici, Cecilia

    2017-02-01

    Occupational injuries are a relevant research and practical issue. However, intervention studies evaluating the effectiveness of workplace injury prevention programs are seldom performed. The effectiveness of a multifaceted intervention aimed at reducing occupational injury rates (incidence/employment-based=IR, frequency/hours-based=FR, severity=SR) was evaluated between 2008 and 2013 in 29 Italian foundries (22 ferrous; 7 non-ferrous; 3,460 male blue collar workers/year) of varying sizes. Each foundry established an internal multidisciplinary prevention team for risk assessment, monitoring and prevention of occupational injuries, involving employers, occupational physicians, safety personnel, workers' representatives, supervisors. Targets of intervention were workers, equipment, organization, workplace, job tasks. An interrupted time series (ITS) design was applied. 4,604 occupational injuries and 83,156 lost workdays were registered between 2003 and 2013. Statistical analysis showed, after intervention, a reduction of all injury rates (-26% IR, -15% FR, -18% SR) in ferrous foundries and of SR (-4%) in non-ferrous foundries. A significant (p=0.021) 'step-effect' was shown for IR in ferrous foundries, independent of secular trends (p<0.001). Sector-specific benchmarks for all injury rates were developed separately for ferrous and non-ferrous foundries. Strengths of the study were: ITS design, according to standardized quality criteria (i.e., at least three data points before and three data points after intervention; clearly defined intervention point); pragmatic approach, with good external validity; promotion of effective good practices. Main limitations were the non-randomized nature and a medium length post-intervention period. In conclusion, a multifaceted, pragmatic and accountable intervention is effective in reducing the burden of occupational injuries in small-, medium- and large-sized foundries. Practical Applications: The study poses the basis for feasible good practice guidelines to be implemented to prevent occupational injuries, by means of sector-specific numerical benchmarks, with potentially relevant impacts on workers, companies, occupational health professionals and society at large. Copyright © 2016 National Safety Council and Elsevier Ltd. All rights reserved.

  8. MSNoise: a Python Package for Monitoring Seismic Velocity Changes using Ambient Seismic Noise

    NASA Astrophysics Data System (ADS)

    Lecocq, T.; Caudron, C.; Brenguier, F.

    2013-12-01

    Earthquakes occur every day all around the world and are recorded by thousands of seismic stations. In between earthquakes, stations are recording "noise". In the last 10 years, the understanding of this noise and its potential usage have been increasing rapidly. The method, called "seismic interferometry", uses the principle that seismic waves travel between two recorders and are multiple-scattered in the medium. By cross-correlating the two records, one obtains information on the medium below/between the stations. The cross-correlation function (CCF) is a proxy to the Green function of the medium. Recent developments of the technique have shown that these CCFs can be used to image the earth at depth (3D seismic tomography) or to study how the medium changes with time. We present MSNoise, a complete software suite to compute relative seismic velocity changes under a seismic network, using ambient seismic noise. The whole suite is written in Python, from the monitoring of data archives to the production of high-quality figures. All steps have been optimized to compute only what is necessary and to use 'job'-based processing. We present a validation of the software on a dataset acquired during the UnderVolc[1] project on the Piton de la Fournaise Volcano, La Réunion Island, France, for which precursory relative changes of seismic velocity are visible for three eruptions between 2009 and 2011.
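
    For readers new to the technique, the core operation is an ordinary cross-correlation of two noise records. The generic sketch below recovers a travel-time lag from synthetic traces; it does not reproduce MSNoise's actual processing chain (ObsPy I/O, spectral whitening, day-stacking, and dv/v measurement).

    ```python
    # Generic sketch of the core operation: cross-correlating two noise records to
    # recover a travel-time lag; the traces are synthetic.
    import numpy as np
    from scipy.signal import correlate, correlation_lags

    fs, n = 50.0, 5000                       # 50 Hz sampling, 100 s of "noise"
    rng = np.random.default_rng(7)
    source = rng.normal(size=n)
    lag_true = 40                            # samples (0.8 s propagation delay)
    sta1 = source + 0.5 * rng.normal(size=n)
    sta2 = np.roll(source, lag_true) + 0.5 * rng.normal(size=n)

    ccf = correlate(sta2, sta1, mode="full")
    lags = correlation_lags(len(sta2), len(sta1), mode="full") / fs
    print(f"recovered lag: {lags[np.argmax(ccf)]:.2f} s (true {lag_true / fs:.2f} s)")
    ```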

  9. Transonic Flutter Suppression Control Law Design, Analysis and Wind Tunnel Results

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1999-01-01

    The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on the steady state differential game theory is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws, when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.
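
    To make the LQG machinery concrete, the sketch below assembles a regulator and a Kalman estimator for a generic lightly damped two-state mode by solving the two algebraic Riccati equations with SciPy. It is not the BACT wing model or the control law designed in the paper, and all weights and noise intensities are assumed.

    ```python
    # Toy LQG design for a generic lightly damped second-order mode, to illustrate
    # the Riccati machinery behind an observer-based LQG control law.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    wn, zeta = 2 * np.pi * 4.0, 0.01                    # 4 Hz mode, 1% damping
    A = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
    B = np.array([[0.0], [1.0]])                        # control surface effectiveness (assumed)
    C = np.array([[1.0, 0.0]])                          # measure displacement only

    Q, R = np.diag([1.0, 0.1]), np.array([[1e-3]])      # LQR weights (assumed)
    W, V = np.eye(2) * 1e-2, np.array([[1e-4]])         # process / sensor noise (assumed)

    P = solve_continuous_are(A, B, Q, R)                # regulator Riccati equation
    K = np.linalg.solve(R, B.T @ P)                     # state-feedback gain
    S = solve_continuous_are(A.T, C.T, W, V)            # estimator Riccati equation
    L = S @ C.T @ np.linalg.inv(V)                      # Kalman filter gain

    Acl = np.block([[A, -B @ K], [L @ C, A - B @ K - L @ C]])  # plant + observer
    print("closed-loop eigenvalue real parts:", np.round(np.linalg.eigvals(Acl).real, 2))
    ```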

  10. Transonic Flutter Suppression Control Law Design, Analysis and Wind-Tunnel Results

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1999-01-01

    The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on the steady state differential game theory is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.

  11. Transonic Flutter Suppression Control Law Design Using Classical and Optimal Techniques with Wind-Tunnel Results

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1999-01-01

    The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on the steady state differential game theory is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.

  12. Using scientific evidence to improve hospital library services: Southern Chapter/Medical Library Association journal usage study.

    PubMed

    Dee, C R; Rankin, J A; Burns, C A

    1998-07-01

    Journal usage studies, which are useful for budget management and for evaluating collection performance relative to library use, have generally described a single library or subject discipline. The Southern Chapter/Medical Library Association (SC/MLA) study has examined journal usage at the aggregate data level with the long-term goal of developing hospital library benchmarks for journal use. Thirty-six SC/MLA hospital libraries, categorized for the study by size as small, medium, or large, reported current journal title use centrally for a one-year period following standardized data collection procedures. Institutional and aggregate data were analyzed for the average annual frequency of use, average costs per use and non-use, and average percent of non-used titles. Permutation F-type tests were used to measure difference among the three hospital groups. Averages were reported for each data set analysis. Statistical tests indicated no significant differences between the hospital groups, suggesting that benchmarks can be derived applying to all types of hospital libraries. The unanticipated lack of commonality among heavily used titles pointed to a need for uniquely tailored collections. Although the small sample size precluded definitive results, the study's findings constituted a baseline of data that can be compared against future studies.

  13. Using scientific evidence to improve hospital library services: Southern Chapter/Medical Library Association journal usage study.

    PubMed Central

    Dee, C R; Rankin, J A; Burns, C A

    1998-01-01

    BACKGROUND: Journal usage studies, which are useful for budget management and for evaluating collection performance relative to library use, have generally described a single library or subject discipline. The Southern Chapter/Medical Library Association (SC/MLA) study has examined journal usage at the aggregate data level with the long-term goal of developing hospital library benchmarks for journal use. METHODS: Thirty-six SC/MLA hospital libraries, categorized for the study by size as small, medium, or large, reported current journal title use centrally for a one-year period following standardized data collection procedures. Institutional and aggregate data were analyzed for the average annual frequency of use, average costs per use and non-use, and average percent of non-used titles. Permutation F-type tests were used to measure difference among the three hospital groups. RESULTS: Averages were reported for each data set analysis. Statistical tests indicated no significant differences between the hospital groups, suggesting that benchmarks can be derived applying to all types of hospital libraries. The unanticipated lack of commonality among heavily used titles pointed to a need for uniquely tailored collections. CONCLUSION: Although the small sample size precluded definitive results, the study's findings constituted a baseline of data that can be compared against future studies. PMID:9681164

  14. A computationally simple model for determining the time dependent spectral neutron flux in a nuclear reactor core

    NASA Astrophysics Data System (ADS)

    Schneider, E. A.; Deinert, M. R.; Cady, K. B.

    2006-10-01

    The balance of isotopes in a nuclear reactor core is key to understanding the overall performance of a given fuel cycle. This balance is in turn most strongly affected by the time and energy-dependent neutron flux. While many large and involved computer packages exist for determining this spectrum, a simplified approach amenable to rapid computation is missing from the literature. We present such a model, which accepts as inputs the fuel element/moderator geometry and composition, reactor geometry, fuel residence time and target burnup and we compare it to OECD/NEA benchmarks for homogeneous MOX and UOX LWR cores. Collision probability approximations to the neutron transport equation are used to decouple the spatial and energy variables. The lethargy dependent neutron flux, governed by coupled integral equations for the fuel and moderator/coolant regions is treated by multigroup thermalization methods, and the transport of neutrons through space is modeled by fuel to moderator transport and escape probabilities. Reactivity control is achieved through use of a burnable poison or adjustable control medium. The model calculates the buildup of 24 actinides, as well as fission products, along with the lethargy dependent neutron flux and the results of several simulations are compared with benchmarked standards.
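
    As a drastically simplified illustration of the kind of spectral balance such a model performs (the real model uses collision probabilities, multigroup thermalization, and 24 actinides), a two-group infinite-medium estimate of the thermal-to-fast flux ratio and k-infinity follows; the group constants are representative LWR-like values chosen for illustration, not benchmark data.

    ```python
    # Toy two-group, infinite-medium spectral balance; group constants are
    # representative LWR-like numbers chosen for illustration, not benchmark data.
    nu_sig_f = [0.008, 0.135]     # nu * Sigma_f, fast and thermal (1/cm)
    sig_a    = [0.010, 0.080]     # absorption (1/cm)
    sig_s12  = 0.020              # fast-to-thermal downscattering (1/cm)

    sig_r1 = sig_a[0] + sig_s12                       # fast-group removal
    phi_ratio = sig_s12 / sig_a[1]                    # phi_thermal / phi_fast
    k_inf = (nu_sig_f[0] + nu_sig_f[1] * phi_ratio) / sig_r1
    print(f"thermal/fast flux ratio = {phi_ratio:.2f}, k_inf = {k_inf:.3f}")
    ```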

  15. Workplace road safety risk management: An investigation into Australian practices.

    PubMed

    Warmerdam, Amanda; Newnam, Sharon; Sheppard, Dianne; Griffin, Mark; Stevenson, Mark

    2017-01-01

    In Australia, more than 30% of the traffic volume can be attributed to work-related vehicles. Although work-related driver safety has been given increasing attention in the scientific literature, it is uncertain how well this knowledge has been translated into practice in industry. It is also unclear how current practice in industry can inform scientific knowledge. The aim of the research was to use a benchmarking tool developed by the National Road Safety Partnership Program to assess industry maturity in relation to risk management practices. A total of 83 managers from a range of small, medium and large organisations were recruited through the Victorian Work Authority. Semi-structured interviews aimed at eliciting information on current organisational practices, as well as policy and procedures around work-related driving were conducted and the data mapped onto the benchmarking tool. Overall, the results demonstrated varying levels of maturity of risk management practices across organisations, highlighting the need to build accountability within organisations, improve communication practices, improve journey management, reduce vehicle-related risk, improve driver competency through an effective workplace road safety management program and review organisational incident and infringement management. The findings of the study have important implications for industry and highlight the need to review current risk management practices. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Establishment of a primary hepatocyte culture from the small Indian mongoose (Herpestes auropunctatus) and distribution of mercury in liver tissue.

    PubMed

    Horai, Sawako; Yanagi, Kumiko; Kaname, Tadashi; Yamamoto, Masatatsu; Watanabe, Izumi; Ogura, Go; Abe, Shintaro; Tanabe, Shinsuke; Furukawa, Tatsuhiko

    2014-11-01

    The present study established a primary hepatocyte culture for the small Indian mongoose (Herpestes auropunctatus). To determine the suitable medium for growing the primary hepatic cells of this species, we compared the condition of cells cultured in three media that are frequently used for mammalian cell culture: Dulbecco's Modified Eagle's Medium, RPMI-1640, and William's E. Of these, William's E medium was best suited for culturing the hepatic cells of this species. Using periodic acid-Schiff staining and ultrastructural observations, we demonstrated the cells collected from mongoose livers were hepatocytes. To evaluate the distribution of mercury (Hg) in the liver tissue, we carried out autometallography staining. Most of the Hg compounds were found in the central region of hepatic lobules. Smooth endoplasmic reticulum, which plays a role in xenobiotic metabolism, lipid/cholesterol metabolism, and the digestion and detoxification of lipophilic substances, is well developed in this area. This suggested that Hg colocalized with smooth endoplasmic reticulum. The results of the present study could be useful to identify the detoxification systems of wildlife with high Hg content in the body, and to evaluate the susceptibility of wildlife to Hg toxicity.

  17. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  18. Computational Nuclear Physics and Post Hartree-Fock Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lietz, Justin; Sam, Novario; Hjorth-Jensen, M.

    We present a computational approach to infinite nuclear matter employing Hartree-Fock theory, many-body perturbation theory and coupled cluster theory. These lectures are closely linked with those of chapters 9, 10 and 11 and serve as input for the correlation functions employed in Monte Carlo calculations in chapter 9, the in-medium similarity renormalization group theory of dense fermionic systems of chapter 10 and the Green's function approach in chapter 11. We provide extensive code examples and benchmark calculations, allowing thereby an eventual reader to start writing her/his own codes. We start with an object-oriented serial code and end with discussions on strategies for porting the code to present and planned high-performance computing facilities.

  19. An Integrated Extravehicular Activity Research Plan

    NASA Technical Reports Server (NTRS)

    Abercromby, Andrew F. J.; Ross, Amy J.; Cupples, J. Scott

    2016-01-01

    Multiple organizations within NASA and outside of NASA fund and participate in research related to extravehicular activity (EVA). In October 2015, representatives of the EVA Office, the Crew and Thermal Systems Division (CTSD), and the Human Research Program (HRP) at NASA Johnson Space Center agreed on a formal framework to improve multi-year coordination and collaboration in EVA research. At the core of the framework is an Integrated EVA Research Plan and a process by which it will be annually reviewed and updated. The over-arching objective of the collaborative framework is to conduct multi-disciplinary cost-effective research that will enable humans to perform EVAs safely, effectively, comfortably, and efficiently, as needed to enable and enhance human space exploration missions. Research activities must be defined, prioritized, planned and executed to comprehensively address the right questions, avoid duplication, leverage other complementary activities where possible, and ultimately provide actionable evidence-based results in time to inform subsequent tests, developments and/or research activities. Representation of all appropriate stakeholders in the definition, prioritization, planning and execution of research activities is essential to accomplishing the over-arching objective. A formal review of the Integrated EVA Research Plan will be conducted annually. External peer review of all HRP EVA research activities including compilation and review of published literature in the EVA Evidence Book is already performed annually. Coordination with stakeholders outside of the EVA Office, CTSD, and HRP is already in effect on a study-by-study basis; closer coordination on multi-year planning with other EVA stakeholders including academia is being actively pursued. Details of the current Integrated EVA Research Plan are presented including description of ongoing and planned research activities in the areas of: Benchmarking; Anthropometry and Suit Fit; Sensors; Human-Suit Modeling; Suit Trauma Monitoring and Countermeasures; EVA Workload and Duration Effects; Decompression Sickness Risk Mitigation; Deconditioned EVA Performance; and Exploration EVA Concept of Operations.

  20. OPM: The Open Porous Media Initiative

    NASA Astrophysics Data System (ADS)

    Flemisch, B.; Flornes, K. M.; Lie, K.; Rasmussen, A.

    2011-12-01

    The principal objective of the Open Porous Media (OPM) initiative is to develop a simulation suite that is capable of modeling industrially and scientifically relevant flow and transport processes in porous media and bridge the gap between the different application areas of porous media modeling, including reservoir mechanics, CO2 sequestration, biological systems, and product development of engineered media. The OPM initiative will provide a long-lasting, efficient, and well-maintained open-source software for flow and transport in porous media built on modern software principles. The suite is released under the GNU General Public License (GPL). Our motivation is to provide a means to unite industry and public research on simulation of flow and transport in porous media. For academic users, we seek to provide a software infrastructure that facilitates testing of new ideas on models with industry-standard complexity, while at the same time giving the researcher control over discretization and solvers. Similarly, we aim to accelerate the technology transfer from academic institutions to professional companies by making new research results available as free software of professional standard. The OPM initiative is currently supported by six research groups in Norway and Germany and funded by existing grants from public research agencies as well as from Statoil Petroleum and Total E&P Norge. However, a full-scale development of the OPM initiative requires substantially more funding and involvement of more research groups and potential end users. In this talk, we will provide an overview of the current activities in the OPM initiative. Special emphasis will be given to the demonstration of the synergies achieved by combining the strengths of individual open-source software components. In particular, a new fully implicit solver developed within the DUNE-based simulator DuMux could be enhanced by the ability to read industry-standard Eclipse input files and to run on grids given in corner-point format. Examples taken from the SPE comparative solution projects and CO2 sequestration benchmarks illustrate the current capabilities of the simulation suite.

  1. Multi-beam effects on backscatter and its saturation in experiments with conditions relevant to ignition

    DOE PAGES

    Kirkwood, R. K.; Michel, P.; London, R.; ...

    2011-05-26

    To optimize the coupling to indirect drive targets in the National Ignition Campaign (NIC) at the National Ignition Facility, a model of stimulated scattering produced by multiple laser beams is used. The model has shown that scatter of the 351 nm beams can be significantly enhanced over single beam predictions in ignition relevant targets by the interaction of the multiple crossing beams with a millimeter scale length, 2.5 keV, 0.02 - 0.05 x critical density, plasma. The model uses a suite of simulation capabilities and its key aspects are benchmarked with experiments at smaller laser facilities. The model has also influenced the design of the initial targets used for NIC by showing that both the stimulated Brillouin scattering (SBS) and stimulated Raman scattering (SRS) can be reduced by the reduction of the plasma density in the beam intersection volume that is caused by an increase in the diameter of the laser entrance hole (LEH). In this model, a linear wave response leads to a small gain exponent produced by each crossing quad of beams (<~1 per quad) which amplifies the scattering that originates in the target interior where the individual beams are separated and crosses many or all other beams near the LEH as it exits the target. As a result, all 23 crossing quads of beams produce a total gain exponent of several or greater for seeds of light with wavelengths in the range that is expected for scattering from the interior (480 to 580 nm for SRS). This means that in the absence of wave saturation, the overall multi-beam scatter will be significantly larger than the expectations for single beams. The potential for non-linear saturation of the Langmuir waves amplifying SRS light is also analyzed with a two dimensional, vectorized, particle in cell code (2D VPIC) that is benchmarked by amplification experiments in a plasma with normalized parameters similar to ignition targets. The physics of cumulative scattering by multiple crossing beams that simultaneously amplify the same SBS light wave is further demonstrated in experiments that benchmark the linear models for the ion waves amplifying SBS. Here, the expectation from this model and its experimental benchmarks is shown to be consistent with observations of stimulated Raman scatter in the first series of energetic experiments with ignition targets, confirming the importance of the multi-beam scattering model for optimizing coupling.
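    The cumulative-gain arithmetic described above is simple enough to sketch: in the linear regime the gain exponents contributed by independently crossing quads add, and the seed is amplified by the exponential of the total. The per-quad value below is an assumed placeholder within the quoted "<~1 per quad" range, not a measured number.

```python
import math

def multibeam_amplification(gain_per_quad, n_quads=23):
    """Linear-regime estimate: per-quad gain exponents add, and the seed
    intensity is amplified by exp(total gain)."""
    total_gain = gain_per_quad * n_quads
    return total_gain, math.exp(total_gain)

# Assumed per-quad gain exponent of 0.2 (within the "<~1 per quad" range):
g_total, amplification = multibeam_amplification(0.2)
print(f"total gain exponent ~ {g_total:.1f}, amplification ~ {amplification:.0f}x")
# A total exponent of ~4.6 ("several") already far exceeds any single-quad expectation.
```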

  2. Data and Analytics to Inform Energy Retrofit of High Performance Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Yang, Le; Hill, David

    Buildings consume more than one-third of the world's primary energy. Reducing energy use in buildings with energy efficient technologies is feasible and also driven by energy policies such as energy benchmarking, disclosure, rating, and labeling in both the developed and developing countries. Current energy retrofits focus on the existing building stocks, especially older buildings, but the growing number of new high performance buildings built around the world raises the question of how these buildings perform and whether there are retrofit opportunities to further reduce their energy use. This is a new and unique problem for the building industry. Traditional energy audit or analysis methods are inadequate to look deep into the energy use of the high performance buildings. This study aims to tackle this problem with a new holistic approach powered by building performance data and analytics. First, three types of measured data are introduced, including the time series energy use, building systems operating conditions, and indoor and outdoor environmental parameters. An energy data model based on the ISO Standard 12655 is used to represent the energy use in buildings in a three-level hierarchy. Secondly, a suite of analytics was proposed to analyze energy use and to identify retrofit measures for high performance buildings. The data-driven analytics are based on monitored data at short time intervals, and cover three levels of analysis: energy profiling, benchmarking and diagnostics. Thirdly, the analytics were applied to a high performance building in California to analyze its energy use and identify retrofit opportunities, including: (1) analyzing patterns of major energy end-use categories at various time scales, (2) benchmarking the whole building total energy use as well as major end-uses against its peers, (3) benchmarking the power usage effectiveness for the data center, which is the largest electricity consumer in this building, and (4) diagnosing HVAC equipment using detailed time-series operating data. Finally, a few energy efficiency measures were identified for retrofit, and their energy savings were estimated to be 20% of the whole-building electricity consumption. Based on the analyses, the building manager took a few steps to improve the operation of fans, chillers, and data centers, which will lead to actual energy savings. This study demonstrated that there are energy retrofit opportunities for high performance buildings and that detailed measured building performance data and analytics can help identify and estimate energy savings and inform decision making during the retrofit process. Challenges of data collection and analytics were also discussed to shape best practice of retrofitting high performance buildings.
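    One of the benchmarks mentioned above, power usage effectiveness (PUE) for the data center, reduces to a simple ratio; a minimal sketch with made-up monthly readings (not data from the studied building) follows:

```python
def power_usage_effectiveness(total_facility_kwh, it_equipment_kwh):
    """PUE = total facility energy / IT equipment energy (1.0 is the ideal)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly readings in kWh: (total facility, IT equipment)
monthly = [(52_000, 34_000), (49_500, 33_200), (55_300, 36_100)]
for total, it in monthly:
    print(f"PUE = {power_usage_effectiveness(total, it):.2f}")
```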

  3. Benchmarking specialty hospitals, a scoping review on theory and practice.

    PubMed

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category; or those dealing with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or if quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design, and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed including a follow-up to check whether the benchmark study has led to improvements.

  4. How good is the turbid medium-based approach for accounting for light partitioning in contrasted grass-legume intercropping systems?

    PubMed

    Barillot, Romain; Louarn, Gaëtan; Escobar-Gutiérrez, Abraham J; Huynh, Pierre; Combes, Didier

    2011-10-01

    Most studies dealing with light partitioning in intercropping systems have used statistical models based on the turbid medium approach, thus assuming homogeneous canopies. However, these models could not be directly validated although spatial heterogeneities could arise in such canopies. The aim of the present study was to assess the ability of the turbid medium approach to accurately estimate light partitioning within grass-legume mixed canopies. Three contrasted mixtures of wheat-pea, tall fescue-alfalfa and tall fescue-clover were sown according to various patterns and densities. Three-dimensional plant mock-ups were derived from magnetic digitizations carried out at different stages of development. The benchmarks for light interception efficiency (LIE) estimates were provided by the combination of a light projective model and plant mock-ups, which also provided the inputs of a turbid medium model (SIRASCA), i.e. leaf area index and inclination. SIRASCA was set to gradually account for vertical heterogeneity of the foliage, i.e. the canopy was described as one, two or ten horizontal layers of leaves. Mixtures exhibited various and heterogeneous profiles of foliar distribution, leaf inclination and component species height. Nevertheless, most of the LIE was satisfactorily predicted by SIRASCA. Biased estimations were, however, observed for (1) grass species and (2) tall fescue-alfalfa mixtures grown at high density. Most of the discrepancies were due to vertical heterogeneities and were corrected by increasing the vertical description of canopies although, in practice, this would require time-consuming measurements. The turbid medium analogy could be successfully used in a wide range of canopies. However, a more detailed description of the canopy is required for mixtures exhibiting vertical stratifications and inter-/intra-species foliage overlapping. Architectural models remain a relevant tool for studying light partitioning in intercropping systems that exhibit strong vertical heterogeneities. Moreover, these models offer the possibility to integrate the effects of microclimate variations on plant growth.
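    The turbid-medium analogy referred to above is typically implemented as Beer-Lambert attenuation through the leaf area; a minimal single-layer sketch for a two-species mixture is given below, in which the canopy is treated as homogeneous and each species intercepts light in proportion to its k_i x LAI_i. The extinction coefficients and leaf area indices are illustrative values, not the measured parameters of the study.

```python
import math

def light_partitioning(k_lai):
    """Single-layer turbid-medium sketch.
    k_lai: dict mapping species -> (extinction coefficient k, leaf area index LAI).
    Returns the canopy light interception efficiency and per-species shares,
    assuming a homogeneous mixed canopy (no vertical stratification)."""
    weights = {sp: k * lai for sp, (k, lai) in k_lai.items()}
    total_k_lai = sum(weights.values())
    intercepted = 1.0 - math.exp(-total_k_lai)          # Beer-Lambert law
    shares = {sp: intercepted * w / total_k_lai for sp, w in weights.items()}
    return intercepted, shares

# Illustrative parameters (not the wheat-pea values from the experiment):
lie, shares = light_partitioning({"wheat": (0.45, 2.0), "pea": (0.70, 1.2)})
print(f"canopy LIE = {lie:.2f}", {sp: round(s, 2) for sp, s in shares.items()})
```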

  5. "Prostatic acid phosphatase?" A comparison of acid phosphatase activities in epithelial cells, granulocytes, monocytes, lymphocytes, and platelets purified by velocity sedimentation in isokinetic gradients of Ficoll in tissue culture medium.

    PubMed Central

    Helms, S. R.; Brattain, M. G.; Pretlow, T. G.; Kreisberg, J. I.

    1977-01-01

    Numerous investigators have found several substrates and inhibitors to be particularly suited for the demonstration of acid phosphatase of prostatic origin. There has been much controversy over the specificity or lack of specificity of several substrates and inhibitors. We have investigated acid phosphatase activities obtained from several kinds of purified cells. None of the substrates or inhibitors which we studied permitted us to discriminate "prostatic" acid phosphatase from acid phosphatase activities obtained from other kinds of cells. PMID:560800

  6. Refraction corrections for surveying

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

    Optical measurements of range and elevation angle are distorted by the earth's atmosphere. High precision refraction correction equations are presented which are ideally suited for surveying because their inputs are optically measured range and optically measured elevation angle. The outputs are true straight line range and true geometric elevation angle. The 'short distances' used in surveying allow the calculations of true range and true elevation angle to be quickly made using a programmable pocket calculator. Topics covered include the spherical form of Snell's Law; ray path equations; and integrating the equations. Short-, medium-, and long-range refraction corrections are presented in tables.
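    A key relation behind such corrections is the spherical form of Snell's law (Bouguer's formula), n(r)·r·sin z = constant along the ray. The sketch below propagates the local zenith angle (90° minus elevation) through a spherically stratified atmosphere; the exponential refractivity profile is an assumption made purely for illustration.

```python
import math

def local_zenith_angle(n0, r0, z0_deg, n, r):
    """Spherical Snell's law (Bouguer's formula): n*r*sin(z) is constant along
    the ray, so the local zenith angle at radius r with refractive index n
    follows from the station values n0, r0, z0."""
    invariant = n0 * r0 * math.sin(math.radians(z0_deg))
    return math.degrees(math.asin(invariant / (n * r)))

# Illustrative numbers: sea-level index 1.000293, station radius 6371 km,
# apparent zenith angle 60 deg; the index at 1 km altitude comes from an
# assumed exponential refractivity profile with an 8 km scale height.
n0, r0 = 1.000293, 6371.0
n1 = 1.0 + (n0 - 1.0) * math.exp(-1.0 / 8.0)
print(f"zenith angle at 1 km: {local_zenith_angle(n0, r0, 60.0, n1, r0 + 1.0):.4f} deg")
```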

  7. Origin and Evolution of Prebiotic Organic Matter as Inferred from the Tagish Lake Meteorite

    NASA Technical Reports Server (NTRS)

    Herd, Christopher D.; Blinova, Alexandra; Simkus, Danielle N.; Huang, Yongsong; Tarozo, Rafael; Alexander, Conel M.; Gyngard, Frank; Nittler, Larry R.; Cody, George D.; Fogel, Marilyn L.

    2011-01-01

    The complex suite of organic materials in carbonaceous chondrite meteorites probably originally formed in the interstellar medium and/or the solar protoplanetary disk, but was subsequently modified in the meteorites' asteroidal parent bodies. The mechanisms of formation and modification are still very poorly understood. We carried out a systematic study of variations in the mineralogy, petrology, and soluble and insoluble organic matter in distinct fragments of the Tagish Lake meteorite. The variations correlate with indicators of parent body aqueous alteration and at least some molecules of pre-biotic importance formed during the alteration.

  8. All inclusive benchmarking.

    PubMed

    Ellis, Judith

    2006-07-01

    The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement, and to promote its acceptance as an integral and effective part of benchmarking activity in health services. The Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being effectively used by some frontline staff. However, use is inconsistent, with the value of the tool kit, or the support clinical practice benchmarking requires to be effective, not always recognized or provided by National Health Service managers, who are absorbed with the use of quantitative benchmarking approaches and measurability of comparative performance data. This review of published benchmarking literature was obtained through an ever-narrowing search strategy, commencing from benchmarking within the quality improvement literature through to benchmarking activity in health services, and included not only published examples of benchmarking approaches and models but also web-based benchmarking data. This supported identification of how benchmarking approaches have developed and been used, remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative and specifically performance benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and then applied to the health service (Bullivant 1998). The literature is also in the main descriptive in its support of the effectiveness of benchmarking activity, and although this does not seem to have restricted its popularity in quantitative activity, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks mean that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach which needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.

  9. The international surface temperature initiative

    NASA Astrophysics Data System (ADS)

    Thorne, P. W.; Lawrimore, J. H.; Willett, K. M.; Allan, R.; Chandler, R. E.; Mhanda, A.; de Podesta, M.; Possolo, A.; Revadekar, J.; Rusticucci, M.; Stott, P. A.; Strouse, G. F.; Trewin, B.; Wang, X. L.; Yatagai, A.; Merchant, C.; Merlone, A.; Peterson, T. C.; Scott, E. M.

    2013-09-01

    The aim of the International Surface Temperature Initiative is to create an end-to-end process for analysis of air temperature data taken over the land surface of the Earth. The foundation of any analysis is the source data. Land surface air temperature records have traditionally been stored in local, organizational, national and international holdings, some of which have been available digitally but many of which are available solely on paper or as imaged files. Further, economic and geopolitical realities have often precluded open sharing of these data. The necessary first step therefore is to collate readily available holdings and augment these over time either through gaining access to previously unavailable digital data or through data rescue and digitization activities. Next, it must be recognized that these historical measurements were made primarily in support of real-time weather applications where timeliness and coverage are key. At almost every long-term station it is virtually certain that changes in instrumentation, siting or observing practices have occurred. Because none of the historical measures were made in a metrologically traceable manner there is no unambiguous way to retrieve the true climate evolution from the heterogeneous raw data holdings. Therefore it is desirable for multiple independent groups to produce adjusted data sets (so-called homogenized data) to adequately understand the data characteristics and estimate uncertainties. Then it is necessary to benchmark the performance of the contributed algorithms (equivalent to metrological software validation) through development of realistic benchmark datasets. In support of this, a series of successive benchmarking and assessment cycles are envisaged, allowing continual improvement while avoiding over-tuning of algorithms. Finally, a portal is proposed giving access to related data-products, utilizing the assessment results to provide guidance to end-users on which product is the most suited to their needs. Recognizing that the expertise of the metrological community has been under-utilized historically in such climate data analysis problems, the governance of the Initiative includes significant representation from the metrological community. We actively welcome contributions from interested parties to any relevant aspects of the Initiative work.

  10. Alcohol calibration of tests measuring skills related to car driving.

    PubMed

    Jongen, Stefan; Vuurman, Eric; Ramaekers, Jan; Vermeeren, Annemiek

    2014-06-01

    Medication and illicit drugs can have detrimental side effects which impair driving performance. A drug's impairing potential should be determined by well-validated, reliable, and sensitive tests and ideally be calibrated by benchmark drugs and doses. To date, no consensus has been reached on the issue of which psychometric tests are best suited for initial screening of a drug's driving impairment potential. The aim of this alcohol calibration study is to determine which performance tests are useful to measure drug-induced impairment. The effects of alcohol are used to compare the psychometric quality between tests and as a benchmark to quantify performance changes in each test associated with potentially impairing drug effects. Twenty-four healthy volunteers participated in a double-blind, four-way crossover study. Treatments were placebo and three different doses of alcohol leading to blood alcohol concentrations (BACs) of 0.2, 0.5, and 0.8 g/L. Main effects of alcohol were found in most tests. Compared with placebo, performance in the Divided Attention Test (DAT) was significantly impaired after all alcohol doses and performance in the Psychomotor Vigilance Test (PVT) and the Balance Test was impaired with a BAC of 0.5 and 0.8 g/L. The largest effect sizes were found on postural balance with eyes open and mean reaction time in the divided attention and the psychomotor vigilance test. The preferable tests for initial screening are the DAT and the PVT, as these tests were most sensitive to the impairing effects of alcohol and showed considerable validity in assessing potential driving impairment.

  11. SpaceCubeX: A Framework for Evaluating Hybrid Multi-Core CPU FPGA DSP Architectures

    NASA Technical Reports Server (NTRS)

    Schmidt, Andrew G.; Weisz, Gabriel; French, Matthew; Flatley, Thomas; Villalpando, Carlos Y.

    2017-01-01

    The SpaceCubeX project is motivated by the need for high performance, modular, and scalable on-board processing to help scientists answer critical 21st century questions about global climate change, air quality, ocean health, and ecosystem dynamics, while adding new capabilities such as low-latency data products for extreme event warnings. These goals translate into on-board processing throughput requirements that are on the order of 100-1,000 times greater than those of previous Earth Science missions for standard processing, compression, storage, and downlink operations. To study possible future architectures to achieve these performance requirements, the SpaceCubeX project provides an evolvable testbed and framework that enables a focused design space exploration of candidate hybrid CPU/FPGA/DSP processing architectures. The framework includes ArchGen, an architecture generator tool populated with candidate architecture components, performance models, and IP cores, that allows an end user to specify the type, number, and connectivity of a hybrid architecture. The framework requires minimal extensions to integrate new processors, such as the anticipated High Performance Spaceflight Computer (HPSC), reducing time to initiate benchmarking by months. To evaluate the framework, we leverage a wide suite of high performance embedded computing benchmarks and Earth science scenarios to ensure robust architecture characterization. We report on our project's Year 1 efforts and demonstrate the capabilities across four simulation testbed models: a baseline SpaceCube 2.0 system, a dual ARM A9 processor system, a hybrid quad ARM A53 and FPGA system, and a hybrid quad ARM A53 and DSP system.

  12. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking, and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts a foundation and infrastructure has been built which allows short duration benchmarking studies yielding results gleaned from world class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  13. CASBaH: the Multiphase Circumgalactic Medium During the Decline of Cosmic Star Formation

    NASA Astrophysics Data System (ADS)

    Burchett, Joseph N.; Tripp, Todd; Prochaska, Jason; Werk, Jessica; Willmer, Christopher; Ford, Amanda Brady; Howk, Chris

    2018-01-01

    The COS Absorption Survey of Baryon Harbors (CASBaH) comprises high-S/N spectra of nine z > 0.9 QSOs with coverage from the far-ultraviolet to the optical. These sightlines access the rich suite of rest-frame extreme-ultraviolet (600 - 1000 Angstroms) spectral transitions, such as Ne VIII, Mg X, and O II/III/IV, in addition to those more well studied at longer wavelengths (O VI, C III, Mg II). We have undertaken a large ground-based spectroscopic follow-up campaign to identify galaxies projected near the QSO sightlines and leverage the myriad diagnostics within the QSO spectra to study the circumgalactic medium (CGM) at 0.2 < z < 1 over the crucial epoch when star formation activity in the Universe was in sharp decline. We will present results from this multiwavelength study characterizing the CGM across multiple ionization states, focusing on the O VI and Ne VIII-probed warm-hot (10^5-10^6 K) gas within the halos of our galaxy sample.

  14. Overcoming the Range Limitation of Medium-Duty Battery Electric Vehicles through the use of Hydrogen Fuel-Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, E.; Wang, L.; Gonder, J.

    2013-10-01

    Battery electric vehicles possess great potential for decreasing lifecycle costs in medium-duty applications, a market segment currently dominated by internal combustion technology. Characterized by frequent repetition of similar routes and daily return to a central depot, medium-duty vocations are well positioned to leverage the low operating costs of battery electric vehicles. Unfortunately, the range limitation of commercially available battery electric vehicles acts as a barrier to widespread adoption. This paper describes the National Renewable Energy Laboratory's collaboration with the U.S. Department of Energy and industry partners to analyze the use of small hydrogen fuel-cell stacks to extend the range of battery electric vehicles as a means of improving utility, and presumably, increasing market adoption. This analysis employs real-world vocational data and near-term economic assumptions to (1) identify optimal component configurations for minimizing lifecycle costs, (2) benchmark economic performance relative to both battery electric and conventional powertrains, and (3) understand how the optimal design and its competitiveness change with respect to duty cycle and economic climate. It is found that small fuel-cell power units provide extended range at significantly lower capital and lifecycle costs than additional battery capacity alone. And while fuel-cell range-extended vehicles are not deemed economically competitive with conventional vehicles given present-day economic conditions, this paper identifies potential future scenarios where cost equivalency is achieved.

  15. Fully Flexible Docking of Medium Sized Ligand Libraries with RosettaLigand

    PubMed Central

    DeLuca, Samuel; Khar, Karen; Meiler, Jens

    2015-01-01

    RosettaLigand has been successfully used to predict binding poses in protein-small molecule complexes. However, the RosettaLigand docking protocol is comparatively slow in identifying an initial starting pose for the small molecule (ligand) making it unfeasible for use in virtual High Throughput Screening (vHTS). To overcome this limitation, we developed a new sampling approach for placing the ligand in the protein binding site during the initial ‘low-resolution’ docking step. It combines the translational and rotational adjustments to the ligand pose in a single transformation step. The new algorithm is both more accurate and more time-efficient. The docking success rate is improved by 10–15% in a benchmark set of 43 protein/ligand complexes, reducing the number of models that typically need to be generated from 1000 to 150. The average time to generate a model is reduced from 50 seconds to 10 seconds. As a result we observe an effective 30-fold speed increase, making RosettaLigand appropriate for docking medium sized ligand libraries. We demonstrate that this improved initial placement of the ligand is critical for successful prediction of an accurate binding position in the ‘high-resolution’ full atom refinement step. PMID:26207742
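    The combined translation-rotation step described above can be illustrated by perturbing a ligand's coordinates with a single rigid-body transform: a uniform random rotation about the ligand centroid plus a Gaussian translation. This is a sketch of the idea only; the step sizes are assumptions, not RosettaLigand's actual parameters.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rigid_body_perturbation(coords, trans_sigma=0.5, seed=None):
    """Apply one combined rigid-body move to ligand coordinates (N x 3 array):
    a uniform random rotation about the ligand centroid plus a Gaussian
    translation, performed as a single transformation step."""
    rng = np.random.default_rng(seed)
    centroid = coords.mean(axis=0)
    quat = rng.normal(size=4)
    quat /= np.linalg.norm(quat)          # normalized 4D Gaussian -> uniform rotation
    rotation = Rotation.from_quat(quat)
    translation = rng.normal(scale=trans_sigma, size=3)   # Angstroms (assumed step size)
    return rotation.apply(coords - centroid) + centroid + translation

# Toy four-atom "ligand" (coordinates in Angstroms):
ligand = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0],
                   [1.5, 1.5, 0.0], [0.0, 1.5, 0.0]])
print(rigid_body_perturbation(ligand, seed=42))
```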

  16. Quantitative evaluation of waste prevention on the level of small and medium sized enterprises (SMEs).

    PubMed

    Laner, David; Rechberger, Helmut

    2009-02-01

    Waste prevention is a principal means of achieving the goals of waste management and a key element for developing sustainable economies. Small and medium sized enterprises (SMEs) contribute substantially to environmental degradation, often not even being aware of their environmental effects. Therefore, several initiatives have been launched in Austria aimed at supporting waste prevention measures on the level of SMEs. To promote the most efficient projects, they have to be evaluated with respect to their contribution to the goals of waste management. It is the aim of this paper to develop a methodology for evaluating waste prevention measures in SMEs based on their goal orientation. At first, conceptual problems of defining and delineating waste prevention activities are briefly discussed. Then an approach to evaluate waste prevention activities with respect to their environmental performance is presented and benchmarks which allow for an efficient use of the available funds are developed. Finally, the evaluation method is applied to a number of former projects and the calculated results are analysed with respect to shortcomings and limitations of the model. It is found that the developed methodology can provide a tool for a more objective and comprehensible evaluation of waste prevention measures.

  17. Diffuse gas properties and stellar metallicities in cosmological simulations of disc galaxy formation

    NASA Astrophysics Data System (ADS)

    Marinacci, Federico; Pakmor, Rüdiger; Springel, Volker; Simpson, Christine M.

    2014-08-01

    We analyse the properties of the circumgalactic medium and the metal content of the stars comprising the central galaxy in eight hydrodynamical `zoom-in' simulations of disc galaxy formation. We use these properties as a benchmark for our model of galaxy formation physics implemented in the moving-mesh code AREPO, which succeeds in forming quite realistic late-type spirals in the set of `Aquarius' initial conditions of Milky-Way-sized haloes. Galactic winds significantly influence the morphology of the circumgalactic medium and induce bipolar features in the distribution of heavy elements. They also affect the thermodynamic properties of the circumgalactic gas by supplying an energy input that sustains its radiative losses. Although a significant fraction of the heavy elements are transferred from the central galaxy to the halo, and even beyond the virial radius, enough metals are retained by stars to yield a peak in their metallicity distributions at about Z⊙. All our default runs overestimate the stellar [O/Fe] ratio, an effect that we demonstrate can be rectified by an increase of the adopted Type Ia supernova rate. Nevertheless, the models have difficulty in producing stellar metallicity gradients of the same strength as observed in the Milky Way.

  18. Optimization of contrast resolution by genetic algorithm in ultrasound tissue harmonic imaging.

    PubMed

    Ménigot, Sébastien; Girault, Jean-Marc

    2016-09-01

    The development of ultrasound imaging techniques such as pulse inversion has improved tissue harmonic imaging. Nevertheless, no recommendation has been made to date for the design of the waveform transmitted through the medium being explored. Our aim was therefore to automatically find the optimal "imaging" wave that maximized the contrast resolution without a priori information. To avoid assumptions regarding the waveform, a genetic algorithm explored the medium through the transmission of stochastic "explorer" waves. Moreover, these stochastic signals could be constrained by the type of generator available (bipolar or arbitrary). To implement it, we modified the current pulse inversion imaging system by including feedback. Thus the method optimized the contrast resolution by adaptively selecting the samples of the excitation. In simulation, we benchmarked the contrast effectiveness of the best transmitted stochastic commands found against that of the usual fixed-frequency command. The optimization method converged quickly, after around 300 iterations, to the same optimal area. These results were confirmed experimentally. In the experimental case, the contrast resolution measured on a radiofrequency line could be improved by 6% with a bipolar generator and by 15% with an arbitrary waveform generator. Copyright © 2016 Elsevier B.V. All rights reserved.
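    A minimal sketch of the closed-loop idea follows: a genetic algorithm evolves the samples of the transmitted excitation so as to maximize a contrast score returned by the imaging system. Here measure_contrast_resolution is a hypothetical stand-in for the pulse-inversion measurement feedback, and the population sizes and rates are arbitrary choices, not those of the paper.

```python
import numpy as np

def measure_contrast_resolution(excitation):
    """Hypothetical placeholder for the pulse-inversion imaging feedback;
    it simply rewards spectral energy in an arbitrary band."""
    spectrum = np.abs(np.fft.rfft(excitation))
    return spectrum[8:16].sum() / (spectrum.sum() + 1e-12)

def genetic_optimizer(n_samples=64, pop_size=30, generations=300,
                      mutation_rate=0.05, bipolar=True, seed=0):
    """Evolve the transmitted waveform samples with no a priori assumption on
    shape. With bipolar=True the search is restricted to +/-1 samples
    (bipolar generator); otherwise samples are arbitrary in [-1, 1]."""
    rng = np.random.default_rng(seed)
    pop = (rng.choice([-1.0, 1.0], (pop_size, n_samples)) if bipolar
           else rng.uniform(-1, 1, (pop_size, n_samples)))
    for _ in range(generations):
        fitness = np.array([measure_contrast_resolution(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[::-1][: pop_size // 2]]  # keep best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_samples)
            child = np.concatenate([a[:cut], b[cut:]])              # one-point crossover
            mask = rng.random(n_samples) < mutation_rate            # mutation
            child[mask] = (rng.choice([-1.0, 1.0], mask.sum()) if bipolar
                           else rng.uniform(-1, 1, mask.sum()))
            children.append(child)
        pop = np.vstack([parents] + children)
    return max(pop, key=measure_contrast_resolution)

best_wave = genetic_optimizer()
print("best contrast score:", round(measure_contrast_resolution(best_wave), 3))
```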

  19. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1996 revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Tsao, C.L.

    1996-06-01

    This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. This report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. Also included are updated benchmark values where appropriate, new benchmark values, the replacement of secondary sources with primary sources, and more complete documentation of the sources and derivation of all values.

  20. Benchmarking in emergency health systems.

    PubMed

    Kennedy, Marcus P; Allen, Jacqueline; Allen, Greg

    2002-12-01

    This paper discusses the role of benchmarking as a component of quality management. It describes the historical background of benchmarking, its competitive origin and the requirement in today's health environment for a more collaborative approach. The classical 'functional and generic' types of benchmarking are discussed with a suggestion to adopt a different terminology that describes the purpose and practicalities of benchmarking. Benchmarking is not without risks. The consequence of inappropriate focus and the need for a balanced overview of process is explored. The competition that is intrinsic to benchmarking is questioned and the negative impact it may have on improvement strategies in poorly performing organizations is recognized. The difficulty in achieving cross-organizational validity in benchmarking is emphasized, as is the need to scrutinize benchmarking measures. The cost effectiveness of benchmarking projects is questioned and the concept of 'best value, best practice' in an environment of fixed resources is examined.

  1. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  2. A Validation of Object-Oriented Design Metrics as Quality Indicators

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio

    1997-01-01

    This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development processes.

  3. Hybrid Communication Architectures for Distributed Smart Grid Applications

    DOE PAGES

    Zhang, Jianhua; Hasandka, Adarsh; Wei, Jin; ...

    2018-04-09

    Wired and wireless communications both play an important role in the blend of communications technologies necessary to enable future smart grid communications. Hybrid networks exploit independent mediums to extend network coverage and improve performance. However, whereas individual technologies have been applied in simulation networks, as far as we know, only limited attention has been paid to the development of a suite of hybrid communication simulation models for the communications system design. Hybrid simulation models are needed to capture the mixed communication technologies and IP address mechanisms in one simulation. To close this gap, we have developed a suite of hybrid communication system simulation models to validate the critical system design criteria for a distributed solar Photovoltaic (PV) communications system, including a single trip latency of 300 ms, throughput of 9.6 Kbps, and packet loss rate of 1%. In conclusion, the results show that three low-power wireless personal area network (LoWPAN)-based hybrid architectures can satisfy three performance metrics that are critical for distributed energy resource communications.
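    The three design criteria quoted above reduce to simple threshold checks; a minimal sketch is given below (the metric names are illustrative, not taken from the simulation models):

```python
# Design criteria quoted above for the distributed PV communications system.
CRITERIA = {
    "latency_ms":      ("<=", 300.0),  # single-trip latency
    "throughput_kbps": (">=", 9.6),
    "packet_loss_pct": ("<=", 1.0),
}

def meets_criteria(measured):
    """Check simulated link metrics (dict with the keys above) against the
    design criteria; returns per-metric results and an overall pass flag."""
    results = {}
    for key, (op, limit) in CRITERIA.items():
        value = measured[key]
        results[key] = value <= limit if op == "<=" else value >= limit
    return results, all(results.values())

# Example: a hypothetical 6LoWPAN-based hybrid link result.
checks, ok = meets_criteria({"latency_ms": 212.0, "throughput_kbps": 11.4,
                             "packet_loss_pct": 0.4})
print(checks, "PASS" if ok else "FAIL")
```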

  4. A Validation of Object-Oriented Design Metrics

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Briand, Lionel; Melo, Walcelio L.

    1995-01-01

    This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber and Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li and Henry, 1993] where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed and suggestions for improvement are provided. Several of Chidamber and Kemerer's OO metrics appear to be adequate to predict class fault-proneness during the early phases of the life-cycle. We also showed that they are, on our data set, better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development processes.

  5. Hybrid Communication Architectures for Distributed Smart Grid Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jianhua; Hasandka, Adarsh; Wei, Jin

    Wired and wireless communications both play an important role in the blend of communications technologies necessary to enable future smart grid communications. Hybrid networks exploit independent mediums to extend network coverage and improve performance. However, whereas individual technologies have been applied in simulation networks, as far as we know, only limited attention has been paid to the development of a suite of hybrid communication simulation models for the communications system design. Hybrid simulation models are needed to capture the mixed communication technologies and IP address mechanisms in one simulation. To close this gap, we have developed a suite of hybrid communication system simulation models to validate the critical system design criteria for a distributed solar Photovoltaic (PV) communications system, including a single trip latency of 300 ms, throughput of 9.6 Kbps, and packet loss rate of 1%. In conclusion, the results show that three low-power wireless personal area network (LoWPAN)-based hybrid architectures can satisfy three performance metrics that are critical for distributed energy resource communications.

  6. Variability in Heat Strain in Fully Encapsulated Impermeable Suits in Different Climates and at Different Work Loads.

    PubMed

    DenHartog, Emiel A; Rubenstein, Candace D; Deaton, A Shawn; Bogerd, Cornelis Peter

    2017-03-01

    A major concern for responders to hazardous materials (HazMat) incidents is the heat strain that is caused by fully encapsulated impermeable (NFPA 1991) suits. In a research project, funded by the US Department of Defense, the thermal strain experienced when wearing these suits was studied. Forty human subjects between the ages of 25 and 50 participated in a protocol approved by the local ethical committee. Six different fully encapsulated impermeable HazMat suits were evaluated in three climates: moderate (24°C, 50% RH, 20°C WBGT), warm-wet (32°C, 60% RH, 30°C WBGT), and hot-dry (45°C, 20% RH, 37°C WBGT, 200 W m⁻² radiant load) and at three walking speeds: 2.5, 4, and 5.5 km h⁻¹. The medium speed, 4 km h⁻¹, was tested in all three climates and the other two walking speeds were only tested in the moderate climate. Prior to the test, a submaximal exercise test in normal clothing was performed to determine a relationship between heart rate and oxygen consumption (pretest). In total, 163 exposures were measured. Tolerance time ranged from as low as 20 min in the hot-dry condition to 60 min (the maximum) in the moderate climate, the latter being especially common at the lowest walking speed. Between the six different suits limited differences were found: a two-layered aluminized suit exhibited significantly shorter tolerance times in the moderate climate, but no other major significant differences were found for the other climates or workloads. An important characteristic of the overall dataset is the large variability between the subjects. Although the average responses appear predictable, the variability in the warmer conditions ranged from 20 min up to 60 min. The workload in these encapsulated impermeable suits was also significantly higher than when working in normal clothing and higher than predicted by the Pandolf equation. Heart rate showed a very strong correlation to body core temperature and was in many cases the limiting factor. Setting the heart rate maximum at 80% of predicted individual maximum (age based) would have prevented 95% of the cases with excessive heat strain. Monitoring of heart rate under operational conditions would further allow working times to be optimized individually and help prevent exertional heat stroke. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
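    The 80%-of-predicted-maximum cut-off mentioned above can be written as a one-line rule. The sketch below uses the common age-based estimate HRmax ≈ 220 − age, which is our assumption; the paper only states that the predicted individual maximum is age based.

```python
def heart_rate_limit(age, fraction=0.8):
    """Work-stoppage heart rate: fraction of the age-based estimate
    HRmax ~ 220 - age (the 220 - age formula is assumed here)."""
    return fraction * (220 - age)

def should_stop(current_hr, age):
    """True if the monitored heart rate has reached the cut-off."""
    return current_hr >= heart_rate_limit(age)

for age in (25, 40, 50):
    print(age, round(heart_rate_limit(age)), should_stop(165, age))
```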

  7. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  8. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  9. Inflow velocities of cold flows streaming into massive galaxies at high redshifts

    NASA Astrophysics Data System (ADS)

    Goerdt, Tobias; Ceverino, Daniel

    2015-07-01

    We study the velocities of the accretion along streams from the cosmic web into massive galaxies at high redshift with the help of three different suites of AMR hydrodynamical cosmological simulations. The results are compared to free-fall velocities and to the sound speeds of the hot ambient medium. The sound speed of the hot ambient medium is calculated using two different methods to determine the medium's temperature. We find that the simulated cold stream velocities are in violent disagreement with the corresponding free-fall profiles. The sound speed is a better albeit not always correct description of the cold flows' velocity. Using these calculations, a first-order approximation for the gas inflow velocities of v_inflow = 0.9 v_vir is given. We conclude from the hydrodynamical simulations as our main result that the velocity profiles for the cold streams are constant with radius. These constant inflow velocities seem to have a `parabola-like' dependency on the host halo mass in units of the virial velocity that peaks at M_vir = 10^12 M⊙, and we also propose that the best-fitting functional form for the dependency of the inflow velocity on the redshift is a square root power-law relation: v_inflow ∝ √(z + 1) v_vir.
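    A small sketch of the two relations quoted above: the first-order estimate v_inflow ≈ 0.9 v_vir, and the proposed v_inflow ∝ √(z + 1) v_vir scaling. The normalization of the square-root form is our assumption, pinned so that it matches 0.9 v_vir at a reference redshift.

```python
import math

def inflow_velocity_first_order(v_vir):
    """First-order estimate quoted above: v_inflow ~ 0.9 * v_vir."""
    return 0.9 * v_vir

def inflow_velocity_redshift(v_vir, z, z_ref=2.0, coeff_ref=0.9):
    """Proposed scaling v_inflow ∝ sqrt(z + 1) * v_vir.  The normalization is
    assumed: it returns coeff_ref * v_vir at the reference redshift z_ref."""
    return coeff_ref * math.sqrt((z + 1.0) / (z_ref + 1.0)) * v_vir

v_vir = 200.0  # km/s, illustrative virial velocity
for z in (1.5, 2.0, 3.0, 4.0):
    print(z, round(inflow_velocity_first_order(v_vir)),
          round(inflow_velocity_redshift(v_vir, z)))
```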

  10. Toxicological Benchmarks for Screening of Potential Contaminants of Concern for Effects on Aquatic Biota on the Oak Ridge Reservation, Oak Ridge, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W., II

    1993-01-01

    One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
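    The screening logic described above (upper- versus lower-benchmark exceedance) amounts to a simple decision rule; a sketch with hypothetical benchmark values follows:

```python
def screen_contaminant(concentration, lower_benchmark, upper_benchmark,
                       data_adequate=True):
    """Screening rule described above:
    - above the upper benchmark: clearly of concern, remedial action likely;
    - between the benchmarks: of concern unless the data are judged unreliable
      or the comparison inappropriate;
    - below the lower benchmark: not of concern if the ambient data are adequate."""
    if concentration > upper_benchmark:
        return "contaminant of concern (upper benchmark exceeded)"
    if concentration > lower_benchmark:
        return ("contaminant of concern" if data_adequate
                else "needs review (data judged unreliable)")
    return "not of concern" if data_adequate else "needs more data"

# Hypothetical concentrations and benchmarks in ug/L (not actual NAWQC or SCVs):
print(screen_contaminant(12.0, lower_benchmark=5.0, upper_benchmark=50.0))
```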

  11. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  12. Asteroid Redirect Crewed Mission Space Suit and EVA System Maturation

    NASA Technical Reports Server (NTRS)

    Bowie, Jonathan T.; Kelly, Cody; Buffington, Jesse; Watson, Richard D.

    2015-01-01

    The Asteroid Redirect Crewed Mission (ARCM) requires a Launch/Entry/Abort (LEA) suit capability and short duration Extra Vehicular Activity (EVA) capability from the Orion spacecraft. For this mission, the pressure garment that was selected, for both functions, is the Modified Advanced Crew Escape Suit (MACES) with EVA enhancements and the life support option that was selected is the Exploration Portable Life Support System (PLSS). The proposed architecture was found to meet the mission constraints, but much more work is required to determine the details of the required suit upgrades, the integration with the PLSS, and the rest of the tools and equipment required to accomplish the mission. This work has continued over the last year to better define the operations and hardware maturation of these systems. EVA simulations have been completed in the NBL and interfacing options have been prototyped and analyzed with testing planned for late 2014. For NBL EVA simulations, in 2013, components were procured to allow in-house build up for four new suits with mobility enhancements built into the arms. Boots outfitted with clips that fit into foot restraints have also been added to the suit and analyzed for possible loads. Major suit objectives accomplished this year in testing include: evaluation of mobility enhancements, ingress/egress of foot restraint, use of foot restraint for worksite stability, ingress/egress of Orion hatch with PLSS mockup, and testing with two crew members in the water at one time to evaluate the crew's ability to help one another. Major tool objectives accomplished this year include using various other methods for worksite stability, testing new methods for asteroid geologic sampling and improving the fidelity of the mockups and crew equipment. These tests were completed on a medium fidelity capsule mockup, asteroid vehicle mockup, and asteroid mockups that were more accurate for an asteroid type EVA than previous tests. Another focus was the design and fabrication of the interface between the MACES and the PLSS. The MACES was not designed to interface with a PLSS, hence an interface kit must accommodate the unique design qualities of the MACES and provide the necessary life support function connections to the PLSS. A prototype interface kit for MACES to PLSS has been designed and fabricated. Unmanned and manned testing of the interface will show the usability of the kit while wearing a MACES. The testing shows viability of the kit approach as well as the operations concept. The design will be vetted through suit and PLSS experts and, with the findings from the testing, the best path forward will be determined. As the Asteroid Redirect Mission matures, the suit/life support portion of the mission will mature along with it and EVA Tools & Equipment can be iterated to accommodate the overall mission objectives and compromises inherent in EVA Suit optimization. The goal of the EVA architecture for ARCM is to continue to build on the previously developed technologies and lessons learned, and accomplish the ARCM EVAs while providing a stepping stone to future missions and destinations.

  13. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    PubMed

    Yazar, Seyhan; Gooden, George E C; Mackey, David A; Hewitt, Alex W

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.
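    The reported percentage gaps reduce to a simple relative-difference calculation. The Python sketch below uses placeholder runtimes, node counts, and hourly rates (not the study's measurements) purely to illustrate how such a wall-clock and cost comparison is computed.

      # Placeholder numbers only; the study's measured runtimes and prices are not reproduced here.
      def percent_excess(a, b):
          """Percent by which a exceeds the reference value b."""
          return 100.0 * (a - b) / b

      emr_hours, gce_hours = 10.0, 6.5   # assumed wall-clock times per assembly (hours)
      emr_rate, gce_rate = 0.52, 0.29    # assumed cost per node-hour (USD)
      nodes = 20                         # assumed cluster size

      emr_cost = emr_hours * emr_rate * nodes
      gce_cost = gce_hours * gce_rate * nodes

      print(f"wall-clock excess of EMR over GCE: {percent_excess(emr_hours, gce_hours):.1f}%")
      print(f"cost excess of EMR over GCE:       {percent_excess(emr_cost, gce_cost):.1f}%")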

  14. Benchmarking Undedicated Cloud Computing Providers for Analysis of Genomic Datasets

    PubMed Central

    Yazar, Seyhan; Gooden, George E. C.; Mackey, David A.; Hewitt, Alex W.

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5–78.2) for E.coli and 53.5% (95% CI: 34.4–72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5–303.1) and 173.9% (95% CI: 134.6–213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE. PMID:25247298

  15. Medical universities educational and research online services: benchmarking universities' website towards e-government.

    PubMed

    Farzandipour, Mehrdad; Meidani, Zahra

    2014-06-01

    Websites, as one of the initial steps towards e-government adoption, facilitate delivery of online, customer-oriented services. In this study we investigated the role of medical university websites in providing educational and research services, following the e-government maturity model, in Iranian universities. This descriptive, cross-sectional study was conducted in 2012 through content analysis and benchmarking of the websites. The research population included all 37 medical university websites. Delivery of educational and research services through these websites, covering the information, interaction, transaction, and integration stages, was investigated using a checklist. The data were then analyzed with descriptive statistics using SPSS software. The level of educational and research services offered by the websites of type I and type II medical universities was rated medium, at 1.99 and 1.89, respectively. All universities gained a mean score of 1 out of 3 for integration of educational and research services. The results indicate that Iranian universities have passed the information and interaction stages but have not made much progress in the transaction and integration stages. The failure to adopt e-government in Iranian medical universities, where limiting factors such as users' e-literacy, internet access, and ICT infrastructure are less critical than in other organizations, suggests that e-government realization goes beyond technical challenges.

  16. Evaluation of Neutron Reactions on Iron Isotopes for CIELO and ENDF/B-VIII.0

    DOE PAGES

    Herman, M.; Trkov, A.; Capote, R.; ...

    2018-02-01

    A new suite of evaluations for 54,56,57,58Fe has been developed in the framework of the CIELO international collaboration. New resolved resonance ranges were evaluated for 54Fe and 57Fe, while modifications were applied to resonances in 56Fe. The low energy part of the 56Fe file is almost totally based on measurements. At higher energies in 56Fe and in the whole fast neutron range for minor isotopes the evaluation consists of model predictions carefully adjusted to available experimental data. We also make use of the high quality and well experimentally-constrained dosimetry evaluations from the IRDFF library. Special attention was dedicated to the elastic angular distributions, which were found to affect results of the integral benchmarking. The new set of iron evaluations was developed in concert with other CIELO evaluations and they were tested together in the integral experiments before being adopted for the ENDF/B-VIII.0 library.

  17. Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems

    DOE PAGES

    Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.; ...

    2018-04-30

    The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.

  18. A survey of CPU-GPU heterogeneous computing techniques

    DOE PAGES

    Mittal, Sparsh; Vetter, Jeffrey S.

    2015-07-04

    As both CPU and GPU become employed in a wide range of applications, it has been acknowledged that both of these processing units (PUs) have their unique features and strengths and hence, CPU-GPU collaboration is inevitable to achieve high-performance computing. This has motivated significant amount of research on heterogeneous computing techniques, along with the design of CPU-GPU fused chips and petascale heterogeneous supercomputers. In this paper, we survey heterogeneous computing techniques (HCTs) such as workload-partitioning which enable utilizing both CPU and GPU to improve performance and/or energy efficiency. We review heterogeneous computing approaches at runtime, algorithm, programming, compiler and application level. Further, we review both discrete and fused CPU-GPU systems and discuss benchmark suites designed for evaluating heterogeneous computing systems (HCSs). Furthermore, we believe that this paper will provide insights into working and scope of applications of HCTs to researchers and motivate them to further harness the computational powers of CPUs and GPUs to achieve the goal of exascale performance.

  19. Median Robust Extended Local Binary Pattern for Texture Classification.

    PubMed

    Liu, Li; Lao, Songyang; Fieguth, Paul W; Guo, Yulan; Wang, Xiaogang; Pietikäinen, Matti

    2016-03-01

    Local binary patterns (LBP) are considered among the most computationally efficient high-performance texture features. However, the LBP method is very sensitive to image noise and is unable to capture macrostructure information. To address these disadvantages, in this paper, we introduce a novel descriptor for texture classification, the median robust extended LBP (MRELBP). Different from the traditional LBP and many LBP variants, MRELBP compares regional image medians rather than raw image intensities. A multiscale LBP-type descriptor is computed by efficiently comparing image medians over a novel sampling scheme, which can capture both microstructure and macrostructure texture information. A comprehensive evaluation on benchmark data sets reveals MRELBP's high performance: robust to gray-scale variations, rotation changes, and noise, but at a low computational cost. MRELBP produces the best classification scores of 99.82%, 99.38%, and 99.77% on three popular Outex test suites. More importantly, MRELBP is shown to be highly robust to image noise, including Gaussian noise, Gaussian blur, salt-and-pepper noise, and random pixel corruption.
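    As a rough illustration of the median-based comparison at the heart of MRELBP, the Python sketch below computes a single-scale, 8-neighbour LBP code on a median-filtered image. The full multiscale sampling scheme and encoding of the paper are not reproduced, and the window size is an arbitrary choice.

      # Single-scale illustration of a median-based LBP code (not the full MRELBP descriptor).
      import numpy as np
      from scipy.ndimage import median_filter

      def median_lbp(image, size=3):
          """8-neighbour LBP computed on a median-filtered image instead of raw intensities."""
          med = median_filter(image.astype(float), size=size)
          h, w = med.shape
          center = med[1:h-1, 1:w-1]
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
          code = np.zeros_like(center, dtype=np.int32)
          for bit, (dy, dx) in enumerate(offsets):
              neighbour = med[1+dy:h-1+dy, 1+dx:w-1+dx]
              code += (neighbour >= center).astype(np.int32) << bit
          return code

      # Texture descriptor = histogram of the codes
      rng = np.random.default_rng(0)
      img = rng.integers(0, 256, size=(64, 64))
      hist = np.bincount(median_lbp(img).ravel(), minlength=256)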

  20. Foundations for Measuring Volume Rendering Quality

    NASA Technical Reports Server (NTRS)

    Williams, Peter L.; Uselton, Samuel P.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The goal of this paper is to provide a foundation for objectively comparing volume rendered images. The key elements of the foundation are: (1) a rigorous specification of all the parameters that need to be specified to define the conditions under which a volume rendered image is generated; (2) a methodology for difference classification, including a suite of functions or metrics to quantify and classify the difference between two volume rendered images that will support an analysis of the relative importance of particular differences. The results of this method can be used to study the changes caused by modifying particular parameter values, to compare and quantify changes between images of similar data sets rendered in the same way, and even to detect errors in the design, implementation or modification of a volume rendering system. If one has a benchmark image, for example one created by a high accuracy volume rendering system, the method can be used to evaluate the accuracy of a given image.
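    A difference-classification suite of this kind builds on simple per-pixel metrics. The Python sketch below shows three generic examples; the specific functions proposed in the paper are not reproduced here.

      # Illustrative per-pixel difference metrics for comparing two rendered images.
      import numpy as np

      def difference_metrics(img_a, img_b):
          a = np.asarray(img_a, dtype=float)
          b = np.asarray(img_b, dtype=float)
          diff = a - b
          return {
              "rmse": float(np.sqrt(np.mean(diff ** 2))),
              "max_abs_error": float(np.max(np.abs(diff))),
              "fraction_changed": float(np.mean(np.abs(diff) > 1e-6)),
          }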

  1. Evaluation of Neutron Reactions on Iron Isotopes for CIELO and ENDF/B-VIII.0

    NASA Astrophysics Data System (ADS)

    Herman, M.; Trkov, A.; Capote, R.; Nobre, G. P. A.; Brown, D. A.; Arcilla, R.; Danon, Y.; Plompen, A.; Mughabghab, S. F.; Jing, Q.; Zhigang, G.; Tingjin, L.; Hanlin, L.; Xichao, R.; Leal, L.; Carlson, B. V.; Kawano, T.; Sin, M.; Simakov, S. P.; Guber, K.

    2018-02-01

    A new suite of evaluations for 54,56,57,58Fe has been developed in the framework of the CIELO international collaboration. New resolved resonance ranges were evaluated for 54Fe and 57Fe, while modifications were applied to resonances in 56Fe. The low energy part of the 56Fe file is almost totally based on measurements. At higher energies in 56Fe and in the whole fast neutron range for minor isotopes the evaluation consists of model predictions carefully adjusted to available experimental data. We also make use of the high quality and well experimentally-constrained dosimetry evaluations from the IRDFF library. Special attention was dedicated to the elastic angular distributions, which were found to affect results of the integral benchmarking. The new set of iron evaluations was developed in concert with other CIELO evaluations and they were tested together in the integral experiments before being adopted for the ENDF/B-VIII.0 library.

  2. On the impact of approximate computation in an analog DeSTIN architecture.

    PubMed

    Young, Steven; Lu, Junjie; Holleman, Jeremy; Arel, Itamar

    2014-05-01

    Deep machine learning (DML) holds the potential to revolutionize machine learning by automating rich feature extraction, which has become the primary bottleneck of human engineering in pattern recognition systems. However, the heavy computational burden renders DML systems implemented on conventional digital processors impractical for large-scale problems. The highly parallel computations required to implement large-scale deep learning systems are well suited to custom hardware. Analog computation has demonstrated power efficiency advantages of multiple orders of magnitude relative to digital systems while performing nonideal computations. In this paper, we investigate typical error sources introduced by analog computational elements and their impact on system-level performance in DeSTIN--a compositional deep learning architecture. These inaccuracies are evaluated on a pattern classification benchmark, clearly demonstrating the robustness of the underlying algorithm to the errors introduced by analog computational elements. A clear understanding of the impacts of nonideal computations is necessary to fully exploit the efficiency of analog circuits.
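    The general idea of assessing robustness to analog nonidealities can be illustrated by perturbing a trained model and re-measuring accuracy. The sketch below is a generic, hypothetical example (a linear classifier with multiplicative Gaussian weight noise); it is not the DeSTIN architecture or the error model used in the paper.

      # Illustration of testing robustness to analog-style errors by perturbing weights.
      import numpy as np

      def accuracy(weights, x, y):
          logits = x @ weights
          return float(np.mean(np.argmax(logits, axis=1) == y))

      def accuracy_under_noise(weights, x, y, rel_sigma=0.05, trials=20, seed=0):
          """Mean/std of accuracy when each weight is scaled by (1 + N(0, rel_sigma))."""
          rng = np.random.default_rng(seed)
          scores = []
          for _ in range(trials):
              noisy = weights * (1.0 + rel_sigma * rng.standard_normal(weights.shape))
              scores.append(accuracy(noisy, x, y))
          return float(np.mean(scores)), float(np.std(scores))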

  3. Evaluation of Neutron Reactions on Iron Isotopes for CIELO and ENDF/B-VIII.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman, M.; Trkov, A.; Capote, R.

    A new suite of evaluations for 54,56,57,58Fe has been developed in the framework of the CIELO international collaboration. New resolved resonance ranges were evaluated for 54Fe and 57Fe, while modifications were applied to resonances in 56Fe. The low energy part of the 56Fe file is almost totally based on measurements. At higher energies in 56Fe and in the whole fast neutron range for minor isotopes the evaluation consists of model predictions carefully adjusted to available experimental data. We also make use of the high quality and well experimentally-constrained dosimetry evaluations from the IRDFF library. Special attention was dedicated to the elastic angular distributions, which were found to affect results of the integral benchmarking. The new set of iron evaluations was developed in concert with other CIELO evaluations and they were tested together in the integral experiments before being adopted for the ENDF/B-VIII.0 library.

  4. Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.

    The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.

  5. Classification of adaptive memetic algorithms: a comparative study.

    PubMed

    Ong, Yew-Soon; Lim, Meng-Hiot; Zhu, Ning; Wong, Kok-Wai

    2006-02-01

    Adaptation of parameters and operators represents one of the recent most important and promising areas of research in evolutionary computations; it is a form of designing self-configuring algorithms that acclimatize to suit the problem in hand. Here, our interests are on a recent breed of hybrid evolutionary algorithms typically known as adaptive memetic algorithms (MAs). One unique feature of adaptive MAs is the choice of local search methods or memes and recent studies have shown that this choice significantly affects the performances of problem searches. In this paper, we present a classification of memes adaptation in adaptive MAs on the basis of the mechanism used and the level of historical knowledge on the memes employed. Then the asymptotic convergence properties of the adaptive MAs considered are analyzed according to the classification. Subsequently, empirical studies on representatives of adaptive MAs for different type-level meme adaptations using continuous benchmark problems indicate that global-level adaptive MAs exhibit better search performances. Finally we conclude with some promising research directions in the area.

  6. Designing Financial Instruments for Rapid Flood Response Using Remote Sensed and Archival Hazard and Exposure Information

    NASA Astrophysics Data System (ADS)

    Lall, U.; Allaire, M.; Ceccato, P.; Haraguchi, M.; Cian, F.; Bavandi, A.

    2017-12-01

    Catastrophic floods can pose a significant challenge for response and recovery. A key bottleneck in the speed of response is the availability of funds to a country's or region's finance ministry to mobilize resources. Parametric instruments, where the release of funds is tied to the exceedance of a specified index or threshold rather than to loss verification, are well suited for this purpose. However, designing an appropriate index that is not subject to manipulation and accurately reflects the need is a challenge, especially in developing countries, which have short hydroclimatic and loss records and where rapid land use change has led to significant changes in exposure and hydrology over time. The use of long records of rainfall from climate re-analyses, together with flooded area and land use from remote sensing, to design and benchmark a parametric index considering the uncertainty and representativeness of potential loss is explored with applications to Bangladesh and Thailand. Prospects for broader applicability and limitations are discussed.
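    A parametric instrument of the kind described reduces, at its simplest, to a payout rule keyed to an observed index. The Python sketch below shows a toy linear trigger/exhaustion structure; the index, threshold values, and payout amounts are placeholders and not taken from the study.

      # Toy parametric trigger: payout released when a remotely sensed index exceeds a threshold.
      def parametric_payout(index_value, trigger, exhaustion, max_payout):
          """Linear payout between the trigger and exhaustion points of the index."""
          if index_value <= trigger:
              return 0.0
          if index_value >= exhaustion:
              return max_payout
          return max_payout * (index_value - trigger) / (exhaustion - trigger)

      # e.g. a flooded-area fraction of 0.28 against a 0.20 trigger and 0.40 exhaustion point
      print(parametric_payout(0.28, trigger=0.20, exhaustion=0.40, max_payout=50e6))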

  7. Computational attributes of the integral form of the equation of transfer

    NASA Technical Reports Server (NTRS)

    Frankel, J. I.

    1991-01-01

    Difficulties can arise in radiative and neutron transport calculations when a highly anisotropic scattering phase function is present. In the presence of anisotropy, currently used numerical solutions are based on the integro-differential form of the linearized Boltzmann transport equation. This paper departs from classical thought and presents an alternative numerical approach based on application of the integral form of the transport equation. Use of the integral formalism facilitates the following steps: a reduction in dimensionality of the system prior to discretization, the use of symbolic manipulation to augment the computational procedure, and the direct determination of key physical quantities which are derivable through the various Legendre moments of the intensity. The approach is developed in the context of radiative heat transfer in a plane-parallel geometry, and results are presented and compared with existing benchmark solutions. Encouraging results are presented to illustrate the potential of the integral formalism for computation. The integral formalism appears to possess several computational attributes which are well-suited to radiative and neutron transport calculations.
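    For isotropic scattering in a plane-parallel slab, the integral form referred to above reduces to a one-dimensional integral equation for the scalar flux. A standard statement (not necessarily the exact anisotropic formulation used in the paper) is

      \[
      \phi(\tau) \;=\; S(\tau) \;+\; \frac{\omega}{2}\int_{0}^{\tau_0} E_1\bigl(\lvert \tau-\tau'\rvert\bigr)\,\phi(\tau')\,d\tau',
      \qquad
      E_1(x) \;=\; \int_{1}^{\infty} \frac{e^{-xt}}{t}\,dt,
      \]

    where \(\tau\) is optical depth, \(\tau_0\) the slab optical thickness, \(\omega\) the single-scattering albedo, \(S\) the known source term, and \(E_1\) the first exponential integral; Legendre moments of the intensity then follow by angular quadrature.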

  8. A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm.

    PubMed

    Amoshahy, Mohammad Javad; Shamsi, Mousa; Sedaaghi, Mohammad Hossein

    2016-01-01

    Particle swarm optimization (PSO) is an evolutionary computing method based on intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of PSO algorithm depends greatly on the appropriate parameter selection strategies for fine tuning its parameters. Inertia weight (IW) is one of PSO's parameters used to bring about a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting inertia weight which is named Flexible Exponential Inertia Weight (FEIW) strategy because according to each problem we can construct an increasing or decreasing inertia weight strategy with suitable parameters selection. The efficacy and efficiency of PSO algorithm with FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. Also FEIW is compared with best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis prove that FEIW improves the search performance in terms of solution quality as well as convergence rate.
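    The abstract does not reproduce the FEIW formula itself, so the Python sketch below only illustrates where an inertia-weight schedule enters the standard PSO velocity update, using a generic exponential decay with arbitrary parameters as a stand-in for FEIW.

      # Standard PSO velocity update with an exponential inertia-weight schedule
      # (a stand-in for FEIW; the published FEIW formula is not reproduced here).
      import numpy as np

      def inertia(t, t_max, w_start=0.9, w_end=0.4, rate=3.0):
          """Exponentially decaying inertia weight between w_start and w_end (assumed values)."""
          return w_end + (w_start - w_end) * np.exp(-rate * t / t_max)

      def pso_step(pos, vel, pbest, gbest, t, t_max, c1=2.0, c2=2.0, rng=None):
          rng = np.random.default_rng() if rng is None else rng
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = inertia(t, t_max) * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          return pos + vel, vel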

  9. A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm

    PubMed Central

    Shamsi, Mousa; Sedaaghi, Mohammad Hossein

    2016-01-01

    Particle swarm optimization (PSO) is an evolutionary computing method based on intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of PSO algorithm depends greatly on the appropriate parameter selection strategies for fine tuning its parameters. Inertia weight (IW) is one of PSO’s parameters used to bring about a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting inertia weight which is named Flexible Exponential Inertia Weight (FEIW) strategy because according to each problem we can construct an increasing or decreasing inertia weight strategy with suitable parameters selection. The efficacy and efficiency of PSO algorithm with FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. Also FEIW is compared with best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis prove that FEIW improves the search performance in terms of solution quality as well as convergence rate. PMID:27560945

  10. Real-time image processing of TOF range images using a reconfigurable processor system

    NASA Astrophysics Data System (ADS)

    Hussmann, S.; Knoll, F.; Edeler, T.

    2011-07-01

    During the last years, Time-of-Flight sensors achieved a significant impact onto research fields in machine vision. In comparison to stereo vision system and laser range scanners they combine the advantages of active sensors providing accurate distance measurements and camera-based systems recording a 2D matrix at a high frame rate. Moreover low cost 3D imaging has the potential to open a wide field of additional applications and solutions in markets like consumer electronics, multimedia, digital photography, robotics and medical technologies. This paper focuses on the currently implemented 4-phase-shift algorithm in this type of sensors. The most time critical operation of the phase-shift algorithm is the arctangent function. In this paper a novel hardware implementation of the arctangent function using a reconfigurable processor system is presented and benchmarked against the state-of-the-art CORDIC arctangent algorithm. Experimental results show that the proposed algorithm is well suited for real-time processing of the range images of TOF cameras.
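    The 4-phase-shift reconstruction discussed above is commonly written in the textbook form below; the sign convention of the arctangent and the modulation frequency are assumptions, not values taken from the paper.

      # Common form of the 4-phase-shift distance reconstruction for a continuous-wave TOF pixel.
      import math

      C = 299_792_458.0          # speed of light (m/s)
      F_MOD = 20e6               # assumed modulation frequency (Hz)

      def tof_distance(a0, a1, a2, a3, f_mod=F_MOD):
          """a0..a3: correlation samples taken at 0, 90, 180 and 270 degree phase offsets."""
          phase = math.atan2(a3 - a1, a0 - a2)        # the time-critical arctangent step
          phase %= 2.0 * math.pi                      # map to [0, 2*pi)
          return C * phase / (4.0 * math.pi * f_mod)  # unambiguous range = C / (2 * f_mod)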

  11. A survey of CPU-GPU heterogeneous computing techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S.

    As both CPU and GPU become employed in a wide range of applications, it has been acknowledged that both of these processing units (PUs) have their unique features and strengths and hence, CPU-GPU collaboration is inevitable to achieve high-performance computing. This has motivated significant amount of research on heterogeneous computing techniques, along with the design of CPU-GPU fused chips and petascale heterogeneous supercomputers. In this paper, we survey heterogeneous computing techniques (HCTs) such as workload-partitioning which enable utilizing both CPU and GPU to improve performance and/or energy efficiency. We review heterogeneous computing approaches at runtime, algorithm, programming, compiler and application level. Further, we review both discrete and fused CPU-GPU systems and discuss benchmark suites designed for evaluating heterogeneous computing systems (HCSs). Furthermore, we believe that this paper will provide insights into working and scope of applications of HCTs to researchers and motivate them to further harness the computational powers of CPUs and GPUs to achieve the goal of exascale performance.

  12. On developing the local research environment of the 1990s - The Space Station era

    NASA Technical Reports Server (NTRS)

    Chase, Robert; Ziel, Fred

    1989-01-01

    A requirements analysis for the Space Station's polar platform data system has been performed. Based upon this analysis, a cluster, layered cluster, and layered-modular implementation of one specific module within the Eos Data and Information System (EosDIS), an active data base for satellite remote sensing research, has been developed. It is found that a distributed system based on a layered-modular architecture and employing current generation workstation technologies has the requisite attributes ascribed by the remote sensing research community. However, based on benchmark testing, probabilistic analysis, failure analysis, and user-survey technique analysis, it is found that this architecture presents some operational shortcomings that will not be alleviated with new hardware or software developments. Consequently, the potential of a fully-modular layered architectural design for meeting the needs of Eos researchers has also been evaluated, concluding that it would be well suited to the evolving requirements of this multidisciplinary research community.

  13. Supervised Learning Using Spike-Timing-Dependent Plasticity of Memristive Synapses.

    PubMed

    Nishitani, Yu; Kaneko, Yukihiro; Ueda, Michihito

    2015-12-01

    We propose a supervised learning model that enables error backpropagation for spiking neural network hardware. The method is modeled by modifying an existing model to suit the hardware implementation. An example of a network circuit for the model is also presented. In this circuit, a three-terminal ferroelectric memristor (3T-FeMEM), which is a field-effect transistor with a gate insulator composed of ferroelectric materials, is used as an electric synapse device to store the analog synaptic weight. Our model can be implemented by reflecting the network error to the write voltage of the 3T-FeMEMs and introducing a spike-timing-dependent learning function to the device. An XOR problem was successfully demonstrated as a benchmark learning by numerical simulations using the circuit properties to estimate the learning performance. In principle, the learning time per step of this supervised learning model and the circuit is independent of the number of neurons in each layer, promising a high-speed and low-power calculation in large-scale neural networks.

  14. Fast segmentation of stained nuclei in terabyte-scale, time resolved 3D microscopy image stacks.

    PubMed

    Stegmaier, Johannes; Otte, Jens C; Kobitski, Andrei; Bartschat, Andreas; Garcia, Ariel; Nienhaus, G Ulrich; Strähle, Uwe; Mikut, Ralf

    2014-01-01

    Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results.
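    The final global-thresholding step of such a pipeline can be sketched as below; note that the gradient/normal-based image transform that makes Otsu thresholding effective in the paper is not reproduced, and a simple Gaussian smoothing is used in its place purely for illustration.

      # Global Otsu thresholding plus labelling, standing in for the final step of the pipeline.
      import numpy as np
      from scipy.ndimage import gaussian_filter, label
      from skimage.filters import threshold_otsu

      def segment_nuclei(image, sigma=2.0):
          """Smooth, threshold with Otsu's method, and label connected components."""
          smoothed = gaussian_filter(np.asarray(image, dtype=float), sigma=sigma)
          mask = smoothed > threshold_otsu(smoothed)
          labels, n_objects = label(mask)
          return labels, n_objects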

  15. Atomic layer deposition in a metal–organic framework: Synthesis, characterization, and performance of a solid acid

    DOE PAGES

    Rimoldi, Martino; Bernales, Varinia; Borycz, Joshua; ...

    2017-01-05

    NU-1000, a zirconium-based metal-organic framework featuring mesoporous channels, has been post-synthetically metalated via atomic layer deposition in MOF (AIM) employing dimethylaluminum iso-propoxide ([AlMe2iOPr]2 - DMAI), a milder precursor than the widely used trimethylaluminum (AlMe3 - TMA). The aluminum-modified NU-1000 (Al-NU-1000) has been characterized with a comprehensive suite of techniques that points to the formation of aluminum oxide clusters well dispersed through the framework and stabilized by confinement within small pores intrinsic to the NU-1000 structure. Experimental evidence allows for identification of spectroscopic similarities between Al-NU-1000 and γ-Al2O3. Density functional theory modeling provides structures and simulated spectra, the relevance of which can be assessed via comparison to experimental IR and EXAFS data. As a result, the catalytic performance of Al-NU-1000 has been benchmarked against γ-Al2O3, with promising results in terms of selectivity.

  16. First principles Peierls-Boltzmann phonon thermal transport: A topical review

    DOE PAGES

    Lindsay, Lucas

    2016-08-05

    The advent of coupled thermal transport calculations with interatomic forces derived from density functional theory has ushered in a new era of fundamental microscopic insight into lattice thermal conductivity. Subsequently, significant new understanding of phonon transport behavior has been developed with these methods, and because they are parameter free and successfully benchmarked against a variety of systems, they also provide reliable predictions of thermal transport in systems for which little is known. This topical review will describe the foundation from which first principles Peierls-Boltzmann transport equation methods have been developed, and briefly describe important necessary ingredients for accurate calculations. Sample highlights of reported work will be presented to illustrate the capabilities and challenges of these techniques, and to demonstrate the suite of tools available, with an emphasis on thermal transport in micro- and nano-scale systems. In conclusion, future challenges and opportunities will be discussed, drawing attention to prospects for methods development and applications.
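    Within the relaxation-time solution of the Peierls-Boltzmann equation, the lattice thermal conductivity tensor is commonly written as below; this is the standard textbook form, quoted here for orientation rather than taken from the review itself:

      \[
      \kappa_{\alpha\beta} \;=\; \frac{1}{V}\sum_{\lambda} C_{\lambda}\, v_{\lambda\alpha}\, v_{\lambda\beta}\, \tau_{\lambda},
      \]

    where the sum runs over phonon modes \(\lambda=(\mathbf{q},j)\), \(C_\lambda\) is the mode heat capacity, \(v_{\lambda\alpha}\) a component of the mode group velocity, \(\tau_\lambda\) the mode lifetime, and \(V\) the normalizing crystal volume.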

  17. Comparative Environmental Performance of Two Diesel-Fuel Oxygenates: Dibutyl Maleate (DBM) and Tripropylene Glycol Monomethyl Ether (TGME)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Layton, D.W.; Marchetti, A.A.

    2001-10-01

    Many studies have shown that the addition of oxygen bearing compounds to diesel fuel can significantly reduce particulate emissions. To assist in the evaluation of the environmental performance of diesel-fuel oxygenates, we have implemented a suite of diagnostic models for simulating the transport of compounds released to air, water, and soils/groundwater as well as regional landscapes. As a means of studying the comparative performance of DBM and TGME, we conducted a series of simulations for selected environmental media. Benzene and methyl tertiary butyl ether (MTBE) were also addressed because they represent benchmark fuel-related compounds that have been the subject of extensive environmental measurements and modeling. The simulations showed that DBM and TGME are less mobile in soil because of reduced vapor-phase transport and increased retention on soil particles. The key distinction between these two oxygenates is that DBM is predicted to have a greater potential than TGME for aerobic biodegradation, based on chemical structure.

  18. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.

    1991-01-01

    A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification: all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  19. Comment on ‘egs_brachy: a versatile and fast Monte Carlo code for brachytherapy’

    NASA Astrophysics Data System (ADS)

    Yegin, Gultekin

    2018-02-01

    In a recent paper, Chamberland et al (2016 Phys. Med. Biol. 61 8214) develop a new Monte Carlo code called egs_brachy for brachytherapy treatments. It is based on EGSnrc and written in the C++ programming language. In order to benchmark the egs_brachy code, the authors use it in various test case scenarios in which complex geometry conditions exist. Another EGSnrc-based brachytherapy dose calculation engine, BrachyDose, is used for dose comparisons. The authors fail to prove that egs_brachy can produce reasonable dose values for brachytherapy sources in a given medium. The dose comparisons in the paper are erroneous and misleading. egs_brachy should not be used in any further research studies unless and until all the potential bugs are fixed in the code.

  20. Approximating the nonlinear density dependence of electron transport coefficients and scattering rates across the gas-liquid interface

    NASA Astrophysics Data System (ADS)

    Garland, N. A.; Boyle, G. J.; Cocks, D. G.; White, R. D.

    2018-02-01

    This study reviews the neutral density dependence of electron transport in gases and liquids and develops a method to determine the nonlinear medium density dependence of electron transport coefficients and scattering rates required for modeling transport in the vicinity of gas-liquid interfaces. The method has its foundations in Blanc’s law for gas-mixtures and adapts the theory of Garland et al (2017 Plasma Sources Sci. Technol. 26) to extract electron transport data across the gas-liquid transition region using known data from the gas and liquid phases only. The method is systematically benchmarked against multi-term Boltzmann equation solutions for Percus-Yevick model liquids. Application to atomic liquids highlights the utility and accuracy of the derived method.
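    Blanc's law, which the method above takes as its starting point, states that at a fixed reduced electric field a mixture transport coefficient interpolates harmonically between the pure-component values; in its standard mobility form (quoted here for context, not from the paper),

      \[
      \frac{1}{\mu_{\mathrm{mix}}} \;=\; \sum_i \frac{x_i}{\mu_i},
      \]

    where \(x_i\) are the component fractions and \(\mu_i\) the electron mobilities in the pure components. The cited work generalizes this linear picture to capture the nonlinear density dependence across the gas-liquid transition.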

  1. Eco-efficiency of solid waste management in Welsh SMEs

    NASA Astrophysics Data System (ADS)

    Sarkis, Joseph; Dijkshoorn, Jeroen

    2005-11-01

    This paper provides an efficiency analysis of practices in Solid Waste Management of manufacturing companies in Wales. We apply data envelopment analysis (DEA) to a data set compiled during the National Waste Survey Wales 2003. We explore the relative performance of small and medium sized manufacturing enterprises (SME; 10-250 employees) in Wales. We determine the technical and scale environmental and economic efficiencies of these organizations. Our evaluation focuses on empirical data collected from companies in a wide diversity of manufacturing industries throughout Wales. We find significant differences in industry and size efficiencies. We also find correlations that exist among environmental and economic efficiencies. These variations show that improvements can be made using benchmarks from similar and different size industries. Further pursuit of an investigation of possible reasons for these differences is recommended.
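    For reference, the input-oriented CCR envelopment program is the standard DEA formulation behind such efficiency scores; whether the study uses exactly this variant is not stated in the abstract:

      \[
      \min_{\theta,\,\lambda}\ \theta
      \quad\text{s.t.}\quad
      \sum_{j=1}^{n}\lambda_j x_{ij} \le \theta\, x_{io}\ \ (\forall i),
      \qquad
      \sum_{j=1}^{n}\lambda_j y_{rj} \ge y_{ro}\ \ (\forall r),
      \qquad
      \lambda_j \ge 0,
      \]

    where \(x_{ij}\) and \(y_{rj}\) are the inputs and outputs of firm \(j\) and \(o\) indexes the firm under evaluation; \(\theta=1\) indicates an efficient unit.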

  2. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...

  3. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...

  4. Organic contaminants in Great Lakes tributaries: Prevalence and potential aquatic toxicity

    USGS Publications Warehouse

    Baldwin, Austin K.; Corsi, Steven R.; De Cicco, Laura A.; Lenaker, Peter L.; Lutz, Michelle A; Sullivan, Daniel J.; Richards, Kevin D.

    2016-01-01

    Organic compounds used in agriculture, industry, and households make their way into surface waters through runoff, leaking septic-conveyance systems, regulated and unregulated discharges, and combined sewer overflows, among other sources. Concentrations of these organic waste compounds (OWCs) in some Great Lakes tributaries indicate a high potential for adverse impacts on aquatic organisms. During 2010–13, 709 water samples were collected at 57 tributaries, together representing approximately 41% of the total inflow to the lakes. Samples were collected during runoff and low-flow conditions and analyzed for 69 OWCs, including herbicides, insecticides, polycyclic aromatic hydrocarbons, plasticizers, antioxidants, detergent metabolites, fire retardants, non-prescription human drugs, flavors/fragrances, and dyes. Urban-related land cover characteristics were the most important explanatory variables of concentrations of many OWCs. Compared to samples from nonurban watersheds (< 15% urban land cover) samples from urban watersheds (> 15% urban land cover) had nearly four times the number of detected compounds and four times the total sample concentration, on average. Concentration differences between runoff and low-flow conditions were not observed, but seasonal differences were observed in atrazine, metolachlor, DEET, and HHCB concentrations. Water quality benchmarks for individual OWCs were exceeded at 20 sites, and at 7 sites benchmarks were exceeded by a factor of 10 or more. The compounds with the most frequent water quality benchmark exceedances were the PAHs benzo[a]pyrene, pyrene, fluoranthene, and anthracene, the detergent metabolite 4-nonylphenol, and the herbicide atrazine. Computed estradiol equivalency quotients (EEQs) using only nonsteroidal endocrine-active compounds indicated medium to high risk of estrogenic effects (intersex or vitellogenin induction) at 10 sites. EEQs at 3 sites were comparable to values reported in effluent. This multifaceted study is the largest, most comprehensive assessment of the occurrence and potential effects of OWCs in the Great Lakes Basin to date.
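    Estradiol equivalency quotients of the kind reported are typically computed as a potency-weighted sum of concentrations; the standard form (not necessarily the exact procedure used in the study) is

      \[
      \mathrm{EEQ} \;=\; \sum_i C_i \cdot \mathrm{EEF}_i,
      \]

    where \(C_i\) is the measured concentration of compound \(i\) and \(\mathrm{EEF}_i\) its estradiol equivalency factor relative to 17β-estradiol.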

  5. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...

  6. Unique Capabilities of the Situational Awareness Sensor Suite for the ISS (SASSI) Mission Concept to Study the Equatorial Ionosphere

    NASA Astrophysics Data System (ADS)

    Habash Krause, L.; Gilchrist, B. E.; Minow, J. I.; Gallagher, D. L.; Hoegy, W. R.; Coffey, V. N.; Willis, E. M.

    2014-12-01

    We present an overview of a mission concept named Situational Awareness Sensor Suite for the ISS (SASSI), with a special focus here on low-latitude ionospheric plasma turbulence measurements relevant to equatorial spread-F. SASSI is a suite of sensors that improves space situational awareness for the ISS local space environment, provides unique ionospheric measurements, and supports active plasma experiments on the ISS. As such, the mission concept has both operational and basic research objectives. We will describe two compelling measurement techniques enabled by SASSI's unique mission architecture. That is, SASSI provides new abilities to 1) measure space plasma potentials in low Earth orbit over ~100 m relative to a common potential, and 2) investigate multi-scale ionospheric plasma turbulence morphology at both ~1 cm and ~10 m scale lengths simultaneously. The first measurement technique will aid in distinguishing vertical drifts within equatorial plasma bubbles from the vertical motions of the bulk of the layer due to zonal electric fields. The second will aid in understanding ionospheric plasma turbulence cascading across the scale sizes that affect over-the-horizon radar. During many years of ISS operation, we have conducted effective (but not perfect) human and robotic extravehicular activities within the space plasma environment surrounding the ISS structure. However, because of the complexity of the interaction between the ISS and the space environment, there remain important sources of unpredictable environmental situations that affect operations. Examples of affected systems include EVA safety, solar panel efficiency, and scientific instrument integrity. Models and heuristically-derived best practices are well-suited for routine operations, but when it comes to unusual or anomalous events or situations, there is no substitute for real-time monitoring. SASSI is being designed to deploy and operate a suite of low-cost, medium/high-TRL plasma sensors on the ISS Express Logistics Carrier for long-term observations and the Space Station Remote Manipulator System for short-term focused campaigns. The presentation will include a description of the instrument complement and an overview of the operations concept.

  7. A controlled experiment on the impact of software structure on maintainability

    NASA Technical Reports Server (NTRS)

    Rombach, Dieter H.

    1987-01-01

    The impact of software structure on maintainability aspects including comprehensibility, locality, modifiability, and reusability in a distributed system environment is studied in a controlled maintenance experiment involving six medium-size distributed software systems implemented in LADY (language for distributed systems) and six in an extended version of sequential PASCAL. For all maintenance aspects except reusability, the results were quantitatively given in terms of complexity metrics which could be automated. The results showed LADY to be better suited to the development of maintainable software than the extension of sequential PASCAL. The strong typing combined with high parametrization of units is suggested to improve the reusability of units in LADY.

  8. Metrics for comparing plasma mass filters

    NASA Astrophysics Data System (ADS)

    Fetterman, Abraham J.; Fisch, Nathaniel J.

    2011-10-01

    High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.
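    In isotope-separation cascade theory, separative power is conventionally defined through the value function; the paper's per-unit-volume metric divides such a quantity by device volume. The conventional definition is quoted below for orientation; the abstract does not state whether exactly this form is used:

      \[
      \Delta U \;=\; P\,V(x_P) + W\,V(x_W) - F\,V(x_F),
      \qquad
      V(x) = (2x-1)\ln\!\frac{x}{1-x},
      \]

    where \(F\), \(P\), and \(W\) are the feed, product, and waste flow rates and \(x_F\), \(x_P\), \(x_W\) the corresponding concentrations of the species of interest.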

  9. Large Scale Spectral Line Mapping of Galactic Regions with CCAT-Prime

    NASA Astrophysics Data System (ADS)

    Simon, Robert

    2018-01-01

    CCAT-prime is a 6-m submillimeter telescope that is being built on the top of Cerro Chajnantor (5600 m altitude) overlooking the ALMA plateau in the Atacama Desert. Its novel Crossed-Dragone design enables a large field of view without blockage and is thus particularly well suited for large scale surveys in the continuum and spectral lines targeting important questions ranging from star formation in the Milky Way to cosmology. On this poster, we focus on the large scale mapping opportunities in important spectral cooling lines of the interstellar medium opened up by CCAT-prime and the Cologne heterodyne instrument CHAI.

  10. An Electron-positron Jet Model for the Galactic Center

    NASA Technical Reports Server (NTRS)

    Burns, M. L.

    1983-01-01

    High energy observations of the galactic center on the subparsec scale seem to be consistent with electron-positron production in the form of relativistic jets. These jets could be produced by an approximately 1,000,000 solar mass black hole dynamo transporting pairs away from the massive core. An electromagnetic cascade shower would develop first from ambient soft protons and then nonlinearly, the shower using itself as a scattering medium. This is suited to producing, cooling and transporting pairs to the observed annihilation region. It is possible the center of our galaxy is a miniature version of more powerful active galactic nuclei that exhibit jet activity.

  11. An electron-positron jet model for the Galactic center

    NASA Technical Reports Server (NTRS)

    Burns, M. L.

    1983-01-01

    High energy observations of the galactic center on the subparsec scale seem to be consistent with electron-positron production in the form of relativistic jets. These jets could be produced by an approximately 1,000,000 solar mass black hole dynamo transporting pairs away from the massive core. An electromagnetic cascade shower would develop first from ambient soft protons and then nonlinearly, the shower using itself as a scattering medium. This is suited to producing, cooling and transporting pairs to the observed annihilation region. It is possible the center of our galaxy is a miniature version of more powerful active galactic nuclei that exhibit jet activity.

  12. An electron-positron jet model for the Galactic center

    NASA Astrophysics Data System (ADS)

    Burns, M. L.

    1983-07-01

    High energy observations of the galactic center on the subparsec scale seem to be consistent with electron-positron production in the form of relativistic jets. These jets could be produced by an approximately 1,000,000 solar mass black hole dynamo transporting pairs away from the massive core. An electromagnetic cascade shower would develop first from ambient soft protons and then nonlinearly, the shower using itself as a scattering medium. This is suited to producing, cooling and transporting pairs to the observed annihilation region. It is possible the center of our galaxy is a miniature version of more powerful active galactic nuclei that exhibit jet activity.

  13. An electron-positron jet model for the galactic center

    NASA Astrophysics Data System (ADS)

    Burns, M. L.

    1983-03-01

    High energy observations of the galactic center on the subparsec scale seem to be consistent with electron-positron production in the form of relativistic jets. These jets could be produced by an approximately 1,000,000 solar mass black hole dynamo transporting pairs away from the massive core. An electromagnetic cascade shower would develop first from ambient soft protons and then nonlinearly, the shower using itself as a scattering medium. This is suited to producing, cooling and transporting pairs to the observed annihilation region. It is possible the center of our galaxy is a miniature version of more powerful active galactic nuclei that exhibit jet activity.

  14. MINERvA CCInclusive Analysis

    NASA Astrophysics Data System (ADS)

    Martinez, David

    2013-04-01

    MINERvA is a few-GeV neutrino scattering experiment that has been taking data in the NuMI beam line at Fermilab since November 2009. The experiment will provide important inputs, both in support of neutrino oscillation searches and as a pure weak probe of the nuclear medium. For this, MINERvA employs a fine-grained detector, with an eight ton active target region composed of plastic scintillator and a suite of nuclear targets composed of helium, carbon, iron, lead and water placed upstream of the active region. In this talk, we present the current status of the charged current inclusive analysis for neutrinos and antineutrinos in plastic scintillator.

  15. The Spectral Element Method for Geophysical Flows

    NASA Astrophysics Data System (ADS)

    Taylor, Mark

    1998-11-01

    We will describe SEAM, a Spectral Element Atmospheric Model. SEAM solves the 3D primitive equations used in climate modeling and medium range forecasting. SEAM uses a spectral element discretization for the surface of the globe and finite differences in the vertical direction. The model is spectrally accurate, as demonstrated by a variety of test cases. It is well suited for modern distributed-shared memory computers, sustaining over 24 GFLOPS on a 240 processor HP Exemplar. This performance has allowed us to run several interesting simulations in full spherical geometry at high resolution (over 22 million grid points).

  16. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1994 Revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Mabrey, J.B.

    1994-07-01

    This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility.

  17. Raising Quality and Achievement. A College Guide to Benchmarking.

    ERIC Educational Resources Information Center

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  18. Benchmarking in Education: Tech Prep, a Case in Point. IEE Brief Number 8.

    ERIC Educational Resources Information Center

    Inger, Morton

    Benchmarking is a process by which organizations compare their practices, processes, and outcomes to standards of excellence in a systematic way. The benchmarking process entails the following essential steps: determining what to benchmark and establishing internal baseline data; identifying the benchmark; determining how that standard has been…

  19. Benchmarks: The Development of a New Approach to Student Evaluation.

    ERIC Educational Resources Information Center

    Larter, Sylvia

    The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…

  20. RNAblueprint: flexible multiple target nucleic acid sequence design.

    PubMed

    Hammer, Stefan; Tschiatschek, Birgit; Flamm, Christoph; Hofacker, Ivo L; Findeiß, Sven

    2017-09-15

    Realizing the value of synthetic biology in biotechnology and medicine requires the design of molecules with specialized functions. Due to its close structure-to-function relationship, and the availability of good structure prediction methods and energy models, RNA is perfectly suited to be synthetically engineered with predefined properties. However, currently available RNA design tools cannot be easily adapted to accommodate new design specifications. Furthermore, complicated sampling and optimization methods are often developed to suit a specific RNA design goal, adding to their inflexibility. We developed a C++ library implementing a graph coloring approach to stochastically sample sequences compatible with structural and sequence constraints from the typically very large solution space. The approach allows the solution space to be specified and explored in a well-defined way. Our library also guarantees uniform sampling, which makes optimization runs performant by not only avoiding re-evaluation of already found solutions, but also by raising the probability of finding better solutions for long optimization runs. We show that our software can be combined with any other software package to allow diverse RNA design applications. Scripting interfaces allow the easy adaption of existing code to accommodate new scenarios, making the whole design process very flexible. We implemented example design approaches written in Python to demonstrate these advantages. RNAblueprint, Python implementations and benchmark datasets are available at github: https://github.com/ViennaRNA. s.hammer@univie.ac.at, ivo@tbi.univie.ac.at or sven@tbi.univie.ac.at. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  1. Potential Impacts and Management Implications of Climate Change on Tampa Bay Estuary Critical Coastal Habitats

    NASA Astrophysics Data System (ADS)

    Sherwood, Edward T.; Greening, Holly S.

    2014-02-01

    The Tampa Bay estuary is a unique and valued ecosystem that currently thrives between subtropical and temperate climates along Florida's west-central coast. The watershed is considered urbanized (42 % lands developed); however, a suite of critical coastal habitats still persists. Current management efforts are focused toward restoring the historic balance of these habitat types to a benchmark 1950s period. We have modeled the anticipated changes to a suite of habitats within the Tampa Bay estuary using the sea level affecting marshes model under various sea level rise (SLR) scenarios. Modeled changes to the distribution and coverage of mangrove habitats within the estuary are expected to dominate the overall proportions of future critical coastal habitats. Modeled losses in salt marsh, salt barren, and coastal freshwater wetlands by 2100 will significantly affect the progress achieved in "Restoring the Balance" of these habitat types over recent periods. Future land management and acquisition priorities within the Tampa Bay estuary should consider the impending effects of both continued urbanization within the watershed and climate change. This requires the recognition that: (1) the Tampa Bay estuary is trending towards a mangrove-dominated system; (2) the current management paradigm of "Restoring the Balance" may no longer provide realistic, attainable goals; (3) restoration that creates habitat mosaics will prove more resilient in the future; and (4) establishing subtidal and upslope "refugia" may be a future strategy in this urbanized estuary to allow sensitive habitat types (e.g., seagrass and salt barren) to persist under anticipated climate change and SLR impacts.

  2. Potential impacts and management implications of climate change on Tampa Bay estuary critical coastal habitats.

    PubMed

    Sherwood, Edward T; Greening, Holly S

    2014-02-01

    The Tampa Bay estuary is a unique and valued ecosystem that currently thrives between subtropical and temperate climates along Florida's west-central coast. The watershed is considered urbanized (42 % lands developed); however, a suite of critical coastal habitats still persists. Current management efforts are focused toward restoring the historic balance of these habitat types to a benchmark 1950s period. We have modeled the anticipated changes to a suite of habitats within the Tampa Bay estuary using the sea level affecting marshes model under various sea level rise (SLR) scenarios. Modeled changes to the distribution and coverage of mangrove habitats within the estuary are expected to dominate the overall proportions of future critical coastal habitats. Modeled losses in salt marsh, salt barren, and coastal freshwater wetlands by 2100 will significantly affect the progress achieved in "Restoring the Balance" of these habitat types over recent periods. Future land management and acquisition priorities within the Tampa Bay estuary should consider the impending effects of both continued urbanization within the watershed and climate change. This requires the recognition that: (1) the Tampa Bay estuary is trending towards a mangrove-dominated system; (2) the current management paradigm of "Restoring the Balance" may no longer provide realistic, attainable goals; (3) restoration that creates habitat mosaics will prove more resilient in the future; and (4) establishing subtidal and upslope "refugia" may be a future strategy in this urbanized estuary to allow sensitive habitat types (e.g., seagrass and salt barren) to persist under anticipated climate change and SLR impacts.

  3. The General Concept of Benchmarking and Its Application in Higher Education in Europe

    ERIC Educational Resources Information Center

    Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna

    2009-01-01

    The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…

  4. S66: A Well-balanced Database of Benchmark Interaction Energies Relevant to Biomolecular Structures

    PubMed Central

    2011-01-01

    With numerous new quantum chemistry methods being developed in recent years and the promise of even more new methods to be developed in the near future, it is clearly critical that highly accurate, well-balanced, reference data for many different atomic and molecular properties be available for the parametrization and validation of these methods. One area of research that is of particular importance in many areas of chemistry, biology, and material science is the study of noncovalent interactions. Because these interactions are often strongly influenced by correlation effects, it is necessary to use computationally expensive high-order wave function methods to describe them accurately. Here, we present a large new database of interaction energies calculated using an accurate CCSD(T)/CBS scheme. Data are presented for 66 molecular complexes, at their reference equilibrium geometries and at 8 points systematically exploring their dissociation curves; in total, the database contains 594 points: 66 at equilibrium geometries, and 528 in dissociation curves. The data set is designed to cover the most common types of noncovalent interactions in biomolecules, while keeping a balanced representation of dispersion and electrostatic contributions. The data set is therefore well suited for testing and development of methods applicable to bioorganic systems. In addition to the benchmark CCSD(T) results, we also provide decompositions of the interaction energies by means of DFT-SAPT calculations. The data set was used to test several correlated QM methods, including those parametrized specifically for noncovalent interactions. Among these, the SCS-MI-CCSD method outperforms all other tested methods, with a root-mean-square error of 0.08 kcal/mol for the S66 data set. PMID:21836824
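
    For context, scoring a method against the S66 reference values reduces to a root-mean-square error over the interaction energies; a minimal sketch with placeholder numbers (not the published S66 energies) follows.

      import math

      # Placeholder CCSD(T)/CBS reference and test-method interaction energies (kcal/mol).
      reference = {"complex_1": -4.92, "complex_2": -1.45, "complex_3": -7.10}
      tested    = {"complex_1": -4.85, "complex_2": -1.58, "complex_3": -7.02}

      errors = [tested[k] - reference[k] for k in reference]
      rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
      print(f"RMSE = {rmse:.2f} kcal/mol")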

  5. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    NASA Technical Reports Server (NTRS)

    Cognata, T.; Bue, G.; Makinen, J.

    2011-01-01

    The human thermal database developed at the Johnson Space Center (JSC) is used to evaluate a set of widely used human thermal models. This database will facilitate a more accurate evaluation of human thermoregulatory response in a variety of situations, including those that might otherwise prove too dangerous for actual testing--such as extreme hot or cold splashdown conditions. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments on human subjects primarily in air, drawn from a literature survey ranging between 1953 and 2004 and from a suited experiment recently performed by the authors, for a quantitative study of the relative strength and predictive quality of the models. Human thermal modeling has considerable long-term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate it against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable, software-accessible format. The human thermal database project is intended to do just that: to collect relevant data from literature and experimentation and to store the data in a database structure for immediate and future use as a benchmark against which to judge human thermal models, to identify model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality.

  6. Psychophysiological Sensing and State Classification for Attention Management in Commercial Aviation

    NASA Technical Reports Server (NTRS)

    Harrivel, Angela R.; Liles, Charles; Stephens, Chad L.; Ellis, Kyle K.; Prinzel, Lawrence J.; Pope, Alan T.

    2016-01-01

    Attention-related human performance limiting states (AHPLS) can cause pilots to lose airplane state awareness (ASA), and their detection is important to improving commercial aviation safety. The Commercial Aviation Safety Team found that the majority of recent international commercial aviation accidents attributable to loss of control inflight involved flight crew loss of airplane state awareness, and that distraction of various forms was involved in all of them. Research on AHPLS, including channelized attention, diverted attention, startle / surprise, and confirmation bias, has been recommended in a Safety Enhancement (SE) entitled "Training for Attention Management." To accomplish the detection of such cognitive and psychophysiological states, a broad suite of sensors has been implemented to simultaneously measure their physiological markers during high fidelity flight simulation human subject studies. Pilot participants were asked to perform benchmark tasks and experimental flight scenarios designed to induce AHPLS. Pattern classification was employed to distinguish the AHPLS induced by the benchmark tasks. Unimodal classification using pre-processed electroencephalography (EEG) signals as input features to extreme gradient boosting, random forest and deep neural network multiclass classifiers was implemented. Multi-modal classification using galvanic skin response (GSR) in addition to the same EEG signals and using the same types of classifiers produced increased accuracy with respect to the unimodal case (90 percent vs. 86 percent), although only via the deep neural network classifier. These initial results are a first step toward the goal of demonstrating simultaneous real time classification of multiple states using multiple sensing modalities in high-fidelity flight simulators. This detection is intended to support and inform training methods under development to mitigate the loss of ASA and thus reduce accidents and incidents.
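
    A compact sketch of the unimodal-versus-multimodal comparison described above, using synthetic EEG and GSR feature matrices and a scikit-learn random forest; the feature layout and state labels are illustrative assumptions, not the study's actual pipeline.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_epochs = 400
      eeg = rng.normal(size=(n_epochs, 64))       # e.g. per-channel EEG band-power features
      gsr = rng.normal(size=(n_epochs, 4))        # e.g. tonic/phasic skin-conductance features
      labels = rng.integers(0, 4, size=n_epochs)  # four attention-related states

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      acc_eeg = cross_val_score(clf, eeg, labels, cv=5).mean()
      acc_both = cross_val_score(clf, np.hstack([eeg, gsr]), labels, cv=5).mean()
      # With random features both scores sit near chance; with real data the
      # multimodal feature set is where the reported gain (90% vs. 86%) appeared.
      print(f"EEG only: {acc_eeg:.2f}   EEG + GSR: {acc_both:.2f}")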

  7. Benchmarking and validation of a Geant4-SHADOW Monte Carlo simulation for dose calculations in microbeam radiation therapy.

    PubMed

    Cornelius, Iwan; Guatelli, Susanna; Fournier, Pauline; Crosbie, Jeffrey C; Sanchez Del Rio, Manuel; Bräuer-Krisch, Elke; Rosenfeld, Anatoly; Lerch, Michael

    2014-05-01

    Microbeam radiation therapy (MRT) is a synchrotron-based radiotherapy modality that uses high-intensity beams of spatially fractionated radiation to treat tumours. The rapid evolution of MRT towards clinical trials demands accurate treatment planning systems (TPS), as well as independent tools for the verification of TPS calculated dose distributions in order to ensure patient safety and treatment efficacy. Monte Carlo computer simulation represents the most accurate method of dose calculation in patient geometries and is best suited for the purpose of TPS verification. A Monte Carlo model of the ID17 biomedical beamline at the European Synchrotron Radiation Facility has been developed, including recent modifications, using the Geant4 Monte Carlo toolkit interfaced with the SHADOW X-ray optics and ray-tracing libraries. The code was benchmarked by simulating dose profiles in water-equivalent phantoms subject to irradiation by broad-beam (without spatial fractionation) and microbeam (with spatial fractionation) fields, and comparing against those calculated with a previous model of the beamline developed using the PENELOPE code. Validation against additional experimental dose profiles in water-equivalent phantoms subject to broad-beam irradiation was also performed. Good agreement between codes was observed, with the exception of out-of-field doses and toward the field edge for larger field sizes. Microbeam results showed good agreement between both codes and experimental results within uncertainties. Results of the experimental validation showed agreement for different beamline configurations. The asymmetry in the out-of-field dose profiles due to polarization effects was also investigated, yielding important information for the treatment planning process in MRT. This work represents an important step in the development of a Monte Carlo-based independent verification tool for treatment planning in MRT.

  8. The ADER-DG method for seismic wave propagation and earthquake rupture dynamics

    NASA Astrophysics Data System (ADS)

    Pelties, Christian; Gabriel, Alice; Ampuero, Jean-Paul; de la Puente, Josep; Käser, Martin

    2013-04-01

    We will present the Arbitrary high-order DERivatives Discontinuous Galerkin (ADER-DG) method for solving the combined elastodynamic wave propagation and dynamic rupture problem. The ADER-DG method enables high-order accuracy in space and time while being implemented on unstructured tetrahedral meshes. A tetrahedral element discretization provides rapid and automated mesh generation as well as geometrical flexibility. Features such as mesh coarsening and local time stepping schemes can be applied to reduce computational effort without introducing numerical artifacts. The method is well suited for parallelization and large-scale high-performance computing since only directly neighboring elements exchange information via numerical fluxes. The concept of fluxes is a key ingredient of the numerical scheme, as it governs the numerical dispersion and diffusion properties and allows boundary conditions, empirical friction laws of dynamic rupture processes, or the combination of different element types and non-conforming mesh transitions to be accommodated. After introducing fault dynamics into the ADER-DG framework, we will demonstrate its specific advantages in benchmark test scenarios provided by the SCEC/USGS Spontaneous Rupture Code Verification Exercise. An important result of the benchmark is that the ADER-DG method avoids spurious high-frequency contributions in the slip rate spectra and therefore does not require artificial Kelvin-Voigt damping, filtering, or other modifications of the produced synthetic seismograms. To demonstrate the capabilities of the proposed scheme we simulate an earthquake scenario, inspired by the 1992 Landers earthquake, that includes branching and curved fault segments. Furthermore, topography is respected in the discretized model to capture the surface waves correctly. The advanced geometrical flexibility combined with enhanced accuracy will make the ADER-DG method a useful tool to study earthquake dynamics on complex fault systems in realistic rheologies.

  9. Evaluating Productivity Predictions Under Elevated CO2 Conditions: Multi-Model Benchmarking Across FACE Experiments

    NASA Astrophysics Data System (ADS)

    Cowdery, E.; Dietze, M.

    2016-12-01

    As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentration are highly variable and contain a considerable amount of uncertainty. The Predictive Ecosystem Analyzer (PEcAn) is an informatics toolbox that wraps around an ecosystem model and can be used to help identify which factors drive uncertainty. We tested a suite of models (LPJ-GUESS, MAESPA, GDAY, CLM5, DALEC, ED2), which represent a range from low to high structural complexity, across a range of Free-Air CO2 Enrichment (FACE) experiments: the Kennedy Space Center Open Top Chamber Experiment, the Rhinelander FACE experiment, the Duke Forest FACE experiment, and the Oak Ridge Experiment on CO2 Enrichment. These tests were implemented in a novel benchmarking workflow that is automated, repeatable, and generalized to incorporate different sites and ecological models. Observational data from the FACE experiments represent a first test of this flexible, extensible approach aimed at providing repeatable tests of model process representation. To identify and evaluate the assumptions causing inter-model differences we used PEcAn to perform model sensitivity and uncertainty analysis, not only to assess the components of NPP, but also to examine system processes such as nutrient uptake and water use. Combining the observed patterns of uncertainty between multiple models with results of the recent FACE model-data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn can provide an informative starting point for data assimilation.

  10. Benchmarking reference services: an introduction.

    PubMed

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  11. Taking the Battle Upstream: Towards a Benchmarking Role for NATO

    DTIC Science & Technology

    2012-09-01

    Only indexed fragments of this report are available: a table-of-contents entry for "Figure 8. World Bank Benchmarking Work on Quality of Governance," a reference to "In Search of a Benchmarking Theory for the Public Sector," and a note that, for comparison purposes, McKinsey categorized the Ministries of Defense in the countries in which it works.

  12. Benchmarks--Standards Comparisons. Math Competencies: EFF Benchmarks Comparison [and] Reading Competencies: EFF Benchmarks Comparison [and] Writing Competencies: EFF Benchmarks Comparison.

    ERIC Educational Resources Information Center

    Kent State Univ., OH. Ohio Literacy Resource Center.

    This document is intended to show the relationship between Ohio's Standards and Competencies, Equipped for the Future's (EFF's) Standards and Components of Performance, and Ohio's Revised Benchmarks. The document is divided into three parts, with Part 1 covering mathematics instruction, Part 2 covering reading instruction, and Part 3 covering…

  13. A benchmarking method to measure dietary absorption efficiency of chemicals by fish.

    PubMed

    Xiao, Ruiyang; Adolfsson-Erici, Margaretha; Åkerman, Gun; McLachlan, Michael S; MacLeod, Matthew

    2013-12-01

    Understanding the dietary absorption efficiency of chemicals in the gastrointestinal tract of fish is important from both a scientific and a regulatory point of view. However, reported fish absorption efficiencies for well-studied chemicals are highly variable. In the present study, the authors developed and exploited an internal chemical benchmarking method that has the potential to reduce uncertainty and variability and, thus, to improve the precision of measurements of fish absorption efficiency. The authors applied the benchmarking method to measure the gross absorption efficiency for 15 chemicals with a wide range of physicochemical properties and structures. They selected 2,2',5,6'-tetrachlorobiphenyl (PCB53) and decabromodiphenyl ethane as absorbable and nonabsorbable benchmarks, respectively. Quantities of chemicals determined in fish were benchmarked to the fraction of PCB53 recovered in fish, and quantities of chemicals determined in feces were benchmarked to the fraction of decabromodiphenyl ethane recovered in feces. The performance of the benchmarking procedure was evaluated based on the recovery of the test chemicals and precision of absorption efficiency from repeated tests. Benchmarking did not improve the precision of the measurements; after benchmarking, however, the median recovery for 15 chemicals was 106%, and variability of recoveries was reduced compared with before benchmarking, suggesting that benchmarking could account for incomplete extraction of chemical in fish and incomplete collection of feces from different tests. © 2013 SETAC.

  14. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    DOE PAGES

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9 % and 2.7 % greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  15. How to achieve and prove performance improvement - 15 years of experience in German wastewater benchmarking.

    PubMed

    Bertzbach, F; Franz, T; Möller, K

    2012-01-01

    This paper shows the results of performance improvement, which have been achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A huge number of changes in operational practice and also in achieved annual savings can be shown, induced in particular by benchmarking at process level. Investigation of this question produces some general findings for the inclusion of performance improvement in a benchmarking project and for the communication of its results. Thus, we elaborate on the concept of benchmarking at both utility and process level, which is still a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking it should be made quite clear that this outcome depends, on one hand, on a well conducted benchmarking programme and, on the other, on the individual situation within each participating utility.

  16. Benchmarking clinical photography services in the NHS.

    PubMed

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  17. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone. Using a ROV to place and remove sensors on the benchmarks will significantly reduce the number of sensors required by the community to monitor offshore strain in subduction zones.

  18. A novel strategy for load balancing of distributed medical applications.

    PubMed

    Logeswaran, Rajasvaran; Chen, Li-Choo

    2012-04-01

    Current trends in medicine, specifically in the electronic handling of medical applications--ranging from digital imaging, paperless hospital administration, and electronic medical records to telemedicine and computer-aided diagnosis--create a burden on the network. Distributed Service Architectures, such as the Intelligent Network (IN), Telecommunication Information Networking Architecture (TINA), and Open Service Access (OSA), are able to meet this new challenge. Distribution enables computational tasks to be spread among multiple processors; hence, performance is an important issue. This paper proposes a novel approach to load balancing, the Random Sender Initiated Algorithm, for the distribution of tasks among several nodes sharing the same computational object (CO) instances in Distributed Service Architectures. Simulations illustrate that the proposed algorithm produces better network performance than the benchmark load balancing algorithms, the Random Node Selection Algorithm and the Shortest Queue Algorithm, especially under medium and heavily loaded conditions.
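
    The two benchmark policies named above (random node selection and shortest queue) can be contrasted with a simple synthetic simulation; the sketch below is a generic illustration, not the paper's Random Sender Initiated Algorithm, and it approximates "shortest queue" by picking the least-backlogged node.

      import random

      def simulate(policy, n_nodes=5, n_tasks=20000, arrival_rate=4.0, seed=1):
          """Mean task waiting time under a given node-selection policy."""
          rng = random.Random(seed)
          free_at = [0.0] * n_nodes              # time at which each node next becomes idle
          t, total_wait = 0.0, 0.0
          for _ in range(n_tasks):
              t += rng.expovariate(arrival_rate)     # next task arrival
              service = rng.expovariate(1.0)         # task service time, mean 1
              if policy == "random":
                  node = rng.randrange(n_nodes)
              else:                                  # "shortest queue": least-backlogged node
                  node = min(range(n_nodes), key=lambda i: free_at[i])
              wait = max(0.0, free_at[node] - t)
              total_wait += wait
              free_at[node] = t + wait + service
          return total_wait / n_tasks

      for policy in ("random", "shortest_queue"):
          print(policy, round(simulate(policy), 2))

    At roughly 80 percent utilization, the queue-aware policy yields markedly lower mean waits than random assignment, which mirrors why queue-aware schemes are used as benchmarks under medium and heavy load.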

  19. A review on hydrothermal pre-treatment technologies and environmental profiles of algal biomass processing.

    PubMed

    Patel, Bhavish; Guo, Miao; Izadpanah, Arash; Shah, Nilay; Hellgardt, Klaus

    2016-01-01

    The need for efficient and clean biomass conversion technologies has propelled hydrothermal (HT) processing as a promising treatment option for biofuel production. This manuscript discusses its application for the pre-treatment of microalgae biomass into solid (biochar), liquid (biocrude and biodiesel), and gaseous (hydrogen and methane) products via Hydrothermal Carbonisation (HTC), Hydrothermal Liquefaction (HTL), and Supercritical Water Gasification (SCWG), as well as the utility of HT water as an extraction medium and the HT Hydrotreatment (HDT) of algal biocrude. In addition, the Solar Energy Retained in Fuel (SERF) using HT technologies is calculated and compared with a benchmark biofuel. Lastly, the Life Cycle Assessment (LCA) discussion covers the limitations of the current state of the art and introduces new potential input categories to obtain a detailed environmental profile. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Optimization of the cooling profile to achieve crack-free Yb:S-FAP crystals

    NASA Astrophysics Data System (ADS)

    Fang, H. S.; Qiu, S. R.; Zheng, L. L.; Schaffers, K. I.; Tassano, J. B.; Caird, J. A.; Zhang, H.

    2008-08-01

    Yb:S-FAP [Yb3+:Sr5(PO4)3F] crystals are an important gain medium for diode-pumped laser applications. Growth of 7.0 cm diameter Yb:S-FAP crystals utilizing the Czochralski (CZ) method from SrF2-rich melts often encounters cracks during the post-growth cool-down stage. To suppress cracking during cool-down, a numerical simulation of the growth system was used to understand the correlation between the furnace power during cool-down and the radial temperature differences within the crystal. The critical radial temperature difference, above which the crystal cracks, has been determined by benchmarking the simulation results against experimental observations. Based on this comparison, an optimal three-stage ramp-down profile was implemented, which produced high-quality, crack-free Yb:S-FAP crystals.

  1. Benchmarking--Measuring and Comparing for Continuous Improvement.

    ERIC Educational Resources Information Center

    Henczel, Sue

    2002-01-01

    Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…

  2. Large-scale risk assessment of polycyclic aromatic hydrocarbons in shoreline sediments from Saudi Arabia: environmental legacy after twelve years of the Gulf war oil spill.

    PubMed

    Bejarano, Adriana C; Michel, Jacqueline

    2010-05-01

    A large-scale assessment of polycyclic aromatic hydrocarbons (PAHs) from the 1991 Gulf War oil spill was performed for 2002-2003 sediment samples (n = 1679) collected from habitats along the shoreline of Saudi Arabia. Benthic sediment toxicity was characterized using the Equilibrium Partitioning Sediment Benchmark Toxic Unit approach for 43 PAHs (ESBTU(FCV,43)). Samples were assigned to risk categories according to ESBTU(FCV,43) values: no-risk (≤1), low (>1 to ≤2), low-medium (>2 to ≤3), medium (>3 to ≤5), and high-risk (>5). Sixty-seven percent of samples had ESBTU(FCV,43) > 1, indicating potential adverse ecological effects. Sediments from the 0-30 cm layer of tidal flats, and from the >30 to <60 cm layer of heavily oiled halophytes and mangroves, had a high frequency of high-risk samples. No-risk samples were characterized by chrysene enrichment and depletion of lighter molecular weight PAHs, while high-risk samples showed little oil weathering and PAH patterns similar to 1993 samples. Sediments north of Safaniya were not likely to pose adverse ecological effects, in contrast to sediments south of Tanaqib. Landscape and geomorphology have played a role in the distribution and persistence in sediments of oil from the Gulf War. Copyright 2009 Elsevier Ltd. All rights reserved.
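
    The category assignment described above is a simple threshold scheme over summed toxic units; a sketch follows, with placeholder concentrations and final chronic values rather than the study's data.

      def esb_toxic_units(concentrations_ug_goc, fcv_ug_goc):
          """Sum of PAH-specific toxic units: measured concentration / final chronic value."""
          return sum(concentrations_ug_goc[p] / fcv_ug_goc[p] for p in concentrations_ug_goc)

      def risk_category(esbtu):
          if esbtu <= 1:
              return "no-risk"
          if esbtu <= 2:
              return "low"
          if esbtu <= 3:
              return "low-medium"
          if esbtu <= 5:
              return "medium"
          return "high-risk"

      # Placeholder organic-carbon-normalized values for two hypothetical PAHs.
      sample = {"naphthalene": 300.0, "pyrene": 900.0}
      fcv = {"naphthalene": 385.0, "pyrene": 697.0}
      tu = esb_toxic_units(sample, fcv)
      print(round(tu, 2), risk_category(tu))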

  3. Localization of phonons in mass-disordered alloys: A typical medium dynamical cluster approach

    DOE PAGES

    Jarrell, Mark; Moreno, Juana; Raja Mondal, Wasim; ...

    2017-07-20

    The effect of disorder on lattice vibrational modes has been a topic of interest for several decades. In this article, we employ a Green's function based approach, namely, the dynamical cluster approximation (DCA), to investigate phonons in mass-disordered systems. Detailed benchmarks with previous exact calculations are used to validate the method in a wide parameter space. An extension of the method, namely, the typical medium DCA (TMDCA), is used to study Anderson localization of phonons in three dimensions. We show that, for binary isotopic disorder, lighter impurities induce localized modes beyond the bandwidth of the host system, while heavier impurities lead to a partial localization of the low-frequency acoustic modes. For a uniform (box) distribution of masses, the physical spectrum is shown to develop long tails comprising mostly localized modes. The mobility edge separating extended and localized modes, obtained through the TMDCA, agrees well with results from the transfer matrix method. A reentrance behavior of the mobility edge with increasing disorder is found that is similar to, but somewhat more pronounced than, the behavior in disordered electronic systems. Our work establishes a computational approach, which recovers the thermodynamic limit, is versatile and computationally inexpensive, to investigate lattice vibrations in disordered lattice systems.

  4. An evaluation of HACCP implementation status in UK small and medium enterprises in food manufacturing.

    PubMed

    Fielding, L M; Ellis, L; Beveridge, C; Peters, A C

    2005-04-01

    To reduce foodborne illnesses, hazard and risk-based quality management systems are essential. Small and medium sized companies (SMEs) tend to have a poor understanding of such systems and limited adoption of the Hazard Analysis Critical Control Point system (HACCP). The requirement for full HACCP implementation by 2006 will place an even greater burden on these businesses. The aim of this project is to assess the current levels of understanding of hazards and risks in SMEs in the manufacturing sector. A questionnaire survey was made of 850 SMEs, including microbusinesses. This determined the industry sector and processes carried out, whether the company operated hazard-based quality management and the knowledge of the technical manager regarding the associated hazards and risks. Follow-up visits to the manufacturing plant observed the processes and the operatives to determine their level of understanding. A benchmarking audit was carried out and each company was rated. The results show that the majority of respondents stated that they operated hazard analysis-based quality management. The ability of the respondents to correctly define a hazard or risk or identify different types of hazard was, however, poor. There was no correlation between business type and audit score. The microbusinesses did, however, perform significantly less well than the larger SMEs.

  5. Localization of phonons in mass-disordered alloys: A typical medium dynamical cluster approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarrell, Mark; Moreno, Juana; Raja Mondal, Wasim

    The effect of disorder on lattice vibrational modes has been a topic of interest for several decades. In this article, we employ a Green's function based approach, namely, the dynamical cluster approximation (DCA), to investigate phonons in mass-disordered systems. Detailed benchmarks with previous exact calculations are used to validate the method in a wide parameter space. An extension of the method, namely, the typical medium DCA (TMDCA), is used to study Anderson localization of phonons in three dimensions. We show that, for binary isotopic disorder, lighter impurities induce localized modes beyond the bandwidth of the host system, while heavier impurities lead to a partial localization of the low-frequency acoustic modes. For a uniform (box) distribution of masses, the physical spectrum is shown to develop long tails comprising mostly localized modes. The mobility edge separating extended and localized modes, obtained through the TMDCA, agrees well with results from the transfer matrix method. A reentrance behavior of the mobility edge with increasing disorder is found that is similar to, but somewhat more pronounced than, the behavior in disordered electronic systems. Our work establishes a computational approach, which recovers the thermodynamic limit, is versatile and computationally inexpensive, to investigate lattice vibrations in disordered lattice systems.

  6. mTM-align: a server for fast protein structure database search and multiple protein structure alignment.

    PubMed

    Dong, Runze; Pan, Shuo; Peng, Zhenling; Zhang, Yang; Yang, Jianyi

    2018-05-21

    With the rapid increase in the number of protein structures in the Protein Data Bank, it has become urgent to develop algorithms for efficient protein structure comparison. In this article, we present the mTM-align server, which consists of two closely related modules: one for structure database search and the other for multiple structure alignment. The database search is sped up by a heuristic algorithm and a hierarchical organization of the structures in the database. The multiple structure alignment is performed using the recently developed algorithm mTM-align. Benchmark tests demonstrate that our algorithms outperform peer methods for both modules, in terms of speed and accuracy. One of the unique features of the server is the interplay between database search and multiple structure alignment. The server provides service not only for performing fast database search, but also for making accurate multiple structure alignments with the structures found by the search. For the database search, it takes about 2-5 min for a structure of medium size (∼300 residues). For the multiple structure alignment, it takes a few seconds for ∼10 structures of medium size. The server is freely available at: http://yanglab.nankai.edu.cn/mTM-align/.

  7. Survey of compressions in the SW (1 AU), and after termination shock at Voyager (in sheath & LISM)

    NASA Astrophysics Data System (ADS)

    Berdichevsky, D. B.

    2017-12-01

    Examples of plasma compression as observed in the solar wind at 1 AU with the suite of instruments on the Wind spacecraft, and after the termination shock with both Voyager spacecraft, as well as with Voyager 1 in the local interstellar medium (LISM), are presented. The work will focus on similarities and differences in the observations at the different locations. A priori, it is fair to mention that the four regions differ in several aspects. At 1 AU the solar wind (SW) flow is mostly Alfvenic. In the sheath after the termination shock, the possibly subsonic solar wind is mostly compressional, but fluctuation modes on scales of one hour are observed much less along the Voyager 1 path than along the Voyager 2 path. Finally, Burlaga and Ness [1] documented the nature of the compressional flow in the 'depletion' layer at the start of the LISM, as well as later in this medium, showing the low plasma-beta character of this LISM region along the Voyager 1 path. [1] Burlaga, L. F., and N. Ness, ApJ, 784, 146 (14pp), 2014.

  8. Contribution of Auditory Learning Style to Students’ Mathematical Connection Ability

    NASA Astrophysics Data System (ADS)

    Karlimah; Risfiani, F.

    2017-09-01

    This paper presents the results of research on the relation of mathematical concepts to mathematics itself, to other subjects, and to everyday life. The research examines the study results of students who have an auditory learning style and correlates them with their mathematical connection ability. The researchers used a combined, sequential exploratory design, that is, the use of qualitative and quantitative research methods in sequence. The results show that providing learning facilities that are not suited to a class whose students have an auditory learning style leads to barely sufficient mathematical connection ability. The average mathematical connection ability of the auditory students was initially at the medium level of qualification. After the learning was varied to suit the auditory learning style, the average mathematical connection ability remained at the medium level of qualification. Nevertheless, the number of students at the medium level of qualification increased, and the number at the very low and low levels decreased. This suggests that learning facilities appropriate for the students' auditory learning style contribute reasonably well to the students' mathematical connection ability. Therefore, mathematics learning for students who have an auditory learning style should include activities focused on understanding the concepts of mathematics and their relations.

  9. Developing Benchmarks for Solar Radio Bursts

    NASA Astrophysics Data System (ADS)

    Biesecker, D. A.; White, S. M.; Gopalswamy, N.; Black, C.; Domm, P.; Love, J. J.; Pierson, J.

    2016-12-01

    Solar radio bursts can interfere with radar, communication, and tracking signals. In severe cases, radio bursts can inhibit the successful use of radio communications and disrupt a wide range of systems that are reliant on Position, Navigation, and Timing services on timescales ranging from minutes to hours across wide areas on the dayside of Earth. The White House's Space Weather Action Plan has asked for solar radio burst intensity benchmarks for an event occurrence frequency of 1 in 100 years and also a theoretical maximum intensity benchmark. The solar radio benchmark team was also asked to define the wavelength/frequency bands of interest. The benchmark team developed preliminary (phase 1) benchmarks for the VHF (30-300 MHz), UHF (300-3000 MHz), GPS (1176-1602 MHz), F10.7 (2800 MHz), and microwave (4000-20000 MHz) bands. The preliminary benchmarks were derived from previously published work. Limitations in the published work will be addressed in phase 2 of the benchmark process. In addition, deriving theoretical maxima, where it is even possible to do so, requires additional work in order to meet the Action Plan objectives. In this presentation, we will present the phase 1 benchmarks and the basis used to derive them. We will also present the work that needs to be done in order to complete the final, or phase 2, benchmarks.

  10. Benchmarking in national health service procurement in Scotland.

    PubMed

    Walker, Scott; Masson, Ron; Telford, Ronnie; White, David

    2007-11-01

    The paper reports the results of a study on benchmarking activities undertaken by the procurement organization within the National Health Service (NHS) in Scotland, namely National Procurement (previously Scottish Healthcare Supplies Contracts Branch). NHS performance is of course politically important, and benchmarking is increasingly seen as a means to improve performance, so the study was carried out to determine if the current benchmarking approaches could be enhanced. A review of the benchmarking activities used by the private sector, local government and NHS organizations was carried out to establish a framework of the motivations, benefits, problems and costs associated with benchmarking. This framework was used to carry out the research through case studies and a questionnaire survey of NHS procurement organizations both in Scotland and other parts of the UK. Nine of the 16 Scottish Health Boards surveyed reported carrying out benchmarking during the last three years. The findings of the research were that there were similarities in approaches between local government and NHS Scotland Health, but differences between NHS Scotland and other UK NHS procurement organizations. Benefits were seen as significant and it was recommended that National Procurement should pursue the formation of a benchmarking group with members drawn from NHS Scotland and external benchmarking bodies to establish measures to be used in benchmarking across the whole of NHS Scotland.

  11. Targeting the affordability of cigarettes: a new benchmark for taxation policy in low-income and middle-income countries.

    PubMed

    Blecher, Evan

    2010-08-01

    To investigate the appropriateness of tax incidence (the percentage of the retail price occupied by taxes) benchmarking in low-income and middle-income countries (LMICs) with rapidly growing economies, and to explore the viability of an alternative tax policy rule based on the affordability of cigarettes. The paper outlines criticisms of tax incidence benchmarking, particularly in the context of LMICs. It then considers an affordability-based benchmark using the relative income price (RIP) as a measure of affordability. The RIP measures the percentage of annual per capita GDP required to purchase 100 packs of cigarettes. Using South Africa as a case study of an LMIC, future consumption is simulated using both tax incidence benchmarks and affordability benchmarks. I show that a tax incidence benchmark is not an optimal policy tool in South Africa and that an affordability benchmark could be a more effective means of reducing tobacco consumption in the future. Although a tax incidence benchmark was successful in increasing prices and reducing tobacco consumption in South Africa in the past, this approach has drawbacks, particularly in the context of a rapidly growing LMIC economy. An affordability benchmark represents an appropriate alternative that would be more effective in reducing future cigarette consumption.
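
    The RIP measure described above reduces to a single ratio; a minimal sketch with purely illustrative numbers (not actual South African prices or GDP) follows.

      def relative_income_price(price_per_pack, gdp_per_capita):
          """Percentage of annual per-capita GDP required to purchase 100 packs."""
          cost_of_100_packs = 100.0 * price_per_pack
          return 100.0 * cost_of_100_packs / gdp_per_capita

      # Illustrative values only: a pack costing 3.0 in a country with per-capita GDP of 6000.
      print(round(relative_income_price(3.0, 6000.0), 1), "% of per-capita GDP")

    Under an affordability benchmark, taxes would be set with reference to this ratio rather than to the tax share of the retail price.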

  12. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  14. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  15. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  16. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  17. Correlational effect size benchmarks.

    PubMed

    Bosco, Frank A; Aguinis, Herman; Singh, Kulraj; Field, James G; Pierce, Charles A

    2015-03-01

    Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relationship to others and which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful and provide information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions. PsycINFO Database Record (c) 2015 APA, all rights reserved.
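
    A sketch of how distribution-based benchmarks of this kind are obtained, assuming a pool of observed correlations; the synthetic values below merely stand in for the extracted coefficients.

      import numpy as np

      rng = np.random.default_rng(42)
      # Synthetic stand-in for a large pool of observed |r| values from the literature.
      observed_r = np.abs(rng.normal(loc=0.16, scale=0.12, size=10000)).clip(0, 1)

      small, large = np.percentile(observed_r, [33.3, 66.7])
      print(f"empirical 'small/medium' cut: r = {small:.2f}")
      print(f"empirical 'medium/large' cut: r = {large:.2f}")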

  18. Medical Universities Educational and Research Online Services: Benchmarking Universities’ Website Towards E-Government

    PubMed Central

    Farzandipour, Mehrdad; Meidani, Zahra

    2014-01-01

    Background: Websites, as one of the initial steps toward e-government adoption, facilitate the delivery of online, customer-oriented services. In this study we investigated the role of medical university websites in providing educational and research services, following the e-government maturity model, in Iranian universities. Methods: This descriptive, cross-sectional study was conducted through content analysis and benchmarking of the websites in 2012. The research population included all 37 medical university websites. Delivery of educational and research services through these university websites, covering information, interaction, transaction, and integration, was investigated using a checklist. The data were then analyzed by means of descriptive statistics using SPSS software. Results: The level of educational and research services provided by the websites of type I and type II medical universities was evaluated as medium, with scores of 1.99 and 1.89, respectively. All the universities gained a mean score of 1 out of 3 in terms of integration of educational and research services. Conclusions: Results of the study indicated that Iranian universities have passed the information and interaction stages, but they have not made much progress in the transaction and integration stages. Failure to adapt to e-government in Iranian medical universities, in which limiting factors such as users' e-literacy, access to the internet, and ICT infrastructure are not as crucial as in other organizations, suggests that e-government realization goes beyond technical challenges. PMID:25132713

  19. Sci-Fri AM: Quality, Safety, and Professional Issues 01: CPQR Technical Quality Control Suite Development including Quality Control Workload Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malkoske, Kyle; Nielsen, Michelle; Brown, Erika

    A close partnership between the Canadian Partnership for Quality Radiotherapy (CPQR) and the Canadian Organization of Medical Physicists' (COMP) Quality Assurance and Radiation Safety Advisory Committee (QARSAC) has resulted in the development of a suite of Technical Quality Control (TQC) guidelines for radiation treatment equipment that outline specific performance objectives and criteria equipment should meet in order to assure an acceptable level of radiation treatment quality. The framework includes consolidation of existing guidelines and/or literature by expert reviewers, structured stages of public review, external field-testing, and ratification by COMP. The adopted framework for the development and maintenance of the TQCs ensures the guidelines incorporate input from the medical physics community during development, measures the workload required to perform the QC tests outlined in each TQC, and keeps the documents relevant (i.e., "living documents") through subsequent planned reviews and updates. This presentation will show the Multi-Leaf Linear Accelerator document as an example of how feedback and cross-national work produced a robust guidance document. During field-testing, each technology was tested at multiple centres in a variety of clinic environments. As part of the defined feedback, workload data were captured. This led to average times associated with the testing defined in each TQC document. As a result, for a medium-sized centre comprising 6 linear accelerators and a comprehensive brachytherapy program, we evaluate the physics workload at 1.5 full-time equivalent physicists per year to complete all QC tests listed in this suite.

  20. Using benchmarking techniques and the 2011 maternity practices infant nutrition and care (mPINC) survey to improve performance among peer groups across the United States.

    PubMed

    Edwards, Roger A; Dee, Deborah; Umer, Amna; Perrine, Cria G; Shealy, Katherine R; Grummer-Strawn, Laurence M

    2014-02-01

    A substantial proportion of US maternity care facilities engage in practices that are not evidence-based and that interfere with breastfeeding. The CDC Survey of Maternity Practices in Infant Nutrition and Care (mPINC) showed significant variation in maternity practices among US states. The purpose of this article is to use benchmarking techniques to identify states within relevant peer groups that were top performers on mPINC survey indicators related to breastfeeding support. We used 11 indicators of breastfeeding-related maternity care from the 2011 mPINC survey and benchmarking techniques to organize and compare hospital-based maternity practices across the 50 states and Washington, DC. We created peer categories for benchmarking first by region (grouping states by West, Midwest, South, and Northeast) and then by size (grouping states by the number of maternity facilities and dividing each region into approximately equal halves based on the number of facilities). Thirty-four states had scores high enough to serve as benchmarks, and 32 states had scores low enough to reflect the lowest score gap from the benchmark on at least 1 indicator. No state served as the benchmark on more than 5 indicators and no state was furthest from the benchmark on more than 7 indicators. The small peer group benchmarks in the South, West, and Midwest were better than the large peer group benchmarks on 91%, 82%, and 36% of the indicators, respectively. In the West large, the Midwest large, the Midwest small, and the South large peer groups, 4-6 benchmarks showed that less than 50% of hospitals have ideal practice in all states. The evaluation presents benchmarks for peer group state comparisons that provide potential and feasible targets for improvement.
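
    The peer-group benchmarking step described above can be expressed compactly: group states by region and by facility count, then take the top score within each peer group on each indicator as that group's benchmark. The records in the sketch below are invented.

      from collections import defaultdict

      # (state, region, n_facilities, {indicator: score}) -- invented example records
      states = [
          ("A", "South", 120, {"rooming_in": 78, "skin_to_skin": 65}),
          ("B", "South", 40,  {"rooming_in": 82, "skin_to_skin": 59}),
          ("C", "South", 35,  {"rooming_in": 74, "skin_to_skin": 71}),
          ("D", "West",  90,  {"rooming_in": 69, "skin_to_skin": 80}),
      ]

      def peer_group(region, n_facilities, cutoff=50):
          return (region, "large" if n_facilities >= cutoff else "small")

      benchmarks = defaultdict(dict)
      for state, region, n, scores in states:
          group = peer_group(region, n)
          for indicator, score in scores.items():
              best = benchmarks[group].get(indicator)
              if best is None or score > best[1]:
                  benchmarks[group][indicator] = (state, score)

      for group, indicators in benchmarks.items():
          print(group, indicators)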

  1. Hospital benchmarking: are U.S. eye hospitals ready?

    PubMed

    de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S

    2012-01-01

    Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added "public" value of benchmarking in health care is questionable.

  2. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    USGS Publications Warehouse

    Nowell, Lisa H.; Norman, Julia E.; Ingersoll, Christopher G.; Moran, Patrick W.

    2016-01-01

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics of sediment, and uncertainty in TEB values. Additional evaluations of benchmarks in relation to sediment chemistry and toxicity are ongoing.
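
    The mixture-screening step described above, summing benchmark quotients across detected pesticides, can be illustrated with a brief sketch. The compound names, measured concentrations, and TEB/LEB values below are invented placeholders rather than the published benchmarks; only the quotient arithmetic is meant to be informative.

    ```python
    # Illustrative benchmark-quotient calculation for a pesticide mixture in one
    # sediment sample. Concentrations and benchmark values are hypothetical
    # placeholders, not the published TEB/LEB values.

    measured_ug_per_kg = {"pesticideA": 4.2, "pesticideB": 0.8, "pesticideC": 15.0}

    # Threshold Effect Benchmark (TEB): adverse effects unlikely below this value.
    # Likely Effect Benchmark (LEB): adverse effects probable above this value.
    benchmarks_ug_per_kg = {
        "pesticideA": {"TEB": 2.0, "LEB": 20.0},
        "pesticideB": {"TEB": 5.0, "LEB": 50.0},
        "pesticideC": {"TEB": 10.0, "LEB": 100.0},
    }

    def summed_quotients(sample, benchmarks, level):
        """Sum concentration/benchmark ratios over all detected compounds."""
        return sum(conc / benchmarks[name][level] for name, conc in sample.items())

    sum_teq = summed_quotients(measured_ug_per_kg, benchmarks_ug_per_kg, "TEB")
    sum_leq = summed_quotients(measured_ug_per_kg, benchmarks_ug_per_kg, "LEB")

    print(f"Sum of TEB quotients: {sum_teq:.2f}")   # > 1 suggests possible toxicity
    print(f"Sum of LEB quotients: {sum_leq:.2f}")   # > 1 suggests likely toxicity
    ```

    Summing the quotients treats the mixture as additive, which is the simple screening assumption the abstract describes for interpreting co-occurring pesticides in a single sample.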

  3. 40 CFR 141.172 - Disinfection profiling and benchmarking.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... benchmarking. 141.172 Section 141.172 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Disinfection-Systems Serving 10,000 or More People § 141.172 Disinfection profiling and benchmarking. (a... sanitary surveys conducted by the State. (c) Disinfection benchmarking. (1) Any system required to develop...

  4. 42 CFR 440.390 - Assurance of transportation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...

  5. 42 CFR 440.390 - Assurance of transportation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...

  6. 42 CFR 440.390 - Assurance of transportation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...

  7. 42 CFR 440.390 - Assurance of transportation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...

  8. 42 CFR 440.390 - Assurance of transportation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...

  9. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    ERIC Educational Resources Information Center

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  10. The Isprs Benchmark on Indoor Modelling

    NASA Astrophysics Data System (ADS)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.
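
    As a rough illustration of the kind of quality criteria such an evaluation framework can apply, the sketch below matches reconstructed wall elements to a manually created reference within a distance tolerance and reports completeness and correctness. Reducing walls to 2D centroids and the 0.1 m tolerance are simplifying assumptions for illustration, not the benchmark's actual evaluation protocol.

    ```python
    # Rough sketch of completeness/correctness scoring for reconstructed indoor
    # models. Wall elements are reduced to 2D centroids and matched to reference
    # elements within a distance tolerance; this simplification and the tolerance
    # are assumptions for illustration, not the ISPRS benchmark's actual protocol.

    import math

    def match_elements(reconstructed, reference, tol=0.1):
        """Greedily match reconstructed centroids to reference centroids within tol metres."""
        unmatched_ref = list(reference)
        matches = 0
        for rx, ry in reconstructed:
            best = None
            for i, (gx, gy) in enumerate(unmatched_ref):
                d = math.hypot(rx - gx, ry - gy)
                if d <= tol and (best is None or d < best[1]):
                    best = (i, d)
            if best is not None:
                unmatched_ref.pop(best[0])
                matches += 1
        return matches

    reference_walls = [(0.0, 0.0), (2.5, 0.0), (2.5, 3.0), (0.0, 3.0)]
    reconstructed_walls = [(0.05, 0.02), (2.48, 0.01), (2.55, 3.20)]

    m = match_elements(reconstructed_walls, reference_walls)
    completeness = m / len(reference_walls)      # how much of the reference was recovered
    correctness = m / len(reconstructed_walls)   # how much of the reconstruction is valid
    print(f"completeness={completeness:.2f}, correctness={correctness:.2f}")
    ```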

  11. Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ye; Ma, Xiaosong; Liu, Qing Gary

    2015-01-01

    Parallel application benchmarks are indispensable for evaluating/optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, reconfigure, and often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.
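
    The two ingredients the abstract describes, phase identification over an execution trace and statistical regeneration of event parameters, can be sketched in miniature as follows. The change-point heuristic and the normal-distribution resampling are deliberate simplifications for illustration and are not APPRIME's actual algorithms.

    ```python
    # Miniature sketch of trace-driven benchmark regeneration: split an event
    # trace into phases where the event size shifts markedly, then resample each
    # phase from a fitted normal distribution. The change-point heuristic and the
    # normal model are simplifications, not APPRIME's actual algorithms.

    import random
    import statistics

    def split_into_phases(trace, rel_jump=0.5, min_len=4):
        """Start a new phase when the next value jumps away from the running phase mean."""
        phases, current = [], [trace[0]]
        for x in trace[1:]:
            mean = statistics.fmean(current)
            if len(current) >= min_len and abs(x - mean) > rel_jump * max(mean, 1e-9):
                phases.append(current)
                current = [x]
            else:
                current.append(x)
        phases.append(current)
        return phases

    def regenerate(phases):
        """Emit a synthetic trace with the same phase structure but resampled values."""
        synthetic = []
        for phase in phases:
            mu = statistics.fmean(phase)
            sigma = statistics.pstdev(phase)
            synthetic.extend(max(0.0, random.gauss(mu, sigma)) for _ in phase)
        return synthetic

    # toy trace: a small-message phase followed by a large-message phase (bytes)
    trace = [100, 110, 95, 105, 98, 102, 4000, 4100, 3900, 4050, 3950]
    phases = split_into_phases(trace)
    print([len(p) for p in phases])      # phase lengths identified from the trace
    print(regenerate(phases))            # synthetic events preserving phase statistics
    ```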

  12. Benchmarking in Academic Pharmacy Departments

    PubMed Central

    Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O.; Ross, Leigh Ann

    2010-01-01

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation. PMID:21179251

  13. Benchmarking in academic pharmacy departments.

    PubMed

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  14. Evaluating the Effect of Labeled Benchmarks on Children’s Number Line Estimation Performance and Strategy Use

    PubMed Central

    Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen

    2017-01-01

    Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children’s strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line positively affects third and fifth graders’ NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values would have a positive effect on younger children’s NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders’ NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children’s benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children’s age and familiarity with the number range, these additional external benchmarks might need to be labeled. PMID:28713302

  15. Evaluating the Effect of Labeled Benchmarks on Children's Number Line Estimation Performance and Strategy Use.

    PubMed

    Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen

    2017-01-01

    Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children's strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line positively affects third and fifth graders' NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values would have a positive effect on younger children's NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders' NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children's benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children's age and familiarity with the number range, these additional external benchmarks might need to be labeled.

  16. Medical school benchmarking - from tools to programmes.

    PubMed

    Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  17. The Iterative Design Process in Research and Development: A Work Experience Paper

    NASA Technical Reports Server (NTRS)

    Sullivan, George F. III

    2013-01-01

    The iterative design process is one of many strategies used in new product development. Top-down development strategies, like waterfall development, place a heavy emphasis on planning and simulation. The iterative process, on the other hand, is better suited to the management of small to medium scale projects. Over the past four months, I have worked with engineers at Johnson Space Center on a multitude of electronics projects. By describing the work I have done these last few months, analyzing the factors that have driven design decisions, and examining the testing and verification process, I will demonstrate that iterative design is the obvious choice for research and development projects.

  18. Chemical OSSEs in Global Modeling and Assimilation Office (GMAO)

    NASA Technical Reports Server (NTRS)

    Pawson, Steven

    2008-01-01

    This presentation will summarize ongoing 'chemical observing system simulation experiment (OSSE)' work in the Global Modeling and Assimilation Office (GMAO). Weather OSSEs are being studied in detail, with a 'nature run' based on the European Centre for Medium-Range Weather Forecasts (ECMWF) model that can be sampled by a synthesized suite of satellites that reproduces present-day observations. Chemical OSSEs are based largely on the carbon-cycle project and aim to study (1) how well we can reproduce the observed carbon distribution with the Atmospheric Infrared Sounder (AIRS) and Orbiting Carbon Observatory (OCO) sensors and (2) with what accuracy can we deduce surface sources and sinks of carbon species in an assimilation system.
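
    The observation-simulation step at the heart of an OSSE, sampling a nature run with a synthesized instrument and perturbing the samples with observation error, can be sketched briefly. The grid, the synthetic CO2-like field, the orbit track, and the 1 ppm noise level below are arbitrary illustrative assumptions, not the GMAO or ECMWF configuration.

    ```python
    # Minimal sketch of the observation-simulation step of an OSSE: sample a
    # "nature run" field along a synthetic satellite track and perturb the samples
    # with instrument noise. The field, track, and 1 ppm noise level are arbitrary
    # illustrative assumptions, not the GMAO configuration.

    import math
    import random

    NLAT, NLON = 46, 72   # coarse 4x5-degree-style grid

    def nature_run_co2(lat_idx, lon_idx):
        """A smooth, synthetic CO2-like field (ppm) standing in for the nature run."""
        lat = -90 + 180 * lat_idx / (NLAT - 1)
        lon = -180 + 360 * lon_idx / NLON
        return 385.0 + 3.0 * math.cos(math.radians(lat)) + 0.5 * math.sin(math.radians(2 * lon))

    def sample_orbit(n_obs=10, noise_ppm=1.0, seed=0):
        """Sample the field at synthetic along-track locations and add Gaussian noise."""
        rng = random.Random(seed)
        obs = []
        for k in range(n_obs):
            lat_idx = (k * 5) % NLAT          # crude stand-in for an orbit ground track
            lon_idx = (k * 11) % NLON
            truth = nature_run_co2(lat_idx, lon_idx)
            obs.append((lat_idx, lon_idx, truth + rng.gauss(0.0, noise_ppm)))
        return obs

    for lat_idx, lon_idx, value in sample_orbit():
        print(f"obs at ({lat_idx:2d},{lon_idx:2d}): {value:7.2f} ppm")
    ```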

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, John T; Kelly, Kenneth J; Duran, Adam W

    Range-extended electric vehicle (EV) technology can be a viable option for reducing fuel consumption from medium-duty (MD) and heavy-duty (HD) engines by approximately 50 percent or more. Such engines have wide variations in use and duty cycles, however, and identifying the vocations/duty cycles most suitable for range-extended applications is vital for maximizing the potential benefits. This presentation provides information about NREL's research on range-extended EV technologies, with a focus on NREL's real-world data collection and analysis approach to identifying the vocations/duty cycles best suited for range-extender applications and to help guide related powertrain optimization and design requirements. The presentation also details NREL's drive cycle development process as it pertains to package delivery applications.
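
    A typical early step in this kind of duty-cycle analysis is reducing a second-by-second speed trace to summary metrics that can be compared across vocations. The toy trace and the metric definitions below are generic illustrations, not NREL's drive-cycle development methodology.

    ```python
    # Generic duty-cycle summary metrics from a 1 Hz vehicle speed trace (m/s).
    # The trace and the chosen metrics are illustrative only, not NREL's actual
    # drive-cycle development methodology.

    def duty_cycle_metrics(speed_mps, dt_s=1.0, stop_threshold=0.1):
        distance_m = sum(v * dt_s for v in speed_mps)
        duration_s = dt_s * len(speed_mps)
        stops = sum(
            1 for prev, cur in zip(speed_mps, speed_mps[1:])
            if prev > stop_threshold and cur <= stop_threshold
        )
        moving = [v for v in speed_mps if v > stop_threshold]
        return {
            "distance_km": distance_m / 1000.0,
            "avg_speed_kmh": 3.6 * distance_m / duration_s,
            "avg_moving_speed_kmh": 3.6 * sum(moving) / len(moving) if moving else 0.0,
            "stops_per_km": stops / (distance_m / 1000.0) if distance_m else 0.0,
            "idle_fraction": 1.0 - len(moving) / len(speed_mps),
        }

    # toy package-delivery-style trace: accelerate, cruise, stop, repeat
    trace = [0, 2, 5, 8, 10, 10, 6, 2, 0, 0, 3, 7, 9, 9, 4, 0]
    for name, value in duty_cycle_metrics(trace).items():
        print(f"{name}: {value:.2f}")
    ```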

  20. Some Protocols For Optical-Fiber Digital Communications

    NASA Technical Reports Server (NTRS)

    Yeh, Cavour; Gerla, Mario

    1989-01-01

    One works best in heavy traffic, another, in light traffic. Three protocols proposed for digital communications among stations connected by passive taps to pair of uni-directional optical-fiber buses. Mediate round-robin, bounded-delay access to buses by all stations and particularly suited to fast transmission. Partly because transmission medium passive (no relay stations) and partly because protocols distribute control of network among all stations with provision for addition and deletion of stations (no control stations), communication network able to resist and recover from failures. Implicit token propagates in one direction on one bus and in opposite direction on other bus, minimizing interval of silence between end of one round and beginning of next.
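
    A highly simplified model of the round-robin, bounded-delay access the abstract describes is sketched below: a turn visits the stations in bus order, each station sends at most a fixed number of queued frames, and the direction alternates between the two buses. Signal propagation, tap timing, and the implicit-token mechanics are abstracted away, so this is an illustration of the service discipline only, not any of the proposed protocols.

    ```python
    # Highly simplified round-robin access sketch for stations on a pair of
    # unidirectional buses. Real implicit-token protocols depend on bus timing and
    # signal propagation; this model only captures the bounded, in-order service
    # of each station's queue per round and is not any of the proposed protocols.

    from collections import deque

    class Station:
        def __init__(self, name):
            self.name = name
            self.queue = deque()   # frames waiting to be sent

        def enqueue(self, frame):
            self.queue.append(frame)

    def run_round(stations, direction, max_frames_per_turn=2):
        """One round: the turn visits stations in bus order, each sending at most
        max_frames_per_turn frames, which bounds per-round access delay."""
        order = stations if direction == "forward" else list(reversed(stations))
        for st in order:
            for _ in range(min(max_frames_per_turn, len(st.queue))):
                frame = st.queue.popleft()
                print(f"[{direction}] {st.name} transmits {frame}")

    stations = [Station("S1"), Station("S2"), Station("S3")]
    stations[0].enqueue("f1"); stations[0].enqueue("f2"); stations[0].enqueue("f3")
    stations[2].enqueue("g1")

    run_round(stations, "forward")   # token-like turn travels down bus A
    run_round(stations, "reverse")   # turn travels back on bus B
    ```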
